26 results for SYNCHROTRON RADIATION SOURCES


Relevance: 20.00%

Abstract:

The consumption of natural products has become a public health problem, since medicinal teas are often prepared from plants without effective hygienic and sanitary control. The aim of this study was to assess the effects of gamma radiation on the microbial burden of two medicinal plants: Melissa officinalis and Lippia citriodora. Dried samples of the two plants were irradiated in a Co-60 experimental facility. The applied gamma radiation doses were 1, 3, and 5 kGy at a dose rate of 1.34 kGy/h. Non-irradiated samples were kept as controls throughout the experiments. Bacterial and fungal counts were assessed before and after irradiation by the membrane filtration method. Challenge tests with Escherichia coli were performed to evaluate the disinfection efficiency of the gamma radiation treatment. Characterization of the M. officinalis and L. citriodora microbiota indicated an average bioburden of 10^2 CFU/g. The inactivation studies of the bacterial mesophilic population of both dried plants pointed to a one-log reduction of the microbial load after irradiation at 5 kGy. Regarding the fungal population, the initial load of 30 CFU/g was reduced by only 0.5 log at an irradiation dose of 5 kGy. The dose-dependent dynamics of the phenotypes in the plants' microbial populations indicated the prevalence of gram-positive rods for M. officinalis before and after irradiation, and an increase in the frequency of gram-negative rods with irradiation for L. citriodora. Among the fungal populations of both plants, Mucor, Neoscytalidium, Aspergillus and Alternaria were the most frequently isolated genera. The challenge tests with E. coli on the plants indicated an inactivation efficiency of 99.5% and 99.9% at a dose of 2 kGy for M. officinalis and L. citriodora, respectively. Gamma radiation treatment can therefore be a valuable tool for microbial control in medicinal plants.
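A small arithmetic sketch (in Python, using only the rounded figures quoted above) relating log10 reductions to percent inactivation:

import math

def log_reduction(n0, n):
    """log10 reduction from an initial count n0 to a surviving count n (CFU/g)."""
    return math.log10(n0 / n)

def percent_inactivated(logs):
    """Percentage of the initial population inactivated for a given log10 reduction."""
    return (1.0 - 10.0 ** (-logs)) * 100.0

print(log_reduction(100, 10))       # 1.0 -> the one-log drop of the mesophilic load at 5 kGy
print(percent_inactivated(1.0))     # 90.0 -> a one-log reduction inactivates 90 % of the cells
print(percent_inactivated(3.0))     # 99.9 -> the E. coli efficiency reported at 2 kGy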

Relevance: 20.00%

Abstract:

Medical imaging is a powerful diagnostic tool. Consequently, the number of medical images taken has increased vastly over the past few decades. The most common medical imaging techniques use X-radiation as the primary investigative tool. The main limitation of using X-radiation is the associated risk of developing cancer. Alongside this, technology has advanced and more centres now use CT scanners; these can incur significant radiation burdens compared with traditional X-ray imaging systems. The net effect is that the population radiation burden is rising steadily. Risk arising from X-radiation for diagnostic medical purposes needs minimising, and one way to achieve this is by reducing radiation dose whilst optimising image quality. All ages are affected by risk from X-radiation; however, the ageing population highlights the elderly as a new group that may require consideration. Of greatest concern are paediatric patients: firstly, they are more sensitive to radiation; secondly, their younger age means that the potential detriment to this group is greater. Containment of radiation exposure falls to a number of professionals within medical fields, from those who request imaging to those who produce the image. These staff are supported in their radiation protection role by engineers, physicists and technicians. It is important to realise that radiation protection is currently a major European focus of interest, and minimum competence levels in radiation protection for radiographers have been defined through the integrated activities of the EU consortium called MEDRAPET. The outcomes of this project have been used by the European Federation of Radiographer Societies to describe the European Qualifications Framework levels for radiographers in radiation protection. Though variations exist between European countries, radiographers and nuclear medicine technologists are normally the professional groups responsible for exposing screening populations and patients to X-radiation. As part of their training they learn fundamental principles of radiation protection and theoretical and practical approaches to dose minimisation. However, dose minimisation is complex: it is not simply about reducing X-radiation dose without taking into account major contextual factors. These factors relate to the real world of clinical imaging and include the need to measure clinical image quality and lesion visibility when applying X-radiation dose reduction strategies. This requires the use of validated psychological and physics techniques to measure clinical image quality and lesion perceptibility.

Relevance: 20.00%

Abstract:

Introduction and Objectives – Electromagnetic fields are naturally present in the Universe. Until a few years ago, electromagnetic field levels were relatively constant. With the development of technology, exposure to new sources of electromagnetic radiation has increased. It is therefore natural that public concern, particularly about the potential health risks of electromagnetic fields, has grown. The aim of this study was to understand and analyse individuals' concern about and perception of electromagnetic radiation, on the premise that one of the main factors driving the adoption of precautionary measures is how the risk is perceived by the individual. Methodology – This is a descriptive study of a quantitative nature. The sample, composed of 320 individuals, is a non-probabilistic convenience sample. Results – Respondents reported being "not very concerned" about exposure to electromagnetic fields, were unaware of the sources of electromagnetic radiation present in their daily lives, and took no precautions regarding exposure to electromagnetic fields. Conclusions – Individuals show an immature awareness of the problem of electromagnetic radiation, partly explained by the absence of sensory mechanisms that would allow them to detect it. Investment in education and awareness-raising can help secure a future with better quality of life. It is important to combine the efforts of several sectors (health, the media and education). Schools, through children and young people, are a privileged channel for conveying this information.

Relevance: 20.00%

Abstract:

This paper evaluates the capacity to produce concrete with a pre-established performance (in terms of mechanical strength) incorporating recycled concrete aggregates (RCA) from different sources. To this purpose, rejected products from the precasting industry and concrete produced in the laboratory were used. The appraisal of the self-replication capacity was made for three strength ranges: 15-25 MPa, 35-45 MPa and 65-75 MPa. The mixes produced sought to replicate the strength of the source concrete (SC) of the RCA. Only total (100%) replacement of coarse natural aggregates (CNA) by coarse recycled concrete aggregates (CRCA) was tested. The results show that, in both mechanical and durability terms, there were no significant differences between aggregates from controlled sources and those from precast rejects for the highest levels of the target strength. Furthermore, the performance losses resulting from the incorporation of RCA are substantially reduced when medium- or high-strength SCs are used.

Relevance: 20.00%

Abstract:

The present study develops and implements a specific methodology for the assessment of health risks arising from occupational exposure of workers to ionizing radiation in the fertilizer manufacturing industry. Negative effects on the health of exposed workers are identified according to the types and levels of exposure to which they are subject, namely an increased risk of cancer even with long-term exposure to low-level radiation. Ionizing radiation types, measurement methods and measuring equipment are characterized. The methodology is applied in a case study of a phosphate fertilizer plant, assessing occupational exposure to ionizing radiation caused by external radiation and by the inhalation of radioactive gases and dust.

Relevance: 20.00%

Abstract:

Patients scheduled for a magnetic resonance imaging (MRI) scan sometimes require screening for ferromagnetic intra-orbital foreign bodies (IOFBs). To assess this, they are required to fill out a screening protocol questionnaire before their scan. If it is established that a patient is at high risk, radiographic imaging is necessary. This review examines the literature to evaluate which imaging modality should be used to screen for IOFBs, considering that the eye is highly sensitive to ionising radiation and any dose should be minimised. Method: Several databases and books were searched for information, namely PubMed, Science Direct, Web of Knowledge and Google Scholar. The search terms related to IOFB, ionising radiation, magnetic resonance imaging safety, image quality, effective dose, orbits and X-ray. Thirty-five articles were found; several were rejected due to age or irrelevance, and twenty-eight were eventually accepted. Results: There are several imaging techniques that can be used. Some articles investigated the use of ultrasound for the detection of ferromagnetic IOFBs of the eye, and others discussed using computed tomography (CT) and X-ray. Some gaps in the literature were identified, mainly that no articles discuss the lowest effective dose that still provides adequate image quality for orbital imaging. Conclusion: X-ray is the best method to identify IOFBs. The only problem is that there is no research that identifies exposure factors which maintain sufficient image quality for viewing IOFBs while keeping the effective dose to the eye As Low As Reasonably Achievable (ALARA).

Relevance: 20.00%

Abstract:

We report on a simple method to obtain surface gratings using a Michelson interferometer and femtosecond laser radiation. In the optical setup used, two parallel laser beams are generated using a beam splitter and then focused using the same focusing lens. An interference pattern is created in the focal plane of the focusing lens, which can be used to pattern the surface of materials. The main advantage of this method is that the optical path difference between the interfering beams is independent of the distance between the beams. As a result, the fringe period can be varied without the need for major realignment of the optical system, and the time coincidence between the interfering beams can be easily monitored. The potential of the method was demonstrated by patterning surface gratings with different periods on titanium surfaces in air.
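A back-of-the-envelope sketch of the resulting fringe geometry (assuming standard two-beam interference; the numerical values are hypothetical and not taken from the paper): two beams separated by d and focused by a lens of focal length f cross at an angle 2*theta with tan(theta) ~ d/(2f), giving a fringe period of lambda/(2 sin(theta)).

import math

def fringe_period(wavelength, beam_separation, focal_length):
    """Fringe period of two parallel beams brought to interfere by a common lens."""
    theta = math.atan(beam_separation / (2.0 * focal_length))   # half crossing angle
    return wavelength / (2.0 * math.sin(theta))

# Hypothetical numbers: 1030 nm light, 6 mm beam separation, 100 mm lens -> ~17 um fringes.
print(fringe_period(1030e-9, 6e-3, 100e-3) * 1e6)
# Doubling the beam separation halves the period without changing the path-length balance.
print(fringe_period(1030e-9, 12e-3, 100e-3) * 1e6)   # ~8.6 um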

Relevance: 20.00%

Abstract:

In this article, we present the first study on probabilistic tsunami hazard assessment for the Northeast (NE) Atlantic region related to earthquake sources. The methodology combines probabilistic seismic hazard assessment, tsunami numerical modeling, and statistical approaches. We consider three main tsunamigenic areas, namely the Southwest Iberian Margin, the Gloria, and the Caribbean. For each tsunamigenic zone, we derive the annual recurrence rate for each magnitude range, from Mw 8.0 up to Mw 9.0 at regular intervals, using a Bayesian method that incorporates seismic information from historical and instrumental catalogs. A numerical code solving the shallow water equations is employed to simulate tsunami propagation and compute nearshore wave heights. The probability of exceeding a specific tsunami hazard level during a given time period is calculated using the Poisson distribution. The results are presented in terms of the probability of exceedance of a given tsunami amplitude for 100- and 500-year return periods. The hazard level varies along the NE Atlantic coast, being highest along the northern segment of the Moroccan Atlantic coast, the southern Portuguese coast, and the Spanish coast of the Gulf of Cadiz. We find that the probability that the maximum wave height exceeds 1 m somewhere in the NE Atlantic region reaches 60% and 100% for the 100- and 500-year return periods, respectively. These probability values decrease to about 15% and 50%, respectively, when considering an exceedance threshold of 5 m for the same return periods.
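A minimal sketch of the Poisson exceedance calculation described above (in Python; the annual rate used below is hypothetical and not taken from the study): the probability of at least one exceedance in a window of T years, given an annual rate of threshold-exceeding events, is 1 - exp(-rate * T).

import math

def prob_exceedance(annual_rate, window_years):
    """Probability of at least one exceedance of the hazard threshold in the window,
    assuming threshold-exceeding tsunamis follow a Poisson process in time."""
    return 1.0 - math.exp(-annual_rate * window_years)

# Hypothetical example: a 1 m wave height exceeded on average once every 110 years
# gives roughly a 60 % chance over 100 years and a near-certain chance over 500 years.
rate = 1.0 / 110.0
print(round(prob_exceedance(rate, 100), 2))   # ~0.60
print(round(prob_exceedance(rate, 500), 2))   # ~0.99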

Relevance: 20.00%

Abstract:

Dissertation submitted to obtain the Master's degree in Electrical Engineering, Energy branch.

Relevance: 20.00%

Abstract:

The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
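The density-fitting step mentioned above can be illustrated with a short sketch (assuming numpy and scikit-learn; BIC is used here merely as a stand-in for the MDL-based criterion of [55], and the one-dimensional data are synthetic):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy one-dimensional "source": samples from a two-component Gaussian mixture.
x = np.concatenate([rng.normal(-1.0, 0.3, 2000),
                    rng.normal(1.5, 0.5, 1000)]).reshape(-1, 1)

# Fit MOGs of increasing order and keep the one minimizing BIC,
# a description-length-style criterion standing in for the MDL algorithm.
fits = [GaussianMixture(n_components=k, random_state=0).fit(x) for k in range(1, 6)]
best = min(fits, key=lambda m: m.bic(x))
print(best.n_components)                 # 2, matching the generating model
print(np.round(best.means_.ravel(), 2))  # close to the true component means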
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
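A rough numerical sketch of the model and constraints discussed in this chapter summary (assuming numpy and scipy; the signatures, noise level, and the fully constrained least-squares estimator are illustrative choices, not the authors' implementation). Abundance fractions drawn from a Dirichlet density are nonnegative and sum to one, which makes them mutually dependent (the property that undermines the ICA/IFA independence assumption), yet the linear mixture can still be inverted when the signatures are known:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, p, pixels = 50, 3, 5000

M = rng.uniform(0.1, 0.9, size=(bands, p))       # toy endmember signatures (made up)
A = rng.dirichlet(np.ones(p), size=pixels).T     # Dirichlet abundances: >= 0, sum to 1
Y = M @ A + 0.001 * rng.standard_normal((bands, pixels))   # linear mixtures plus noise

# The sum-to-one constraint induces negative pairwise correlation among the
# abundance fractions, i.e., they are statistically dependent.
print(np.round(np.corrcoef(A), 2))

# With known signatures, a fully constrained least-squares estimate is obtained by
# appending a heavily weighted sum-to-one row and solving nonnegative least squares.
delta = 100.0
M_aug = np.vstack([M, delta * np.ones((1, p))])
a_hat = nnls(M_aug, np.append(Y[:, 0], delta))[0]
print(np.round(A[:, 0], 3), np.round(a_hat, 3))  # true vs. estimated abundances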

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(floor(d/2)+1)), where floor(x) is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
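As an illustration of the projection step described above, here is a minimal sketch in Python (assuming numpy; a simplified didactic variant of the idea, not the published VCA, which additionally handles dimensionality reduction, SNR-dependent projections, and noise):

import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Simplified endmember extraction by successive orthogonal projections.

    Y: bands x pixels matrix of spectral vectors (assumed to contain pure pixels).
    p: number of endmembers to extract.
    Returns the indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], 0))                 # endmembers found so far
    picked = []
    for _ in range(p):
        # Direction orthogonal to the subspace spanned by the current endmembers.
        w = rng.standard_normal(Y.shape[0])
        if E.shape[1] > 0:
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        w /= np.linalg.norm(w)
        # The extreme of the projection is taken as the next endmember.
        idx = int(np.argmax(np.abs(w @ Y)))
        picked.append(idx)
        E = np.column_stack([E, Y[:, idx]])
    return picked

# Toy data with one pure pixel per endmember followed by random mixtures.
rng = np.random.default_rng(1)
M = rng.uniform(0.1, 1.0, size=(60, 3))           # true endmember signatures (made up)
A = np.column_stack([np.eye(3), rng.dirichlet(np.ones(3), size=200).T])
Y = M @ A + 0.001 * rng.standard_normal((60, 203))
print(extract_endmembers(Y, 3))                   # the pure pixels 0, 1, 2 should be recovered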