15 results for extraction and separation techniques
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Supercritical fluid extraction (SFE) of the volatile oil from Thymus vulgaris L. aerial flowering parts was performed under different conditions of pressure, temperature, mean particle size and CO2 flow rate, and the corresponding yield and composition were compared with those of the essential oil isolated by hydrodistillation (HD). Both oils were analyzed by GC and GC-MS, and 52 components were identified. The main volatile components obtained were p-cymene (10.0-42.6% for SFE and 28.9-34.8% for HD), gamma-terpinene (0.8-6.9% for SFE and 5.1-7.0% for HD), linalool (2.3-5.3% for SFE and 2.8-3.1% for HD), thymol (19.5-40.8% for SFE and 35.4-41.6% for HD), and carvacrol (1.4-3.1% for SFE and 2.6-3.1% for HD). The main differences were the relative percentages of thymoquinone (not found in the essential oil) and carvacryl methyl ether (1.0-1.2% for HD versus trace-0.4% for SFE), which can explain the higher antioxidant activity, assessed by the Rancimat test, of the SFE volatiles when compared with HD. Thymoquinone is considered a strong antioxidant compound.
Abstract:
Master's in Radiation Applied to Health Technologies.
Abstract:
Throughout the world, epidemiological studies have been established to examine the relationship between air pollution and mortality rates and adverse respiratory health effects. However, despite years of discussion, the correlation between adverse health effects and atmospheric pollution remains controversial, partly because these studies are frequently restricted to small and well-monitored areas. Monitoring air pollution is complex due to the large spatial and temporal variations of pollution phenomena, the high costs of recording instruments, and the low sampling density of a purely instrumental approach. Therefore, alongside traditional instrumental monitoring, bioindication techniques allow the mapping of pollution effects over wide areas with a high sampling density. In this study, instrumental and biomonitoring techniques were integrated to support an epidemiological study to be developed in an industrial area located in Gijón, on the coast of central Asturias, Spain. Three main objectives were proposed: (i) to analyze temporal patterns of PM10 concentrations in order to apportion emission sources, (ii) to investigate spatial patterns of lichen conductivity to identify the impact of the studied industrial area on air quality, and (iii) to establish relationships between lichen conductivity and some site-specific characteristics. Samples of the epiphytic lichen Parmelia sulcata were transplanted in a grid of 18 by 20 km with the industrial area at its center. Lichens were exposed for a 5-month period starting in April 2010. After exposure, lichen samples were soaked in 18-MΩ water to determine the water's electrical conductivity and, consequently, lichen vitality and cell damage. A marked decreasing gradient of lichen conductivity with distance from the emitting sources was observed. Transplants from a sampling site close to the industrial area reached values 10-fold higher than those far from it.
This finding showed that lichens reacted physiologically in the polluted industrial area, as evidenced by increased conductivity correlated with the contamination level. The integration of temporal PM10 measurements and analysis of wind direction corroborated the importance of this industrialized region for air quality and identified the relevance of traffic for the urban area.
Abstract:
The amount of fat is a component that complicates the clinical evaluation and the differential diagnosis between benign and malignant lesions in breast MRI examinations. To overcome this problem, effective suppression of the fat signal during image acquisition is essential. This study aims to compare three fat suppression techniques (STIR, SPIR, SPAIR) in MR images of the breast and to evaluate which yields the best image quality regarding clinical usefulness. To mimic the female breast, a breast phantom was constructed: first the exterior contour and then its content, which was selected from 7 samples with different components. Finally, it underwent a breast MRI protocol with the three fat saturation techniques. The examinations were performed on a 1.5 T MRI system (Philips®). A group of 5 experts evaluated 9 sequences, 3 for each fat suppression technique, in which the frequency offset and TI (Inversion Time) were the variables changed. This qualitative image analysis was performed according to 4 parameters (saturation uniformity, saturation efficacy, detail of the anatomical structures, and differentiation between fibroglandular and adipose tissue), using a five-point Likert scale. The statistical analysis showed that none of the fat suppression techniques differed significantly from the others (p > 0.05), for any parameter considered independently. Fleiss' kappa indicated good agreement among observers, P(e) = 0.68. Comparing the STIR, SPIR and SPAIR techniques confirmed that all of them have advantages in breast MRI. For the studied parameters, the results of the Friedman test showed that the three techniques offer similar advantages.
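The inter-observer agreement statistic used above can be sketched in code. Below is a minimal, generic implementation of Fleiss' kappa for a subjects-by-categories table of rating counts; the data in the test is hypothetical, not the study's ratings:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects-by-categories table of rating counts.

    counts[i, j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    N, _ = counts.shape
    n = counts.sum(axis=1)[0]            # raters per subject
    p_j = counts.sum(axis=0) / (N * n)   # overall category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                   # mean observed agreement
    P_e = np.sum(p_j**2)                 # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

For example, three subjects rated identically by all five raters give kappa = 1 (perfect agreement).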
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
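The constrained least-squares route mentioned above can be sketched for a single pixel. The snippet below is a generic sum-to-one constrained solver, using the common trick of appending a heavily weighted row of ones to the signature matrix; the function name, weight `delta`, and data are illustrative, not the chapter's code, and nonnegativity is not imposed:

```python
import numpy as np

def scls_unmix(y, M, delta=1e3):
    """Sum-to-one constrained least-squares abundance estimate.

    y : (bands,) observed pixel spectrum
    M : (bands, endmembers) endmember signature matrix
    The sum-to-one constraint is enforced softly by appending a heavily
    weighted row of ones to M and the value delta to y.
    """
    M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
    y_aug = np.append(y, delta)
    # Ordinary least squares on the augmented system.
    a, *_ = np.linalg.lstsq(M_aug, y_aug, rcond=None)
    return a
```

On a noiseless synthetic pixel y = M a with abundances summing to one, the estimate recovers a exactly.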
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
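The dependence induced by the sum-to-one constraint can be seen numerically. The toy check below draws synthetic Dirichlet-distributed abundances (not the chapter's data) and shows the negative pairwise correlation the constraint forces, which is what undermines the mutual-independence assumption behind ICA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dirichlet-distributed abundance fractions: nonnegative and summing to
# one, as the linear mixing model requires.
a = rng.dirichlet([1.0, 1.0, 1.0], size=10000)

# When one abundance goes up the others must come down, so pairwise
# correlations are negative -- the fractions cannot be independent.
corr = np.corrcoef(a.T)
print(corr[0, 1])  # negative
```

For a symmetric Dirichlet over three fractions the pairwise correlation is about -0.5, far from the zero correlation independence would require.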
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
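The pure-pixel search behind PPI-style algorithms can be sketched with a toy version: project the data onto many random directions ("skewers") and give a vote to the pixels attaining extreme projections, which for convex data are simplex vertices. The function name and parameters are illustrative, not the cited algorithms' implementations:

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Toy pixel purity index. X is (pixels, bands); pixels with extreme
    projections along random directions get a vote, and pure pixels,
    lying at the simplex vertices, accumulate the most votes."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_skewers):
        d = rng.standard_normal(X.shape[1])
        proj = X @ d
        scores[np.argmax(proj)] += 1  # extreme in one direction
        scores[np.argmin(proj)] += 1  # extreme in the opposite direction
    return scores
```

On synthetic data made of three pure spectra plus random convex mixtures of them, all the votes land on the three pure pixels, since an extreme of a linear functional over a simplex is always attained at a vertex.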
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces positivity and constant sum sources (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
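The PCA reduction step above can be sketched generically via an SVD of the centered data; this is a standard textbook projection, not the chapter's specific implementation, and the names are illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    """Project (pixels, bands) data onto the top-k principal components,
    a common dimensionality-reduction step before unmixing.
    Returns the scores Z and the component matrix V (k, bands)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```

If the data truly lie in a k-dimensional affine subspace, the projection is lossless: Z @ V plus the mean reconstructs X exactly.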
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
The present work involves the use of p-tert-butylcalix[4,6,8]arene carboxylic acid derivatives ((t)Butyl[4,6,8]CH2COOH) for selective extraction of hemoglobin. All three calixarenes extracted hemoglobin into the organic phase, exhibiting extraction parameters higher than 0.90. Evaluation of the solvent-accessible positively charged amino acid side chains of hemoglobin (PDB entry 1XZ2) revealed that there are 8 arginine, 44 lysine and 30 histidine residues on the protein surface which may be involved in the interactions with the calixarene molecules. The hemoglobin-(t)Butyl[6]CH2COOH complex exhibited pseudoperoxidase activity, which catalysed the oxidation of syringaldazine in the presence of hydrogen peroxide in organic medium containing chloroform. The effect of pH, protein and substrate concentrations on biocatalysis was investigated using the hemoglobin-(t)Butyl[6]CH2COOH complex. This complex exhibited the highest specific activity of 9.92 x 10(-2) U mg protein(-1) at an initial pH of 7.5 in organic medium. Apparent kinetic parameters (V'(max), K'(m), k'(cat) and k'(cat)/K'(m)) for the pseudoperoxidase activity were determined in organic media at different pH values from a Michaelis-Menten plot. Furthermore, the stability of the protein-calixarene complex was investigated for different initial pH values, and half-life (t(1/2)) values were obtained in the range of 1.96 to 2.64 days. The hemoglobin-calixarene complex present in the organic medium was recovered into fresh aqueous solutions at alkaline pH, with a recovery of pseudoperoxidase activity of over 100%. These results strongly suggest that the use of calixarene derivatives is an alternative technique for protein extraction and solubilisation in organic media for biocatalysis.
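Determining apparent kinetic parameters from a Michaelis-Menten plot, as done above, can be sketched with the classical Lineweaver-Burk linearization 1/v = (Km/Vmax)(1/S) + 1/Vmax. The fit below uses synthetic rate data, not the paper's measurements:

```python
import numpy as np

def michaelis_menten_fit(S, v):
    """Estimate Vmax and Km from substrate concentrations S and rates v
    via the Lineweaver-Burk double-reciprocal line:
    1/v = (Km/Vmax) * (1/S) + 1/Vmax."""
    slope, intercept = np.polyfit(1.0 / np.asarray(S), 1.0 / np.asarray(v), 1)
    Vmax = 1.0 / intercept   # intercept of the reciprocal plot
    Km = slope * Vmax        # slope is Km / Vmax
    return Vmax, Km
```

On noiseless data generated with Vmax = 2 and Km = 1 the fit recovers both parameters exactly; with real noisy rates, a direct nonlinear fit is usually preferred over the double-reciprocal transform.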
Abstract:
An overview of the studies carried out in our laboratories on supercritical fluid extraction (SFE) of volatile oils from seven aromatic plants: pennyroyal (Mentha pulegium L.), fennel seeds (Foeniculum vulgare Mill.), coriander (Coriandrum sativum L.), savory (Satureja fruticosa Beguinot), winter savory (Satureja montana L.), cotton lavender (Santolina chamaecyparissus) and thyme (Thymus vulgaris), is presented. A flow apparatus with a 1 L extractor and two 0.27 L separators was built to perform studies at temperatures ranging from 298 to 353 K and pressures up to 30.0 MPa. The best compromise between yield and composition compared with hydrodistillation (HD) was achieved by selecting the optimum experimental conditions of extraction and fractionation. The major differences between HD and SFE oils are the presence of a small percentage of cuticular waxes and the relative amount of thymoquinone, an oxygenated monoterpene with important biological properties, which is present in the oils from thyme and winter savory. On the other hand, the modeling of our data on supercritical extraction of volatile oil from pennyroyal is discussed using Sovova's models. These models have been applied successfully to the other volatile oil extractions. Furthermore, other experimental studies involving supercritical CO2 carried out in our laboratories are also mentioned.
Abstract:
Seismic recordings of the IRIS/IDA/GSN station CMLA and of several temporary stations in the Azores archipelago are processed with P and S receiver function (PRF and SRF) techniques. Contrary to regional seismic tomography, these methods provide estimates of the absolute velocities and of the Vp/Vs ratio up to a depth of ~300 km. Joint inversion of PRFs and SRFs for a few data sets consistently reveals a division of the subsurface medium into four zones with distinctly different Vp/Vs ratios: the crust, ~20 km thick, with a ratio of ~1.9 in the lower crust; the high-Vs mantle lid, with a strongly reduced Vp/Vs velocity ratio relative to the standard 1.8; the low-velocity zone (LVZ), with a velocity ratio of ~2.0; and the underlying upper-mantle layer, with a standard velocity ratio. Our estimates of crustal thickness greatly exceed previous estimates (~10 km). The base of the high-Vs lid (the Gutenberg discontinuity) is at a depth of ~50 km. The LVZ, with a reduction of S velocity of ~15% relative to the standard (IASP91) model, is terminated at a depth of ~200 km. The average thickness of the mantle transition zone (TZ) is evaluated from the time difference between the S410p and SKS660p seismic phases, which are robustly detected in the S and SKS receiver functions. This thickness is practically equal to the standard IASP91 value of 250 km and is characteristic of a large region of the North Atlantic outside the Azores plateau. Our data are indicative of a reduction of the S-wave velocity of several percent relative to the standard velocity in a depth interval from 460 to 500 km. This reduction is found in the nearest vicinity of the Azores, in the region sampled by the PRFs, but, as evidenced by SRFs, it is missing at a distance of a few hundred kilometers from the islands. We speculate that this anomaly may correspond to the source of a plume which generated the Azores hotspot.
Previously, a low S velocity in this depth range was found with SRF techniques beneath a few other hotspots.
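The kind of crustal-thickness estimate discussed above rests on the standard single-layer receiver-function relation between the delay of the Moho P-to-S conversion and depth. A minimal sketch, with illustrative velocities rather than the study's values:

```python
import math

def moho_depth(t_ps, vp, vs, p=0.0):
    """Crustal thickness (km) from the delay t_ps (s) of the Moho P-to-S
    converted phase in a receiver function, for average crustal
    velocities vp, vs (km/s) and ray parameter p (s/km).
    Standard single-layer formula:
        H = t_ps / (sqrt(1/vs^2 - p^2) - sqrt(1/vp^2 - p^2))
    """
    return t_ps / (math.sqrt(vs**-2 - p**2) - math.sqrt(vp**-2 - p**2))
```

For vertical incidence (p = 0) this reduces to H = t_ps / (1/vs - 1/vp), so a higher assumed Vp/Vs ratio maps the same delay to a shallower Moho, which is why the ratio matters for the crustal-thickness estimates above.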
Abstract:
We investigate nematic wetting and filling transitions of crenellated surfaces (rectangular gratings) by numerical minimization of the Landau-de Gennes free energy as a function of the anchoring strength, for a wide range of the surface geometrical parameters: depth, width, and separation of the crenels. We have found a rich phase behavior that depends in detail on the combination of the surface parameters. Simple fluids undergo a continuous filling or unbending transition, where the surface changes from a dry to a filled state, followed by a wetting or unbinding transition, where the thickness of the adsorbed fluid becomes macroscopic and the interface unbinds from the surface. By comparison, nematics at crenellated surfaces reveal an intriguingly rich behavior: in shallow crenels only wetting is observed, while in deep crenels only filling transitions occur; for intermediate surface geometrical parameters, a new class of filled states is found, characterized by bent isotropic-nematic interfaces, which persist for surfaces structured on large scales compared to the nematic correlation length. The global phase diagram displays two wet and four filled states, all separated by first-order transitions. For crenels in the intermediate regime, re-entrant filling transitions driven by the anchoring strength are observed.
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are encoded as formulas in a Łukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network having as activation function the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to the descriptive power of the Łukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated networks, the method simplifies them in a pruning phase, using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For real-data tests, we selected the Mushroom data set, available in the UCI Machine Learning Repository.
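The key fact above — that every Łukasiewicz connective is computable by one neuron whose activation is the identity truncated to [0, 1] — can be verified directly. The sketch below shows the standard truth functions of Łukasiewicz logic realized as single neurons; the weight/bias choices follow from the connectives' definitions, and the helper names are illustrative:

```python
import numpy as np

def neuron(weights, bias, x):
    """One neuron with the activation used in the paper:
    f(z) = min(1, max(0, z)), the identity truncated to [0, 1]."""
    return min(1.0, max(0.0, float(np.dot(weights, x) + bias)))

# Each Lukasiewicz connective (truth values in [0, 1]) is one such neuron:
def luk_and(a, b):      # strong conjunction: max(0, a + b - 1)
    return neuron([1, 1], -1, [a, b])

def luk_or(a, b):       # strong disjunction: min(1, a + b)
    return neuron([1, 1], 0, [a, b])

def luk_implies(a, b):  # implication: min(1, 1 - a + b)
    return neuron([-1, 1], 1, [a, b])
```

Because every connective is a single clipped-linear neuron, any formula built from them composes into a network of such neurons, which is what makes both rule injection and rule extraction straightforward.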
Abstract:
The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, the data acquisition methods and apparatus explored so far compromise user acceptability, requiring acquisition of the ECG at the chest. In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers through a minimally intrusive 1-lead ECG setup, using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly noisier than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time-domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum-distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.
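The time-domain pipeline described above (peak detection, heartbeat segmentation, amplitude normalization, then minimum-distance matching) can be sketched as follows. This is a naive generic sketch on synthetic signals, not the paper's algorithm; the threshold, window length, and names are all illustrative assumptions:

```python
import numpy as np

def segment_heartbeats(ecg, fs, thresh=0.6, half=0.3):
    """Naive R-peak detection plus fixed-window segmentation and
    amplitude normalization. A sample is a peak if it exceeds a fraction
    of the global maximum and is the local maximum of its window."""
    w = int(half * fs)  # half-window in samples
    peaks = [i for i in range(w, len(ecg) - w)
             if ecg[i] > thresh * ecg.max()
             and ecg[i] == max(ecg[i - w:i + w + 1])]
    beats = np.array([ecg[p - w:p + w] for p in peaks])
    # Amplitude normalization, beat by beat.
    return beats / np.abs(beats).max(axis=1, keepdims=True)

def identify(test_beat, enrollment):
    """Minimum-distance classification against enrolled template beats.
    enrollment maps subject name -> list of template beat waveforms."""
    dists = {who: min(np.linalg.norm(test_beat - b) for b in beats)
             for who, beats in enrollment.items()}
    return min(dists, key=dists.get)
```

On a synthetic trace with three Gaussian "heartbeats" the segmenter returns three normalized windows, and a noisy copy of an enrolled template is matched to the right subject.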
Abstract:
Medical imaging is a powerful diagnostic tool. Consequently, the number of medical images taken has increased vastly over the past few decades. The most common medical imaging techniques use X-radiation as the primary investigative tool. The main limitation of using X-radiation is the associated risk of developing cancer. Alongside this, technology has advanced and more centres now use CT scanners; these can incur significant radiation burdens compared with traditional X-ray imaging systems. The net effect is that the population radiation burden is rising steadily. Risk arising from X-radiation for diagnostic medical purposes needs minimising, and one way to achieve this is by reducing radiation dose whilst optimising image quality. All ages are affected by risk from X-radiation; however, the increasing population age highlights the elderly as a new group that may require consideration. Of greatest concern are paediatric patients: firstly, they are more sensitive to radiation; secondly, their younger age means that the potential detriment to this group is greater. Containment of radiation exposure falls to a number of professionals within medical fields, from those who request imaging to those who produce the image. These staff are supported in their radiation protection role by engineers, physicists and technicians. It is important to realise that radiation protection is currently a major European focus of interest, and minimum competence levels in radiation protection for radiographers have been defined through the integrated activities of the EU consortium called MEDRAPET. The outcomes of this project have been used by the European Federation of Radiographer Societies to describe the European Qualifications Framework levels for radiographers in radiation protection.
Though variations exist between European countries, radiographers and nuclear medicine technologists are normally the professional groups responsible for exposing screening populations and patients to X-radiation. As part of their training they learn fundamental principles of radiation protection and theoretical and practical approaches to dose minimisation. However, dose minimisation is complex: it is not simply about reducing X-radiation without taking into account major contextual factors. These factors relate to the real world of clinical imaging and include the need to measure clinical image quality and lesion visibility when applying X-radiation dose reduction strategies. This requires the use of validated psychological and physics techniques to measure clinical image quality and lesion perceptibility.
Abstract:
Scientific dissertation for obtaining the degree of Master in Civil Engineering.
Abstract:
The behavior of copper(II) complexes of pentane-2,4-dione and 1,1,1,5,5,5-hexafluoro-2,4-pentanedione, [Cu(acac)(2)] (1) and [Cu(HFacac)(2)(H2O)] (2), in ionic liquids and molecular organic solvents was studied by spectroscopic and electrochemical techniques. The electron paramagnetic resonance (EPR) characterization showed well-resolved spectra in most solvents. In general, the EPR spectra of [Cu(acac)(2)] show higher g(z) values and lower hyperfine coupling constants, A(z), in ionic liquids than in organic solvents, in agreement with longer Cu-O bond lengths and higher electron charge on the copper ion in the ionic liquids, suggesting coordination of the ionic liquid anions. For [Cu(HFacac)(2)(H2O)] the opposite was observed, suggesting that in ionic liquids there is no coordination of the anions and that the complex is tetrahedrally distorted. The redox properties of the Cu(II) complexes were investigated by cyclic voltammetry (CV) at a Pt electrode (d = 1 mm), in bmimBF(4) and bmimNTf(2) ionic liquids and, for comparative purposes, in neat organic solvents. The neutral copper(II) complexes undergo irreversible reductions to Cu(I) and Cu(0) species in both ILs and common organic solvents (CH2Cl2 or acetonitrile), but in ILs they are usually easier to reduce (less cathodic reduction potential) than in the organic solvents. Moreover, 1 and 2 are easier to reduce in bmimNTf(2) than in bmimBF(4) ionic liquid. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
The behavior of two cationic copper complexes of acetylacetonate and 2,2'-bipyridine or 1,10-phenanthroline, [Cu(acac)(bipy)]Cl (1) and [Cu(acac)(phen)]Cl (2), in organic solvents and ionic liquids was studied by spectroscopic and electrochemical techniques. Both complexes showed solvatochromism in ionic liquids, although no correlation with solvent parameters could be obtained. By EPR spectroscopy, rhombic spectra with well-resolved superhyperfine structure were obtained in most ionic liquids. The spin Hamiltonian parameters suggest a square pyramidal geometry with coordination of the ionic liquid anion. The redox properties of the complexes were investigated by cyclic voltammetry at a Pt electrode (d = 1 mm) in bmimBF(4) and bmimNTf(2) ionic liquids. Both complexes 1 and 2 are electrochemically reduced in these ionic media at more negative potentials than in organic solvents. This is in agreement with the EPR characterization, which shows lower A(z) and higher g(z) values for the complexes dissolved in ionic liquids than in organic solvents, due to the higher electron density at the copper center. The anion basicity order obtained by EPR is NTf2-, N(CN)(2)(-), MeSO4- and Me2PO4-, which agrees with previous determinations. (C) 2013 Elsevier B.V. All rights reserved.