36 results for Erigena, Johannes Scotus, approximately 810-approximately 877.
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or the mixtures are intimate) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift-wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of the purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
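The core extraction step just described (project the data onto a direction orthogonal to the subspace spanned by the endmembers found so far, then take the extreme of the projection as the next endmember) can be sketched in a few lines. The Python fragment below is an illustrative reading of that description, not the authors' reference implementation; the random initial direction, the pseudo-inverse projector and the function name are assumptions made for the sketch.

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Illustrative VCA-style extraction (assumes pure pixels exist in the data).

    Y : (L, N) array of N spectral vectors with L bands.
    p : number of endmembers to extract.
    Returns the (L, p) endmember signatures and the indices of the chosen pixels.
    """
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    E = np.zeros((L, p))                    # endmember signatures (columns)
    idx = np.zeros(p, dtype=int)
    A = rng.standard_normal((L, 1))         # placeholder direction for the first iteration
    for i in range(p):
        # projector onto the orthogonal complement of the endmembers found so far
        P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.standard_normal(L)
        f /= np.linalg.norm(f)
        # project every spectral vector onto f; the extreme is the new endmember
        idx[i] = int(np.argmax(np.abs(f @ Y)))
        E[:, i] = Y[:, idx[i]]
        A = E[:, : i + 1]                   # subspace spanned by the endmembers so far
    return E, idx
```

The signal-subspace identification described above would normally precede this step; the sketch omits it and works directly on the spectral vectors.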
Abstract:
Genomic damage is probably the most important fundamental cause of developmental and degenerative disease. It is also well established that genomic damage is produced by environmental exposure to genotoxins, medical procedures (e.g. radiation and chemicals), micronutrient deficiency (e.g. folate), lifestyle factors (e.g. alcohol, smoking, drugs and stress), and genetic factors such as inherited defects in DNA metabolism and/or repair. Tobacco smoke has been associated with a higher risk of developing cancer, especially in the oral cavity, larynx and lungs, as these are sites of direct contact with many of tobacco's carcinogenic compounds. Alcohol is a well-recognized genotoxic agent, cited as having strong potential in the development of carcinogenic lesions. Epidemiological evidence points to a strong synergistic effect between cigarette smoking and alcohol consumption in the induction of cancers of the oral cavity. Approximately 90% of human cancers originate from epithelial cells. Therefore, it could be argued that oral epithelial cells represent a preferred target site for early genotoxic events induced by carcinogenic agents entering the body via inhalation and ingestion. The MN assay in buccal cells has also been used to study cancerous and precancerous lesions and to monitor the effects of a number of chemopreventive agents.
Abstract:
Formaldehyde (FA) is a colourless gas widely used in industry and hospitals as an aqueous solution, formalin. It is extremely reactive and induces various genotoxic effects in proliferating cultured mammalian cells. Tobacco smoke has been epidemiologically associated with a higher risk of developing cancer, especially in the oral cavity, larynx and lungs, as these are sites of direct contact with many of tobacco's carcinogenic compounds. Approximately 90% of human cancers originate from epithelial cells. Therefore, it could be argued that oral epithelial cells represent a preferred target site for early genotoxic events induced by carcinogenic agents entering the body via inhalation and ingestion. The cytokinesis-block micronucleus (CBMN) assay in human lymphocytes is one of the most commonly used methods for measuring DNA damage, namely the detection of micronuclei, nucleoplasmic bridges, and nuclear buds.
Abstract:
Occupational exposure to formaldehyde (FA) has been shown to induce nasopharyngeal cancer and has been classified as carcinogenic to humans (Group 1) on the basis of sufficient evidence in humans. Tobacco smoke has been associated with a higher risk of developing cancer, especially in the oral cavity, larynx and lungs, as these are sites of direct contact with many of tobacco's carcinogenic compounds. Alcohol is a recognized genotoxic agent, cited as having strong potential in the development of carcinogenic lesions. Epidemiological evidence points to a strong synergistic effect between cigarette smoking and alcohol consumption in the induction of cancers of the oral cavity. Approximately 90% of human cancers originate from epithelial cells. Therefore, it could be argued that oral epithelial cells represent a preferred target site for early genotoxic events induced by carcinogenic agents entering the body via inhalation and ingestion. The MN assay in buccal cells has also been used to study cancerous and precancerous lesions and to monitor the effects of a number of chemopreventive agents.
Abstract:
This thesis was based on the identification and resolution of a problem in the treatment of effluents from electroplating surface-treatment processes at OGMA – Indústria Aeronáutica de Portugal S.A. Sporadic occurrences of hexavalent chromium (Cr(VI)) above the emission limit value (ELV) were observed. Results were monitored and data were collected in the course of the effluent treatment activity over a period of approximately 5 years (2006 to 2011). Data collection took place within the scope of the author's professional activity; in addition to being technically responsible for the company's electroplating processes, she is also responsible for technical support to the treatment of effluents resulting from electroplating surface-treatment activities. OGMA – Indústria Aeronáutica de Portugal S.A. is an aeronautical company dedicated to aircraft manufacturing and maintenance, namely the provision of maintenance, overhaul and modernization services for aircraft, engines and components, as well as the manufacture and assembly of aerostructures. OGMA, S.A. includes an electrochemical treatment area, where metallic materials are treated by electrodeposition, chemical deposition and chemical conversion. This activity produces a considerable amount of liquid effluents that require adequate treatment before discharge into watercourses. Owing to the type of contaminants present, these effluents are treated in several stages, including cyanide oxidation, chromate reduction and neutralization, followed by sedimentation and sludge removal. To ensure control of the discharge parameters of the treated effluents, in accordance with the environmental legislation in force, the final effluent is periodically analysed by an accredited laboratory. To solve the problem, experimental tests were carried out using effluents from the chromate-reduction and cyanide-oxidation reaction tanks of the cadmium line, focusing on varying the pH ranges recommended for each stage of the effluent treatment and observing the behaviour of the mixtures in terms of Cr(VI) presence when subjected to pH variations. After analysing the available data and performing all the tests, it was concluded that the cyanide-oxidation process of the cadmium line and the chromate-reduction process of the same line are operating properly. It was concluded that the reappearance of Cr(VI) is due to excess sodium hypochlorite in the cyanide-oxidation tank, which, when transferred to the neutralization tank and brought into contact with the effluent from the chromate-reduction tank, oxidizes part of the existing trivalent chromium (Cr(III)) to Cr(VI). To prevent this phenomenon, all chromium-containing effluent was separated and is now treated in the cadmium-free effluent treatment line, so that it no longer comes into contact with the effluent containing unreacted hypochlorite, thus avoiding the oxidation of Cr(III) to Cr(VI).
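As an illustration of the mechanism identified above (a reconstruction of the standard redox chemistry, not an equation from the thesis), excess hypochlorite in alkaline medium can re-oxidize trivalent chromium to chromate:

```latex
% Illustrative, standard redox chemistry (not taken from the thesis):
% excess hypochlorite oxidizing Cr(III) back to Cr(VI) in alkaline medium.
2\,\mathrm{Cr^{3+}} + 3\,\mathrm{OCl^{-}} + 10\,\mathrm{OH^{-}}
  \longrightarrow 2\,\mathrm{CrO_{4}^{2-}} + 3\,\mathrm{Cl^{-}} + 5\,\mathrm{H_{2}O}
```

This is why routing the chromium-bearing stream away from any effluent still carrying unreacted hypochlorite, as in the adopted solution, prevents the reappearance of Cr(VI).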
Abstract:
Reclaimed water from small wastewater treatment facilities in the rural areas of the Beira Interior region (Portugal) may constitute an alternative water source for aquifer recharge. A 21-month monitoring period in a constructed wetland treatment system has shown that 21,500 m³ year⁻¹ of treated wastewater (reclaimed water) could be used for aquifer recharge. A GIS-based multi-criteria analysis was performed, combining ten thematic maps and economic, environmental and technical criteria, in order to produce a suitability map for the location of sites for reclaimed water infiltration. The areas chosen for aquifer recharge with infiltration basins are mainly composed of anthrosols more than 1 m deep with a fine sand texture, which allows an average infiltration velocity of up to 1 m d⁻¹. These characteristics will provide a final polishing treatment of the reclaimed water after infiltration (soil aquifer treatment, SAT), suitable for the removal of the residual load (trace organics, nutrients, heavy metals and pathogens). The risk of groundwater contamination is low, since the water table in the anthrosol areas ranges from 10 m to 50 m. On the other hand, these depths guarantee an unsaturated zone suitable for SAT. An area of 13,944 ha was selected for study, but only 1607 ha are suitable for reclaimed water infiltration. Approximately 1280 m² were considered sufficient to set up four infiltration basins operating in flooding and drying cycles.
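As a rough consistency check (my arithmetic, not figures given in the abstract), the proposed basin area easily accommodates the available reclaimed-water flow:

```latex
% Back-of-envelope hydraulic loading check (not from the abstract)
\frac{21\,500\ \mathrm{m^{3}\,year^{-1}}}{365\ \mathrm{d\,year^{-1}}} \approx 59\ \mathrm{m^{3}\,d^{-1}},
\qquad
\frac{59\ \mathrm{m^{3}\,d^{-1}}}{1280\ \mathrm{m^{2}}} \approx 0.05\ \mathrm{m\,d^{-1}} \ll 1\ \mathrm{m\,d^{-1}}.
```

Even with only one of the four basins flooded at a time, the average hydraulic load stays well below the reported infiltration velocity.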
Abstract:
A DC-DC step-up micro-power converter for solar energy harvesting applications is presented. The circuit is based on a switched-capacitor voltage tripler architecture with MOSFET capacitors, which results in an area approximately eight times smaller than using MiM capacitors in the 0.13 μm CMOS technology. In order to compensate for the loss of efficiency due to the larger parasitic capacitances, a charge reutilization scheme is employed. The circuit is self-clocked, using a phase controller designed specifically to work with an amorphous silicon solar cell, in order to obtain the maximum available power from the cell. This is done by tracking the cell's maximum power point (MPPT) using the fractional open-circuit voltage method. Electrical simulations of the circuit, together with an equivalent electrical model of an amorphous silicon solar cell, show that the circuit can deliver a power of 1132 μW to the load, corresponding to a maximum efficiency of 66.81%.
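The fractional open-circuit voltage method mentioned above can be summarized in a few lines: the maximum-power-point voltage of the cell is approximated as a fixed fraction of its open-circuit voltage, and the converter's clock is adjusted to keep the cell near that voltage. The Python sketch below is purely illustrative; the fraction K_FRAC, the voltage readings and the adjustment factors are assumptions, not values from the paper.

```python
# Illustrative sketch of fractional open-circuit voltage MPPT (not the paper's controller).
# Assumption: for the cell, V_mpp ~= K_FRAC * V_oc, with K_FRAC typically 0.7-0.8.

K_FRAC = 0.75          # assumed fraction of the open-circuit voltage

def update_mppt(v_oc_sample: float, v_cell: float, clock_period: float) -> float:
    """Return a new switching period for the charge pump: slow the clock when the
    cell voltage sags below the target (drawing too much current), speed it up
    when the cell sits above the target (drawing too little)."""
    v_target = K_FRAC * v_oc_sample        # estimated maximum-power-point voltage
    if v_cell < v_target:
        return clock_period * 1.05         # draw less current from the cell
    else:
        return clock_period * 0.95         # draw more current from the cell

# Example: periodically sample V_oc (cell briefly disconnected), then regulate.
period = 1e-6
for v_cell in [0.62, 0.58, 0.55, 0.60]:    # made-up cell voltage readings (V)
    period = update_mppt(v_oc_sample=0.80, v_cell=v_cell, clock_period=period)
    print(f"cell voltage {v_cell:.2f} V -> next clock period {period:.2e} s")
```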
Abstract:
The assessment of surface water nanofiltration (NF) for the removal of the endocrine disruptors (EDs) nonylphenol ethoxylate (IGEPAL), 4-nonylphenol (NP) and 4-octylphenol (OP) was carried out with three commercial NF membranes: NF90, NF200 and NF270. The permeation experiments were conducted in laboratory flat-cell units with 13.2 × 10⁻⁴ m² of surface area and in a DSS Lab-unit M20 with a membrane surface area of 0.036 m². The membranes' hydraulic permeabilities ranged from 3.7 to 15.6 kg h⁻¹ m⁻² bar⁻¹, and the rejection coefficients to NaCl, Na₂SO₄ and glucose were, respectively, 97%, 99% and 97% for NF90; 66%, 98% and 90% for NF200; and 48%, 94% and 84% for NF270. Three sets of nanofiltration experiments were carried out: i) NF of aqueous model solutions of NP, IGEPAL and OP running in total recirculation mode; ii) NF of surface water from Rio Sado (Setúbal, Portugal) running in concentration mode; iii) NF of surface water from Rio Sado inoculated with NP, IGEPAL and OP running in concentration mode. The results of the model solution experiments showed that the ED rejection coefficients are approximately 100% for all the membranes. The results obtained for the surface water showed that the rejection coefficients to natural organic matter (NOM) are 94%, 82% and 78% for the NF90, NF200 and NF270 membranes, respectively, with and without inoculation of EDs. The rejection coefficients to EDs in surface water with and without inoculation of EDs are 100%, showing that there is a fraction of NOM of high molecular weight that retains the EDs in the concentrate and a fraction of NOM of low molecular weight that permeates through the NF membranes free of EDs.
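For reference, the rejection coefficients quoted above follow the usual membrane-filtration definition (standard notation assumed here, not spelled out in the abstract), with c_p the permeate concentration and c_f the feed concentration of the solute:

```latex
% Standard (observed) rejection coefficient; assumed notation, not from the abstract.
R = \left(1 - \frac{c_{p}}{c_{f}}\right) \times 100\%
```

Thus R = 100% means the solute is fully retained in the concentrate, as reported for the EDs.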
Abstract:
Recent epidemiologic studies clearly outline the link between fungal sensitization and exacerbations of asthma, leading to increased morbidity and mortality. Amongst the filamentous fungi, Aspergillus species have been strongly linked with exacerbations of asthma and other respiratory allergic diseases. Particles of approximately 1 to 4 μm are deposited in the lower respiratory tract. Therefore, conidia of A. fumigatus are small enough to traverse the terminal respiratory airways and reach the pulmonary alveoli, whereas the larger conidia of some other Aspergillus species, such as A. flavus and A. niger, tend to be deposited in the paranasal sinuses and upper airways. Exposure to environmental fungal spores has been associated with worsening of asthma symptoms and lung function, and with increased hospital admissions and asthma-related deaths.
Abstract:
Master's degree in Auditing
Abstract:
The analysis of the Higgs boson data by the ATLAS and CMS Collaborations appears to exhibit an excess of h → γγ events above the Standard Model (SM) expectations, whereas no significant excess is observed in h → ZZ* → four-lepton events, albeit with large statistical uncertainty due to the small data sample. These results (assuming they persist with further data) could be explained by a pair of nearly mass-degenerate scalars, one of which is an SM-like Higgs boson and the other is a scalar with suppressed couplings to W⁺W⁻ and ZZ. In the two-Higgs-doublet model, the observed γγ and ZZ* → four-lepton data can be reproduced by an approximately degenerate CP-even (h) and CP-odd (A) Higgs boson for values of sin(β − α) near unity and 0.70 ≲ tan β ≲ 1. An enhanced γγ signal can also arise in cases where m_h ≃ m_H, m_H ≃ m_A, or m_h ≃ m_H ≃ m_A. Since the ZZ* → four-lepton signal derives primarily from an SM-like Higgs boson whereas the γγ signal receives contributions from two (or more) nearly mass-degenerate states, one would expect a slightly different invariant mass peak in the ZZ* → four-lepton and γγ channels. The phenomenological consequences of such models can be tested with additional Higgs data that will be collected at the LHC in the near future. DOI: 10.1103/PhysRevD.87.055009.
Abstract:
Following work on tantalum- and chromium-implanted flat M50 steel substrates, this work reports on the electrochemical behaviour of M50 steel implanted with tantalum and chromium and the effect of the angle of incidence. Proposed optimum doses for resistance to chloride attack were based on the interpretation of results obtained during long-term and accelerated electrochemical testing. After dose optimization from the corrosion viewpoint, substrates were implanted at different angles of incidence (15°, 30°, 45°, 60°, 75°, 90°) and their susceptibility to localized corrosion assessed using open-circuit measurements, step-by-step polarization and cyclic voltammetry at several scan rates (5–50 mV s⁻¹). Results showed, for tantalum-implanted samples, an ennoblement of the pitting potential of approximately 0.5 V for an angle of incidence of 90°. A retained dose of 5 × 10¹⁶ atoms cm⁻² was found by depth profiling with Rutherford backscattering spectrometry. The retained dose decreases rapidly with angle of incidence. The breakdown potential varies roughly linearly with the angle of incidence up to 30°, falling fast to reach −0.1 V (vs. a saturated calomel electrode (SCE)) for 15°. Chromium was found to behave differently. Maximum corrosion resistance was found for angles of 45°–60° according to current densities and breakdown potentials. Cr⁺ depth profiles ((p,γ) resonance broadening method) showed that retained doses up to an angle of 60° did not change much from the implanted dose at 90°, 2 × 10¹⁷ Cr atoms cm⁻². The retained implantation dose for tantalum and chromium was found to follow a (cos θ)^(8/3) dependence, where θ is the angle between the sample normal and the beam direction.
Abstract:
In this work, we present a neural network (NN) based method designed for 3D rigid-body registration of FMRI time series, which relies on a limited number of Fourier coefficients of the images to be aligned. These coefficients, which are comprised in a small cubic neighborhood located at the first octant of a 3D Fourier space (including the DC component), are then fed into six NNs during the learning stage. Each NN yields the estimate of one registration parameter. The proposed method was assessed for 3D rigid-body transformations, using DC neighborhoods of different sizes. The mean absolute registration errors are approximately 0.030 mm in translations and 0.030 deg in rotations, for the typical motion amplitudes encountered in FMRI studies. The construction of the training set and the learning stage are fast, requiring, respectively, 90 s and 1 to 12 s, depending on the number of input and hidden units of the NN. We believe that NN-based approaches to the problem of FMRI registration can be of great interest in the future. For instance, NNs relying on limited k-space data (possibly from navigator echoes) can be a valid solution to the problem of prospective (in-frame) FMRI registration.
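The pipeline described above (low-frequency Fourier coefficients as features, one small network per rigid-body parameter) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the neighborhood size k, the network sizes, and the use of scikit-learn's MLPRegressor are assumptions made for the sketch.

```python
# Minimal sketch of the idea described above (illustrative, not the authors' code):
# take a small cube of low-frequency 3D Fourier coefficients (first octant, DC
# included) as features, and train one small regressor per rigid-body parameter.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_features(volume: np.ndarray, k: int = 3) -> np.ndarray:
    """Return real/imaginary parts of the k x k x k lowest-frequency coefficients
    (first octant of the 3D DFT, including the DC component) as a feature vector."""
    F = np.fft.fftn(volume)
    cube = F[:k, :k, :k]
    return np.concatenate([cube.real.ravel(), cube.imag.ravel()])

# Hypothetical training data: 'volumes' are misaligned copies of a reference FMRI
# volume and 'params' is an (N, 6) array of known rigid-body parameters
# (3 translations, 3 rotations).
def train_parameter_networks(volumes, params, hidden_units=12):
    X = np.stack([fourier_features(v) for v in volumes])
    nets = []
    for j in range(6):                       # one network per registration parameter
        net = MLPRegressor(hidden_layer_sizes=(hidden_units,), max_iter=2000)
        net.fit(X, params[:, j])
        nets.append(net)
    return nets

def estimate_parameters(nets, volume):
    x = fourier_features(volume).reshape(1, -1)
    return np.array([net.predict(x)[0] for net in nets])
```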
Abstract:
Gene expression of three antioxidant enzymes, Mn superoxide dismutase (MnSOD), Cu,Zn superoxide dismutase (Cu,ZnSOD), and glutathione reductase (GR), was investigated in stationary-phase Saccharomyces cerevisiae during menadione-induced oxidative stress. Both GR and Cu,ZnSOD mRNA steady-state levels increased, reaching a plateau at about 90 min of exposure to menadione. GR mRNA induction was higher than that of Cu,ZnSOD (about 14-fold and 9-fold after 90 min, respectively). A different pattern of response was obtained for MnSOD mRNA, with a peak at about 15 min (about 8-fold higher) followed by a decrease to a plateau approximately 4-fold higher than the control value. However, these increased mRNA levels did not result in increased protein levels and activities of these enzymes. Furthermore, exposure to menadione decreased MnSOD activity to half its value, indicating that the enzyme is partially inactivated due to oxidative damage. Cu,ZnSOD protein levels were increased 2-fold, but MnSOD protein levels were unchanged after exposure to menadione in the presence of the proteolysis inhibitor phenylmethylsulfonyl fluoride. These results indicate that the rates of Cu,ZnSOD synthesis and proteolysis are increased, while the rates of MnSOD synthesis and proteolysis are unchanged by exposure to menadione. Also, the translational efficiency for both enzymes is probably decreased, since increases in protein levels when proteolysis is inhibited do not reflect the increases in mRNA levels. Our results indicate that oxidative stress modifies MnSOD, Cu,ZnSOD, and GR gene expression in a complex way, not only at the transcription level but also at the post-transcriptional, translational, and post-translational levels.
Abstract:
In the literature, the concepts of “polyneuropathy”, “peripheral neuropathy” and “neuropathy” are often mistakenly used as synonyms. Polyneuropathy is a specific term that refers to a relatively homogeneous process that affects multiple peripheral nerves. Most of these tend to present as symmetric polyneuropathies that first manifest in the distal portions of the affected nerves. Many of these distal symmetric polyneuropathies are due to toxic-metabolic causes such as alcohol abuse and diabetes mellitus. Other distal symmetric polyneuropathies may result from an overproduction of substances that cause nerve pathology, as observed in anti-MAG neuropathy and monoclonal gammopathy of undetermined significance. Other “overproduction” disorders are hereditary, as noted in the Portuguese type of familial amyloid polyneuropathy (FAP). FAP is a manifestation of a group of hereditary amyloidoses: an autosomal dominant, multisystemic disorder wherein the mutant amyloid precursor, transthyretin, is produced in excess, primarily by the liver. The liver accounts for approximately 98% of all transthyretin production. FAP is confirmed by detecting a transthyretin variant with a methionine-for-valine substitution at position 30 [TTR (Met30)]. Familial amyloidotic polyneuropathy (FAP) of the Portuguese type was first described by a Portuguese neurologist, Corino de Andrade, in 1939 and published in 1951. Most persons with this disorder are descended from Portuguese sailors who sired offspring in various locations, primarily in Sweden, Japan and Mallorca. Their descendants emigrated worldwide, such that this disorder has been reported in other countries as well. More than 2000 symptomatic cases have been reported in Portugal. FAP progresses rapidly, with an average time course from symptom onset to multi-organ involvement and death of between ten and twenty years. Treatments directed at removing this aberrant protein, such as plasmapheresis and immunoadsorption, proved to be unsuccessful. Liver transplantation has been the only effective solution, as evidenced by almost 2000 liver transplants performed worldwide. A therapy for FAP with a novel agent, “Tafamidis”, has shown some promise in ongoing phase III clinical trials. It is well recognized that regular physical activity of moderate intensity has a positive effect on physical fitness as gauged by body composition, aerobic capacity, muscular strength and endurance, and flexibility. Physical fitness has been reported to result in the reduction of symptoms and lesser impairment when performing activities of daily living. Exercise has been advocated as part of a comprehensive approach to the treatment of chronic diseases. Therefore, this chapter concludes with a discussion of the role of exercise training in FAP.