799 results for Anaconda Reduction Works


Relevance: 20.00%

Abstract:

Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems and opens perspectives for onboard hyperspectral image processing.
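As a rough illustration of the VCA idea the abstract refers to, the sketch below iteratively projects a random direction orthogonally to the endmembers found so far and keeps the most extreme pixel. This is a minimal NumPy sketch under simplifying assumptions (noise-free data, pure pixels present); the function name is hypothetical and real VCA implementations add SNR-dependent projective steps, unlike the paper's FPGA design.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Minimal VCA-style endmember extraction (illustrative only).

    Y: (bands, pixels) data matrix; p: number of endmembers.
    Repeatedly projects a random direction orthogonally to the endmembers
    found so far and keeps the pixel with the largest projected magnitude,
    which for noise-free simplex data is a vertex (a pure pixel).
    """
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))
    idx = []
    for k in range(p):
        w = rng.standard_normal(bands)
        if k > 0:
            A = E[:, :k]
            w = w - A @ (np.linalg.pinv(A) @ w)  # project out found endmembers
        j = int(np.argmax(np.abs(w @ Y)))        # most extreme pixel
        E[:, k] = Y[:, j]
        idx.append(j)
    return E, idx
```

Because every linear functional over a simplex attains its extreme value at a vertex, each iteration picks a pure pixel as long as one exists per endmember, which is exactly the assumption the geometric algorithms in this literature rely on.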

Relevance: 20.00%

Abstract:

The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, the decomposition of pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances, which indicate the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
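The orthogonal subspace projection just described fits in a few lines of NumPy. The sketch below uses simulated signatures (all names and numbers are illustrative, not from the chapter): it builds the annihilating projector for the undesired signatures and correlates the projected pixel with the projected target.

```python
import numpy as np

# Hypothetical signatures: U holds two undesired endmembers (bands x 2),
# d is the spectral signature of interest (bands,).
rng = np.random.default_rng(0)
bands = 30
U = rng.uniform(0.1, 1.0, size=(bands, 2))
d = rng.uniform(0.1, 1.0, size=bands)

# Annihilator of U: P = I - U (U^T U)^{-1} U^T, so P @ U == 0.
P = np.eye(bands) - U @ np.linalg.inv(U.T @ U) @ U.T

# A pixel mixing the target (abundance 0.4) with the undesired signatures.
pixel = 0.4 * d + U @ np.array([0.35, 0.25])

# OSP detector: the undesired components are annihilated, leaving an
# estimate of the target abundance.
abundance = (d @ P @ pixel) / (d @ P @ d)
```

Since `P` maps both columns of `U` to zero, `P @ pixel` equals `0.4 * P @ d` exactly, and the ratio recovers the simulated abundance of 0.4 up to floating-point error.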
As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation method that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume that the data contain at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses the source dependence of hyperspectral data and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometry-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
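Under the linear mixing model with known endmember signatures, the constrained least-squares unmixing discussed above can be sketched in a few lines. In this illustrative sketch the endmember matrix is simulated (not real spectral data), nonnegativity is handled by NNLS, and the sum-to-one constraint is enforced approximately by appending a heavily weighted row of ones, a common trick rather than the chapter's own algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember matrix M (bands x endmembers) and true abundances;
# real signatures would come from a spectral library or an extraction step.
rng = np.random.default_rng(0)
M = rng.uniform(0.1, 1.0, size=(50, 3))     # 50 bands, 3 endmembers
a_true = np.array([0.6, 0.3, 0.1])          # fractional abundances (sum to 1)
y = M @ a_true + rng.normal(0.0, 1e-3, 50)  # observed pixel: y = M a + noise

# Nonnegative least squares enforces a >= 0; the sum-to-one constraint is
# approximated by augmenting the system with a heavily weighted row of ones.
delta = 1e3
M_aug = np.vstack([M, delta * np.ones((1, 3))])
y_aug = np.append(y, delta)
a_est, _ = nnls(M_aug, y_aug)
```

The larger `delta` is, the more tightly the estimated abundances sum to one, at the cost of conditioning; exact fully constrained solvers solve the equality-constrained problem directly instead.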

Relevance: 20.00%

Abstract:

The fundamental objective of this dissertation was the energy optimization of the cooling system of the ZELL fabric-impregnating machine and, as an additional objective, the assessment of the water quality of the circuit, justified by the pronounced degradation of the rolls due to corrosion caused by the recirculation of the cooling water. Initially, information on the production process was gathered to characterize the operation of the cooling system; two polyester fabrics, designated in this study as P1 and P2, and a nylon fabric, designated N, were selected. Tests were carried out, one for each fabric, at the current setpoint temperature of the water leaving the cooling tower (30 °C). A further test was performed for fabric N, but with a setpoint temperature of 37 °C, referred to as N37. In this way, the thermal power removed by the cooling water and the thermal power lost by radiation and convection were determined, and it was found that for most of the rolls the references P1 and P2 show higher values. In percentage terms, the thermal power removed by the cooling water in tractor groups 1 and 3 and in the set of rolls R1 to R29 corresponds to 48%, 10% and 70%, respectively. The assessment of the cooling requirements of the ZELL machine confirmed that the current cooling flow rates of the rolls provide more than sufficient operating conditions for the bearings. An analysis was therefore made towards reducing the total flow rate, which went from 10.25 L/s to 7.65 L/s. Considering this reduction, the flow rate of humid air to be introduced into the cooling tower was determined. The value obtained was 4.6 m³ humid air/s, corresponding to a reduction of about 32% compared with the current flow rate of 6.8 m³ humid air/s.
From the results of the analyses carried out on the water of the cooling circuit, it was concluded that both the make-up water and the recirculation water are of poor quality for use in most cooling systems, mainly because of their high iron concentration and electrical conductivity, which are responsible for intensifying corrosion inside the rolls.

Relevance: 20.00%

Abstract:

International Journal of Engineering and Industrial Management, no. 1, pp. 195-208

Relevance: 20.00%

Abstract:

The most practicable assay for measuring measles IgG (mIgG) in large numbers of sera is an enzyme immunoassay (EIA). To assess how EIA results would agree with those of the gold standard method, plaque reduction neutralization (PRN), we compared the results from the two methods in 43 pairs of maternal and umbilical cord sera, and in sera from the corresponding infants at 11-14 months of age. In maternal-cord sera, the differences between mean antibody levels by EIA and PRN were not statistically significant, though in individual sera the differences could be large. However, agreement was poorer for the infants' sera, in which levels of mIgG were very low. The conclusions of a study of transplacental transport of mIgG would not be affected by the use of either technique. When studying waning immunity in infants, PRN should be the method of choice, while results from studies using EIA should be interpreted with caution.

Relevance: 20.00%

Abstract:

Given the importance of buildings in energy use, the assessment of their energy performance is highly relevant, since buildings are largely responsible for meeting the European targets set for 2020 regarding the reduction of energy use. Considering that buildings account for 40% of total energy consumption, and with the sector expanding, this reality demands integrated architecture and engineering solutions that promote building sustainability. A study was carried out on a building consisting of two wings, an older one operating as a day-care centre and a more recent one operating as a care home, located in the municipality of Matosinhos; the points of highest energy consumption were identified, and changes were suggested to lower the energy bill. In this dissertation, dynamic simulation software was used to assess the thermal behaviour of the building under current conditions; other scenarios were then simulated with changes to the thermal envelope of the buildings and to their technical systems, which made it possible to identify several energy-efficiency improvement measures. The suggested improvement measures imply a reduction in energy use in terms of domestic hot water, natural gas and electricity consumption. Among these measures, all with a payback of under eight and a half years, the installation of flow restrictors, the replacement of the boiler and the recirculation pump, the installation of solar thermal panels and the reduction in the number of lamps stand out.

Relevance: 20.00%

Abstract:

Evidence indicates that exposure to high levels of noise adversely affects human health, and these effects depend on various factors. In hospitals, there are many sources of noise, and high levels exert an impact on patients and staff, increasing recovery time and stress, respectively. The goal of this pilot study was to develop, implement and evaluate the effectiveness of a training program (TP) on noise reduction in a Neonatal Intensive Care Unit (NICU) by comparing noise levels before and after the implementation of the program. A total of 79 health professionals participated in the study. The measurements of sound pressure levels took into account the layout of the unit and the location of the main sources of noise. General results indicated that LAeq levels before implementation of the training program were often excessive, ranging from 48.7 ± 2.94 dBA to 71.7 ± 4.74 dBA, exceeding international guidelines. Following implementation of the training program, noise levels remained broadly unchanged (54.5 ± 0.49 dBA to 63.9 ± 4.37 dBA), despite a decrease in some locations; there was no significant difference before and after the implementation of the TP. However, a significant difference was found for Lp,Cpeak before and after staff training, suggesting greater care by healthcare professionals in performing their tasks. Even recognizing that a TP is important for changing behaviors, it needs to be considered in a broader context to effectively control noise in the NICU.

Relevance: 20.00%

Abstract:

Dissertation submitted to obtain the PhD degree in Biochemistry, specialty in Physical Biochemistry, at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa

Relevance: 20.00%

Abstract:

Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms consists of optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states by trading off the dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Major value has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
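The flavour of phase (i), greedy task-to-core allocation that favours the core where each task costs the least dynamic energy, can be sketched as below. All names and the power model are illustrative simplifications (execution time assumed to scale inversely with frequency, a utilization-bound schedulability test), not the paper's actual algorithm or model.

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    freq: float       # frequency set-point (GHz)
    power: float      # active power at that set-point (W)
    capacity: float = 1.0                      # schedulable utilization bound
    tasks: list = field(default_factory=list)  # task ids placed here
    load: float = 0.0                          # accumulated utilization

def dynamic_energy(core: Core, wcet: float) -> float:
    # Energy per job: active power times execution time, where execution
    # time is assumed to scale inversely with the core frequency.
    return core.power * (wcet / core.freq)

def allocate(tasks, cores):
    """Place each (wcet, period) task on the cheapest core with room."""
    alloc = {}
    for tid, (wcet, period) in enumerate(tasks):
        # Try cores in order of increasing dynamic energy for this task.
        for core in sorted(cores, key=lambda c: dynamic_energy(c, wcet)):
            util = (wcet / core.freq) / period
            if core.load + util <= core.capacity:
                core.tasks.append(tid)
                core.load += util
                alloc[tid] = core
                break
        else:
            raise RuntimeError(f"task {tid} is unschedulable")
    return alloc

# A fast power-hungry core vs. a slower efficient core; all three tasks fit
# on the efficient core, which minimizes dynamic energy in this toy instance.
cores = [Core(freq=2.0, power=4.0), Core(freq=1.0, power=1.2)]
alloc = allocate([(2.0, 10.0), (1.0, 5.0), (4.0, 20.0)], cores)
```

A phase (ii) refinement in the spirit of the paper would then migrate tasks between cores to lengthen idle intervals and reach deeper sleep states, trading a dynamic-energy increase for a larger leakage saving.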

Relevance: 20.00%

Abstract:

Selenium-modified ruthenium electrocatalysts supported on carbon black were synthesized using NaBH4 reduction of the metal precursor. The prepared Ru/C electrocatalysts showed high dispersion and a very small average particle size. These Ru/C electrocatalysts were subsequently modified with Se following two procedures: (a) the preformed Ru/carbon catalyst was mixed with SeO2 in xylene and reduced in H2, and (b) the Ru metal precursor was mixed with SeO2 followed by reduction with NaBH4. The XRD patterns indicate that a pyrite-type structure was obtained at higher annealing temperatures, regardless of the Ru:Se molar ratio used in the preparation step. A pyrite-type structure also emerged in samples that were not calcined; however, in this case, the pyrite-type structure was only prominent for samples with higher Ru:Se ratios. The characterization of the RuSe/C electrocatalysts suggested that the Se in non-calcined samples was present mainly as an amorphous skin. A preliminary study of the activity toward the oxygen reduction reaction (ORR) using electrocatalysts with a Ru:Se ratio of 1:0.7 indicated that annealing after modification with Se had a detrimental effect on their activity. This result could be related to the increased particle size of crystalline RuSe2 in heat-treated samples. The higher activity of non-annealed RuSe/C catalysts could also be a result of the structure containing an amorphous Se skin on the Ru crystal. The electrode obtained using non-calcined RuSe showed very promising performance, with slightly lower activity and higher overpotential in comparison with a commercial Pt/C electrode. Single-wall carbon nanohorns (SWNH) were considered for application as ORR electrocatalyst supports. The characterization of SWNH was carried out regarding their tolerance toward strongly catalyzed corrosion conditions. Tests indicated that SWNH have a three times higher electrochemical surface area (ESA) loss than carbon black or commercial Pt electrodes.

Relevance: 20.00%

Abstract:

Master's Dissertation in Integrated Quality, Environment and Safety Management

Relevance: 20.00%

Abstract:

Decreased responses to hepatitis B vaccine have been associated with some host conditions, including obesity. Susceptible non-responders to a primary three-dose vaccine series should be revaccinated. Those who remain non-responders after revaccination with three vaccine doses are unlikely to develop protection with further doses. This is a description of an obese woman who received six doses of hepatitis B vaccine and persisted as a non-responder. She underwent a vertical banded gastroplasty Roux-en-Y gastric bypass (Capella's technique). After weight reduction, she received three additional doses of vaccine and seroconverted. Further studies should help clarify the need to evaluate antibody levels and eventually revaccinate the increasing population of individuals who undergo weight reduction.

Relevance: 20.00%

Abstract:

BACKGROUND AND PURPOSE: A single bout of aerobic exercise acutely decreases blood pressure, even in older adults with hypertension. Nonetheless, blood pressure responses to aerobic exercise in very old adults with hypertension have not yet been documented. Therefore, this study aimed to assess the effect of a single session of aerobic exercise on postexercise blood pressure in very old adults with hypertension. METHODS: Eighteen older adults with essential hypertension were randomized into exercise (N = 9, age: 83.4 ± 3.2 years) or control (N = 9, age: 82.7 ± 2.5 years) groups. The exercise group performed a session of aerobic exercise consisting of 2 periods of 10 minutes of walking at an intensity of 40% to 60% of the heart rate reserve. The control group rested for the same period of time. Anthropometric variables and medication status were evaluated at baseline. Heart rate and systolic and diastolic blood pressures were measured at baseline, after exercise, and at 20 and 40 minutes postexercise. RESULTS: Systolic blood pressure showed a significant group × time interaction (F(3,24) = 6.698; P = .002; ηp² = 0.153). In the exercise group, systolic blood pressure at 20 minutes (127.3 ± 20.9 mm Hg) and 40 minutes (123.7 ± 21.0 mm Hg) postexercise was significantly lower than at baseline (135.6 ± 20.6 mm Hg). Diastolic blood pressure did not change. Heart rate was significantly higher after the exercise session. In the control group, no significant differences were observed. CONCLUSIONS: A single session of aerobic exercise acutely reduces blood pressure in very old adults with hypertension and may be considered an important nonpharmacological strategy to control hypertension in this age group.

Relevance: 20.00%

Abstract:

Within the scope of the Thesis/Dissertation curricular unit of the 2nd year of the Master's in Electrical Engineering, Industrial Systems and Planning branch, of the Instituto Superior de Engenharia do Porto, this work describes a curricular internship carried out in an industrial improvement project in partnership with the Kaizen Institute, an operational consultancy firm. The project was developed at a company that produces and redistributes stationery and office articles, Firmo AVS – Papéis e Papelaria, S.A. The agreement between the Kaizen Institute and Firmo AVS was to promote and instil a culture of continuous improvement and of change in attitudes and behaviour among Firmo's employees; in an initial phase, the focus of the project was the envelope production department, designated the pilot area, with the Kaizen methodology later extended to the remaining departments. The aim of the project was to implement elementary concepts of continuous improvement at Firmo, namely some pillars or tools of Total Flow Management (TFM) and of the Kaizen Management System (KMS), in order to reduce or eliminate waste, increase employee involvement, improve communication and teamwork, standardize production processes, create work standards, use SMED tools to reduce unproductive time, and increase productivity. There were many difficulties on the ground in implementing these objectives, but through the various tools and workshops held in the organization it was possible to involve all employees and obtain satisfactory results, namely in communication and teamwork, organization and cleanliness of workstations, standard work, reduction of lead time in production processes and a consequent increase in productivity.

Relevance: 20.00%

Abstract:

The majority of infections caused by R. equi occur in hosts with some degree of cell-mediated immunodeficiency. Immunocompetent individuals are infrequently affected and usually present with localized disease. Infections of the skin or related structures are uncommon and are usually related to environmental contamination. The microbiology laboratory plays a key role in the identification of the organism, since it may be mistaken for common skin flora. We describe a 31-year-old woman without medical problems who presented nine weeks after breast reduction with right breast cellulitis and purulent drainage from the surgical wound. She underwent incision and drainage, and cultures of the wound yielded Rhodococcus equi. The patient completed six weeks of antimicrobial therapy with moxifloxacin and rifampin, with complete resolution.