15 results for Jet Propulsion Laboratory (U.S.)

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

20.00%

Publisher:

Abstract:

This article describes an experimental study on ash deposition during the co-firing of bituminous coal with pine sawdust and olive stones in a laboratory furnace. The main objective of this study was to relate the ash deposition rates to the type of biomass burned and its thermal percentage in the blend. The thermal percentage of biomass in the blend was varied between 10% and 50% for both sawdust and olive stones. For comparison purposes, tests were also performed using only coal or only biomass. During the tests, deposits were collected with the aid of an air-cooled deposition probe placed far from the flame region, where the mean gas temperature was around 640 degrees C. A number of deposit samples were subsequently analyzed on a scanning electron microscope equipped with an energy dispersive X-ray detector. Results indicate that blending sawdust with coal decreases the deposition rate as compared with the firing of unblended coal, owing to both the low ash content of the sawdust and its low alkali content. The co-firing of coal and sawdust yields deposits with high levels of silicon and aluminium, which indicates the presence of ashes with a high fusion temperature and, thus, less capacity to adhere to the surfaces. In contrast, in the co-firing of coal with olive stones the deposition rate increases as compared with the firing of unblended coal, and the deposits produced present high levels of potassium, which tends to increase their stickiness.

Relevance:

20.00%

Publisher:

Abstract:

We use the first and second laws of thermodynamics to analyze the behavior of an ideal jet engine. Simple analytical expressions for the thermal efficiency, the overall efficiency, and the reduced thrust are derived. We show that the thermal efficiency depends only on the compression ratio r and on the velocity of the aircraft. The other two performance measures depend also on the ratio of the temperature at the turbine to the inlet temperature in the engine, T_3/T_i. An analysis of these expressions shows that it is not possible to choose an optimal set of values of r and T_3/T_i that maximize both the overall efficiency and thrust. We study how irreversibilities in the compressor and the turbine decrease the overall efficiency of jet engines and show that this effect is more pronounced for smaller T_3/T_i.
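The dependence described above can be made concrete with a short numerical sketch. It assumes the textbook ideal-Brayton relation with isentropic ram compression as a stand-in for the paper's own closed-form expressions, which are not reproduced here; the values of GAMMA, CP and the default inlet temperature are illustrative.

```python
# Illustrative ideal-turbojet thermal efficiency as a function of the
# compressor pressure ratio r and the flight velocity v (assumed standard
# Brayton relation with isentropic ram compression; not the paper's own
# derivation).  For a fixed inlet temperature, the efficiency depends only
# on r and v, as stated in the abstract.
GAMMA = 1.4      # ratio of specific heats for air
CP = 1005.0      # specific heat at constant pressure, J/(kg.K)

def thermal_efficiency(r, v, T_inlet=220.0):
    """Ideal thermal efficiency for compression ratio r, flight speed v (m/s),
    and ambient (inlet) static temperature T_inlet (K)."""
    ram_ratio = 1.0 + v**2 / (2.0 * CP * T_inlet)    # ram (stagnation) heating
    tau = ram_ratio * r ** ((GAMMA - 1.0) / GAMMA)   # overall temperature ratio
    return 1.0 - 1.0 / tau                           # ideal-Brayton efficiency

if __name__ == "__main__":
    for r in (10, 20, 30, 40):
        print(f"r = {r:2d}, eta_th = {thermal_efficiency(r, v=250.0):.3f}")
```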

Relevance:

20.00%

Publisher:

Abstract:

The growing use of underground space, particularly in urban areas densely occupied at the surface and under difficult geotechnical conditions, has demanded a continuous development of ground-improvement techniques aimed at mitigating the risk of damage to buildings. Among the various techniques in use, jet grouting has come to play a prominent role in the treatment of soils of poor geotechnical quality, owing to its versatility of application both at the surface and underground. This work surveys ground-treatment techniques based on jet grouting and their application to tunnel construction. In a first stage, it describes what jet grouting is, how the technique is applied, what its characteristics are, which properties the final product acquires, and how quality control is carried out over the whole execution process. Reference is also made to the equipment and materials used, as well as to the advantages and disadvantages of the technique. The work then addresses the problems arising from tunnel construction, the way tunnels affect the ground in which they are excavated and why reinforcement is needed, leading to the way the jet-grouting technique has been used in tunnel construction, with some practical cases cited. Finally, the most important points of the work are summarized.

Relevance:

20.00%

Publisher:

Abstract:

Water covers over 70% of the Earth's surface and is vital for all known forms of life. Yet only 3% of the Earth's water is fresh water, and less than 0.3% of all freshwater is in rivers, lakes, reservoirs and the atmosphere. Rivers and lakes are nevertheless an important part of fresh surface water, amounting to about 89%. In this Master Thesis dissertation, the focus is on three types of water bodies (rivers, lakes and reservoirs) and their water quality issues in Asian countries. The surface water quality in a region is largely determined both by natural processes, such as climate or geographic conditions, and by anthropogenic influences, such as industrial and agricultural activities or land use conversion. The quality of the water can be affected by pollutants discharged from a specific point through a sewer pipe and also by extensive drainage from agricultural or urban areas within the basin. Hence, water pollution sources can be divided into two categories: point source pollution and non-point source (NPS) pollution. Seasonal variations in precipitation and surface run-off have a strong effect on river discharge and on the concentration of pollutants in water bodies. For example, in the rainy season heavy and persistent rain washes over the ground; the runoff flow increases, may carry various kinds of pollutants and eventually enters the water bodies. In some cases, especially in confined water bodies, the quality may be positively related to rainfall in the wet season, because this type of confined freshwater system allows high dilution of pollutants, decreasing their possible impacts. During the dry season, the quality of water is largely related to pollution from industrialization and urbanization. The aim of this study is to identify the most common water quality problems in Asian countries and to enumerate and analyze the methodologies used for assessing the water quality conditions of both rivers and confined water bodies (lakes and reservoirs). Based on the evaluation of a sample of 57 papers, dated between 2000 and 2012, it was found that over the past decade the water quality of rivers, lakes and reservoirs in developing countries has been degrading. Water pollution and the destruction of aquatic ecosystems have caused massive damage to the functions and integrity of water resources. The most widespread NPS in Asian countries, and those with the greatest spatial impact, are urban runoff and agriculture. Locally, mine waste runoff and rice paddies are serious NPS problems. The most relevant point pollution sources are the effluents from factories, sewage treatment plants, and public or household facilities. It was found that the most used methodology was unquestionably monitoring, used in 49 of the analyzed studies, accounting for 86%. Sometimes, data from historical databases were used as well. Taking samples from the water body and then carrying out laboratory work (chemical analyses) is important because it gives an understanding of the water quality. Six papers (11%) used a method that combined monitoring data and modeling, and another six papers (11%) applied only a model to estimate the quality of the water. Modeling is a useful resource when the budget is limited, since some models are free to download and use. In particular, several of the models used come from the U.S.A., but they have their own purposes and features, meaning that a careful application of the models to other countries and a critical discussion of the results are crucial.
Five papers (9%) used a method combining monitoring data and statistical analysis; when there is a huge data matrix, researchers need an efficient way to interpret the information, which statistics provides. Three papers (5%) used a method combining monitoring data, statistical analysis and modeling. These different methods are all valuable for evaluating water quality. It was also found that water quality was evaluated using types of sampling other than water itself, which also provide useful information for understanding the condition of the water body. These additional monitoring activities are: air sampling, sediment sampling, phytoplankton sampling and aquatic animal tissue sampling. Despite considerable progress in developing and applying control regulations to point and NPS pollution, the pollution status of rivers, lakes and reservoirs in Asian countries is not improving. In fact, this reflects the slow pace of investment in new infrastructure for pollution control and growing population pressures. Water laws or regulations and public involvement in enforcement can play a constructive and indispensable role in environmental protection. In the near future, in order to protect water from further contamination, rapid action is needed to control the various kinds of effluents in each region. Environmental remediation and the treatment of industrial effluents and municipal wastewater are essential. It is also important to prevent the direct input of agricultural and mine site runoff. Finally, stricter environmental regulation of water quality is required to support protection and management strategies. It would have been possible to extract further information from the sample of 57 papers; for instance, it would have been interesting to compare the concentration levels of some pollutants across the different Asian countries. However, the three-month duration limit of this study prevented further work from taking place. In spite of this, the study objectives were achieved: the work provided an overview of the most relevant water quality problems in rivers, lakes and reservoirs in Asian countries, and also listed and analyzed the most common methodologies.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Electrical Engineering, branch of Automation and Industrial Electronics

Relevance:

20.00%

Publisher:

Abstract:

Funding agency: Fundação para a Ciência e a Tecnologia (FCT) - PEst-OE/FIS/UI0777/2013; CERN/FP/123580/2011; PTDC/FIS-NUC/0548/2012

Relevance:

20.00%

Publisher:

Abstract:

Formaldehyde is handled worldwide for diverse purposes, from industrial applications to health laboratory work, which reflects the economic importance of this chemical agent. Consequently, many people are exposed to formaldehyde environmentally and/or occupationally. Regarding the latter, occupational exposure limits, limit values and indoor guidelines have been recommended on the basis of threshold mechanisms. Formaldehyde is classified by the International Agency for Research on Cancer (IARC) as carcinogenic to humans (Group 1), since a wide range of epidemiological studies in occupational exposure settings have suggested possible links between the concentration and duration of exposure and elevated risks of nasopharyngeal cancer, other cancers and, more recently, leukemia. Other classifications exist: the U.S. EPA classifies formaldehyde as a B1 compound, a probable human carcinogen under conditions of unusually high or prolonged exposure, on the basis of limited evidence in humans but sufficient evidence in animals. The genotoxicity of formaldehyde is well known; it is a direct-acting genotoxic compound positively associated with almost all genetic endpoints evaluated in bacteria, yeast, fungi, plants, insects, nematodes, and cultured mammalian cells. Many human biomonitoring studies associate occupational exposure to formaldehyde with genomic instability and, consequently, with possible health effects. Besides the link with cancer, other pathologies and symptoms are associated with formaldehyde exposure, namely respiratory disorders such as asthma, and allergic contact dermatitis. Efforts are now being made to reduce formaldehyde exposure, namely indoors: Europe and the United States have developed stricter regulations on formaldehyde emissions from materials containing this agent. Despite the regulations and restrictions, formaldehyde remains difficult to eliminate or substitute, making biomonitoring an important tool to control possible future health effects.

Relevance:

20.00%

Publisher:

Abstract:

Coastal low-level jets (CLLJ) are a lower-tropospheric wind feature driven by the pressure gradient produced by a sharp contrast between high temperatures over land and lower temperatures over the sea. This contrast between the cold ocean and the warm land in summer is intensified by the impact of coast-parallel winds on the ocean, which generate upwelling currents, sharpening the temperature gradient close to the coast and giving rise to strong baroclinic structures at the coast. During summertime, the Iberian Peninsula is often under the effect of the Azores High and of a thermal low pressure system inland, leading to a seasonal wind along the west coast called the Nortada (northerly wind). This study presents a regional climatology of the CLLJ off the west coast of the Iberian Peninsula, based on a 9 km resolution downscaling dataset produced with the Weather Research and Forecasting (WRF) mesoscale model, forced by 19 years of ERA-Interim reanalysis (1989-2007). The simulation results show that the hourly frequency of occurrence of the jet in summer is above 30% and decreases to about 10% during spring and autumn. The monthly frequencies of occurrence can reach higher values, around 40% in summer months, and reveal large inter-annual variability in all three seasons. On a daily basis, the CLLJ is present in almost 70% of summer days. The CLLJ wind direction is mostly north-northeasterly, and the jet occurs more persistently in three areas where the interaction of the jet flow with local capes and headlands is more pronounced. The coastal jets in this area occur at heights between 300 and 400 m, and their speed has a mean of around 15 m/s, reaching maximum speeds of 25 m/s.
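The frequency-of-occurrence statistics quoted above can be aggregated from an hourly detection record. The sketch below assumes such a boolean mask is already available; the study's actual CLLJ detection criterion is not reproduced here, and all names and numbers in it are illustrative.

```python
# Aggregating an hourly coastal low-level jet detection mask into monthly and
# daily-base frequencies of occurrence.  `jet_hourly` is a hypothetical boolean
# series (True = jet detected in that hour); the detection criterion itself is
# assumed to have been applied beforehand.
import numpy as np
import pandas as pd

hours = pd.date_range("1989-01-01", "2007-12-31 23:00", freq="h")
jet_hourly = pd.Series(np.random.default_rng(0).random(len(hours)) < 0.2,
                       index=hours)   # placeholder mask, not real detections

# Hourly frequency of occurrence for the summer months (JJA).
summer_freq = jet_hourly[jet_hourly.index.month.isin([6, 7, 8])].mean()

# Monthly frequencies of occurrence (fraction of hours in each month).
monthly_freq = jet_hourly.resample("MS").mean()

# Daily-base frequency: fraction of days with at least one jet hour.
daily_freq = jet_hourly.resample("D").max().mean()

print(f"summer hourly frequency: {summer_freq:.2f}, daily-base: {daily_freq:.2f}")
```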

Relevance:

20.00%

Publisher:

Abstract:

In this paper, a new simulation environment for a virtual laboratory for educational purposes is presented. The Logisim platform was adopted as the base digital simulation tool, since it has a modular implementation in Java. All the hardware devices used in the laboratory course were designed as components accessible from the simulation tool and integrated as a library. Moreover, this new library allows the user to access an external interface. This work was motivated by the need to achieve better learning times in co-design projects, based on hardware and software implementations, and to reduce laboratory time, decreasing the operational costs of engineering teaching. Furthermore, the use of virtual laboratories in educational environments allows students to perform functional tests before they go to a real laboratory. Moreover, these functional tests speed up learning when a problem-based learning methodology is adopted. © 2014 IEEE.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the new internet remote laboratory (IRL), constructed at the Mechanical Engineering Department (MED) of Instituto Superior de Engenharia de Lisboa (ISEL) to teach Industrial Automation, namely electropneumatic cycles. The aim of this work was the development and implementation of a remote laboratory that is simple and effective from the user's point of view, allowing access to all of its functionalities through a web browser without having to install any other program, and giving access to all the features that students can find in the physical laboratory. With this goal in mind, a simple architecture was implemented around the new programmable logic controller (PLC) SIEMENS S7-1200 and, with the aid of several free programs and programming technologies such as JavaScript, PHP and databases, it was possible to build a remote laboratory with a simple interface for teaching industrial automation students.

Relevance:

20.00%

Publisher:

Abstract:

The interest of this study on the use of expanded cork agglomerate as an exterior wall covering derives from two factors that are critical from a sustainable-development perspective: the use of a product made of a renewable natural material, cork, and the concern to contribute to greater sustainability in construction. The study aims to assess the feasibility of this use by analyzing the corresponding behaviour under different conditions. Since this application is relatively recent, only about ten years old, there is still much to learn about the reliability of its long-term properties. In this context, the study addresses aspects, some of them poorly studied or even unknown, related to the characteristics that make the agglomerate a good choice for exterior wall covering. The analysis of these and other characteristics is being performed by testing both under actual exposure conditions, on an experimental cell at LNEC, and in the laboratory. In this paper, the main laboratory tests are presented and the results obtained are compared with the outcome of the field study. © (2015) Trans Tech Publications, Switzerland.

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral imaging sensors provide image data containing both spectral and spatial information from the Earth's surface. The huge data volumes produced by these sensors put stringent requirements on communications, storage, and processing. This paper presents a method, termed hyperspectral signal subspace identification by minimum error (HySime), that infers the signal subspace and determines its dimensionality without any prior knowledge. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. The HySime method is unsupervised and fully automatic, i.e., it does not depend on any tuning parameters. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
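A minimal sketch of the dimensionality reduction this enables, assuming the method has already returned an orthonormal basis E_k for the signal subspace; the function name, array shapes and the random stand-in basis are illustrative and not part of the published method.

```python
# Projection of hyperspectral pixels onto an identified signal subspace.
# E_k (bands x k, orthonormal columns) stands in for the subspace basis the
# method would return; Y is (bands x pixels).  Names and sizes are illustrative.
import numpy as np

def reduce_to_subspace(Y, E_k):
    """Return the k-dimensional coordinates of each pixel and its projection."""
    coords = E_k.T @ Y       # k x pixels: compact representation (storage gain)
    Y_proj = E_k @ coords    # bands x pixels: projection back onto the subspace
    return coords, Y_proj

rng = np.random.default_rng(0)
Y = rng.random((224, 1000))                   # e.g. 224 bands, 1000 pixels
E_k, _ = np.linalg.qr(rng.random((224, 10)))  # random orthonormal stand-in basis
coords, Y_proj = reduce_to_subspace(Y, E_k)
```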

Relevance:

20.00%

Publisher:

Abstract:

Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new approach, based on the minimum mean squared error, to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
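A simplified sketch of the two steps named above: estimate the noise and signal correlation matrices (here via band-wise multiple regression, one common choice) and keep the eigenvectors whose estimated signal power exceeds their estimated noise power. This follows the spirit of the least-squared-error selection but is not the paper's exact criterion, and all names are illustrative.

```python
# Simplified subspace identification in the spirit of the method described
# above: estimate noise by band-wise multiple regression, form the signal
# correlation matrix, and keep the eigenvectors whose signal power exceeds
# their noise power.  Not the paper's exact criterion.
import numpy as np

def estimate_noise_by_regression(Y):
    """Residuals of regressing each band on all the others (rough noise estimate)."""
    L, N = Y.shape
    noise = np.zeros_like(Y)
    for i in range(L):
        others = np.delete(Y, i, axis=0)                        # (L-1, N)
        beta, *_ = np.linalg.lstsq(others.T, Y[i], rcond=None)
        noise[i] = Y[i] - others.T @ beta
    return noise

def subspace_by_minimum_error(Y):
    """Return an orthonormal basis for the signal subspace and its dimension."""
    N = Y.shape[1]
    noise = estimate_noise_by_regression(Y)
    R_n = noise @ noise.T / N            # noise correlation matrix
    R_y = Y @ Y.T / N                    # observed-data correlation matrix
    R_x = R_y - R_n                      # signal correlation estimate
    w, E = np.linalg.eigh(R_x)
    E = E[:, np.argsort(w)[::-1]]        # eigenvectors, strongest signal first
    signal_power = np.einsum("li,lm,mi->i", E, R_x, E)
    noise_power = np.einsum("li,lm,mi->i", E, R_n, E)
    keep = signal_power > noise_power    # simplified selection rule
    return E[:, keep], int(keep.sum())

# Synthetic check: 3 endmembers, low noise.
rng = np.random.default_rng(0)
M, A = rng.random((30, 3)), rng.dirichlet(np.ones(3), size=2000).T
Y = M @ A + 0.005 * rng.standard_normal((30, 2000))
E_k, k_hat = subspace_by_minimum_error(Y)
print("estimated subspace dimension:", k_hat)
```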

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
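The projection loop described above can be sketched in a few lines under the pure-pixel assumption: repeatedly project the data onto a direction orthogonal to the subspace spanned by the endmembers already found and take the pixel with the most extreme projection as the next endmember. This is an illustration of the idea, not the published VCA implementation; the synthetic data at the end follow the linear mixing model with sum-to-one abundances, and all names are illustrative.

```python
# Sketch of the sequential orthogonal-projection idea described above, under
# the pure-pixel assumption.  Illustration only, not the published VCA code.
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Pick p candidate endmember signatures from Y (bands x pixels)."""
    rng = np.random.default_rng(seed)
    L = Y.shape[0]
    A = np.zeros((L, 0))                 # endmembers found so far
    picked = []
    for _ in range(p):
        # Projector onto the orthogonal complement of span(A).
        P = np.eye(L) if A.shape[1] == 0 else np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.standard_normal(L)   # direction orthogonal to found endmembers
        f /= np.linalg.norm(f)
        idx = int(np.argmax(np.abs(f @ Y)))   # extreme of the projection
        picked.append(idx)
        A = np.column_stack([A, Y[:, idx]])
    return A, picked

# Synthetic data under the linear mixing model y = M a (abundances sum to one).
rng = np.random.default_rng(1)
M = rng.random((50, 3))                           # 3 endmember signatures
A_true = rng.dirichlet(np.ones(3), size=1000).T   # abundances on the simplex
Y = M @ A_true
Y[:, :3] = M                                      # guarantee one pure pixel each
endmembers, picked = extract_endmembers(Y, p=3)
print("pure-pixel indices found:", picked)
```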

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and of geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
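The generative model assumed by DECA, abundance fractions drawn from a mixture of Dirichlet densities and pixels formed as linear mixtures of endmember signatures plus noise, can be simulated directly; the parameter values below are arbitrary illustrations rather than estimates from the paper.

```python
# Simulating the generative model described above: abundances drawn from a
# mixture of Dirichlet densities (non-negative, summing to one by construction)
# and pixels formed as linear mixtures of endmember signatures plus noise.
# All sizes and parameters are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(42)
L, p, N = 100, 3, 2000                 # bands, endmembers, pixels
M = rng.random((L, p))                 # hypothetical endmember signatures

weights = np.array([0.6, 0.4])         # mixing weights of the Dirichlet modes
alphas = np.array([[9.0, 2.0, 2.0],
                   [2.0, 2.0, 9.0]])   # Dirichlet parameters of each mode
modes = rng.choice(len(weights), size=N, p=weights)
abundances = np.vstack([rng.dirichlet(alphas[m]) for m in modes]).T   # (p, N)

Y = M @ abundances + 0.01 * rng.standard_normal((L, N))   # observed pixels
```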