7 results for 149-898B
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Eucalyptus globulus heartwood, sapwood, and their samples delignified by kraft pulping at 130, 150 and 170 degrees C over time were characterized with respect to total carbohydrates by Py-GC/MS(FID). No significant differences between heartwood and sapwood were found in the pyrolysis products and their composition. The main wood-carbohydrate-derived pyrolysis compounds were levoglucosan (25.1%), hydroxyacetaldehyde (12.5%), 2-oxo-propanal (10.3%) and acetic acid (8.7%). Levoglucosan decreased during the early stages of delignification and increased during the bulk and residual phases. Acetic acid decreased, hydroxyacetaldehyde and 2-oxo-propanal increased, and 2-furaldehyde and hydroxypropanone remained almost constant during delignification. The C/L ratio was 3.2 in wood and remained rather constant over the first pulping periods, up to a loss of 15-25% in carbohydrates and 60% in lignin. Afterwards it increased sharply to 44, which corresponds to the removal of 25-35% of carbohydrates and 95% of lignin. The selectivity of the pulping reaction toward lignin vs. polysaccharides was the same for sapwood and heartwood. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
This paper presents the new internet remote laboratory (IRL), built at the Mechanical Engineering Department (MED) of Instituto Superior de Engenharia de Lisboa (ISEL) to teach industrial automation, namely electropneumatic cycles. The aim of this work was the development and implementation of a remote laboratory that is simple and effective from the user's point of view, allowing access to all of its functionality through a web browser, without installing any other program, and giving access to all the features that students can find in the physical laboratory. With this goal in mind, a simple architecture was implemented around the new programmable logic controller (PLC) SIEMENS S7-1200; with the aid of several free programs and programming technologies such as JavaScript, PHP and databases, it was possible to build a remote laboratory, with a simple interface, for teaching industrial automation students.
Abstract:
The behavior of copper(II) complexes of pentane-2,4-dione and 1,1,1,5,5,5-hexafluoro-2,4-pentanedione, [Cu(acac)(2)] (1) and [Cu(HFacac)(2)(H2O)] (2), in ionic liquids and molecular organic solvents was studied by spectroscopic and electrochemical techniques. Electron paramagnetic resonance (EPR) characterization showed well-resolved spectra in most solvents. In general, the EPR spectra of [Cu(acac)(2)] show higher g(z) values and lower hyperfine coupling constants, A(z), in ionic liquids than in organic solvents, in agreement with longer Cu-O bond lengths and higher electron charge on the copper ion in the ionic liquids, suggesting coordination of the ionic liquid anions. For [Cu(HFacac)(2)(H2O)] the opposite was observed, suggesting that in ionic liquids there is no coordination of the anions and that the complex is tetrahedrally distorted. The redox properties of the Cu(II) complexes were investigated by cyclic voltammetry (CV) at a Pt electrode (d = 1 mm), in the bmimBF(4) and bmimNTf(2) ionic liquids and, for comparative purposes, in neat organic solvents. The neutral copper(II) complexes undergo irreversible reductions to Cu(I) and Cu(0) species both in ILs and in common organic solvents (CH2Cl2 or acetonitrile), but in ILs they are usually easier to reduce (less cathodic reduction potential) than in the organic solvents. Moreover, 1 and 2 are easier to reduce in bmimNTf(2) than in the bmimBF(4) ionic liquid. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
In this article, we aim to identify the types of representation used by students when solving two tasks involving transformation problems and, through their analysis, to discuss their role as well as some aspects of the students' additive quantitative reasoning. After first discussing what is meant by additive quantitative reasoning and by mathematical representation, we present some empirical results from a teaching experiment carried out in a public school. The results highlight the complexity inherent in the inverse reasoning present in the two situations proposed to the students. Most students preferentially used symbolic representation, also resorting to oral and written language to express the meaning attributed to their solutions. Iconic representation was used by only one pair of students, apparently in an initial situation of not understanding the problem, and after initial symbolic records that those students had erased. The use of the empty number line and of a tabular layout served as models for thinking, helping students deal with the inverse transformation. The representations played a double role: as a means of understanding the students' reasoning and as supports for the development of their mathematical thinking.
Abstract:
The integrated numerical tool SWAMS (Simulation of Wave Action on Moored Ships) is used to simulate the behavior of a moored container carrier inside Sines' Harbour. The interaction of waves, wind, currents, the floating ship and the moorings is discussed. Several scenarios, differing in the layout of the harbour and in the wind and wave conditions, are compared. The harbour layouts correspond to proposed alternatives for the future expansion of Sines' Terminal XXI that include the extension of the East breakwater and of the quay. Additionally, the influence of wind on the behavior of the moored ship and of pre-tensioning the mooring lines was analyzed. The hydrodynamic forces acting on the ship are determined using a modified version of the WAMIT model. This modified model uses the Haskind relations and the non-linear wave field inside the harbour, obtained with the finite element numerical model BOUSS-WMH (Boussinesq Wave Model for Harbors), to compute the wave forces on the ship. The time series of the moored ship motions and of the forces on the moorings are obtained with the BAS solver. © 2015 Taylor & Francis Group, London.
Abstract:
The capability to anticipate a contact with another device can greatly improve the performance and user satisfaction not only of mobile social network applications but of any other application relying on some form of data harvesting or hoarding. One of the most promising approaches to contact prediction is to extrapolate from past experiences. This paper investigates the recurring contact patterns observed between groups of devices using an 8-year dataset of wireless access logs produced by more than 70000 devices. This effort made it possible to model the probability of a contact occurring at a predefined date between groups of devices using a power-law distribution that varies with neighbourhood size and recurrence period. In the general case, the model can be used by applications that need to disseminate large datasets to groups of devices. As an example, the paper presents and evaluates an algorithm that provides daily contact predictions based on the history of past pairwise contacts and their duration. Copyright © 2015 ICST.
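As a rough illustration of the kind of predictor this abstract describes, the sketch below scores the likelihood of a repeat contact by weighting each past contact with a power-law decay of its gap in days. The exponent `alpha`, the additive scoring rule, and the function names are illustrative assumptions, not the paper's actual model.

```python
# Illustrative recurrence-based contact score. ASSUMPTION: repeat-contact
# likelihood after a gap of d days decays as d ** -alpha (a power law),
# loosely in the spirit of the abstract's model; alpha is made up.

def recurrence_weight(gap_days, alpha=1.5):
    """Unnormalized power-law weight for a contact gap of gap_days days."""
    return gap_days ** -alpha

def predict_contact_score(contact_days, today, alpha=1.5):
    """Score the likelihood of a contact on day `today` from past contact
    days: each past contact contributes a weight that decays with the gap."""
    return sum(recurrence_weight(today - d, alpha)
               for d in contact_days if d < today)

# Contacts on days 1, 2 and 7: the recent day-7 contact dominates the score.
score = predict_contact_score([1, 2, 7], today=8)
```

A real deployment would calibrate `alpha` per neighbourhood size and normalize the score into a probability, as the paper's power-law model varies along both dimensions.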
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances, indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
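The linear mixing model just described can be illustrated with a minimal two-endmember sketch. The spectra below are made-up numbers, and the closed-form sum-to-one inversion is only a toy instance of the constrained least-squares family the chapter cites [20], not the chapter's own algorithm.

```python
# Minimal sketch of the linear mixing model for a two-endmember pixel,
# with a closed-form sum-to-one least-squares inversion. All spectra and
# abundances are illustrative numbers, not real signatures.

def mix(e1, e2, a1):
    """Linear mixing: pixel = a1*e1 + (1 - a1)*e2 (noiseless)."""
    a2 = 1.0 - a1
    return [a1 * x1 + a2 * x2 for x1, x2 in zip(e1, e2)]

def unmix(y, e1, e2):
    """Recover a1 under the sum-to-one constraint by projecting
    (y - e2) onto the direction (e1 - e2)."""
    d = [x1 - x2 for x1, x2 in zip(e1, e2)]
    num = sum((yi - x2) * di for yi, x2, di in zip(y, e2, d))
    den = sum(di * di for di in d)
    return num / den

e1 = [0.9, 0.8, 0.1, 0.2]   # hypothetical endmember spectrum 1
e2 = [0.1, 0.2, 0.7, 0.9]   # hypothetical endmember spectrum 2
pixel = mix(e1, e2, a1=0.3)
print(round(unmix(pixel, e1, e2), 3))  # → 0.3 (exact in the noiseless case)
```

With noise and more endmembers this becomes a fully constrained least-squares problem (nonnegativity plus sum-to-one), which is what the cited approaches solve.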
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, under given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
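The pure-pixel assumption behind PPI/N-FINDR-type algorithms can be sketched in a toy two-endmember setting, where the simplex degenerates to a segment and its vertices are simply the two most mutually distant pixels. The data and the brute-force pair search below are illustrative only, not any of the cited algorithms.

```python
# Toy sketch of the pure-pixel assumption behind PPI/N-FINDR-style
# algorithms: for a two-endmember mixture, the most mutually distant
# pixels in the data cloud are the purest ones (the simplex vertices).

from itertools import combinations

def sq_dist(p, q):
    """Squared Euclidean distance between two pixels."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def find_two_endmembers(pixels):
    """Return the pair of pixels with maximum mutual distance."""
    return max(combinations(pixels, 2), key=lambda pq: sq_dist(*pq))

# Hypothetical pixels: mixtures a*e1 + (1-a)*e2 for a in {0, 0.25, 0.5, 1},
# so the data contain one pure pixel of each endmember.
e1, e2 = (1.0, 0.0), (0.0, 1.0)
pixels = [tuple(a * x1 + (1 - a) * x2 for x1, x2 in zip(e1, e2))
          for a in (0.0, 0.25, 0.5, 1.0)]
v1, v2 = find_two_endmembers(pixels)  # the pure pixels e2 and e1
```

If no pure pixel exists, the recovered "vertices" are only the purest mixtures, which is exactly the limitation the minimum-volume approaches try to avoid.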
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
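As a minimal illustration of the dimensionality-reduction step, the sketch below finds the leading principal component of a toy 2-D dataset by power iteration on the sample covariance. Real hyperspectral pipelines use optimized PCA/MNF/SVD implementations; the data here are made up and the fixed iteration count is a simplification.

```python
# Sketch of PCA-style dimensionality reduction: find the direction of
# maximum variance via power iteration on the sample covariance matrix.

def covariance(data):
    """Sample covariance matrix (population normalization) of row vectors."""
    n, dim = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    c = [[0.0] * dim for _ in range(dim)]
    for row in data:
        d = [x - m for x, m in zip(row, mean)]
        for i in range(dim):
            for j in range(dim):
                c[i][j] += d[i] * d[j] / n
    return c

def leading_eigvec(c, iters=100):
    """Dominant eigenvector of a symmetric matrix by power iteration."""
    v = [1.0] * len(c)
    for _ in range(iters):
        w = [sum(cij * vj for cij, vj in zip(row, v)) for row in c]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical strongly correlated 2-D data: the first principal
# component points along the diagonal direction of maximum variance.
data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9)]
pc1 = leading_eigvec(covariance(data))
```

Projecting each spectral vector onto the top few such components is what reduces the computational load of the unmixing steps and improves the SNR, as noted above.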
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.