Abstract:
Wireless communications have developed greatly in recent years and are nowadays present everywhere, in public and private spaces, being increasingly used for different applications. Their application in the business of sports events, as a means to improve the experience of the fans at the games, is becoming essential, for example for sharing messages and multimedia material on social networks. In stadiums, given the high density of people, the wireless networks require very large data capacity; hence, radio coverage employing many small-sized sectors is unavoidable. In this paper, an antenna is designed to operate in the 5 GHz Wi-Fi frequency band, with a directive radiation pattern suitable for this kind of application. Furthermore, despite its large bandwidth and low losses, this antenna has been developed using low-cost, off-the-shelf materials without sacrificing quality or performance, which is essential for mass production. © 2015 EurAAP.
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
In this work, plasticizing agents were incorporated into a chitosan-based formulation as a strategy to improve the fragile structure of chitosan-based materials. Three different plasticizers (ethylene glycol, glycerol, and sorbitol) were blended with chitosan to prepare 3D dense chitosan specimens. The obtained structures were assessed for their mechanical, microstructural, physical, and biocompatibility behavior. The results revealed that, among the different specimens prepared, the blend of chitosan with glycerol has superior mechanical properties and good biological behavior, making this chitosan-based formulation a good candidate for producing robust chitosan structures for the construction of bioabsorbable orthopedic implants.
Abstract:
Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
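Orthogonal subspace projection reduces to a simple linear-algebra operation. The following NumPy sketch illustrates the idea under the linear mixing model; the variable names and sizes are illustrative assumptions, not taken from the chapter or from reference 23.

import numpy as np

def osp_detector(d, U):
    """Orthogonal subspace projection (OSP) detector sketch.

    d : (L,)   target endmember signature over L spectral bands
    U : (L, q) undesired endmember signatures, one per column

    Returns a function mapping a pixel spectrum x to a detection score.
    """
    # P projects onto the orthogonal complement of span(U):
    # P = I - U (U^T U)^{-1} U^T, computed here via the pseudoinverse.
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return lambda x: d @ P @ x

# Toy example: 5 bands, one target and two undesired endmembers.
rng = np.random.default_rng(0)
d = rng.random(5)
U = rng.random((5, 2))
x = 0.6 * d + 0.3 * U[:, 0] + 0.1 * U[:, 1]   # noiseless mixed pixel
print(osp_detector(d, U)(x))                  # responds to the target only

Because PU = 0, the undesired signatures contribute nothing to the score, which is the suppression property described above.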
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
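As a concrete illustration of the dimensionality reduction step, the sketch below projects mean-removed pixel spectra onto the subspace spanned by the top singular vectors, in the spirit of the PCA/SVD projections cited above; the array shapes and names are assumptions for the example only.

import numpy as np

def reduce_dim(X, k):
    """Project L-band pixel spectra (columns of X, shape (L, N)) onto
    the k-dimensional subspace spanned by the top-k left singular
    vectors of the mean-removed data."""
    mu = X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
    E = U[:, :k]                     # orthonormal basis of the signal subspace
    return E.T @ (X - mu), E, mu     # reduced data, basis, and mean

# Toy cube: 224 bands, 1000 pixels, reduced to 5 dimensions.
X = np.random.default_rng(1).random((224, 1000))
Y, E, mu = reduce_dim(X, k=5)
print(Y.shape)   # (5, 1000)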
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
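For reference, the linear mixing model and the abundance constraints that the Dirichlet prior enforces can be written as follows; the notation is standard for this literature and is an assumption here, since the chapter's own symbols are not reproduced in this abstract:

\[ \mathbf{x} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n}, \qquad \alpha_i \ge 0 \;\; (i = 1,\dots,p), \qquad \sum_{i=1}^{p} \alpha_i = 1, \]

where \(\mathbf{x}\) is the observed L-band spectrum, \(\mathbf{M}\) the L-by-p matrix of endmember signatures, \(\boldsymbol{\alpha}\) the abundance vector, and \(\mathbf{n}\) the noise. A Dirichlet density is supported exactly on this simplex,

\[ p(\boldsymbol{\alpha}) = \frac{\Gamma\!\left(\sum_{i=1}^{p}\theta_i\right)}{\prod_{i=1}^{p}\Gamma(\theta_i)} \prod_{i=1}^{p} \alpha_i^{\theta_i - 1}, \]

which is why a mixture of Dirichlet sources builds positivity and full additivity into the model rather than imposing them afterwards.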
Abstract:
Final Master's project for obtaining the degree of Master in Electronics and Telecommunications Engineering.
Abstract:
Dissertation presented to obtain a Ph.D. degree in Engineering and Technology Sciences, Biotechnology at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa
Abstract:
In this paper, we address an order processing optimization problem known as the minimization of open stacks problem (MOSP). We present an integer programming model, based on the existence of a perfect elimination scheme in interval graphs, which finds an optimal sequence for the customers' orders.
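To make the objective concrete: given a 0/1 customer-by-product demand matrix, a customer's stack stays open from the first to the last scheduled product it requires, and the goal is a sequence minimizing the peak number of simultaneously open stacks. The brute-force Python sketch below illustrates the objective only; it is not the paper's integer programming model, and the tiny demand matrix is invented for the example.

from itertools import permutations

# Toy instance: demand[c][p] = 1 if customer c's order needs product p.
demand = [
    [1, 0, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]

def max_open_stacks(seq):
    """Peak number of simultaneously open stacks for a product sequence."""
    first, last = {}, {}
    for pos, p in enumerate(seq):
        for c, row in enumerate(demand):
            if row[p]:
                first.setdefault(c, pos)   # stack opens at the first needed product
                last[c] = pos              # ... and closes at the last one
    return max(
        sum(1 for c in first if first[c] <= pos <= last[c])
        for pos in range(len(seq))
    )

# Exhaustive search is feasible only for tiny instances, hence the IP model.
best = min(permutations(range(len(demand[0]))), key=max_open_stacks)
print(best, max_open_stacks(best))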
Abstract:
Presented at the Faculdade de Ciências e Tecnologias, Universidade de Lisboa, to obtain the Master's degree in Conservation and Restoration of Textiles.
Abstract:
Accumulation of microcystin-LR (MC-LR) in edible aquatic organisms, particularly in bivalves, is widely documented. In this study, we investigated the effects of food storage and processing conditions on the free MC-LR concentration in clams (Corbicula fluminea) fed MC-LR-producing Microcystis aeruginosa (1 × 10⁵ cells/mL) for four days, as well as the bioaccessibility of MC-LR after in vitro proteolytic digestion. The concentration of free MC-LR in clams decreased progressively over time with unrefrigerated and refrigerated storage and increased with frozen storage. Overall, cooking for short periods of time resulted in a significantly higher concentration (P < 0.05) of free MC-LR in clams, specifically microwave (MW) radiation treatment for 0.5 min (57.5%) and 1 min (59%) and boiling treatment for 5 min (163.4%) and 15 min (213.4%). The bioaccessibility of MC-LR after proteolytic digestion was reduced to 83%, potentially because of MC-LR degradation by pancreatic enzymes. Our results suggest that risk assessment based on a direct comparison between MC-LR concentrations determined in raw food products and the tolerable daily intake (TDI) value set for MC-LR might not be representative of true human exposure.
Abstract:
PLoS ONE, 4(11): Art. e7722
Abstract:
PLoS ONE, 4(8): Art. e6820
Abstract:
With the need to find a more advantageous way of joining components, adhesive bonding emerged. In recent years, the use of adhesive joints in industrial applications has been increasing, replacing some traditional joining methods, as they offer advantages such as reduced stress concentrations, low weight, and ease of processing/manufacture. Their study makes it possible to predict their strength and durability. This work concerns the study of single-lap joints (SLJ), to which commercial adhesives are applied ranging from brittle and stiff, such as Araldite® AV138, to more ductile adhesives, such as Araldite® 2015 and Sikaforce® 7888. These are applied to aluminium substrates (AL6082-T651) in joints with different geometries and different overlap lengths (L), subjected to tensile loading. The experimental values provided were analyzed and subsequently compared with different numerical methods based on Finite Elements (FE). The comparison was carried out using a cohesive damage model (CDM) analysis and the stress- and strain-based criteria of the Extended Finite Element Method (XFEM). The use of these numerical methods, capable of simulating joint behavior, can lead to savings in resources and time. The CDM analysis revealed that this method is quite accurate, except for highly ductile adhesives; applying a different cohesive law may solve this problem. In turn, the XFEM analysis showed that this technique is not particularly suitable for mixed-mode damage growth and that, compared with CDM, its accuracy is considerably lower.
Abstract:
Maintaining a high level of data security with a low impact on system performance is especially challenging in wireless multimedia applications. The protocols used for wireless local area network (WLAN) security are known to significantly degrade performance. In this paper, we propose an enhanced security system for a WLAN. Our new design aims to decrease the processing delay and increase both the speed and the throughput of the system, thereby making it more efficient for multimedia applications. Our design is based on the idea of offloading computationally intensive encryption and authentication services to the end systems' CPUs. The security operations are performed by the host's central processor (usually a powerful processor) before delivering the data to the wireless card (which usually has a low-performance processor). By adopting this design, we show that both the delay and the jitter are significantly reduced. At the access point, we improve the performance of the network processing hardware for real-time cryptographic processing by using a specialized processor implemented with field-programmable gate array technology. Furthermore, we use enhanced techniques to implement the Counter (CTR) Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) and the CTR protocol. Our experiments show that data encryption and authentication take 20–40 μs on different end-host CPUs (e.g., Intel Core i5, i7, and AMD 6-Core), compared with 10–50 ms when performed by the wireless card. Furthermore, compared with standard Wi-Fi Protected Access II (WPA2), results show that our proposed security system improves the speed by up to 3.7 times.
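The host-side idea can be sketched in a few lines: encrypt and authenticate the payload with AES-CCM (the cipher mode underlying CCMP) on the host CPU and only then hand the frame to the card. The sketch below uses the Python cryptography package purely for illustration; the key size, frame layout, and payload size are assumptions, and the timing printed is only indicative of host-CPU cost, not a reproduction of the paper's measurements.

import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key, tag_length=8)   # CCMP uses an 8-byte MIC
nonce = os.urandom(13)             # CCMP nonces are 13 bytes
header = os.urandom(24)            # stand-in for the MPDU header (authenticated only)
payload = os.urandom(1500)         # roughly MTU-sized payload

t0 = time.perf_counter()
ciphertext = aead.encrypt(nonce, payload, header)   # encrypt + authenticate on the host CPU
t1 = time.perf_counter()
print(f"encrypt+authenticate on host CPU: {(t1 - t0) * 1e6:.1f} us")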
Abstract:
Using low-cost portable devices that enable a single analytical step for screening environmental contaminants is in high demand today. This concept is tried out here by recycling screen-printed electrodes that were to be disposed of and by choosing as the sensory element a low-cost material offering a specific response to an environmental contaminant. Microcystins (MCs) were used as the target analyte, as they are dangerous toxins produced by cyanobacteria and released into water bodies. The sensory element was a plastic antibody designed by surface imprinting with carefully selected monomers to ensure a specific response. These were designed on the walls of carbon nanotubes, taking advantage of their exceptional electrical properties. The stereochemical ability of the sensory material to detect MCs was checked by preparing blank materials in which the imprinting stage was carried out without the template molecule. The novel sensory material for MCs was introduced into a polymeric matrix and evaluated by potentiometric measurements. Nernstian response was observed from 7.24 × 10⁻¹⁰ to 1.28 × 10⁻⁹ M in buffer solution (10 mM HEPES, 150 mM NaCl, pH 6.6), with average slopes of −62 mV decade⁻¹ and detection capabilities below 1 nM. The blank materials were unable to provide a linear response against log(concentration), showing only a slight potential change towards more positive potentials with increasing concentration (while that of the plastic antibodies moved to more negative values), with a maximum rate of +33 mV decade⁻¹. The sensors presented good selectivity towards sulphate, iron and ammonium ions, as well as chloroform and tetrachloroethylene (TCE), and a fast response (<20 s). The concept was successfully tested on the analysis of spiked environmental water samples. The sensors were further applied onto recycled chips, comprising one site for the reference electrode and two sites for different selective membranes, in a biparametric approach for in situ analysis.
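For context, the reported slope can be compared with the theoretical Nernstian value. For an ion of charge \(z_i\), the electrode potential varies with activity \(a_i\) as (standard electrochemistry, not taken from the paper):

\[ E = E^{0} + \frac{2.303\,RT}{z_i F}\,\log a_i, \qquad \frac{2.303\,RT}{F} \approx 59.2\ \text{mV at } 25\,^{\circ}\mathrm{C}, \]

so a singly charged anionic response (\(z_i = -1\)) predicts a slope of about −59 mV decade⁻¹, close to the −62 mV decade⁻¹ average slope reported above.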