994 results for Linear Viscoelastic Materials
Abstract:
This paper presents the main results obtained within an ongoing project that aims to contribute to the valorization, in construction materials, of a waste generated by the Portuguese oil company. This waste is an aluminosilicate with high pozzolanic reactivity. Several technological applications have already been tested successfully, both in terms of properties and of compliance with the corresponding standard specifications. In particular, the project has already demonstrated that this waste can be used in traditional concrete, self-compacting concrete, mortars (renders, masonry mortars, concrete repair mortars), as a main cement constituent, and in alkali-activated binders.
Abstract:
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained on real hyperspectral datasets reveal significant speedups, up to 164 times, with respect to an optimized serial implementation.
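Per-pixel unmixing against a spectral library is embarrassingly parallel, which is what the pixel-by-pixel GPU mapping above exploits. A minimal serial sketch of the idea (illustrative only; the function name and the use of SciPy's `nnls` for the non-negativity constraint are my own choices, not the paper's code):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixelwise(Y, A):
    """Non-negative least-squares unmixing, one pixel at a time.

    Y: (bands, pixels) observed spectra; A: (bands, m) spectral library.
    Returns an (m, pixels) abundance matrix. Each pixel is solved
    independently, which is what makes the GPU pixel-by-pixel mapping natural.
    """
    X = np.empty((A.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        X[:, j], _ = nnls(A, Y[:, j])  # abundances constrained to be >= 0
    return X
```

In a GPU implementation each pixel's solve would run in its own thread; the serial loop above only shows the data decomposition.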
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
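The linear mixing model described above can be written as y = M a + n, where the columns of M are the endmember signatures and the abundance vector a is non-negative and sums to one. A tiny numeric illustration (all spectra are made up, not real materials):

```python
import numpy as np

# Linear mixing model: y = M @ a, with a >= 0 and a.sum() == 1.
M = np.array([[0.9, 0.1, 0.4],
              [0.8, 0.2, 0.5],
              [0.2, 0.7, 0.6]])   # columns: 3 endmember signatures over 3 bands
a = np.array([0.6, 0.3, 0.1])    # abundance fractions: non-negative, sum to 1
y = M @ a                        # observed mixed-pixel spectrum (noiseless)
```

Unmixing is the inverse problem: recover M and/or a from observed spectra y.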
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data: since the sum of the abundance fractions is constant, they are statistically dependent. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
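The skewer-projection step of PPI described above can be sketched as follows. This is an illustrative simplification (the function name and defaults are my own, and the MNF preprocessing step is omitted):

```python
import numpy as np

def ppi_scores(Y, n_skewers=1000, seed=0):
    """Pixel Purity Index sketch: project all spectra onto random unit
    directions (skewers) and count how often each pixel is an extreme.

    Y: (bands, pixels). Returns an integer score per pixel; the
    highest-scoring pixels are the candidate pure pixels.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    skewers = rng.standard_normal((n_skewers, d))
    proj = skewers @ Y                  # (n_skewers, pixels) projections
    scores = np.zeros(n, dtype=int)
    for p in proj:
        scores[np.argmin(p)] += 1       # extreme at one end of the skewer
        scores[np.argmax(p)] += 1       # extreme at the other end
    return scores
```

Because the extremes of a linear projection over a point cloud are attained at vertices of its convex hull, pure pixels (simplex vertices) accumulate the counts while interior mixed pixels score zero.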
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
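The iterative projection-and-extreme idea can be sketched as below. This is a deliberately simplified illustration of that single step, not the full VCA algorithm (no SNR-dependent dimensionality reduction, no affine projection), and the function name is my own:

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Simplified VCA-style extraction (illustrative only).

    Iteratively draws a direction orthogonal to the subspace spanned by the
    endmembers found so far; the pixel with the extreme projection becomes
    the next endmember. Assumes pure pixels are present in Y (bands, pixels).
    Returns the column indices of the p selected pixels.
    """
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    idx = []
    E = np.zeros((d, 0))
    for _ in range(p):
        w = rng.standard_normal(d)
        if E.shape[1] > 0:
            # remove the component of w lying in span(E)
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        idx.append(int(np.argmax(np.abs(w @ Y))))
        E = Y[:, idx]
    return idx
```

Since the extreme of a (generic) linear projection over the data simplex is attained at a vertex, each iteration picks a pure pixel not yet selected.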
Abstract:
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
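The Dirichlet modeling assumption used by DECA is convenient precisely because Dirichlet samples satisfy the abundance constraints by construction, as a quick check shows (parameters chosen arbitrarily for illustration):

```python
import numpy as np

# Abundance vectors drawn from a Dirichlet density are non-negative and
# sum to one by construction, matching the acquisition-process constraints.
rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 3.0])             # illustrative parameters
abundances = rng.dirichlet(alpha, size=1000)  # (pixels, endmembers)
```

The mean abundance of component i is alpha[i] / alpha.sum(), so the parameters also control which endmembers dominate the scene.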
Abstract:
In Angola, only about 30% of the population has access to electricity, a figure that drops below 10% in the more remote rural areas. The problem is aggravated by the fact that, in most cases, the existing infrastructure is damaged or has not kept pace with the development of the region. This is particularly true of the Angolan capital, Luanda, which, despite being the smallest province of Angola, currently has the highest population density. With a population of about 5 million inhabitants, not only are power supply failures frequent, but there is also a considerable percentage of municipalities that the electricity grid has not yet even reached. The government of Angola, in its effort to grow and exploit the country's enormous potential, has defined the energy sector as one of the critical factors for the country's sustainable development, making it one of the priority axes until 2016. There are clear objectives regarding the rehabilitation and expansion of the electricity sector's infrastructure: increasing the country's installed capacity and creating an adequate national grid, with the aim not only of improving the quality and reliability of the existing network but also of extending it. This dissertation consisted of gathering real data on Luanda's electricity distribution network, analyzing and planning the most pressing expansion needs, choosing the locations where new substations can feasibly be sited, modeling the real problem adequately, and proposing an optimal solution for the expansion of the existing network. After analyzing different mathematical models applied to the distribution network expansion problem found in the literature, a mixed-integer linear programming (MILP) model was chosen and proved adequate.
Once the model of the problem had been developed, it was solved using the optimization software Analytic Solver and CPLEX. To validate the results obtained, the network solution was implemented in the PowerWorld 8.0 OPF simulator, software that allows simulation of the system's power flow operation.
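A mixed-integer linear programming formulation of the substation-siting flavor described above can be sketched with SciPy's `milp` (HiGHS backend) on a toy instance. All costs and dimensions here are hypothetical; this is not the dissertation's actual model or data:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy siting MILP: choose which substations to build (y_j) and which
# substation serves each load (x_ij), minimizing build + connection cost.
fixed = np.array([100.0, 120.0])        # build cost per candidate substation
conn = np.array([[10.0, 40.0],          # conn[i, j]: cost of serving load i
                 [30.0, 15.0],          # from substation j
                 [25.0, 18.0]])
n_loads, n_subs = conn.shape
# Variable vector: [y_0, y_1, x_00, x_01, x_10, x_11, x_20, x_21]
c = np.concatenate([fixed, conn.ravel()])

# Each load is served by exactly one substation: sum_j x_ij = 1.
A_eq = np.zeros((n_loads, c.size))
for i in range(n_loads):
    A_eq[i, n_subs + i * n_subs: n_subs + (i + 1) * n_subs] = 1.0

# Loads may only be assigned to built substations: x_ij - y_j <= 0.
A_link = np.zeros((n_loads * n_subs, c.size))
for i in range(n_loads):
    for j in range(n_subs):
        r = i * n_subs + j
        A_link[r, n_subs + r] = 1.0
        A_link[r, j] = -1.0

res = milp(c,
           constraints=[LinearConstraint(A_eq, 1.0, 1.0),
                        LinearConstraint(A_link, -np.inf, 0.0)],
           integrality=np.ones(c.size),   # all variables integer (binary here)
           bounds=Bounds(0.0, 1.0))
# res.x[:n_subs] tells which substations to build; res.fun is the total cost.
```

With these numbers the solver builds only the first substation and serves all three loads from it, for a total cost of 165.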
Abstract:
Rehabilitation is becoming increasingly common in the construction sector in Portugal. The introduction of newer construction materials and the technical know-how of integrating different materials to achieve desired engineering goals are important steps in the development of the sector. The wood industry is also adapting to composite technologies, with the introduction of so-called "highly engineered wood products" and the use of modification treatments. This work examines the viability of using stainless steel and glass fibre reinforced polymer (GFRP) as reinforcements in wood beams. The thesis specifically focuses on the flexural behaviour of unmodified and modified Portuguese pine wood beams. Two types of modification were used: 1,3-dimethylol-4,5-dihydroxyethyleneurea (DMDHEU) resin and amide wax. The behaviour of the material was analysed with a nonlinear model, which simulates the behaviour of the reinforced wood beams under flexural loading. Small-scale beams (1:15) were tested in flexural bending, and the experimental results were compared with the analytical model results. The experiments confirm the viability of the reinforcing schemes and the working procedures. Experimental results showed fair agreement with the nonlinear model. A strength increase between 15% and 80% was achieved. Stiffness increased by 40% to 50% in beams reinforced with steel, but no significant increase was achieved with the glass fibre reinforcement.
Abstract:
The work described here comprises the development of a plastic antibody (MIP, Molecularly Imprinted Polymer) for carcinoembryonic antigen (CEA) and its application in the construction of portable, small, low-cost devices, aimed at monitoring this colorectal cancer biomarker in a point-of-care (POC) context. The plastic antibody was obtained by oriented molecular imprinting technology, based on electropolymerization on a conductive FTO-coated glass surface. In general terms, the process started with the electropolymerization of aniline on the glass, followed by adsorption of the biomarker (CEA) onto the polyaniline film, with or without positively charged monomers (vinylbenzyltrimethylammonium chloride, VB). The last step consisted of the electropolymerization of o-phenylenediamine (oPD) on the surface, followed by removal of the protein through cleavage of peptide bonds with the aid of trypsin. The efficiency of imprinting the CEA biomarker in the polymeric material was controlled by preparing an analogous material, NIP (Non-Imprinted Polymer), in which neither the protein nor the VB monomer was present. The materials obtained were chemically characterized by Fourier transform infrared spectroscopy (FTIR) and confocal Raman microscopy. The sensing materials prepared were then included in plasticized poly(vinyl chloride) (PVC) membranes to build (biomimetic) CEA-selective sensors, and their analytical response was evaluated in different media. A good potentiometric response was obtained in 4-(2-hydroxyethyl)piperazine-1-ethanesulfonic acid (HEPES) buffer, at pH 4.4, with a selective membrane based on the MIP prepared with the charged monomer VB.
The detection limit was below 42 pg/mL, with linear behavior (versus the logarithm of concentration) up to 625 pg/mL, an anionic slope of -61.9 mV/decade, and r² > 0.9974. The analytical behavior of the biomimetic sensors was also evaluated in urine, in view of their application to CEA analysis in urine. In this case, the detection limit was below 38 pg/mL, for a linear response up to 625 pg/mL, with a slope of -38.4 mV/decade and r² > 0.991. Overall, the experimental application of the biomimetic sensors showed accurate responses, suggesting that the developed biosensors warrant further studies aimed at their application to samples from diseased individuals.
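The mV/decade figure reported above is simply the least-squares slope of the measured potential against log10 of concentration. A minimal sketch (function name and data are illustrative, not the thesis's measurements):

```python
import numpy as np

def slope_per_decade(conc, emf):
    """Least-squares slope of EMF (mV) versus log10(concentration): the
    'mV per decade' figure reported for potentiometric sensors."""
    x = np.log10(np.asarray(conc, dtype=float))
    slope, intercept = np.polyfit(x, np.asarray(emf, dtype=float), 1)
    return slope, intercept
```

A slope near the Nernstian value (about 59 mV/decade at 25 °C for a monovalent ion, negative for anionic response) indicates near-ideal electrode behavior.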
Public sector size and economic growth: a non-linear relationship in the EU-15?
Abstract:
The Member States of the European Union have been concerned with reducing the size of public administration in the economy, while making it much more efficient, in order to promote economic growth. This article analyzes the relationship between public expenditure and economic growth in 14 of the EU-15 Member States, with the aim of determining the optimal size of government, taking the Armey curve as the theoretical basis. The results, for the period 1965-2007, suggest a growth-maximizing public sector size of 47.37% of GDP when measured by total public expenditure, and 22.17% of GDP when measured by public consumption.
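The Armey curve analysis amounts to fitting an inverted-U quadratic of growth on government size and reading off the vertex. A minimal sketch with synthetic data (the function name and numbers are illustrative, not the article's dataset):

```python
import numpy as np

def armey_optimum(gov_share, growth):
    """Fit the inverted-U (Armey curve) quadratic
    growth = b0 + b1 * g + b2 * g**2
    and return the growth-maximizing government size g* = -b1 / (2 * b2)."""
    b2, b1, b0 = np.polyfit(gov_share, growth, 2)
    return -b1 / (2.0 * b2)

# Synthetic series peaked at a 47.37% government share, for illustration.
g = np.linspace(20.0, 70.0, 11)
growth = 5.0 - 0.004 * (g - 47.37) ** 2
```

The vertex formula follows from setting the derivative b1 + 2*b2*g to zero; with b2 < 0 this is a maximum.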
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Conservation and Restoration.
Abstract:
Their excellent mechanical properties, combined with low weight, make composite materials among the most attractive in our technological society. The growing use of these materials and the excellent results obtained mean that they are now used in complex, safety-critical structures, so machining becomes necessary to allow parts to be joined. Drilling is the most frequent machining process. The machining of composites is based on the conventional methods used for metallic materials. The process must, however, be suitably adapted, both in terms of parameters and of the tools used. The characteristics of composite materials are quite particular, so when machined they may exhibit defects such as delamination, intralaminar cracks, fiber pull-out, or damage by overheating. Visual inspection is sometimes insufficient to detect this damage, and specific damage-analysis processes are required. Some studies have already aimed at obtaining quality holes in composites with minimal damage, but the available information still cannot be compared with what exists for the machining of metals and metal alloys. There is thus still a long way to go before the degree of confidence in the use of these materials approaches that of metallic materials. The experimental work developed in this thesis was essentially based on the drilling of laminated plates and the subsequent analysis of the damage caused by this operation. Special attention was given to the measurement of drilling-induced delamination and to the mechanical strength of the material after machining.
The materials used in this experimental work were carbon/epoxy composite plates with two different fiber orientations: unidirectional and cross-ply. Little information about their characteristics could be obtained from the supplier, so tests were carried out to determine the elastic modulus. Regarding tensile strength, as already mentioned, the high strength of the material, combined with the limitations of the testing machine, did not allow conclusive values to be reached. Three different tool geometries were used: twist, Brad, and Step. The tool materials were high-speed steel (HSS) and tungsten carbide for the 118º point-angle twist drills, and tungsten carbide only for the Brad and Step drills. Diamond tools were not considered in this work: although their good characteristics for machining composites are well known, their high cost does not justify their choice, at least in an academic work such as this. The advantages and disadvantages of each geometry and tool material were evaluated, both with respect to delamination and to the mechanical strength of the tested specimens. X-ray radiography was used to determine the delamination values. Existing knowledge of this process allowed some parameters to be defined (for example, the exposure time of the plates to the contrast liquid), which made it feasible to obtain images of the drilled plates. By importing these images into CAD software (in this case, AutoCAD), it was possible to measure the delaminated areas and obtain values for the delamination factor of each hole. Once this process was complete, all plates were subjected to bearing tests in order to evaluate how the machining parameters affected the mechanical strength of the material.
In summary, the objectives of this work are: to characterize cutting conditions in composite materials, specifically carbon-fiber-reinforced plastics (CFRP); to characterize the typical damage caused by drilling these materials; to develop non-destructive (X-ray) analysis for evaluating drilling-induced damage; to review existing models based on linear elastic fracture mechanics (LEFM); and to define a set of ideal machining parameters that minimize damage, taking into account the results of the force measurements, the non-destructive analysis, and comparison with existing, known damage models.
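The delamination factor mentioned above is commonly defined as Fd = Dmax / D0, the ratio between the maximum diameter of the delaminated zone and the nominal hole diameter. A one-line sketch (the values are hypothetical, not the thesis's measurements):

```python
def delamination_factor(d_max, d_nominal):
    """Conventional delamination factor Fd = Dmax / D0: ratio of the maximum
    damaged diameter around the hole to the nominal drill diameter."""
    return d_max / d_nominal

# Hypothetical measurement: 6.6 mm damage zone around a 6.0 mm hole.
fd = delamination_factor(6.6, 6.0)
```

Fd = 1 means no visible delamination; larger values quantify the damage extent measured on the X-ray images.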
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Conservation and Restoration, with specialization in painting on canvas.
Abstract:
In this manuscript we tackle the problem of semidistributed user selection with distributed linear precoding for sum-rate maximization in multiuser multicell systems. A set of adjacent base stations (BSs) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling; no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user, while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes, distributed zero forcing (DZF) and distributed virtual signal-to-interference-plus-noise ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be matched to the type of precoding implemented, we develop metrics of compatibility (estimates of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
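The effective channel gain after precoding (the quantity the compatibility metrics above estimate) can be illustrated with a generic centralized zero-forcing sketch; this is not the paper's distributed DZF/DVSINR schemes, and the function name is my own:

```python
import numpy as np

def zf_effective_gains(H):
    """Zero-forcing precoding sketch. H is a (K, M) channel matrix (row k is
    user k's channel, with K <= M transmit antennas). The pseudo-inverse
    yields beamformers that null inter-user interference; columns are
    normalized to unit power. Returns the effective channel gains
    |h_k @ w_k| and the full effective-channel matrix H @ W."""
    W = np.linalg.pinv(H)                  # (M, K), satisfies H @ W = I
    W = W / np.linalg.norm(W, axis=0)      # unit-norm beamformers
    G = H @ W                              # diagonal: per-user effective channels
    return np.abs(np.diag(G)), G
```

The off-diagonal entries of H @ W are (numerically) zero, showing the interference suppression; the diagonal magnitudes are the effective channel gains that drive user selection.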
Abstract:
This work uses surface imprinting to design a novel smart plastic antibody material (SPAM) for haemoglobin (Hb). Charged binding sites are described here for the first time to tailor plastic antibody nanostructures for a large protein such as Hb. Its application to the design of small, portable and low-cost potentiometric devices is presented. The SPAM material was obtained by linking Hb to silica nanoparticles and allowing its ionic interaction with charged vinyl monomers. A neutral polymeric matrix was created around these, and the imprinted protein was removed. Additional materials were designed in parallel to act as controls: a neutral imprinted material (NSPAM), obtained by removing the charged monomers from the procedure, and the non-imprinted (NI) versions of SPAM and NSPAM, obtained by removing the template. SEM analysis confirmed the surface modification of the silica nanoparticles. All materials were mixed with PVC/plasticizer and applied as selective membranes in potentiometric transduction. Electromotive force (EMF) variations were detected only for selective membranes having a lipophilic anionic additive in the membrane. The presence of Hb inside these membranes was evident and confirmed by FTIR, optical microscopy and Raman spectroscopy. The best performance was found for SPAM-based selective membranes with an anionic lipophilic additive, at pH 5. The limits of detection were 43.8 mg mL−1, and linear responses were obtained down to 83.8 mg mL−1, with an average cationic slope of +40 mV per decade. Good selectivity was also observed against other coexisting biomolecules. The analytical application was conducted successfully, showing accurate and precise results.
Abstract:
This work presents a novel surface Smart Polymer Antibody Material (SPAM) for carnitine (CRT, a potential biomarker of ovarian cancer), tested for the first time as an ionophore in potentiometric electrodes of unconventional configuration. The SPAM material consisted of a 3D polymeric network created by surface imprinting on graphene layers. The polymer was obtained by radical polymerization of (vinylbenzyl)trimethylammonium chloride and 4-styrenesulfonic acid (signaling the binding sites), and vinyl pivalate and ethylene glycol dimethacrylate (surroundings). A non-imprinted material (NIM) was prepared as a control by excluding the template from the procedure. These materials were then used to produce several plasticized PVC membranes, testing the relevance of including the SPAM as ionophore and the need for a charged lipophilic additive. The membranes were cast over solid conductive supports of graphite or ITO/FTO. The effect of pH upon the potentiometric response was evaluated over a range of pHs (2-9) with different buffer compositions. Overall, the best performance was achieved for membranes with the SPAM ionophore, having a cationic lipophilic additive, tested in HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid) buffer at pH 5.1. Better slopes were achieved when the membrane was cast on conductive glass (-57.4 mV/decade), while the best detection limits were obtained for graphite-based conductive supports (3.6 × 10−5 mol/L). Good selectivity was observed against BSA, ascorbic acid, glucose, creatinine and urea, tested at concentrations up to their normal physiological levels in urine. The application of the devices to the analysis of spiked samples showed recoveries ranging from 91% (± 6.8%) to 118% (± 11.2%). Overall, the combination of the SPAM sensory material with a suitable selective membrane composition and electrode design has led to a promising tool for point-of-care applications.