102 results for calcium-sensing receptor
Abstract:
The IEEE 802.15.4 protocol has been adopted as a communication standard for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears to be a promising candidate solution for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is its flexibility and potential for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, faithful to the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U), a combination of two or more metrics, to determine the offered load range for optimal network behavior. We show that the optimal network performance using slotted CSMA/CA occurs for offered loads in the range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
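The utility metric described above can be sketched in a few lines: U is proportional to throughput divided by average delay, evaluated across offered loads. The (offered load, throughput, delay) samples below are purely illustrative placeholders, not results from the paper.

```python
def utility(throughput, delay):
    """Utility U proportional to throughput S divided by average delay D."""
    return throughput / delay

# Hypothetical operating points: (offered load %, throughput %, delay in ms).
# These numbers are invented for illustration only.
samples = [(20, 18.0, 12.0), (35, 30.0, 15.0), (50, 38.0, 18.0),
           (60, 40.0, 24.0), (80, 36.0, 55.0)]

# Pick the offered load that maximizes U = S / D
best = max(samples, key=lambda p: utility(p[1], p[2]))
```

With these placeholder numbers, the maximum of U falls inside the 35-60% band the paper identifies.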
Abstract:
This project was developed within the ART-WiSe framework of the IPP-HURRAY group (http://www.hurray.isep.ipp.pt) at the Polytechnic Institute of Porto (http://www.ipp.pt). The ART-WiSe (Architecture for Real-Time communications in Wireless Sensor networks) framework (http://www.hurray.isep.ipp.pt/art-wise) aims at providing new communication architectures and mechanisms to improve the timing performance of Wireless Sensor Networks (WSNs). The architecture is based on a two-tiered protocol structure relying on existing standard communication protocols, namely IEEE 802.15.4 (Physical and Data Link Layers) and ZigBee (Network and Application Layers) for Tier 1, and IEEE 802.11 for Tier 2, which serves as a high-speed backbone for Tier 1 without energy consumption restrictions. In this context, an application test-bed is being developed with the objectives of implementing, assessing and validating the ART-WiSe architecture. For the ZigBee protocol in particular, even though there is a strong commercial push from the ZigBee Alliance (http://www.zigbee.org), there is at this moment neither an open-source implementation available to the community nor any publication on its adequacy for larger-scale WSN applications. This project aims at filling these gaps by providing: a deep analysis of the ZigBee specification, mainly addressing the Network Layer and particularly its routing mechanisms; an identification of the ambiguities and open issues in the ZigBee protocol standard; proposals of solutions to those problems; an implementation of a subset of the ZigBee Network Layer, namely the association procedure and tree routing, on our technological platform (MICAz motes, TinyOS operating system and nesC programming language); and an experimental evaluation of that routing mechanism for WSNs.
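The tree routing mechanism mentioned above relies on ZigBee's distributed address assignment, where each router at depth d reserves an address block of size Cskip(d) per router child. A minimal Python sketch follows (the project itself targets nesC/TinyOS); the Cm/Rm/Lm values are an example configuration, and end-device addressing is omitted for simplicity.

```python
def cskip(d, Cm=4, Rm=4, Lm=3):
    """Size of the address block assigned to each router child at depth d,
    per ZigBee distributed addressing (Cm: max children, Rm: max router
    children, Lm: max depth)."""
    if Rm == 1:
        return 1 + Cm * (Lm - d - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - d - 1)) // (1 - Rm)

def tree_next_hop(A, d, D):
    """Next hop from a router with address A at depth d toward address D.
    Routes down to the child whose address block contains D, otherwise up
    to the parent. End devices are ignored in this sketch."""
    if d > 0 and not (A < D < A + cskip(d - 1)):
        return "parent"  # D is not a descendant: route up the tree
    skip = cskip(d)
    return A + 1 + ((D - (A + 1)) // skip) * skip
```

For Cm=Rm=4, Lm=3 the root (address 0, depth 0) has router children at 1, 22, 43 and 64; a frame for address 8 descends 0 -> 1 -> 7 -> 8.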
Abstract:
The recently standardized IEEE 802.15.4/ZigBee protocol stack offers great potential for ubiquitous and pervasive computing, namely for Wireless Sensor Networks (WSNs). However, there are still some open and ambiguous issues that make its practical use a challenging task. One of those issues is how to build a synchronized multi-hop cluster-tree network, which is quite suitable for QoS support in WSNs. In fact, the current IEEE 802.15.4/ZigBee specifications restrict synchronization in the beacon-enabled mode (by the generation of periodic beacon frames) to star-based networks, while multi-hop networking is supported only in the peer-to-peer mesh topology, with no synchronization. Even though both specifications mention the possible use of cluster-tree topologies, which combine multi-hop and synchronization features, the description of how to effectively construct such a network topology is missing. This report tackles this problem, unveils the ambiguities regarding the use of the cluster-tree topology and proposes two collision-free beacon frame scheduling schemes.
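The core constraint behind collision-free beacon scheduling can be illustrated very simply: each router's active period (its superframe duration, SD) must be placed inside the shared beacon interval (BI) without overlapping any other router's active period. The greedy offset assignment below is an illustrative assumption, not one of the report's two actual schemes.

```python
def schedule_beacons(num_routers, BI, SD):
    """Assign each router a beacon transmission offset so that the active
    periods [offset, offset + SD) never overlap inside the beacon
    interval BI. Returns None when no collision-free schedule fits."""
    if num_routers * SD > BI:
        return None  # not enough room in the beacon interval
    return [i * SD for i in range(num_routers)]
```

For example, with BI = 100 time units and SD = 20, at most five routers can be scheduled collision-free.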
Abstract:
Structural health monitoring has long been identified as a prominent application of Wireless Sensor Networks (WSNs), as traditional wire-based solutions present some inherent limitations such as installation/maintenance cost, scalability and visual impact. Nevertheless, there is a lack of ready-to-use, off-the-shelf WSN technologies able to fulfill the most demanding requirements of these applications, which can span from critical physical infrastructures (e.g. bridges, tunnels, mines, the energy grid) to historical buildings or even industrial machinery and vehicles. Low-power and low-cost yet extremely sensitive and accurate accelerometer and signal acquisition hardware, and stringent time synchronization of all sensor data, are just examples of the requirements imposed by most of these applications. This paper presents a prototype system for health monitoring of civil engineering structures that has been jointly conceived by a team of civil, electrical and computer engineers. It merges the benefits of standard commercial off-the-shelf (COTS) hardware and communication technologies with a minimum set of custom-designed signal acquisition hardware that is mandatory to fulfill all application requirements.
Abstract:
In this paper, a new method for the calculation of fractional expressions in the presence of sensor redundancy and noise is presented. An algorithm that takes advantage of the signal characteristics and the sensor redundancy is tuned and optimized through genetic algorithms. The results demonstrate good performance for different types of expressions and distinct levels of noise.
Abstract:
In practice, robotic manipulators exhibit some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be exploited to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in obtaining suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology.
They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, an integration framework is needed. Sensor fusion, fuzzy logic and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logical sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. Recent developments in micro-electro-mechanical sensors (MEMS) with wireless communication capabilities enable sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties. This paper presents a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, this paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, Section 4 draws the main conclusions and points out future work.
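The classification scheme described above combines frequency-spectrum content with statistical metrics per signal. A minimal sketch of that kind of feature extraction is shown below; the specific feature choice (dominant frequency, RMS, standard deviation) is an assumption for illustration, not the paper's exact metric set.

```python
import numpy as np

def signal_features(x, fs):
    """Return (dominant frequency in Hz, RMS, standard deviation) for a
    sampled signal x acquired at sampling rate fs."""
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))   # magnitude spectrum, DC removed
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum)]
    rms = np.sqrt(np.mean(x ** 2))
    return dominant, rms, x.std()

# Example: a 5 Hz sine sampled at 100 Hz for 2 s
t = np.arange(0, 2, 1 / 100)
f_dom, rms, sd = signal_features(np.sin(2 * np.pi * 5 * t), fs=100)
```

Signals whose feature vectors are close (similar dominant frequency and energy) would then be grouped, flagging redundant sensors.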
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. To test the impacts, a flexible beam is clamped to the end-effector of a manipulator that is programmed so that the rod moves against a rigid surface. Eighteen signals are captured and their correlations are calculated. A sensor classification scheme based on the multidimensional scaling technique is presented.
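The multidimensional scaling (MDS) step mentioned above can be sketched with classical MDS: pairwise signal correlations are turned into distances and embedded in a low-dimensional space where similar sensors cluster together. The distance transform d = 1 - |r| is an assumption for illustration; the toy check below uses exact line distances instead of correlations.

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical MDS: embed a symmetric distance matrix in k dimensions
    via eigendecomposition of the double-centered squared distances."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J             # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Toy check: four collinear "sensors" at 0, 1, 2, 3 are recovered in 1-D
pts = np.array([0.0, 1.0, 2.0, 3.0])
dist = np.abs(pts[:, None] - pts[None, :])
coords = classical_mds(dist, k=1)
```

For the paper's eighteen signals, `dist` could be built from the computed correlation matrix, e.g. `dist = 1 - np.abs(corr)` (an assumed choice).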
Abstract:
Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip to retrieve location. In order to overcome this limitation and to provide location everywhere (even where a structured environment does not exist), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation is the information fusion and the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors given by the sensors.
Abstract:
Nowadays there is an increasing number of location-aware mobile applications. However, these applications retrieve location only with the mobile device's GPS chip, which means that indoors, or in denser environments, they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error since, in order to make the system wearable, they use low-cost and low-power sensors. In this work, a pedestrian INS is proposed in which force sensors are combined with accelerometer data for better detection of the stance phase of the human gait cycle, leading to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behavior and correct, in real time, the inertial sensor errors, thus improving location estimation.
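The stance-phase detection improved by the force sensors can be sketched as a simple two-condition test: the foot is considered stationary when the accelerometer magnitude is close to gravity and the foot-mounted force sensor reads above a contact threshold. The thresholds and units below are illustrative assumptions, not the paper's calibrated values.

```python
G = 9.81  # gravitational acceleration, m/s^2

def is_stance(acc_magnitude, foot_force, acc_tol=0.5, force_min=50.0):
    """True when both sensors agree the foot is in the stance phase:
    acceleration magnitude (m/s^2) near gravity AND force reading (N)
    above a contact threshold. Thresholds are illustrative."""
    return abs(acc_magnitude - G) < acc_tol and foot_force > force_min
```

During detected stance samples, a zero-velocity update can then be applied to bound the accelerometer drift between steps.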
Abstract:
Norfloxacin (NFX) is an antibacterial antibiotic indicated against Gram-negative bacteria and widely used in the treatment of respiratory and urinary tract infections. Driven by the need for clinical and pharmacological studies, fast and sensitive analytical methods have been developed for the determination of norfloxacin. In this work, a new sensitive and selective electrochemical sensor was developed for the detection of NFX. The sensor was built by modifying a glassy carbon electrode. The electrode was first modified by depositing a suspension of multi-walled carbon nanotubes (MWCNT) in order to increase the sensitivity of the analytical response. A molecularly imprinted polymer (MIP) film was then prepared by electrodeposition from a solution containing pyrrole (functional monomer) and NFX (template). A non-imprinted control electrode (NIP) was also prepared. The electrochemical response of the sensor to the oxidation of NFX was studied and characterized by square-wave voltammetry. Several experimental parameters were optimized, such as the polymerization, incubation and extraction conditions. The sensor shows a linear relationship between peak current intensity and the logarithm of the NFX concentration in the range of 0.1 to 8 μM. The results show good precision, with repeatability below 6% and reproducibility below 9%. A detection limit of 0.2 μM was calculated from the calibration curve. The developed method is selective, fast and easy to handle. The molecularly imprinted sensor was successfully applied to the detection of NFX in real urine and water samples.
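The calibration described above is linear between peak current and the logarithm of concentration, so quantification amounts to a least-squares fit in log space and its inversion. The current values below are invented placeholders (deliberately on the line y = 2 + log10(c)), not the thesis data.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration: (concentration in uM, peak current in uA)
conc = [0.1, 0.5, 1.0, 4.0, 8.0]
current = [1.0, 1.69897, 2.0, 2.60206, 2.90309]
slope, intercept = fit_line([math.log10(c) for c in conc], current)

def concentration(i):
    """Invert the calibration: estimate [NFX] in uM from a current in uA."""
    return 10 ** ((i - intercept) / slope)
```

The detection limit reported in the thesis (0.2 μM) would come from the same calibration curve, typically via the blank signal dispersion and the fitted slope.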
Abstract:
The unstable but generally rising price of fuels, together with environmental concerns increasingly rooted in society, has been drawing greater attention to the search for alternative fuels. Moreover, several projections indicate a very sharp increase in global energy consumption in the short term, as a result of population growth and the level of industrialization of societies. In this context, biodiesel (fatty acid esters), obtained through the transesterification of triglycerides of vegetable or animal origin, emerges as the most viable "green" alternative for use in combustion equipment. The transesterification reaction is usually catalyzed with homogeneous alkaline catalysts (NaOH or KOH). This type of process, currently the only one with industrial relevance, has some disadvantages that, besides increasing the cost of the final product, reduce its benignity: the impossibility of reusing the catalyst, the increased number and complexity of the separation steps, and the production of effluents in those steps. In order to minimize or eliminate these problems, several heterogeneous catalysts have been studied for this reaction. Although many show promising results, the vast majority are not viable for industrial application, whether because of their own cost or because of the pre-treatments required for their use. Among these catalysts, calcium oxide is perhaps the one with the most promising results, and the growing number of studies involving this catalyst rather than others is in itself proof of the potential of CaO.
This work had the following main objectives: • to assess the eligibility of calcium oxide as a catalyst for the transesterification of used cooking oils with methanol; • to assess its influence on the characteristics of the final products; • to assess the performance differences between calcium oxide activated under an inert atmosphere (N2) and in air, as catalysts for the transesterification of used cooking oils with methanol; • to optimize the reaction conditions using the mathematical tools provided by factorial design, by varying four key factors: temperature, time, methanol-to-oil ratio and mass of catalyst used. The CaO used was obtained from calcium carbonate calcined in a muffle furnace at 750 °C for 3 h. It was subsequently activated at 900 °C for 2 h under different atmospheres: nitrogen (CaO-N2) and air (CaO-Ar). Some properties of the catalysts thus prepared were evaluated (basic strength, concentration of active sites and specific surface area), yielding a basic strength between 12 and 14 for both catalysts, active site concentrations of 0.0698 mmol/g and 0.0629 mmol/g, and specific surface areas of 10 m2/g and 11 m2/g, for CaO-N2 and CaO-Ar respectively. The mixture of used oils employed in this work was first transesterified by homogeneous catalysis in order to determine the limits of the FAME (Fatty Acid Methyl Esters) content that could be obtained; this was the parameter evaluated in each of the samples obtained by heterogeneous catalysis. The factorial designs carried out aimed to maximize this quantity through the ideal relation between reaction time, temperature, catalyst mass and amount of methanol. The maximum FAME content obtainable from this oil was found to be slightly above 95% (w/w).
Three factorial designs were carried out with each of the CaO catalysts until the optimal reaction conditions were reached. No significant influence of the methanol-to-oil ratio was observed in the range of values studied, so this factor was fixed at 35 ml of methanol per 85 g of oil (approximate molar ratio of 8:1). The eligibility of CaO as a catalyst for the reaction under study was confirmed, with no significant differences observed between the performance of CaO-N2 and CaO-Ar. The optimal reaction conditions were identified as a temperature of 59 °C, a time of 3 h and a catalyst mass of 1.4% relative to the oil mass. Under these conditions, products with a FAME content of 95.7% were obtained with CaO-N2 and 95.3% with CaO-Ar. Some authors of studies consulted during this work reported the leaching of calcium into the products as the main problem of using CaO. This was confirmed in the present work, and in an attempt to overcome it, carbonation of the calcium was promoted by bubbling compressed air through the products, followed by filtration. After this treatment, no further changes in their properties (appearance of turbidity or precipitates) were observed; however, in the products obtained under the optimal conditions, the calcium concentration determined was 527 mg/kg in the product of the reaction catalyzed with CaO-N2 and 475 mg/kg with CaO-Ar. Calcium oxide proved to be an excellent catalyst for the transesterification of the mixture of used cooking oils employed in this work, with a performance at the level of that obtained by homogeneous basic catalysis.
No significant performance differences were observed between CaO-N2 and CaO-Ar; under the same reaction conditions, it was possible to obtain products with FAME contents above 95% using either of them as catalyst. The high content of leached calcium observed in the products remains the main obstacle to the industrial application of calcium oxide as a catalyst for the transesterification of oils.
Abstract:
This work describes the development of a sensing material for creatinine by molecular imprinting in a polymeric structure (MIP) and its application in a potentiometric device for the determination of the target molecule in biological fluids. Creatinine is one of the most widely used biomarkers in the monitoring of kidney disease, since it is a good indicator of the glomerular filtration rate (GFR). The biomimetic materials designed to interact with creatinine were obtained by radical polymerization, using methacrylic acid or vinylpyridine monomers and an appropriate cross-linking agent. To assess the effect of creatinine imprinting on the response of the MIP materials to its presence, control materials obtained without molecular imprinting (NIP) were also prepared and evaluated. The chemical constitution of these materials, including the extraction of the imprinted molecule, was monitored by Raman and Fourier-transform infrared spectroscopy. The binding affinity between these materials and creatinine was also evaluated by means of kinetic studies. All the materials described were incorporated into the selective membranes of ion-selective electrodes, prepared with or without a lipophilic ionic additive of negative or positive charge. The evaluation of the general working characteristics of these electrodes, in media of different composition and pH, indicated that only the membranes with imprinted materials and an anionic additive were analytically useful. The best results were obtained in piperazine-N,N′-bis(2-ethanesulfonic acid) (PIPES) buffer at pH 2.8, a condition that yielded a quasi-Nernstian response from 1.6×10-5 mol L-1.
These electrodes also showed good selectivity, responding preferentially to creatinine in the presence of urea, carnitine, glucose, ascorbic acid, albumin, calcium chloride, potassium chloride, sodium chloride and magnesium sulfate. The electrodes were also successfully applied to the analysis of synthetic urine samples, when the sensing materials were based on methacrylic acid, and of serum, when the sensing materials were based on vinylpyridine.
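The "quasi-Nernstian response" mentioned above refers to an electrode slope close to the theoretical Nernst value, which for a monovalent ion at 25 °C works out to about 59.16 mV per decade of activity. The short computation below shows where that number comes from.

```python
import math

R = 8.314462618   # gas constant, J/(mol*K)
F = 96485.33212   # Faraday constant, C/mol

def nernst_slope_mV(T_celsius=25.0, z=1):
    """Theoretical electrode slope in mV per decade of ion activity:
    slope = R*T*ln(10) / (z*F), converted to millivolts."""
    T = T_celsius + 273.15
    return 1000.0 * R * T * math.log(10) / (z * F)
```

A measured slope noticeably below this value is what is usually called "quasi-Nernstian" behavior.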
Abstract:
Ammonia is an important gas in many power plants and industrial processes, so its detection is of extreme importance in environmental monitoring and process control due to its high toxicity. Ammonia's threshold limit value is 25 ppm for an exposure time of 8 h, while exposure to 35 ppm is safe for only 10 min. In this work, a brief introduction to ammonia is presented, covering its physical and chemical properties, the hazards of its handling, its production routes and its sources. The application areas in which ammonia gas detection is important and needed are also discussed: environmental gas analysis (e.g. intensive farming) and the automotive, chemical and medical industries. In order to monitor ammonia gas in these different areas, some requirements must be met. These requirements determine the choice of sensor, and therefore several types of sensors with different characteristics have been developed, such as metal oxide, surface acoustic wave, catalytic and optical sensors, indirect gas analyzers, and conducting polymers. All of these sensor types are described, but particular attention is given to polyaniline (PANI): its characteristics, synthesis, chemical doping processes, deposition methods, transduction modes, and its adhesion to inorganic materials. In addition, short descriptions of PANI nanostructures, the use of electrospinning in the formation of nanofibers/microfibers, and graphene and its characteristics are included. The developed sensor addresses a goal of the medical community, the monitoring of breath ammonia levels, as an easy and non-invasive method for the diagnosis of kidney malfunction and/or gastric ulcers. For that, the device should be capable of detecting different ammonia gas concentrations. Thus, in the present work, an ammonia gas sensor was developed using a conductive polymer composite immobilized on a carbon transducer surface.
The experiments targeted ammonia measurements at the ppb level, with measurements carried out in the concentration range from 1 ppb to 500 ppb. A commercial substrate was used: screen-printed carbon electrodes. After adequate surface pre-treatment of the substrate, its electrodes were covered by a nanofibrous polymeric composite. Conducting polyaniline doped with sulfuric acid (H2SO4) was blended with reduced graphene oxide (RGO) obtained by wet chemical synthesis, and this composite formed the basis for the formation of nanofibers by electrospinning; the nanofibers increase the sensitivity of the sensing material. The electrospun PANI-RGO fibers were placed on the substrate and then dried at ambient temperature. Amperometric measurements were performed at different ammonia gas concentrations (1 to 500 ppb), the I-V characteristics were registered, and some interfering gases were studied (NO2, ethanol, and acetone). The gas samples were prepared in a custom setup and diluted with dry nitrogen gas. Electrospun nanofibers of the PANI-RGO composite demonstrated an enhancement in NH3 gas detection compared with electrospun PANI-only nanofibers, showing a wider resistance range at concentrations from 1 to 500 ppb. The sensor was also observed to have stable, reproducible and recoverable properties, as well as better response and recovery times. The new sensing material of the developed sensor proved to be a good candidate for ammonia gas determination.
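The "wider resistance range" reported above is usually quantified, for chemiresistive sensors of this kind, as a normalized resistance change on gas exposure. The exact metric used in the thesis is not stated, so the common form below is an assumption for illustration.

```python
def response_percent(R0, Rgas):
    """Relative resistance change, in percent, between the baseline
    resistance R0 (in dry carrier gas) and the resistance Rgas under
    NH3 exposure. Larger magnitude means higher sensitivity."""
    return 100.0 * (Rgas - R0) / R0
```

Plotting this response against concentration over the 1-500 ppb range would give the sensor's calibration curve.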
Abstract:
In-network storage of data in wireless sensor networks helps reduce communications inside the network and favors data aggregation. In this paper, we consider the use of n out of m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n out of m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model for evaluating the probability of correct data encoding and decoding, and we use this model, together with simulations, to show how, in the case studies, the parameters of the n out of m codes and of the network should be configured in order to achieve correct data encoding and decoding with high probability.
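The decoding-probability idea above has a simple closed form under an independence assumption: with an n out of m code, the data can be reconstructed whenever at least n of the m dispersed fragments survive, so if each fragment survives independently with probability p, the probability of correct decoding is a binomial tail. The independence assumption is ours for illustration; the paper's model may be richer.

```python
from math import comb

def p_decode(n, m, p):
    """P(correct decoding) for an n out of m code when each of the m
    fragments survives independently with probability p: the probability
    that at least n fragments survive."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))
```

For example, dispersing data with a 2-out-of-3 code over nodes that each survive with probability 0.9 gives a decoding probability of 0.972, versus 0.9 for a single unreplicated copy.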