997 results for STORED NOX


Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted to obtain the Master's degree in Electrical Engineering, Energy branch.

Relevance:

10.00%

Publisher:

Abstract:

Thesis submitted to obtain the Master's degree in Electronics and Telecommunications Engineering.

Relevance:

10.00%

Publisher:

Abstract:

The cleaning of syngas is one of the most important challenges in the development of technologies based on biomass gasification. Tar is an undesired byproduct because, once condensed, it can cause fouling and plugging and damage downstream equipment. Thermochemical methods for tar destruction, which include catalytic cracking and thermal cracking, are intrinsically attractive because they are energetically efficient, require no moving parts, and produce no byproducts. The main difficulty with these methods is the tendency of tar to polymerize at high temperatures. An alternative to tar removal is the complete combustion of the syngas in a porous burner directly as it leaves the particle capture system. In this context, the main aim of this study is to evaluate the destruction of the tar present in syngas from biomass gasification by combustion in porous media. A gas mixture that included toluene as a tar surrogate was used to emulate the syngas. Initially, CHEMKIN was used to assess the potential of the proposed solution. The calculations revealed complete destruction of the tar surrogate over a wide range of operating conditions and indicated that the most important reactions in the toluene conversion are C6H5CH3 + OH <-> C6H5CH2 + H2O, C6H5CH3 + OH <-> C6H4CH3 + H2O, and C6H5CH3 + O <-> OC6H4CH3 + H, and that toluene can be re-formed through C6H5CH2 + H <-> C6H5CH3. Subsequently, experimental tests were performed in a porous burner fired with pure methane and with syngas, for two equivalence ratios and three flow velocities. In these tests, the toluene concentration in the syngas varied from 50 to 200 g/Nm³. In line with the CHEMKIN calculations, the results revealed that toluene was almost completely destroyed under all tested conditions and that the process did not affect the performance of the porous burner regarding emissions of CO, hydrocarbons, and NOx.
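The experimental conditions above are described in terms of the equivalence ratio of the fuel/air mixture. As a rough illustration (a minimal Python sketch with an assumed syngas composition and illustrative numbers, not values from the paper), the equivalence ratio can be estimated from the stoichiometric oxygen demand of each fuel species:

    # Sketch with an assumed composition: estimate the equivalence ratio of a
    # toluene-doped syngas/air mixture from the O2 each fuel species needs.
    O2_DEMAND = {"H2": 0.5, "CO": 0.5, "CH4": 2.0, "C6H5CH3": 9.0}  # C7H8 -> 7 CO2 + 4 H2O

    def equivalence_ratio(fuel_moles, air_moles, o2_in_air=0.21):
        """phi = (fuel/air)_actual / (fuel/air)_stoichiometric."""
        o2_needed = sum(n * O2_DEMAND[sp] for sp, n in fuel_moles.items())
        air_stoich = o2_needed / o2_in_air       # air that would give phi = 1
        return air_stoich / air_moles            # > 1 fuel-rich, < 1 fuel-lean

    # example: a lean syngas containing a small toluene (tar surrogate) fraction
    syngas = {"H2": 0.30, "CO": 0.25, "CH4": 0.05, "C6H5CH3": 0.002}
    print(f"phi = {equivalence_ratio(syngas, air_moles=2.0):.2f}")   # phi ≈ 0.94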

Relevance:

10.00%

Publisher:

Abstract:

Master's degree in Informatics Engineering - Specialization in Graphics Systems and Multimedia.

Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
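The projection step described above is the core of the endmember extraction. The following numpy sketch is a simplified illustration of that idea (function and variable names are assumed; it is not the published VCA implementation): it repeatedly projects the data onto a direction orthogonal to the endmembers already found and keeps the most extreme pixel.

    import numpy as np

    # Simplified pure-pixel extraction sketch under the linear mixing model
    # Y = M @ A (assumes at least one pure pixel per endmember is present).
    def extract_endmembers(Y, p, rng=np.random.default_rng(0)):
        """Y: (bands, pixels) data matrix; p: number of endmembers."""
        bands = Y.shape[0]
        E = np.zeros((bands, p))                 # endmember signatures (columns)
        for k in range(p):
            f = rng.standard_normal(bands)       # random direction
            if k > 0:
                Q, _ = np.linalg.qr(E[:, :k])    # basis of the endmembers found so far
                f -= Q @ (Q.T @ f)               # keep only the orthogonal component
            f /= np.linalg.norm(f)
            idx = np.argmax(np.abs(f @ Y))       # pixel with the most extreme projection
            E[:, k] = Y[:, idx]
        return E

    # toy data: 3 endmembers, abundances on the simplex, pure pixels included
    rng = np.random.default_rng(1)
    M = np.abs(rng.standard_normal((50, 3)))
    A = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), 500).T])
    print(extract_endmembers(M @ A, p=3).shape)  # -> (50, 3)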

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, the amount of data created every day far exceeds even the most optimistic expectations set in the previous decade. These data have very diverse origins and come in many forms. This new concept, known as Big Data, is posing new and intricate challenges for data storage, processing, and manipulation. Traditional storage systems are not the right solution to this problem, and these challenges are among the most analyzed and discussed computing topics of the moment. Several technologies have emerged with this new era, among which a new storage paradigm stands out: the NoSQL movement. This new storage philosophy aims to meet the storage and processing needs of such voluminous and heterogeneous data. Data warehouses are one of the most important components of Business Intelligence and are mostly used as a tool to support the decision-making processes carried out in the day-to-day operation of an organization. Their historical nature implies that large volumes of data are stored, processed, and analyzed on top of their repositories. Some organizations are beginning to have problems managing and storing these large volumes of information, largely because of the storage structure underlying them. Relational database management systems have for decades been considered the primary method for storing information in a data warehouse. In fact, these systems are starting to prove unable to store and manage the operational data of organizations, and their use in data warehouses is consequently less and less recommended. It is intrinsically interesting that relational databases are beginning to lose the battle against data volume at a time when a new storage paradigm is emerging precisely to master the large volumes inherent to Big Data. It is even more interesting that these new NoSQL systems may bring advantages to the data warehousing world. Thus, this master's work studies the viability and the implications of adopting NoSQL databases in the context of data warehouses, in comparison with the traditional approach implemented on relational systems. To accomplish this, several studies were carried out based on the relational system SQL Server 2014 and the NoSQL systems MongoDB and Cassandra. Several stages of the design and implementation of a data warehouse were compared across the three systems, and three distinct data warehouses were created, one based on each system. All the research carried out in this work culminates in a comparison of query performance across the three systems.
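To make the kind of comparison described above concrete, the sketch below (in Python; all table, collection, and field names are hypothetical, not the schemas used in the dissertation) shows how the same yearly sales roll-up might be expressed against a relational star schema and against a MongoDB document collection, where the date attributes are embedded in the fact document and the join disappears.

    # Hypothetical roll-up expressed for both storage models (names are assumed).
    SQL_ROLLUP = """
    SELECT d.year, SUM(f.amount) AS total
    FROM   fact_sales f
    JOIN   dim_date  d ON d.date_key = f.date_key
    GROUP  BY d.year
    ORDER  BY d.year;
    """

    # With an embedded date sub-document the roll-up becomes a single $group
    # stage; this list would be passed to collection.aggregate(...) via pymongo.
    MONGO_ROLLUP = [
        {"$group": {"_id": "$date.year", "total": {"$sum": "$amount"}}},
        {"$sort": {"_id": 1}},
    ]

    if __name__ == "__main__":
        print(SQL_ROLLUP)
        print(MONGO_ROLLUP)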

Relevance:

10.00%

Publisher:

Abstract:

An immunoprecipitation technique, ELIEDA (enzyme-linked immuno-electro-diffusion assay), was evaluated for the diagnosis of Schistosoma mansoni infection with low worm burden. One hundred serum samples from patients excreting fewer than 600 eggs per gram of feces (epg), from patients with unrelated diseases, and from clinically healthy subjects were studied. In patients with egg counts higher than 200 epg, the sensitivities of the IgM and IgG ELIEDA were 1.000 and 0.923, respectively, not differing from other serologic techniques, such as the indirect hemagglutination (IHAT) and immunofluorescence (IFT) tests and the immuno-electrodiffusion assay (IEDA). However, in patients with low egg counts (< 100 epg), the IgG ELIEDA provided better results (0.821) than the IgM ELIEDA (0.679), showing a sensitivity that did not differ from that of the IgG IFT (0.929) but was lower than that of the IgM IFT (0.964). Its sensitivity was, however, higher than that found with the IHAT (0.607) and the IEDA (0.536). The specificity of the IgG ELIEDA was comparable to that of the other techniques. The data indicate that the IgG ELIEDA might be useful for the diagnosis of light S. mansoni infections, and the cellulose acetate membrane strips can be stored for later retrospective studies.

Relevance:

10.00%

Publisher:

Abstract:

The pre-exposure anti-rabies immunization schedule currently used in Brazil is the so-called 3+1 schedule, employing suckling mouse brain vaccine (3 doses on alternate days and the last one on day 30). Although satisfactory results were obtained in well-controlled experimental groups using this immunization schedule, in our routine practice VNA levels lower than 0.5 IU/ml are frequently found. We studied the pre-exposure 3+1 schedule under field conditions in different cities in the State of São Paulo, Brazil, under variable and sometimes adverse circumstances, such as the use of different batches of vaccine with different titers, delivered, stored, and administered under local conditions. Fifty out of 256 serum samples (19.5%) showed VNA titers lower than 0.5 IU/ml, but they were not distributed homogeneously among the localities studied. While in some cities the results were completely satisfactory, in others almost 40% of the samples did not attain the minimum VNA titer required. Considered separately, the results presented here call into question the procedures currently used for human pre-exposure anti-rabies immunization. The reasons for this situation are discussed.

Relevance:

10.00%

Publisher:

Abstract:

SUMMARY: Obstructive sleep apnea-hypopnea syndrome (OSAHS), because of its prevalence and clinical consequences, namely cardiovascular ones, is currently considered a public health problem. The pathogenesis of cardiovascular disease in OSAHS is not yet completely established, but it appears to be multifactorial, involving several mechanisms that include sympathetic nervous system hyperactivity, endothelial dysfunction, selective activation of inflammatory pathways, vascular oxidative stress, and metabolic dysfunction. CPAP therapy greatly decreases the risk of fatal and non-fatal cardiovascular events. CPAP is unequivocally indicated for the treatment of severe OSAHS; however, there is no consensus on its use in patients with mild/moderate OSAHS without associated daytime sleepiness. Given this, it is essential that the therapeutic indications for CPAP in these patients have a favorable cost-effectiveness relationship. Since the state of the art regarding endothelial dysfunction and sympathetic nervous system activation is centered mostly on patients with severe OSAHS, we developed this study to compare plasma nitrate levels, urinary catecholamine levels, and blood pressure values in patients with mild/moderate and severe OSAHS, and to evaluate the response of these parameters to one month of CPAP treatment. We carried out a prospective study of 67 male patients diagnosed with OSAHS (36 with mild/moderate and 31 with severe OSAHS). The protocol consisted of 3 visits: before CPAP therapy (visit 1), one week after CPAP (visit 2), and one month after CPAP (visit 3). At visits 1 and 3, three blood samples were drawn at 11 pm, 4 am, and 7 am for plasma nitrate measurement; at visit 2, only at 7 am. At visits 1 and 3, a 24-hour urine collection was also performed for urinary catecholamine measurement, and patients underwent 24-hour ambulatory blood pressure monitoring (ABPM). A control group of 30 non-smoking male subjects without known pathology and without evidence of OSAHS was also studied. Before CPAP therapy, nitrate levels decreased significantly over the night both in patients with mild/moderate OSAHS and in patients with severe OSAHS. However, this reduction differed between the two groups, being significantly greater in patients with severe OSAHS (27.6±20.1% vs 16.5±18.5%; p<0.05). After one month of CPAP treatment, a significant increase in plasma nitrate values was observed only in patients with severe OSAHS, with nitrate levels remaining high throughout the night and the overnight decrease no longer present. Baseline norepinephrine values were significantly higher in patients with severe OSAHS than in patients with mild/moderate OSAHS (73.9±30.1 μg/24h vs 48.5±19.91 μg/24h; p<0.05). After one month of CPAP therapy, a significant reduction in norepinephrine values was found only in patients with severe OSAHS (from 73.9±30.1 μg/24h to 55.4±21.8 μg/24h; p<0.05). Patients with severe OSAHS had higher blood pressure values than patients with mild/moderate OSAHS, namely for mean blood pressure, 24-hour, daytime, and nighttime mean systolic pressure, and 24-hour, daytime, and nighttime mean diastolic pressure. After one month of CPAP therapy, a significant reduction of blood pressure values was observed only in patients with severe OSAHS: mean pressure (-2.32±5.0 mmHg; p=0.005), 24-hour mean systolic pressure (-4.0±7.9 mmHg; p=0.009), daytime systolic pressure (-4.3±8.8 mmHg; p=0.01), nighttime systolic pressure (-5.1±9.0 mmHg; p=0.005), 24-hour mean diastolic pressure (-2.7±5.8 mmHg; p=0.016), daytime diastolic pressure (-3.2±6.3 mmHg; p=0.009), and nighttime diastolic pressure (-2.5±7.0 mmHg; p=0.04). After CPAP, the blood pressure levels of patients with severe OSAHS reached values similar to those of patients with mild/moderate OSAHS for all parameters evaluated by ABPM. This study showed that, before CPAP treatment, nitrate levels decrease over the night not only in patients with severe OSAHS but also in patients with mild/moderate OSAHS. However, CPAP therapy leads to a significant increase in plasma nitrate values only in patients with severe OSAHS, with nitrate levels remaining high throughout the night and the overnight decrease no longer present. One month of CPAP treatment reduces urinary norepinephrine levels and blood pressure values only in patients with severe OSAHS.

ABSTRACT: In severe obstructive sleep apnea (OSA), reduced circulating nitrate, increased levels of urinary norepinephrine (U-NE), and changes in systemic blood pressure (BP) have been described and are reverted by continuous positive airway pressure (CPAP). However, the consequences of mild/moderate OSA on these parameters and the effect of CPAP upon them are not well known. We aimed to: 1) compare the levels of plasma nitrate (NOx) and U-NE of mild/moderate and severe male OSA patients; 2) compare BP in these patient groups; and 3) determine whether CPAP improves sympathetic dysfunction, nitrate deficiency, and BP in these patients. This prospective study was carried out in 67 consecutive OSA patients (36 mild/moderate and 31 severe), and NOx (11 pm, 4 am, 7 am), 24-h U-NE, and ambulatory blood pressure monitoring were obtained before and after 4 weeks of CPAP. Baseline: NOx levels showed a significant decrease (p<0.001) during the night in both groups of patients. U-NE and BP were significantly higher in the severe group. Post CPAP: after one month of CPAP, there was a significant increase of NOx and a reduction of U-NE and BP only in severe patients. This study shows that, in contrast to severe OSA patients, those with mild/moderate OSA, who have lower values of BP and U-NE at baseline, do not benefit from a 4-week CPAP treatment as measured by plasma nitrate, 24-h U-NE levels, and BP.

Relevance:

10.00%

Publisher:

Abstract:

In this work, tubular fiber-reinforced specimens are tested for fatigue life. The specimens are biaxially loaded with tension and shear stresses, with load angles β of 30° and 60° and a load ratio of R = 0.1. Many factors affect the fatigue life of a fiber-reinforced material, and the main goal of this work is to study the effect of the load ratio R by obtaining S-N curves and comparing them to previous works (1). All the other parameters, such as specimen production, fatigue loading frequency, and temperature, are the same as in the previous tests. For every specimen, the stiffness, the temperature of the specimen during testing, the crack count, and the final fracture mode are obtained. Prior to testing, the literature on load ratio effects on the fatigue life of composites was reviewed, and that review was used to estimate the initial stresses to be applied in the tests. In previous works (1), similar specimens have only been tested at a load ratio of R = -1, so the behaviour of these tubular specimens at a different load ratio is unknown. All the data acquired will be analysed and compared to the previous works, emphasizing the differences found and discussing possible explanations for those differences. The crack counting software, developed at the institute, has proven useful before; however, different adjustments to the software parameters lead to different crack counts for the same picture, so a better methodology will be discussed to improve the crack counting results. After each specimen's failure, all the data will be collected and stored, and the fibre volume content of every specimen is also determined. The number of tests required to build the S-N curves is obtained according to the existing standards. Some improvements to the testing machine setup and to the procedures for future testing are also identified.
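As a short numerical aside (illustrative values and helper names only, not data from this work), the load ratio fixes the relation between maximum, minimum, mean, and alternating stress, and an S-N curve can be summarised by a Basquin-type power-law fit:

    import numpy as np

    def cycle_stresses(sigma_max, R):
        """Return (sigma_min, amplitude, mean) for a constant-amplitude cycle."""
        sigma_min = R * sigma_max
        return sigma_min, (sigma_max - sigma_min) / 2, (sigma_max + sigma_min) / 2

    # hypothetical fatigue results: cycles to failure vs. stress amplitude [MPa]
    N     = np.array([1e4, 5e4, 2e5, 1e6])
    sig_a = np.array([120.0, 95.0, 80.0, 62.0])

    # Basquin fit sigma_a = A * N**b, done as a straight line in log-log space
    b, logA = np.polyfit(np.log10(N), np.log10(sig_a), 1)
    print("min, amplitude, mean at R = 0.1:", cycle_stresses(150.0, 0.1))
    print(f"Basquin fit: sigma_a ≈ {10**logA:.1f} * N^{b:.3f}")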

Relevance:

10.00%

Publisher:

Abstract:

The integration of growing amounts of distributed generation in power systems, namely at the distribution network level, has been fostered by energy policies in several countries around the world, including in Europe. This intensive integration of distributed, non-dispatchable generation based on natural sources (including wind power) has caused several changes in the operation and planning of power systems and of electricity markets. Sometimes the available non-dispatchable generation is higher than the demand; if it is not stored or used to supply additional demand, it is wasted. New policies and market rules, as well as new players, are needed in order to competitively integrate all the resources. The methodology proposed in this paper aims at maximizing social welfare in a distribution network operated by a virtual power player that aggregates and manages the available energy resources. When facing a situation of excessive non-dispatchable generation, including wind power, real-time pricing is applied in order to induce an increase in consumption so that wind curtailment is minimized. This method is especially useful when the actual and day-ahead resource forecasts differ significantly. The distribution network characteristics and concerns are addressed by including the network constraints in the optimization model. The proposed methodology has been implemented in the GAMS optimization tool, and its application is illustrated in this paper using a real 937-bus distribution network with 20,310 consumers and 548 distributed generators, some of them non-dispatchable and with must-take contracts. The implemented scenario corresponds to a real day in the Portuguese power system.
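A toy version of the welfare-maximisation idea described above can be written as a small linear program (the sketch below uses scipy rather than GAMS, and all quantities, costs, and limits are made up for illustration): extra price-responsive demand is induced as long as its benefit exceeds the cost of serving it, so that available wind is absorbed instead of curtailed.

    from scipy.optimize import linprog

    # variables x = [dispatchable_gen, wind_used, extra_demand] in MWh
    BASE_DEMAND, WIND_AVAIL = 50.0, 70.0
    COST_DISP, BENEFIT_EXTRA = 60.0, 40.0          # €/MWh (illustrative)

    c = [COST_DISP, 0.0, -BENEFIT_EXTRA]           # minimise cost - consumer benefit
    A_eq = [[1.0, 1.0, -1.0]]                      # gen + wind_used - extra = base demand
    b_eq = [BASE_DEMAND]
    bounds = [(0, 100), (0, WIND_AVAIL), (0, 30)]  # unit limits / demand flexibility

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    g, w, d = res.x
    print(f"dispatchable {g:.1f} MWh, wind used {w:.1f} MWh "
          f"(curtailed {WIND_AVAIL - w:.1f}), induced extra demand {d:.1f} MWh")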

Relevance:

10.00%

Publisher:

Abstract:

The study of electricity markets operation has been gaining increasing importance in recent years, as a result of the new challenges that the restructuring process produced. Currently, a large amount of information concerning electricity markets is available, as market operators provide, after a period of confidentiality, data regarding market proposals and transactions. These data can be used as a source of knowledge to define realistic scenarios, which are essential for understanding and forecasting electricity market behavior. The development of tools able to extract, transform, store, and dynamically update data is of great importance to go a step further into the comprehension of electricity markets and of the behaviour of the involved entities. In this paper an adaptable tool capable of downloading, parsing, and storing data from market operators' websites is presented, assuring constant updating and reliability of the stored data.
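A minimal sketch of the download-parse-store loop described above is shown below (the URL, CSV layout, and table schema are placeholders, not the actual market operator feeds handled by the tool):

    import csv, io, sqlite3
    import requests

    SOURCE_URL = "https://example.org/market/day-ahead.csv"   # hypothetical endpoint

    def refresh(db_path="market.db"):
        resp = requests.get(SOURCE_URL, timeout=30)
        resp.raise_for_status()
        rows = [(r["hour"], float(r["price"]), float(r["energy"]))
                for r in csv.DictReader(io.StringIO(resp.text))]

        with sqlite3.connect(db_path) as db:
            db.execute("""CREATE TABLE IF NOT EXISTS day_ahead
                          (hour TEXT PRIMARY KEY, price REAL, energy REAL)""")
            # upsert, so re-running the job keeps the stored data up to date
            db.executemany("INSERT OR REPLACE INTO day_ahead VALUES (?, ?, ?)", rows)

    if __name__ == "__main__":
        refresh()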

Relevance:

10.00%

Publisher:

Abstract:

The study of Electricity Markets operation has been gaining increasing importance in recent years, as a result of the new challenges produced by the restructuring. Currently, a large amount of information concerning Electricity Markets is available, as market operators provide, after a period of confidentiality, data regarding market proposals and transactions. These data can be used as a source of knowledge to define realistic scenarios, which are essential for understanding and forecasting Electricity Markets behaviour. The development of tools able to extract, transform, store, and dynamically update data is of great importance to go a step further into the comprehension of Electricity Markets and the behaviour of the involved entities. In this paper we present an adaptable tool capable of downloading, parsing, and storing data from market operators' websites, ensuring that the stored data remain up to date and reliable.

Relevance:

10.00%

Publisher:

Abstract:

A hemagglutination (HA) test was standardized using formalin- and tannin-treated gander red blood cells sensitized with a total salt extract of C. cellulosae (HA-Cc) and an antigenic extract of Cysticercus longicollis vesicular fluid (HA-Cl). A total of 61 cerebrospinal fluid (CSF) samples were assayed, 41 from patients with neurocysticercosis and 20 from a control group, which were, respectively, reactive and non-reactive in an ELISA using C. cellulosae. The CSF samples from the control group did not react, while 35 (85.4%) and 34 (82.9%) CSF samples from patients were reactive in the HA-Cc and HA-Cl tests, respectively. The ready-to-use reagents were stable for up to 6 months when stored at 4°C in 50% glycerol. The present results confirm that the reagent using Cysticercus longicollis stabilized with glycerol can be used as an alternative in the immunological diagnosis of neurocysticercosis.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.