974 results for Multiple attenuation. Deconvolution. Seismic processing


Relevance: 20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology

Relevance: 20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem that can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
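Before turning to the second approach, the following minimal Python sketch makes the linear mixing model and the constrained least-squares unmixing mentioned above concrete. All signatures, abundances, and the sum-to-one penalty weight are synthetic, illustrative values, and the penalty-row trick is only one common way to impose full additivity, not necessarily the formulation used in the references cited.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic example of the linear mixing model: x = M a + n,
# with abundances a >= 0 and sum(a) = 1 (illustrative values only).
rng = np.random.default_rng(0)
L, p = 50, 3                                     # number of bands, number of endmembers
M = np.abs(rng.normal(1.0, 0.3, size=(L, p)))    # endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])               # true abundance fractions
x = M @ a_true + rng.normal(0, 0.01, size=L)     # observed pixel spectrum

# Fully constrained least-squares unmixing via a sum-to-one penalty row
# appended to the nonnegative least-squares problem.
delta = 100.0
M_aug = np.vstack([M, delta * np.ones((1, p))])
x_aug = np.append(x, delta)
a_hat, _ = nnls(M_aug, x_aug)

print("true abundances:     ", a_true)
print("estimated abundances:", np.round(a_hat, 3))
```

With the endmember matrix known, the nonnegativity and sum-to-one constraints are enough, at this noise level, to recover abundances close to the true ones.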
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
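As a hedged illustration of the MOG fitting step described above, the sketch below selects the number of Gaussian components by BIC, used here merely as a stand-in for the MDL-based criterion of [55]; the one-dimensional data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit mixtures of Gaussians with 1..5 components and keep the model with the
# lowest BIC, as a proxy for MDL-based model selection.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 400),
                       rng.normal(1, 0.8, 600)]).reshape(-1, 1)

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic = gmm.bic(data)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

print("selected number of components:", best_k)
print("component weights:", np.round(best_model.weights_, 3))
```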
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
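To make the Dirichlet abundance prior mentioned above concrete, the following toy example draws abundance vectors from a two-component mixture of Dirichlet densities; every draw is nonnegative and sums to one, which is exactly the full-additivity property the model is meant to enforce. The mixture weights and Dirichlet parameters are invented for illustration, and the EM inference of the mixing matrix is not shown.

```python
import numpy as np

# Two-component mixture of Dirichlet densities over 3 endmember abundances.
# Samples are automatically nonnegative and sum to one (full additivity).
rng = np.random.default_rng(2)
weights = np.array([0.4, 0.6])            # mixture weights (illustrative)
alphas = np.array([[8.0, 1.0, 1.0],       # Dirichlet parameters, one row
                   [2.0, 2.0, 6.0]])      # per mixture component

def sample_abundances(n):
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.vstack([rng.dirichlet(alphas[c]) for c in comps])

A = sample_abundances(5)
print(A)
print("row sums:", A.sum(axis=1))         # all equal to 1 up to float error
```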

Relevance: 20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
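The following simplified Python sketch illustrates the iterative orthogonal-projection idea behind the endmember extraction described above. It is not the published VCA algorithm (there is no SNR-dependent projection, dimensionality reduction, or noise handling), and it relies on the same pure-pixel assumption stated in the text; the toy data and function names are synthetic.

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Simplified VCA-like extraction: X is (bands x pixels), p endmembers.
    Assumes at least one pure pixel per endmember is present in X."""
    rng = np.random.default_rng(seed)
    L, N = X.shape
    E = np.zeros((L, p))
    for k in range(p):
        # Direction orthogonal to the subspace spanned by endmembers found so far
        f = rng.normal(size=L)
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])      # orthonormal basis of span(E)
            f = f - Q @ (Q.T @ f)              # project that subspace out of f
        f /= np.linalg.norm(f)
        # The extreme of the projection gives the next endmember
        idx = np.argmax(np.abs(f @ X))
        E[:, k] = X[:, idx]
    return E

# Toy data: 3 endmembers mixed into 1000 pixels, with pure pixels included.
rng = np.random.default_rng(1)
M = np.abs(rng.normal(1.0, 0.3, size=(30, 3)))
A = rng.dirichlet([1, 1, 1], size=997).T
X = np.hstack([M, M @ A])                      # first 3 columns are pure pixels
print(extract_endmembers(X, 3).shape)          # (30, 3)
```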

Relevance: 20.00%

Publisher:

Abstract:

Dissertation presented to obtain a Ph.D. degree in Engineering and Technology Sciences, Biotechnology at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa

Relevance: 20.00%

Publisher:

Abstract:

This dissertation studies the influence of organizational culture on the financial performance of organizations. In this context, we sought to identify the predominant culture of the organizations in order to subsequently establish a relationship between culture and firm performance. The methodology adopted was a questionnaire survey of companies in the Douro region of Portugal in which an adaptation of the instrument developed by Cameron and Quinn (2006) was used to obtain the predominant culture of each company, the financial indicators required for our study, and a characterization of the sample. The data collected through the questionnaire were analyzed with the SPSS statistical package, which allowed us to draw conclusions about the characteristics of the sample and about the relationship between organizational culture and financial performance; this relationship was assessed through correlation tests and multiple linear regression. The results suggest that the cultural variables (adhocracy, market, and hierarchy cultures) and the number of employees explain about 20% of adjusted net income. A positive effect of the adhocracy and market cultures was also observed, with the market-culture effect stronger than the adhocracy one, along with a negative effect of the hierarchy culture, although these results are not statistically significant. There is no evidence, either from the correlation tests or from the estimates of the multiple linear regression model, that the culture types analyzed (adhocracy, market, and hierarchy) are significantly associated with the financial performance of the companies studied, as measured by adjusted net income.
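A minimal sketch of the kind of analysis described above, using synthetic data rather than the dissertation's survey (which was processed in SPSS): adjusted net income is regressed on the three culture scores and the number of employees, and the R-squared and p-values play the role of the roughly 20% explained variance and the (non-)significance results discussed in the abstract. All variable names and values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the survey data (Cameron & Quinn-style culture scores).
rng = np.random.default_rng(0)
n = 80
adhocracy = rng.uniform(0, 100, n)
market = rng.uniform(0, 100, n)
hierarchy = rng.uniform(0, 100, n)
employees = rng.integers(5, 200, n)

# Synthetic adjusted net income with weak culture effects plus noise.
net_income = 2.0 * market + 1.0 * adhocracy - 0.5 * hierarchy \
             + 3.0 * employees + rng.normal(0, 400, n)

X = sm.add_constant(np.column_stack([adhocracy, market, hierarchy, employees]))
model = sm.OLS(net_income, X).fit()
print("R-squared:", round(model.rsquared, 3))   # analogue of the ~20% explained variance
print(model.pvalues)                            # significance of each coefficient
```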

Relevance: 20.00%

Publisher:

Abstract:

In this paper we address an order processing optimization problem known as the minimization of open stacks problem (MOSP). We present an integer programming model, based on the existence of a perfect elimination scheme in interval graphs, which finds an optimal sequence for the customers' orders.
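The paper's contribution is the interval-graph-based integer programming model itself, which is not reproduced here; the toy sketch below only illustrates the MOSP objective being minimized — the maximum number of simultaneously open customer stacks induced by a production sequence — on an invented four-pattern instance, using brute-force enumeration.

```python
from itertools import permutations

# Each pattern (production order) contains pieces for some customers; a
# customer's stack opens at the first pattern containing it and closes at the
# last. The instance below is invented for illustration.
patterns = {0: {"A", "B"}, 1: {"B", "C"}, 2: {"C", "D"}, 3: {"A", "D"}}

def max_open_stacks(sequence):
    customers = set().union(*patterns.values())
    first = {c: min(i for i, p in enumerate(sequence) if c in patterns[p]) for c in customers}
    last = {c: max(i for i, p in enumerate(sequence) if c in patterns[p]) for c in customers}
    # Count, at every position, how many stacks are open; return the maximum.
    return max(sum(first[c] <= i <= last[c] for c in customers)
               for i in range(len(sequence)))

best = min(permutations(patterns), key=max_open_stacks)
print("best sequence:", best, "-> open stacks:", max_open_stacks(best))
```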

Relevance: 20.00%

Publisher:

Abstract:

The speed at which content is disseminated on a web platform is highly relevant for services where the information is expected to be up to date and available in real time. This Master's project presents an approach to a distributed system for collecting and broadcasting results in real time across several platforms, namely mobile systems. In this context, real time is understood as a negligible delay between collection and dissemination, ignoring factors that cannot be controlled by the system, such as communication latency and processing time. The project builds on an existing architecture for processing and publishing sports results, which had problems related to scalability, security, long result-delivery times, and lack of integration with other platforms. Throughout this work we investigated factors that constrain the scalability of a web application, with emphasis on the implementation of a solution based on replication and horizontal scaling. We also sought to provide interoperability between heterogeneous systems and platforms, while maintaining high levels of performance and promoting the introduction of mobile platforms into the system. Among the existing approaches to real-time communication on a web platform, an implementation based on WebSocket was adopted, which eliminates the time wasted between the collection of information and its dissemination. This project describes the implementation of the data collection API (Collector), the communication library for the Collector, the web application (Publisher) and its API, the communication library for the Publisher, and finally the cross-platform mobile application. With these components in place, the results obtained with the new architecture were evaluated in order to assess the scalability and performance of the solution and its fit with the existing system.
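A small, self-contained model of the push pattern described above: a Collector feeds a Publisher that broadcasts each result to all connected clients the moment it arrives. In the real system the channels are WebSocket connections between separate components; here, for brevity, subscribers are modelled as in-process asyncio queues, and all names and payloads are hypothetical.

```python
import asyncio

subscribers: list[asyncio.Queue] = []

async def publisher(results: asyncio.Queue):
    """Broadcast each collected result to every subscriber as soon as it arrives."""
    while True:
        result = await results.get()
        for q in subscribers:
            q.put_nowait(result)

async def client(name: str):
    q: asyncio.Queue = asyncio.Queue()
    subscribers.append(q)
    for _ in range(3):
        print(name, "received", await q.get())

async def collector(results: asyncio.Queue):
    for i in range(3):
        await asyncio.sleep(0.1)              # stands in for polling the data source
        await results.put({"event": i, "score": f"{i}-0"})

async def main():
    results: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(
        asyncio.wait_for(publisher(results), timeout=1.0),
        client("mobile-app"),
        client("web-app"),
        collector(results),
        return_exceptions=True,               # the publisher loop times out on purpose
    )

asyncio.run(main())
```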

Relevance: 20.00%

Publisher:

Abstract:

Accumulation of microcystin-LR (MC-LR) in edible aquatic organisms, particularly in bivalves, is widely documented. In this study, the effects of food storage and processing conditions on the free MC-LR concentration in clams (Corbicula fluminea) fed MC-LR-producing Microcystis aeruginosa (1 × 10^5 cells/mL) for four days, and the bioaccessibility of MC-LR after in vitro proteolytic digestion, were investigated. The concentration of free MC-LR in clams decreased sequentially over time with unrefrigerated and refrigerated storage and increased with frozen storage. Overall, cooking for short periods of time resulted in a significantly higher concentration (P < 0.05) of free MC-LR in clams, specifically microwave (MW) radiation treatment for 0.5 min (57.5%) and 1 min (59%) and boiling treatment for 5 min (163.4%) and 15 min (213.4%). The bioaccessibility of MC-LR after proteolytic digestion was reduced to 83%, potentially because of MC-LR degradation by pancreatic enzymes. Our results suggest that risk assessment based on a direct comparison between MC-LR concentrations determined in raw food products and the tolerable daily intake (TDI) value set for MC-LR might not be representative of true human exposure.

Relevance: 20.00%

Publisher:

Abstract:

Some of the properties sought in the seismic design of buildings are also considered fundamental to guarantee structural robustness. Moreover, some key concepts are common to both seismic and robustness design. In fact, both analyses consider events with a very small probability of occurrence, and consequently a significant level of damage is admissible. As very rare events, in both cases the actions are extremely hard to quantify. The acceptance of limited damage requires a system-based analysis of structures, rather than the element-by-element methodology employed for other load cases. As in robustness analysis, the main objective in seismic design is to guarantee that the structure survives an earthquake without extensive damage. In the case of seismic design, this is achieved by guaranteeing the dissipation of energy through plastic hinges distributed in the structure. For this to be possible, some key properties must be assured, in particular ductility and redundancy. The same properties can be fundamental in robustness design, as a structure can only sustain significant damage if it is capable of redistributing stresses to parts of the structure unaffected by the triggering event. Timber is often used for primary load-bearing elements in single-storey long-span structures for public buildings and arenas, where severe consequences can be expected if one or more of the primary load-bearing elements fail. The structural system used for these structures consists of main frames, secondary elements, and bracing elements. The main frame, composed of columns and beams, can be seen as the key element in the system and should be designed with high safety against failure and under strict quality control. The main frames may sometimes be designed with moment-resisting joints between columns and beams. Scenarios where one or more of these key elements fail should be considered, at least for high-consequence buildings. Two alternative strategies may be applied: isolation of collapsing sections, and provision of alternate load paths [1]. The first is relatively straightforward to provide by deliberately designing the secondary structural system to be less strong and stiff. Alternatively, the secondary structural system and the bracing system can be designed so that loss of capacity in the main frame does not lead to collapse. A case study was selected to assess the consequences of these two different strategies, in particular under seismic loads.

Relevance: 20.00%

Publisher:

Abstract:

PLOS ONE, 4(8): e6820

Relevance: 20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Environmental Sciences, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology

Relevance: 20.00%

Publisher:

Abstract:

Salmonella is a microorganism responsible for a large share of foodborne illnesses and can endanger public health in the contaminated area. Rapid, efficient, and highly sensitive detection is extremely important and is a field in rapid development, the subject of many studies in the current scientific community. A potentiometric method was developed for the detection of Salmonella, using ion-selective electrodes (ISEs) built in the laboratory from micropipette tips, silver wires, and sensing membranes of optimized composition. The indicator electrode chosen was a cadmium-selective ISE, in order to reduce the probability of interference in the method, given the low abundance of cadmium in food samples. Sodium-selective electrodes and single- and double-junction Ag/AgCl electrodes were also built and characterized for use as reference electrodes. In addition, the operating conditions for the potentiometric analysis were optimized, namely the reference electrode used, electrode conditioning, the effect of pH, and the volume of the sample solution. The ability of polymer-membrane ISEs to perform readings in very small volumes with detection limits in the micromolar range was integrated into a non-competitive sandwich-type ELISA assay, using a primary antibody bound to Fe@Au nanoparticles, which allowed the antibody-antigen complexes formed to be separated from the remaining components at each step of the assay by simply applying a magnetic field. The secondary antibody was labeled with CdS nanocrystals, which are quite stable and easily converted into free Cd2+, allowing the potentiometric reading. Several hydrogen peroxide concentrations and the effect of light were tested to optimize the dissolution of the CdS. The developed method allowed calibration curves to be drawn with Salmonella solutions incubated in PBS (pH 4.4), with detection limits of 1100 CFU/mL and 20 CFU/mL for sample volumes of 10 µL and 100 µL, respectively, over a linear range of 10 to 10^8 CFU/mL. The method was applied to a bovine milk sample. The mean recovery rate obtained was 93.7% ± 2.8 (mean ± standard deviation), based on two recovery assays (with two replicates each), using a sample volume of 100 µL and concentrations of 100 and 1000 CFU/mL of incubated Salmonella.
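As a hedged numerical aside, an ion-selective electrode readout of the kind described above typically varies linearly with the logarithm of concentration over its working range; the sketch below fits such a calibration line over the 10 to 10^8 CFU/mL linear range reported in the abstract. The potentials, slope, and sample reading are synthetic, for illustration only.

```python
import numpy as np

# Illustrative calibration: potential (mV) vs. log10(concentration) over the
# linear range reported above. Potentials are synthetic, not measured values.
conc = np.logspace(1, 8, 8)                       # CFU/mL standards
slope_true, intercept_true = 28.0, -150.0         # mV per decade, mV (assumed)
rng = np.random.default_rng(0)
emf = intercept_true + slope_true * np.log10(conc) + rng.normal(0, 1.5, conc.size)

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
print(f"fitted slope: {slope:.1f} mV/decade, intercept: {intercept:.1f} mV")

# Predict the concentration of an unknown sample from its measured potential.
emf_sample = 60.0
print("estimated concentration: %.1f CFU/mL" % 10 ** ((emf_sample - intercept) / slope))
```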

Relevance: 20.00%

Publisher:

Abstract:

Dissertation presented to obtain the Master's degree in Mathematics and Applications, specialization in Actuarial Science, Statistics and Operational Research

Relevance: 20.00%

Publisher:

Abstract:

Nonius Software is a Portuguese telecommunications engineering company dedicated to developing solutions for the management of IT and entertainment systems, targeting the worldwide hotel and hospital markets. The Nonius interactive TV solution offers the guest a unique experience by providing several entertainment options and access to high-quality, relevant content. The guest has access to TV channels, movie rental, Internet, games, information, promotions, and purchases through the TV. The main objective of this dissertation was to implement several entertainment services on an LG Pro:Centric television, whose main advantage is that the set-top box is built into the television itself. Architecturally, the Nonius TV system has two fundamental elements: the backend, responsible for processing and handling the centralized information, and the frontend, installed on the devices with which the guest interacts directly. A significant part of the work focused on implementing backend functionality, although some features were also developed in the frontend services. To meet the established objectives, Flash technology was used, with ActionScript 2 as the programming language; for the backend development, PHP and JavaScript were used.

Relevance: 20.00%

Publisher:

Abstract:

A thirty-three-year-old male patient was admitted to the Hospital of the São Paulo University School of Medicine, in the city of São Paulo, Brazil, complaining of pain, tingling, and decreased sensitivity in the right hand over the previous four months. This had progressed to the left hand, left foot, and right foot, in addition to difficulty flexing and extending the left foot. Tests were positive for HBeAg, IgM anti-HBc, and HBsAg, thus characterizing a condition of acute hepatitis B. The ALT serum level was 15 times the upper normal limit. Blood glucose, cerebrospinal fluid, antinuclear antibodies (ANA), and anti-HIV and anti-HCV serum tests were either normal or negative. Electroneuromyography disclosed severe peripheral neuropathy with axonal predominance and signs of denervation; nerve biopsy disclosed intense vasculitis. A diagnosis of multiple confluent mononeuropathy associated with acute hepatitis B was made. This association is not often reported in the international literature, and its probable cause is the direct action of the hepatitis B virus on the nerves or a vasculitis of the vasa nervorum brought about by deposits of immune complexes.