454 results for Coma


Relevance:

10.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have produced very accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on Statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial solutions. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the partial results. In this way the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.
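The simulation-based reference values mentioned above can be reproduced with a very simple Monte-Carlo procedure. The following Python sketch is purely illustrative (it is not the thesis toolchain, and `quantize` and `roundoff_noise_power` are hypothetical helpers): it estimates the round-off noise power that a given fractional word-length induces at the output of a non-linear operator.

```python
import random

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def roundoff_noise_power(func, frac_bits, n_samples=100_000):
    """Monte-Carlo estimate of the round-off noise power at the output of
    func when its input is quantized to the given fractional word-length."""
    acc = 0.0
    for _ in range(n_samples):
        x = random.uniform(-1.0, 1.0)
        err = func(quantize(x, frac_bits)) - func(x)
        acc += err * err
    return acc / n_samples

# Example: noise power of a non-linear operator with a 12-bit fraction.
print(roundoff_noise_power(lambda x: x * x + 0.5 * x, frac_bits=12))
```

Analytical models such as the MAA-based one described above aim to predict this quantity without running the many simulations this brute-force estimate needs.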
This Ph.D. Thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that attack the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization stage. Second, the incremental method revolves around the fact that, although we must guarantee a given confidence interval for the final results of the search, we can use more relaxed confidence levels, and hence considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small- and medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible, and modular quantization framework that includes implementations of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
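To make the incremental idea concrete, here is a minimal Python sketch, with all names hypothetical and no relation to the actual HOPLITE code: a classical greedy descent over word-lengths that runs its accuracy checks with a coarse, cheap Monte-Carlo budget first, and only switches to the full sample count once no further width reductions are accepted.

```python
def greedy_wordlength_search(signals, meets_spec, min_bits=4, max_bits=32,
                             coarse_samples=1_000, final_samples=100_000):
    """Greedy descent over word-lengths. meets_spec(widths, n_samples)
    stands in for a Monte-Carlo accuracy check whose cost grows with
    n_samples; the coarse budget is used while far from the optimum."""
    widths = {s: max_bits for s in signals}      # start from a safe solution
    for n_samples in (coarse_samples, final_samples):
        improved = True
        while improved:
            improved = False
            for s in signals:                    # try shrinking each signal
                if widths[s] > min_bits:
                    widths[s] -= 1
                    if meets_spec(widths, n_samples):
                        improved = True          # keep the cheaper width
                    else:
                        widths[s] += 1           # revert the change
    return widths

# Toy stand-in for a real noise-budget check (it ignores n_samples, which a
# real Monte-Carlo error estimator would not):
def toy_meets_spec(widths, n_samples):
    return sum(widths.values()) >= 40

print(greedy_wordlength_search(["a", "b", "c"], toy_meets_spec))
```

Because most candidate moves are evaluated and rejected early in the search, paying the full simulation cost only near convergence is where the reported speedups come from.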

Relevance:

10.00%

Publisher:

Abstract:

Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of the thesis that concludes my Bachelor's degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. The thesis is focused on the field of robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. What makes the CSL special is its ability to operate a motor both as a sensor and as an actuator. This makes it possible to reach a balanced position in any of the robot's joints (e.g., the robot manages to stand) without any conventional sensor: the back electromotive force (EMF) induced in the motor coils is measured, and the control algorithm responds according to its magnitude. The CSL circuit mainly comprises an analog-to-digital converter (ADC) and a driver. The ADC is a delta-sigma modulator that generates a bit stream whose percentage of 1s and 0s is proportional to the back EMF. The control algorithm, running on an FPGA, processes each bit frame and outputs a signal for the driver. The driver, which has an H-bridge topology, supplies the motor with the power it needs and lets it rotate in both directions. The objective of the thesis is to document the experiments and overall work done on push-ignoring contractive sensorimotor algorithms, i.e., sensorimotor algorithms that ignore forces of large magnitude (compared to gravity) applied over a short time interval to a pendulum system, while preserving the original behaviour with respect to gravity. This main objective is divided into two sub-objectives: (1) developing a system based on parameterized thresholds and (2) developing a system based on a push-bypassing filter. System (1) contains a module that outputs a signal blocking the main sensorimotor algorithm whenever a push is detected. The module takes several parameters as inputs, e.g., the back-EMF increment required to classify a force as a push, or the time interval between samples. System (2) is a low-pass infinite impulse response (IIR) digital filter that cuts off any frequency faster than a characteristic push oscillation. Building this filter required an intensive study of how to implement certain functions and data types (fixed- or floating-point data) not supported by the standard VHDL packages; once this was achieved, the next challenge was to simplify the solution as much as possible without resorting to unofficial user-made packages. Both systems exhibited a series of advantages and disadvantages of interest for this document: stability, reaction time, simplicity, and computational load are among the many factors studied in the designed systems. Finally, some additions to the systems are also documented: a VGA visual interface, a module that compensates for the ADC offset, and the implementation of a bank of MIDI faders, among others.
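As a rough illustration of the two systems, consider the following Python sketch (hypothetical code, not the thesis VHDL; all names are invented for the example). It derives a back-EMF estimate from a delta-sigma bit frame, then shows both a parameterized-threshold push detector and a single-pole low-pass IIR step implemented in Q1.15 integer arithmetic, mirroring what a fixed-point VHDL implementation might compute.

```python
import random

Q15 = 1 << 15  # Q1.15 fixed point: 1.0 is represented as 32768

def emf_from_bitframe(bits):
    """Back-EMF estimate: fraction of 1s in a delta-sigma bit frame (Q1.15)."""
    return (sum(bits) * Q15) // len(bits)

def is_push(emf_now, emf_prev, delta_threshold_q15):
    """System (1): flag a push when the back EMF jumps by more than a
    parameterized increment between consecutive frames."""
    return abs(emf_now - emf_prev) > delta_threshold_q15

def lowpass_step(y_prev, x, alpha_q15):
    """System (2): one step of y[n] = y[n-1] + alpha*(x[n] - y[n-1]) using
    integer-only Q1.15 arithmetic, as a float-free VHDL design might."""
    return y_prev + ((alpha_q15 * (x - y_prev)) >> 15)

# Toy run: a 3-frame burst of 1s mimics a push on the pendulum.
random.seed(1)
frames = [[1 if random.random() < 0.3 else 0 for _ in range(64)]
          for _ in range(40)]
for f in frames[18:21]:
    f[:] = [1] * 64                              # inject the push

y, prev = 0, 0
for frame in frames:
    x = emf_from_bitframe(frame)
    pushed = is_push(x, prev, delta_threshold_q15=Q15 // 4)  # system (1) flag
    y = lowpass_step(y, x, alpha_q15=1638)       # system (2), alpha ~ 0.05
    prev = x
    # System (1) would block the sensorimotor loop while `pushed` is set;
    # system (2) lets the 3-frame push move y only ~14% of the way toward
    # the push level (1 - 0.95**3), while slow gravity-scale changes pass.
```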

Relevance:

10.00%

Publisher:

Abstract:

This Master's dissertation aims to bring to light the pedagogical practice of lay educators working in non-formal education and the educational praxis they carry out in support of literacy teaching grounded in literate practices (alfabetização letrada). In addressing the three themes that govern this study (non-formal education, the training of lay educators, and alfabetização letrada), we discuss the results obtained in non-formal educational settings and establish a counterpoint: to what extent can the non-formal be considered an educational space that falls short of the school? An empirical study was carried out in the Projeto Sementinha in the city of Santo André/SP, through observation of the educational praxis of seven educators and through interviews with the former Secretary of Education who introduced the project in the city, with the general coordinators of the Projeto Sementinha, and with four of the seven educators, together with a bibliographic survey of the project's origins in the city of Curvelo, MG. So far, the analyses point to the development of literacy practices at different moments of the educators' work, and to the civic and moral formation of the children served by the project in the community observed for this research.


Relevance:

10.00%

Publisher:

Abstract:

Rap phosphatases are a recently discovered family of protein aspartate phosphatases that dephosphorylate the Spo0F~P intermediate of the phosphorelay, thus preventing sporulation of Bacillus subtilis. They are regulators induced by physiological processes that are antithetical to sporulation. The RapA phosphatase is induced by the ComP-ComA two-component signal transduction system responsible for initiating competence. RapA phosphatase activity was found to be controlled by a small protein, PhrA, encoded on the same transcript as RapA. PhrA resembles secreted proteins, and the evidence suggests that it is cleaved by signal peptidase I and that a 19-residue C-terminal domain is secreted from the cell. The sporulation deficiency caused by the uncontrolled RapA activity of a phrA mutant can be complemented by synthetic peptides comprising the last six or more C-terminal residues of PhrA. Whether the peptide controls RapA activity directly or by regulating its synthesis remains to be determined. Complementation of the phrA mutant can also be obtained in mixed cultures with a wild-type strain, suggesting that the peptide may serve as a means of communication between cells. Importation of the secreted peptide required the oligopeptide transport system. The sporulation deficiency of oligopeptide transport mutants can be suppressed by mutating the rapA and rapB genes or by introducing a spo0F mutation, Y13S, that renders the protein insensitive to Rap phosphatases. The data indicate that the sporulation deficiency of oligopeptide transport mutants is due to their inability to import the peptides controlling Rap phosphatases.

Relevance:

10.00%

Publisher:

Abstract:

Competence for genetic transformation in Streptococcus pneumoniae has been known for three decades to arise in growing cultures at a critical cell density, in response to a secreted protease-sensitive signal. We show that strain CP1200 produces a 17-residue peptide that induces cells of the species to develop competence. The sequence of the peptide was found to be H-Glu-Met-Arg-Leu-Ser-Lys-Phe-Phe-Arg-Asp-Phe-Ile-Leu-Gln-Arg-Lys-Lys-OH. A synthetic peptide of the same sequence was shown to be biologically active in small quantities and to extend the range of conditions suitable for development of competence. Cognate codons in the pneumococcal chromosome indicate that the peptide is made ribosomally. As the gene encodes a prepeptide containing the Gly-Gly consensus processing site found in peptide bacteriocins, the peptide is likely to be exported by a specialized ATP-binding cassette transport protein, as is characteristic of these bacteriocins. The hypothesis is presented that this transport protein is encoded by comA, previously shown to be required for elaboration of the pneumococcal competence activator.

Relevance:

10.00%

Publisher:

Abstract:

The Precautionary Principle arises as a response to the ecological crisis. It diagnoses a world populated by uncertainties, one in which the traditional techniques of risk management can no longer be trusted. The works of Jonas and Luhmann give an account of this situation. At first sight their proposals are contradictory: Jonas conceives of the ecological crisis as a moral crisis and advocates tackling it by following a Principle of Responsibility that sensitizes us to the potentially catastrophic effects of human interventions on the natural environment. Luhmann, by contrast, conceives of the crisis as a systemic crisis arising from the complex, evolutionarily significant relations between the sociotechnical system and the natural environment. Such a crisis cannot be managed by appealing to morality, but only by creating spaces of understanding that are open to learning and variation. Despite these differences, Jonas and Luhmann show an unsuspected strategic coincidence: both confront a world populated by uncertainties by advocating its rhetorical redirection, the former betting on the rhetoric of fear, the latter on the rhetoric of understanding.

Relevance:

10.00%

Publisher:

Abstract:

Purpose: To evaluate visual, optical, and quality-of-life (QoL) outcomes and their intercorrelations after bilateral implantation of posterior chamber phakic intraocular lenses. Methods: Twenty eyes of 10 patients with moderate to high myopia that underwent PRL implantation (Phakic Refractive Lens, Carl Zeiss Meditec AG) were examined. Refraction, visual acuity, photopic and low-mesopic contrast sensitivity (CS) with and without glare, ocular aberrations, and QoL outcomes (National Eye Institute Refractive Error Quality of Life Instrument-42, NEI RQL-42) were evaluated at 12 months postoperatively. Results: Significant improvements in uncorrected (UDVA) and best-corrected (CDVA) distance visual acuities were found postoperatively (p < 0.01), with a significant reduction in spherical equivalent (p < 0.01). Low-mesopic CS without glare was significantly better than measurements with glare at 1.5, 3, and 6 cycles/degree (p < 0.01). No significant correlations of higher-order root mean square (RMS) with CDVA (r = −0.26, p = 0.27) or CS (r ≤ 0.45, p ≥ 0.05) were found. Postoperative binocular photopic CS at 12 and 18 cycles/degree correlated significantly with several RQL-42 scales. The glare index correlated significantly with CS measures and scotopic pupil size (r = −0.551, p = 0.04), but not with higher-order RMS (r = −0.02, p = 0.94). Postoperative higher-order RMS, primary coma, and spherical aberration were significantly higher for a 5-mm pupil diameter (p < 0.01) compared with controls. Conclusions: Correction of moderate to high myopia by means of PRL implantation had a positive impact on CS and QoL. The aberrometric increase induced by the surgery does not seem to limit CS or QoL. However, glare perception remains a relevant disturbance in some cases, possibly related to the limited optical zone of the PRL.

Relevance:

10.00%

Publisher:

Abstract:

Background: To evaluate and report the visual, refractive, and aberrometric outcomes of LASIK for the correction of low to moderate hyperopia in a pilot group using a commercially available solid-state laser. Methods: Prospective pilot study including 11 consecutive eyes of six patients with low to moderate hyperopia undergoing LASIK surgery using the Pulzar Z1 solid-state laser (CustomVis Laser Pty Ltd., currently CV Laser). Visual, refractive, and aberrometric changes were evaluated, as were potential complications. Mean follow-up time was 6.6 months (range, 3 to 11 months). Results: A significant improvement in logMAR uncorrected distance visual acuity (UDVA) was observed postoperatively (p = 0.01). No significant change was detected in logMAR corrected distance visual acuity (CDVA) (p = 0.21). Postoperative logMAR UDVA was 0.1 (about 20/25) or better in ten eyes (90.9%). Mean overall efficacy and safety indices were 1.03 and 1.12, respectively. No losses of lines of CDVA were observed postoperatively. Postoperative spherical equivalent was within ±1.00 D in ten eyes (90.9%). Regarding aberrations, no statistically significant changes were found in higher-order and primary coma RMS postoperatively (p ≥ 0.21), and only a minimal but statistically significant negative shift of primary spherical aberration (p = 0.02) was observed. No severe complications were observed. Conclusion: LASIK surgery using solid-state laser technology seems to be a useful procedure for the correction of low to moderate hyperopia, with minimal induction of higher-order aberrations.

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: To evaluate in a pilot study the visual, refractive, corneal topographic, and aberrometric changes after wavefront-guided LASIK or photorefractive keratectomy (PRK) using a high-resolution aberrometer to calculate the treatment for aberrated eyes. METHODS: Twenty aberrated eyes of 18 patients undergoing wavefront-guided LASIK or PRK using the VISX STAR S4 IR excimer laser and the iDesign aberrometer (Abbott Medical Optics, Inc., Santa Ana, CA) were enrolled in this prospective study. Three groups were differentiated: a keratoconus post-CXL group including 11 keratoconic eyes (10 patients), a post-LASIK group including 5 eyes (5 patients) with previous decentered LASIK treatments, and a post-RK group including 4 eyes (3 patients) with previous radial keratotomy. Visual, refractive, contrast sensitivity, corneal topographic, and ocular aberrometric changes were evaluated during a 6-month follow-up. RESULTS: An improvement in uncorrected (UDVA) and corrected (CDVA) visual acuity associated with a reduction in spherical equivalent was observed in all three groups, but was only statistically significant in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). All eyes gained one or more lines of CDVA after surgery. Improvements in contrast sensitivity were observed in all three groups, but were only statistically significant in the keratoconus post-CXL and post-LASIK groups (P ≤ .04). Regarding aberrations, a reduction was observed in trefoil aberrations in the keratoconus post-CXL group (P = .05), and significant reductions in higher-order and primary coma aberrations in the post-LASIK group (P = .04). CONCLUSIONS: Wavefront-guided laser enhancements using the evaluated platform seem to be safe and effective for restoring visual function in aberrated eyes.

Relevance:

10.00%

Publisher:

Abstract:

Introduction: Hypopituitarism is characterized by insufficient secretion of the pituitary hormones. Its clinical presentation is variable and depends on the aetiology, the time course, and the hormones involved. Case: A 2-year-old boy was brought to the emergency department because of a sudden change in consciousness. In the neonatal period he had presented with hypoglycaemia, thrombocytopenia, jaundice, and sepsis with no identified agent. He showed regular growth along the 10th-25th percentile, adequate psychomotor development, and divergent strabismus. On physical examination he was subfebrile, with a Glasgow Coma Scale score of 10. Severe hypoglycaemia (26 mg/dL) was found, and an immediate endocrine and metabolic work-up showed low cortisol together with ACTH and GH deficiencies; a TSH deficiency was confirmed later. Replacement therapy with hydrocortisone and levothyroxine was started. Neuroimaging showed structural abnormalities with hypoplasia of the neurohypophysis. Conclusion: This rare diagnosis requires a high index of suspicion. The progressive appearance of the hormone deficiencies calls for regular clinical and laboratory reassessment.

Relevance:

10.00%

Publisher:

Abstract:

Final dissertation of the Integrated Master's degree in Medicine, Faculdade de Medicina, Universidade de Lisboa, 2014.

Relevance:

10.00%

Publisher:

Abstract:

Report of the Programa Nacional de Diagnóstico Precoce (National Early Diagnosis Programme) for 2009. The programme's primary objective is the neonatal screening of diseases whose early treatment can prevent mental retardation, coma, and severe, permanent neurological or metabolic disorders in the children screened.

Relevance:

10.00%

Publisher:

Abstract:

Report of the Programa Nacional de Diagnóstico Precoce (National Early Diagnosis Programme) for 2008. The programme's primary objective is the neonatal screening of diseases whose early treatment can prevent mental retardation, coma, and severe, permanent neurological or metabolic disorders in the children screened.

Relevance:

10.00%

Publisher:

Abstract:

Report of the Programa Nacional de Diagnóstico Precoce (National Early Diagnosis Programme) for 2010. The programme's primary objective is the neonatal screening of diseases whose early treatment can prevent mental retardation, coma, and severe, permanent neurological or metabolic disorders in the children screened.