813 results for non-parametric technique
Abstract:
Introduction: Denture stomatitis is a chronic inflammatory condition of the oral mucosa covered by a denture. It is considered the most common oral lesion among wearers of removable dentures. Recent studies on the etiology of denture stomatitis suggest that treatments based on reducing inflammation would be effective in managing this disease. Objectives: To evaluate the efficacy of palatal brushing in the treatment of denture stomatitis. Methods: Forty-eight participants (mean age: 66.0 ± 11.2 years) with a diagnosis of denture stomatitis were selected from a preliminary examination of 143 individuals to take part in this two-centre, phase I clinical trial conducted with a single-group pre-test/post-test design. The intervention consisted of brushing the palate with a manual brush after each meal and before bedtime. Clinical and microbiological examinations were performed before treatment and at 1-month and 3-month follow-ups. Additional data were collected with a validated questionnaire. The primary and secondary outcomes were, respectively, remission of denture stomatitis and a decrease in Candida colony counts. Descriptive and non-parametric statistical tests were used to analyse the data. Results: At the 3-month follow-up, 10.4% of participants were cured and 70.8% showed clinical improvement of denture stomatitis with palatal brushing. A statistically significant reduction in the area and intensity of inflammation after 3 months of palatal brushing was demonstrated (p < 0.0001). The effect size ranged from moderate to large (0.34 to 0.54) depending on the classification used for the diagnosis of denture stomatitis. Furthermore, Candida colony counts, collected by sonication of the dentures and by palatal sampling, decreased significantly after 3 months of brushing (p ≤ 0.05). Conclusion: The results of this study suggest that palatal brushing is effective as a treatment for denture stomatitis.
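The abstract does not name the specific non-parametric tests used; a minimal sketch, assuming a Wilcoxon signed-rank test on paired pre/post inflammation scores, with invented data, would look like this:

```python
# Hypothetical sketch: paired non-parametric comparison of inflammation
# scores before treatment and at 3-month follow-up. The scores below are
# invented for illustration; the study's actual data are not reproduced here.
from scipy.stats import wilcoxon

baseline = [3, 2, 3, 2, 3, 1, 2, 3, 2, 2]   # e.g. ordinal severity scores
month3   = [1, 1, 2, 0, 2, 0, 1, 2, 1, 0]

stat, p = wilcoxon(baseline, month3)
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.4f}")
```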
Abstract:
Breast cancer is the most common cancer in women. It remains the leading cause of death among women aged 35 to 55. In Canada, more than 20,000 new cases are diagnosed each year. Scientific studies show that life expectancy is closely linked to early diagnosis. Current diagnostic tools such as mammography, ultrasound and biopsy have certain limitations. For example, mammography can detect the presence of a suspicious mass in the breast but cannot determine its nature (benign or malignant). Complementary imaging techniques such as ultrasound or magnetic resonance imaging (MRI) are then used, but their diagnostic sensitivity and specificity are limited, mainly in young women (< 50 years) or those with dense parenchyma. As a result, many women undergo biopsy even though their lesions are benign. Several research avenues have recently been pursued to reduce the uncertainty of ultrasound imaging diagnosis. In this context, dynamic elastography is promising. This technique is inspired by the medical gesture of palpation and is based on determining tissue stiffness, since lesions are generally stiffer than the surrounding healthy tissue. Its principle is to generate shear waves and study their propagation in order to recover the mechanical properties of the medium through a pre-established inverse problem. This thesis aims to develop a new dynamic elastography method for early detection of breast lesions. One of the main problems of dynamic elastography techniques using radiation force is the strong attenuation of shear waves. After a few wavelengths of propagation, displacement amplitudes decrease considerably and tracking them becomes difficult or even impossible. This problem greatly affects the characterization of biological tissues. Moreover, these techniques only provide information on elasticity, whereas recent studies show that some benign lesions have the same elasticity as malignant lesions, which limits the specificity of these techniques and motivates the quantification of other mechanical parameters (e.g. viscosity). The first objective of this thesis is to optimize the acoustic radiation pressure in order to enhance the amplitude of the generated displacements. To this end, an analytical model predicting the optimal frequency for generating the radiation force was developed. Once validated in vitro, this model was used to predict the optimal frequencies for radiation force generation in further in vitro and ex vivo experiments on breast tissue samples obtained after total mastectomy. Building on this work, a prototype ultrasound probe designed to generate a specific type of shear wave called a "torsional wave" was developed. The goal is to use the optimized radiation force to generate adaptive shear waves and to show their usefulness in improving displacement amplitudes. Unlike classical elastography techniques, this prototype allows shear waves to be generated along adaptive paths (e.g. circular, elliptical, etc.)
depending on the shape of the lesion. Optimizing the energy deposition induces a better mechanical response of the tissue and improves the signal-to-noise ratio, allowing better quantification of viscoelastic parameters. The work also further consolidates previous research with experimental support, demonstrating that this particular type of torsional wave can drive structures into resonance. This resonance phenomenon further enhances the displacement contrast between suspicious masses and the surrounding medium, improving detection. Finally, as part of the quantification of tissue viscoelastic parameters, the last step is to develop an inverse model based on the propagation of adaptive shear waves for estimating viscoelastic parameters. The estimation is performed by solving an inverse problem embedded in a finite-element numerical model. The robustness of this model was studied in order to determine its limits of use. The results obtained with this model were compared with results on the same samples obtained with reference methods (e.g. Rheospectris) to estimate the accuracy of the developed method. Quantifying the mechanical parameters of lesions improves the sensitivity and specificity of the diagnosis. Tissue characterization also allows better identification of the lesion type (malignant or benign) and of its evolution. This technique greatly assists clinicians in choosing and planning appropriate patient care.
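The thesis embeds the inverse problem in a finite-element model; as a much simpler illustration of the idea, shear modulus and viscosity can be recovered by fitting the Kelvin-Voigt dispersion relation to measured shear wave phase velocities. This is only a sketch under that rheological assumption, with invented frequencies and speeds:

```python
# Illustrative sketch only: a Kelvin-Voigt dispersion fit for shear waves in
# a locally homogeneous medium, not the thesis's FEM-based inverse model.
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density (kg/m^3)

def kv_speed(omega, mu, eta):
    """Shear wave phase velocity c(w) = sqrt(2(mu^2+w^2 eta^2)/(rho(mu+sqrt(mu^2+w^2 eta^2))))."""
    m = np.sqrt(mu**2 + (omega * eta)**2)
    return np.sqrt(2.0 * m**2 / (RHO * (mu + m)))

# Hypothetical measured phase velocities (m/s) at several frequencies (Hz)
freqs = np.array([100.0, 200.0, 300.0, 400.0])
speeds = np.array([2.1, 2.4, 2.8, 3.1])

omega = 2 * np.pi * freqs
(mu, eta), _ = curve_fit(kv_speed, omega, speeds, p0=(3e3, 1.0))
print(f"shear modulus ~ {mu:.0f} Pa, shear viscosity ~ {eta:.2f} Pa.s")
```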
Abstract:
The Mann–Kendall non-parametric test was employed to detect observed trends in monthly, seasonal and annual precipitation for five meteorological subdivisions of Central Northeast India (CNE India) over different 30-year normal periods (NP), viz. 1889–1918 (NP1), 1919–1948 (NP2), 1949–1978 (NP3) and 1979–2008 (NP4). Trends in maximum and minimum temperatures were also investigated. The slopes of the trend lines were determined by least-squares linear fitting. Morlet wavelet analysis was applied to monthly rainfall during June–September, total monsoon-season rainfall and annual rainfall in order to identify periodicities and to test their significance using the power spectrum method. The inferences drawn from these analyses will help policy makers, planners and agricultural scientists work out irrigation and water management options under various possible climatic eventualities for the region. The long-term (1889–2008) mean annual rainfall of CNE India is 1,195.1 mm with a standard deviation of 134.1 mm and a coefficient of variation of 11%. There is a significant decreasing trend of 4.6 mm/year for Jharkhand and 3.2 mm/year for CNE India. Since rice is the important kharif crop (May–October) in this region, the decreasing trend of rainfall during July may delay or affect the transplanting/vegetative phase of the crop, and assured irrigation is very much needed to tackle drought situations. During December, all the meteorological subdivisions except Jharkhand show a significant decreasing trend of rainfall during the recent normal period NP4. The decrease of rainfall during December may hamper the sowing of wheat, the important rabi crop (November–March) in most parts of this region. Maximum temperature shows a significant rising trend of 0.008°C/year (at the 0.01 level) during the monsoon season and 0.014°C/year (at the 0.01 level) during the post-monsoon season over the period 1914–2003. The annual maximum temperature also shows a significant increasing trend of 0.008°C/year (at the 0.01 level) during the same period. Minimum temperature shows a significant rising trend of 0.012°C/year (at the 0.01 level) during the post-monsoon season and a significant falling trend of 0.002°C/year (at the 0.05 level) during the monsoon season. A significant 4–8-year peak periodicity band has been noticed during September over Western UP, and a 30–34-year periodicity has been observed during July over the Bihar subdivision. However, as far as CNE India as a whole is concerned, no significant periodicity has been noticed in any of the time series.
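The Mann–Kendall statistic is simple to compute: S sums the signs of all ordered pairwise differences, and a standardised Z gives the significance. A minimal sketch (without the tie correction, on invented annual rainfall values):

```python
# Minimal Mann-Kendall trend test; no correction for tied values.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of sign(x_j - x_i) over all ordered pairs i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S without ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    return s, z, p

rain = [1210, 1180, 1250, 1165, 1140, 1190, 1120, 1105, 1150, 1080]  # invented
s, z, p = mann_kendall(rain)
print(f"S={s}, Z={z:.2f}, p={p:.3f}")  # negative Z suggests a decreasing trend
```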
Abstract:
The work presented in the thesis is centered around two important types of cathode materials: the spinel-structured LixMn2O4 (x = 0.8 to 1.2) and the phospho-olivine-structured LiMPO4 (M = Fe and Ni). The spinel system LixMn2O4, especially LiMn2O4 corresponding to x = 1, has been extensively investigated to understand its structural, electrical and electrochemical properties and to analyse its suitability as a cathode material in rechargeable lithium batteries. However, there is no reported work on the thermal and optical properties of this important cathode material. Thermal diffusivity is an important parameter as far as the operation of a rechargeable battery is concerned. In LixMn2O4, the electronic structure and the phenomenon of Jahn-Teller distortion have already been established theoretically and experimentally. Part of the present work is an attempt to use the non-destructive technique (NDT) of photoacoustic spectroscopy to investigate the nature of the various electronic transitions and to unravel the mechanisms leading to the phenomenon of Jahn-Teller distortion in LixMn2O4. The phospho-olivines LiMPO4 (M = Fe, Ni, Mn, Co, etc.) are newly identified, prospective cathode materials offering extremely high stability, quite high theoretical specific capacity, very good cyclability and long life. In spite of all these advantages, most of the phospho-olivines, especially LiFePO4 and LiNiPO4, show poor electronic conductivity compared to LixMn2O4, leading to low rate capacity and energy density. In the present work, attempts have been made to improve the electronic conductivity of LiFePO4 and LiNiPO4 by adding different weight percentages of multi-walled carbon nanotubes (MWNT). It is expected that the addition of MWNT will enhance the electronic conductivity of LiFePO4 and LiNiPO4 without causing any significant structural distortions, which is important for the working of the lithium-ion battery.
Abstract:
Study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes, and such stars are generally known as intrinsic variables; in other cases it is due to external processes, such as eclipse or rotation, and these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to look visually at the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, along with some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data, even though variable star observation is not their primary intention. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (not assuming any statistical model such as Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can arise for several reasons, such as power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states: "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state: "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will benefit the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
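A compact way to see phase folding and the non-parametric idea behind PDM (Stellingwerf 1978): trial periods that fold the data coherently minimise the ratio of within-bin variance to total variance. This is a rough sketch on synthetic data, not the thesis's modified cubic-spline method:

```python
# Phase dispersion statistic in the spirit of Stellingwerf (1978).
import numpy as np

rng = np.random.default_rng(0)
true_period = 0.75                       # days (synthetic)
t = np.sort(rng.uniform(0, 30, 300))     # unevenly sampled observation times
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

def pdm_theta(t, mag, period, nbins=10):
    """Within-bin variance over total variance; minima mark candidate periods."""
    phase = (t / period) % 1.0
    bins = np.floor(phase * nbins).astype(int)
    within, count = 0.0, 0
    for b in range(nbins):
        m = mag[bins == b]
        if m.size > 1:
            within += m.size * np.var(m)
            count += m.size
    return (within / count) / np.var(mag)

trial_periods = np.linspace(0.5, 1.0, 2000)
theta = [pdm_theta(t, mag, p) for p in trial_periods]
best = trial_periods[int(np.argmin(theta))]
print(f"recovered period ~ {best:.4f} d (true {true_period} d)")
```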
Abstract:
Deutsche Forschungsgemeinschaft
Abstract:
Enhancement of the financial inclusivity of rural communities is often recognised as a key strategy for achieving economic development in third world countries. The main objective of this study was to examine the factors that influence consumers' choice of a rural bank in Gicumbi district of Rwanda. Data were collected using structured questionnaires and analysed using a binary probit regression model and non-parametric procedures. Most consumers were aware of the Popular Bank of Rwanda (BPR) and Umurenge SACCO through radio advertisements, social networks and community meetings. Accessibility, interest rates and quality of services influenced the choice of a given financial intermediary. Moreover, the decision to open a rural bank account was significantly influenced by education and farm size (p<0.1). These results indicate that financial managers should consider these factors when designing marketing campaigns.
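A hedged sketch of the kind of binary probit model described, with hypothetical covariates (years of education, farm size) and simulated data rather than the study's survey dataset:

```python
# Binary probit of the decision to open a rural bank account (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
education = rng.integers(0, 12, n)            # years of schooling (hypothetical)
farm_size = rng.gamma(2.0, 1.5, n)            # hectares (hypothetical)
latent = -1.0 + 0.15 * education + 0.2 * farm_size + rng.normal(0, 1, n)
opened = (latent > 0).astype(int)             # 1 = opened an account

X = sm.add_constant(np.column_stack([education, farm_size]))
fit = sm.Probit(opened, X).fit(disp=False)
print(fit.summary())
```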
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts, and thus the metric properties, should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, a substitution method for missing values in compositional data sets is introduced in the same paper.
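The multiplicative replacement itself is a one-liner: each rounded zero becomes a small value δ and each non-zero part is rescaled so the composition still sums to its closure constant κ. A minimal sketch:

```python
# Multiplicative replacement of rounded zeros (Martin-Fernandez et al., 2003).
import numpy as np

def multiplicative_replacement(x, delta, kappa=1.0):
    """x: composition (sums to kappa) with zeros; delta: replacement value(s)."""
    x = np.asarray(x, dtype=float)
    delta = np.broadcast_to(np.asarray(delta, dtype=float), x.shape)
    zeros = x == 0
    r = np.where(zeros, delta, 0.0).sum()          # total mass given to zeros
    return np.where(zeros, delta, x * (1.0 - r / kappa))

comp = np.array([0.6, 0.3, 0.0, 0.1])              # one rounded zero
print(multiplicative_replacement(comp, delta=0.005))
# non-zero parts shrink multiplicatively; the result still sums to 1
```

Because the non-zero parts are all scaled by the same factor, ratios among them, and hence the covariance structure of zero-free subcompositions, are preserved, which is exactly the property the additive method lacks.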
Abstract:
There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero to the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
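A sketch of that workflow under stated assumptions: Cu is always measured, zeros in Mo mark censored values, and a linear fit on the lower quartile of the measured Mo values predicts the censored ones. The numbers are invented:

```python
# Correlation-based imputation of rounded zeros (illustrative Cu-Mo example).
import numpy as np

cu = np.array([0.42, 0.55, 0.31, 0.60, 0.25, 0.48, 0.37, 0.52])      # % Cu
mo = np.array([0.012, 0.018, 0.0, 0.021, 0.0, 0.015, 0.009, 0.017])  # % Mo, 0 = below detection

measured = mo > 0
q1 = np.quantile(mo[measured], 0.25)
low = measured & (mo <= q1)                # lower quartile of real Mo values
slope, intercept = np.polyfit(cu[low], mo[low], 1)

mo_imputed = np.where(measured, mo, slope * cu + intercept)
print(mo_imputed)                          # each former zero now depends on its Cu value
```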
Abstract:
In this paper a colour texture segmentation method that unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by combining non-parametric kernel density estimation (which allows the colour behaviour to be estimated) with classical co-occurrence-matrix-based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Furthermore, experimental results are presented which demonstrate the performance of the proposed method.
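A toy sketch of the region model's two ingredients, not the paper's implementation: a kernel density estimate over colour samples, and grey-level co-occurrence texture features (here via scikit-image, whose `graycomatrix`/`graycoprops` names are assumed from versions >= 0.19). Pixel data are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)

# Colour model: KDE over RGB samples drawn from one region
region_pixels = rng.normal(loc=[120, 80, 60], scale=10, size=(500, 3))
kde = gaussian_kde(region_pixels.T)
candidate = np.array([[118.0], [82.0], [63.0]])
print("colour likelihood:", kde(candidate))      # higher = fits the region better

# Texture model: co-occurrence features on a grey-level patch
patch = rng.integers(0, 256, (64, 64)).astype(np.uint8)
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256, normed=True)
print("contrast:", graycoprops(glcm, "contrast")[0, 0])
```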
Abstract:
Introduction: non-alcoholic fatty liver disease (NAFLD) is a very common disease with an insidious course. The diagnosis, follow-up and treatment of this condition still lack consensus, mainly because of limited knowledge of its natural history and the difficulty of an accurate non-invasive diagnosis. Materials and Methods: prospective, observational, cross-sectional and correlational study using non-random sampling of patients attending the medical check-up service of the Fundación CardioInfantil Instituto de Cardiología. Clinical and laboratory variables such as body mass index, transaminases, triglycerides and the ultrasonographic appearance of the liver were evaluated. A non-parametric analysis of variance was performed with the Kruskal-Wallis test, together with correlation analysis using Spearman's correlation coefficient. Results: 619 patients were included. A statistically significant variation (p<0.001) was found among all the analysed variables grouped according to the ultrasonographic appearance of the liver. Finally, positive and statistically significant correlation coefficients (p<0.001) were found for the same variables. Discussion: ultrasonographic evaluation of the liver is an attractive option for the diagnosis and follow-up of patients with NAFLD because it is non-invasive, low-cost and widely available. The results suggest that, given the variation of the clinical parameters with hepatic appearance, this tool may be useful both at diagnosis and during follow-up in this population. The correlation coefficients suggest that the possibility of predicting blood variables with this method should be studied further. Conclusions: taken together, the results of this study support the usefulness of ultrasonographic evaluation of the liver as an assessment and possible follow-up tool in patients with suspected NAFLD in this population.
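A hedged sketch of the two reported analyses, Kruskal-Wallis across groups defined by ultrasonographic liver appearance and Spearman correlation against an appearance grade, on invented laboratory values:

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(3)
alt_normal = rng.normal(25, 6, 40)     # ALT (U/L), normal-appearance group (invented)
alt_mild   = rng.normal(32, 7, 40)     # mild steatosis
alt_severe = rng.normal(45, 9, 40)     # severe steatosis

h, p = kruskal(alt_normal, alt_mild, alt_severe)
print(f"Kruskal-Wallis: H={h:.1f}, p={p:.4f}")

grade = np.repeat([0, 1, 2], 40)
alt = np.concatenate([alt_normal, alt_mild, alt_severe])
rho, p_rho = spearmanr(grade, alt)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")
```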
Abstract:
In Colombia, the management of patients in different hospital services has faced difficulties in fighting bacterial infections and bacterial resistance; Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa have shown an evolution towards resistant generations and, together with Salmonella and B. cereus, have been the main causal microorganisms in foodborne-disease (ETA) outbreaks. In the present study, the essential oil of Conobea scoparioides was analysed to evaluate its activity against five bacterial strains. The essential oil was obtained, the bacteria were prepared, and sensitivity and non-parametric tests were applied to determine the percentage of inhibition, evaluate the MIC and compare the effectiveness of the oil versus streptomycin. The essential oil showed activity mainly against B. cereus, with the highest percentage of inhibition and a MIC of 3.2 µg/ml, whereas P. aeruginosa showed growth above 50% and a MIC of 16.7 µg/ml. We conclude that activity was greater against Gram-positive bacteria, as in the case of B. cereus, than against Gram-negative ones, with low MICs. These results allow the activity of Conobea to be compared with recent studies of the same kind, helping to identify new plants with biological activity and showing that Conobea is more effective against Gram-positive bacteria.
Abstract:
This study was conducted to evaluate the correlation between arterial and central venous lactate in children with sepsis and septic shock in a paediatric intensive care unit. Forty-two patients aged between 1 month and 17 years 364 days with a diagnosis of sepsis or septic shock admitted to the intensive care unit of a university referral hospital were included. The lactate value was recorded from arterial and central venous blood samples drawn simultaneously within the first 24 hours of admission to the unit. Spearman's Rho test yielded a correlation of 0.872 (p<0.001), which was adjusted for the use of vasoactive medications, age and weight (non-parametric quantile regression model), maintaining a strong and significant correlation.
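A sketch of the two analyses on invented paired lactate data: Spearman correlation, then a median (quantile) regression of central venous on arterial lactate adjusting for age and weight. Variable names and data are hypothetical:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 42
arterial = rng.gamma(2.0, 1.2, n)                    # mmol/L (invented)
venous = arterial + rng.normal(0, 0.3, n)            # closely tracks arterial
age = rng.uniform(0.1, 18, n)                        # years
weight = 3 + 3.2 * age + rng.normal(0, 2, n)         # kg

rho, p = spearmanr(arterial, venous)
print(f"Spearman rho={rho:.3f}, p={p:.4f}")

df = pd.DataFrame(dict(venous=venous, arterial=arterial, age=age, weight=weight))
fit = smf.quantreg("venous ~ arterial + age + weight", df).fit(q=0.5)
print(fit.params)
```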
Abstract:
Neurofeedback is a non-invasive technique that aims to correct, through operant conditioning, brain waves that appear altered in the electroencephalogram. Since 1967, numerous studies have investigated the effects of the technique in the treatment of psychological disorders. However, to date there are no systematic reviews covering the topics addressed here. The contribution of this work is a review of 56 articles published between 1995 and 2013, together with a methodological assessment of 29 studies included in the review. The search was restricted to the effectiveness of neurofeedback in the treatment of depression, anxiety, obsessive-compulsive disorder (OCD), anger and fibromyalgia. The findings show that neurofeedback has had positive results in the treatment of these disorders; however, it is a technique still under development, with theoretical foundations that are not yet well established, and its results require methodologically stronger designs to confirm their validity.
Abstract:
It has been shown that the GAPDH protein can bind single-stranded telomeric DNA, both in vitro and in vivo. It has therefore been hypothesised that GAPDH plays an important role in telomere protection, a role that could be shared with TRF2, a protein involved in a wide variety of functions related to telomeric homeostasis. Objective: the aim of this study was to determine whether there is a correlation between the expression of the two genes in the ovarian surface epithelium in vitro. Materials and methods: the relative expression of each gene was established by qRT-PCR in primary cultures of ovarian surface epithelial cells from a group of 22 healthy Colombian mestizo donors. Results: the non-parametric Kendall and Spearman tests established a significant correlation between GAPDH and TRF2 expression levels throughout the replicative history of the cultures, independently of donor age. Conclusion: our results suggest a synergistic effect between TRF2 and GAPDH, possibly aimed at counteracting telomere shortening in vitro.
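A sketch of the rank correlations reported for paired qRT-PCR expression levels; the relative-expression values below are invented, not the study's measurements:

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(5)
gapdh = rng.lognormal(0.0, 0.4, 22)           # relative expression, 22 donors (invented)
trf2 = gapdh * rng.lognormal(0.0, 0.2, 22)    # co-varying expression

tau, p_tau = kendalltau(gapdh, trf2)
rho, p_rho = spearmanr(gapdh, trf2)
print(f"Kendall tau={tau:.2f} (p={p_tau:.4f}); Spearman rho={rho:.2f} (p={p_rho:.4f})")
```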