4 results for Series de Dirichlet

at Universidad del Rosario, Colombia


Relevance:

20.00%

Publisher:

Abstract:

Purpose: To evaluate the evolution of clinical and functional outcomes of symptomatic discoid lateral meniscus treated arthroscopically over time and to investigate the relationship between associated intra-articular findings and outcomes. Methods: Of all patients treated arthroscopically between 1995 and 2010, patients treated for symptomatic discoid meniscus were identified in the hospital charts. Baseline data (demographics, previous trauma of the ipsilateral knee, and associated intra-articular findings) and medium-term outcome data from clinical follow-up examinations (pain, locking, snapping and instability of the operated knee) were extracted from clinical records. Telephone interviews were conducted at long-term follow-up in 28 patients (31 knees). Interviews covered clinical outcomes as well as functional outcomes as assessed by the International Knee Documentation Committee Subjective Knee Evaluation Form (IKDC). Results: All patients underwent arthroscopic partial meniscectomy. The mean follow-up time for data extracted from clinical records was 11 months (SD ± 12). A significant improvement was found for pain in 77% (p<0.001), locking in 13% (p=0.045) and snapping in 39% (p<0.005). The mean follow-up time of the telephone interview was 60 months (SD ± 43). Improvement from baseline was generally smaller after five years than after one year, and functional outcomes on the IKDC indicated abnormal function after surgery (IKDC mean = 84.5, SD ± 20). In some patients, 5-year outcomes were even worse than their preoperative condition. Nonetheless, 74% of patients perceived their knee function as improved. Furthermore, better results were seen in patients without any associated intra-articular findings. Conclusions: Arthroscopic partial meniscectomy is an effective intervention to relieve symptoms in patients with discoid meniscus in the medium term; however, results tend to deteriorate over time. A trend towards better outcomes was observed for patients with no associated intra-articular findings.

Relevance:

20.00%

Publisher:

Abstract:

Objective: To describe the features of optic neuropathy associated with Human Immunodeficiency Virus (HIV) and its possible incidence in our population, given that there are no local data on the frequency of this pathology (1,2). Methodology: Descriptive cross-sectional study of a clinical series of HIV-infected patients without AIDS, in which the thickness of the optic nerve fiber layer was studied with optical coherence tomography (OCT); patients were scheduled for examination once enrolled. OCT was performed by the same observer, taking three scans and keeping the one with the best reliability. Patients were referred in person to the Ophthalmologic Foundation of Santander for the OCT examination. Results: For the viral load variable, we found a clear correlation supporting the hypothesis that a lower viral load corresponds to a thicker fiber layer, with statistically significant differences at the 6 o'clock position in the right eye and at the 12 and 6 o'clock positions in the left eye. Comparing the known nomogram of fiber layer thickness for the population of Bucaramanga, Santander, with the thickness found in our sample, we note a clear decrease in the upper and lower quadrants, specifically at the 7 and 11 o'clock positions, more marked at 7 o'clock, with statistically significant differences. Conclusions: The nerve fiber layer in HIV-positive patients without AIDS, on HAART-type antiretroviral treatment, showed statistically significant thinning at the 7 and 11 o'clock positions, greater at the former. Viral load figures are directly related to fiber layer loss, with statistically significant differences at the 6 and 12 o'clock positions.

Relevance:

20.00%

Publisher:

Abstract:

In this article we argue that persistence is not an invariant feature of a time series but instead depends on the context in which the series is used: since the parameters of any dynamic model are defined relative to a particular information set, any change in the set of conditioning variables can affect the resulting estimates. We define the persistence of a variable as the rate at which its autocorrelation function converges to zero, and we show that inference about a variable's persistence does not change when other conditioning variables are added, as long as those variables are not Granger-causal for the variable of interest. Moreover, we establish that measured persistence is a function of the chosen model, and that this is more fundamental for unstable systems. Our findings suggest that, unless further restrictions derived from economic theory are imposed, issues such as the effectiveness of stabilization policies cannot be settled empirically, and that, consequently, the debate between Keynesian and RBC theorists cannot be closed.
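
To make the definition of persistence concrete, the following is a minimal sketch, not taken from the article: it simulates an AR(1) process y_t = rho*y_{t-1} + e_t, whose theoretical autocorrelation at lag k is rho^k, and recovers persistence as the geometric rate at which the sample autocorrelation function decays to zero. The function names and the choice rho = 0.9 are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the article's code): persistence measured as
# the decay rate of the sample autocorrelation function of a simulated AR(1).
import numpy as np

def simulate_ar1(rho, n, seed=0):
    """Simulate an AR(1) series y_t = rho * y_{t-1} + e_t with Gaussian shocks."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def sample_acf(y, max_lag):
    """Sample autocorrelation function at lags 1..max_lag."""
    y = y - y.mean()
    var = np.dot(y, y) / len(y)
    return np.array([np.dot(y[:-k], y[k:]) / (len(y) * var) for k in range(1, max_lag + 1)])

def decay_rate(acf):
    """Geometric decay rate of the ACF from a log-linear fit; values closer to 1
    indicate higher persistence."""
    lags = np.arange(1, len(acf) + 1)
    positive = acf > 0                      # keep lags where the log is defined
    slope = np.polyfit(lags[positive], np.log(acf[positive]), 1)[0]
    return np.exp(slope)

if __name__ == "__main__":
    y = simulate_ar1(rho=0.9, n=5000)
    acf = sample_acf(y, max_lag=30)
    print("estimated decay rate (true value 0.9):", round(decay_rate(acf), 3))
```

A higher estimated decay rate (closer to 1) corresponds to a more persistent series; additional conditioning variables would enter through the model used to compute the autocorrelations, which is where the article's point about Granger non-causal regressors applies.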

Relevance:

20.00%

Publisher:

Abstract:

Knowledge of the probability distribution of exchange-rate returns and the measurement of the extreme (tail) areas are topics in the finance literature that have been analyzed with parametric and non-parametric estimation procedures. However, a robustness conflict arises because these time series are leptokurtic. Moreover, it has been observed that in several developing economies the initial phase of the flexible exchange-rate regime has exhibited high volatility. This research pursues two objectives: first, to parameterize several classes of distributions that provide a new description of the data-generating process of the exchange rate during the flexible regime; second, to quantify the extreme area through the Hill estimator. This strategy requires that the number of extreme observations be known, so, based on the theory of order statistics, a decision rule obtained by Monte Carlo simulation under several distributions is implemented. The decision model is formulated so that the mean squared error is minimized.
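
As an illustration of the second objective, here is a minimal sketch, not the paper's implementation, of the Hill estimator of the tail index together with a Monte Carlo search for the number of upper order statistics that minimizes mean squared error, assuming Pareto-distributed simulated data with a known tail index; the function names and parameter values (alpha_true, k_grid, n_sim) are hypothetical.

```python
# Minimal sketch (illustrative): Hill estimator of the tail index and a Monte
# Carlo rule that picks the number of extreme observations k minimizing MSE.
import numpy as np

def hill_estimator(sorted_desc, k):
    """Hill estimate of the tail index alpha from the k largest order statistics
    of a sample already sorted in descending order."""
    logs = np.log(sorted_desc[:k]) - np.log(sorted_desc[k])
    return 1.0 / logs.mean()

def choose_k_by_mse(n, alpha_true, k_grid, n_sim=300, seed=0):
    """Pick the k in k_grid minimizing the Monte Carlo mean squared error of the
    Hill estimate when the data come from a Pareto with tail index alpha_true."""
    rng = np.random.default_rng(seed)
    mse = np.zeros(len(k_grid))
    for _ in range(n_sim):
        # Pareto(alpha_true) draws with minimum 1, sorted in descending order
        sample = np.sort(rng.pareto(alpha_true, size=n) + 1.0)[::-1]
        for j, k in enumerate(k_grid):
            mse[j] += (hill_estimator(sample, k) - alpha_true) ** 2
    return k_grid[int(np.argmin(mse))]

if __name__ == "__main__":
    n, alpha_true = 2000, 3.0                 # hypothetical sample size and tail index
    k_grid = np.arange(20, 401, 20)           # candidate numbers of extreme observations
    k_star = choose_k_by_mse(n, alpha_true, k_grid)
    print("number of upper order statistics minimizing simulated MSE:", k_star)
    # The calibrated k would then be applied to an observed return series `returns`:
    # alpha_hat = hill_estimator(np.sort(np.abs(returns))[::-1], k_star)
```

The simulated distribution plays the role of a candidate data-generating process under which the decision rule for the number of extreme observations is calibrated; the abstract describes repeating this calibration under several distributions.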