887 results for "methods of analysis"
Abstract:
Graduate Program in Veterinary Medicine - FCAV
Abstract:
Exercise physiology has sought to reproduce exercise experimentally in the laboratory, mainly using rats, and swimming has emerged as one of the leading ergometer modalities in this type of research. This work is a literature review of the key issues involved in swimming exercise in the rat model: aerobic and anaerobic swimming training, evaluation models, and periodization models. In several studies, aerobic and anaerobic training models have been proposed with the aim of studying their effects on normal and abnormal physiological parameters. Earlier studies, however, lacked methods of analysis for determining exercise intensity in the animal model. For this reason, over the last decade, assessment models developed for humans have been adapted to animals, especially rats. The maximal lactate steady state (MLSS) and the lactate minimum (LM) are among the techniques used to measure the intensity of effort produced by swimming exercise in rats. Since then, based on biochemical parameters such as lactate, swimming exercise in rats has become better graded, i.e., prescribed with the anaerobic threshold (AT) as a reference. In another direction, a new and promising line of research has sought to understand periodized swimming training and its effects on some biochemical parameters, although this area has been little researched so far. The experimental swimming model has thus proved an important resource for exercise physiology: it makes it possible to study exercise, especially swimming, more accurately, on the basis of invasive analyses of the rat.
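As a sketch of how a lactate minimum intensity is commonly extracted from an incremental test of the kind reviewed here: a second-order polynomial is fitted to blood lactate versus load and its vertex is taken as LM. The loads and lactate values below are hypothetical, not data from any cited study.

```python
import numpy as np

# Hypothetical incremental-test data after lactate elevation:
# load as % body weight attached to the rat, blood lactate in mmol/L.
load = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
lactate = np.array([6.1, 5.2, 4.9, 5.4, 6.5])

# Fit lactate = a*load^2 + b*load + c; the vertex -b/(2a) is the
# lactate minimum intensity, an estimate of the anaerobic threshold.
a, b, c = np.polyfit(load, lactate, 2)
lm_intensity = -b / (2 * a)
print(round(lm_intensity, 2), "% body weight")
```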
Abstract:
Graduate Program in Genetics and Animal Breeding - FCAV
Abstract:
In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. Many standards are available in the field of noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard on the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of measurement uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones on the surface, their spatial positions, and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
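For orientation, a minimal sketch of the basic ISO 3744-style relation between the array-averaged surface pressure level and the sound power level. The corrections the standard prescribes (background noise K1, environmental K2) are omitted, and the microphone levels and measurement-surface area are made-up example values.

```python
import math

def sound_power_level(spl_db, surface_area_m2, s0=1.0):
    # Energetic average of the N microphone levels over the measurement
    # surface, then the area term: L_W = L_p(avg) + 10*log10(S/S0).
    n = len(spl_db)
    lp_avg = 10 * math.log10(sum(10 ** (l / 10) for l in spl_db) / n)
    return lp_avg + 10 * math.log10(surface_area_m2 / s0)

# Hypothetical levels (dB) from a 10-microphone array on a hemisphere
mics = [72.1, 71.8, 72.5, 73.0, 71.5, 72.2, 72.8, 71.9, 72.4, 72.0]
print(round(sound_power_level(mics, surface_area_m2=6.28), 1))
```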
Abstract:
The present work aims to present the studies and results obtained during research activity on the Displacement-Based Assessment (DBA) of reinforced concrete frames. After some initial considerations on seismic vulnerability and on methods of analysis and verification, the method is described theoretically. Three case studies of plane frames were analyzed, designed for vertical loads only and according to codes no longer in force that did not require capacity design. The frames considered, intended for residential use, differ in height, number of storeys, and number of bays. The method was applied, the seismic vulnerability was evaluated against a displacement demand given by the elastic spectrum of EC8, and the results were validated by nonlinear static and dynamic analyses and by applying the theorems of limit analysis of frames, proposed as an alternative procedure for determining the inelastic mechanism and the base-shear capacity. Finally, the DBA procedure was applied to evaluate the seismic vulnerability of a school building, built between 1969 and 1975 on a site characterized by a peak horizontal acceleration of 0.24g with a 10% probability of exceedance in 75 years.
Abstract:
The ability to evaluate the effects of factors on outcomes is increasingly important for a class of studies that control some but not all of the factors. Although important advances have been made in methods of analysis for such partially controlled studies, work on designs for such studies has been relatively limited. To help understand why, we review the main designs that have been used for partially controlled studies. Based on this review, we give two complementary reasons that explain the limited work on such designs, and suggest a new direction in this area.
Abstract:
C-Reactive Protein (CRP) is a biomarker indicating tissue damage, inflammation, and infection. High-sensitivity CRP (hsCRP) is an emerging biomarker often used to estimate an individual's risk for future coronary heart disease (CHD). hsCRP levels falling below 1.00 mg/l indicate a low risk for developing CHD, levels ranging between 1.00 mg/l and 3.00 mg/l indicate an elevated risk, and levels exceeding 3.00 mg/l indicate high risk. Multiple Genome-Wide Association Studies (GWAS) have identified a number of genetic polymorphisms which influence CRP levels. SNPs implicated in such studies have been found in or near genes of interest including CRP, APOE, APOC, IL-6, HNF1A, LEPR, and GCKR. A strong positive correlation has also been found between CRP levels and BMI, a known risk factor for CHD and a state of chronic inflammation. We conducted a series of analyses designed to identify loci which interact with BMI to influence CRP levels in a subsample of European-Americans in the ARIC cohort. In a stratified GWA analysis, 15 genetic regions were identified as having significantly (p-value < 2.00×10^-3) distinct effects on hsCRP levels between the two obesity strata: lean (18.50 kg/m² < BMI < 24.99 kg/m²) and obese (BMI ≥ 30.00 kg/m²). A GWA analysis performed on all individuals combined (i.e., not a priori stratified for obesity status), with the inclusion of an additional parameter for the gene-by-BMI interaction, identified 11 regions which interact with BMI to influence hsCRP levels. Two regions, containing the genes GJA5 and GJA8 (on chromosome 1) and FBXO11 (on chromosome 2), were identified by both methods of analysis, suggesting that these genes may interact with BMI to influence hsCRP levels. We speculate that atrial fibrillation (AF), age-related cataracts, and the TGF-β pathway may be the biological processes influenced by the interaction of GJA5, GJA8, and FBXO11, respectively, with BMI to cause changes in hsCRP levels. Future studies should focus on the influence of gene × BMI interaction on AF, age-related cataracts, and TGF-β.
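A minimal sketch of the kind of combined-sample interaction model described above: the outcome is regressed on SNP dosage, BMI, and their product, with the SNP×BMI coefficient carrying the interaction effect. The variable names and simulated data are illustrative and do not reproduce the ARIC analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: additive SNP coding (0/1/2 minor alleles),
# BMI in kg/m^2, and log-transformed hsCRP as the outcome.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "snp": rng.integers(0, 3, n),
    "bmi": rng.normal(27, 4, n),
})
df["log_hscrp"] = (0.1 * df.snp + 0.05 * df.bmi
                   + 0.02 * df.snp * df.bmi + rng.normal(0, 0.5, n))

# The snp:bmi term is the interaction parameter; its p-value is what
# a combined (non-stratified) scan would test at each locus.
model = smf.ols("log_hscrp ~ snp + bmi + snp:bmi", data=df).fit()
print(model.summary().tables[1])
```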
Abstract:
This year marks the 20th anniversary of functional near-infrared spectroscopy and imaging (fNIRS/fNIRI). As the vast majority of commercial instruments developed until now are based on continuous wave technology, the aim of this publication is to review the current state of instrumentation and methodology of continuous wave fNIRI. For this purpose we provide an overview of the commercially available instruments and address instrumental aspects such as light sources, detectors and sensor arrangements. Methodological aspects, algorithms to calculate the concentrations of oxy- and deoxyhemoglobin and approaches for data analysis are also reviewed. From the single-location measurements of the early years, instrumentation has progressed to imaging initially in two dimensions (topography) and then three (tomography). The methods of analysis have also changed tremendously, from the simple modified Beer-Lambert law to sophisticated image reconstruction and data analysis methods used today. Due to these advances, fNIRI has become a modality that is widely used in neuroscience research and several manufacturers provide commercial instrumentation. It seems likely that fNIRI will become a clinical tool in the foreseeable future, which will enable diagnosis in single subjects.
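As an illustration of the "simple modified Beer-Lambert law" step mentioned above, a minimal sketch that inverts the two-wavelength system to recover concentration changes. The extinction coefficients, source-detector distance, and differential pathlength factor below are rough placeholder values, not calibrated constants.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = (e_HbO*dHbO + e_HbR*dHbR)*d*DPF.
# Solving the 2x2 system at two wavelengths recovers dHbO and dHbR.
E = np.array([[1.49, 3.84],   # ~760 nm: [e_HbO, e_HbR], placeholder 1/(mM*cm)
              [2.53, 1.80]])  # ~850 nm
d, dpf = 3.0, 6.0             # distance (cm) and pathlength factor (assumed)

def mbll(delta_od):
    """delta_od: optical density changes at the two wavelengths."""
    return np.linalg.solve(E * d * dpf, np.asarray(delta_od))  # [dHbO, dHbR]

print(mbll([0.01, 0.015]))
```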
Abstract:
A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology are presented. The conditional posterior density of the intraclass correlation coefficient is then derived, and estimation procedures related to this derivation are shown in detail. Three examples of applications of the conditional posterior density to specific data sets are also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient with more traditional estimators. The non-Bayesian methods of estimation used were analysis of variance and maximum likelihood for balanced data, and MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project is that Bayesian estimates of the intraclass correlation coefficient can be an appropriate, useful, and practical alternative to traditional methods of estimation.
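For comparison with the Bayesian estimates, a minimal sketch of the balanced-data analysis-of-variance estimator referred to above, ICC = (MSB − MSW) / (MSB + (k−1)·MSW); the data are hypothetical.

```python
import numpy as np

def icc_anova(groups):
    """One-way ANOVA estimator of the intraclass correlation for
    balanced data, with k observations in each of n groups."""
    groups = [np.asarray(g, float) for g in groups]
    n, k = len(groups), len(groups[0])
    grand = np.mean(np.concatenate(groups))
    msb = k * sum((g.mean() - grand) ** 2 for g in groups) / (n - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical balanced data: 4 groups, 3 observations each
print(icc_anova([[9, 10, 11], [5, 6, 7], [12, 13, 13], [8, 8, 9]]))
```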
Abstract:
With the aim of determining the core group of publications to be considered in developing the collection of the IAR Library, a bibliometric study is carried out of the production and consumption of scientific literature by the researchers of the institution to which the library belongs. From the analysis of the references in the papers published by the researchers, the obsolescence and usefulness of the consulted literature are determined. By extracting keywords and authors, the institute's research fronts and the groups of researchers working on those fronts are also identified, applying methods of co-word analysis, co-authorship analysis, and social network analysis. The results show a low obsolescence of the consulted literature and a strong preference for consulting and publishing in two or three journals of the discipline, and finally demonstrate the existence of two research fronts within the institution.
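A minimal sketch of the co-word step: keyword pairs that co-occur in a paper become weighted edges, and the components (or communities) of the resulting network suggest research fronts. The keyword lists are invented stand-ins for the IAR corpus.

```python
import itertools
import networkx as nx

# Hypothetical keyword lists, one per paper.
papers = [
    ["pulsars", "radio astronomy", "timing"],
    ["pulsars", "timing", "interstellar medium"],
    ["galactic HI", "radio astronomy", "interstellar medium"],
]

# Co-word analysis: edge weight counts how often a pair co-occurs.
G = nx.Graph()
for kws in papers:
    for a, b in itertools.combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Connected components (or community detection) suggest research fronts.
for front in nx.connected_components(G):
    print(sorted(front))
```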
Abstract:
A technique of zooplankton net sampling at night in the Kandalaksha and Dvinskii Bays and during full tide in the Onezhskii Bay of the White Sea allowed us to obtain "clean" samples without considerable admixtures of terrigenous particulates. The absence of indicator elements of terrigenous particulates (Al, Ti, and Zr) in the EDX spectra allows us to conclude that the ash composition of the tested samples is defined by the constitutional elements of the organic matter and integument (chitin, shells) of the plankton organisms. A quantitative assessment of the accumulation of ca. 40 chemical elements by zooplankton, based on a complex of modern physical methods of analysis, is presented. The values of the coefficient of biological accumulation of the elements (Kb), calculated for organic matter, and the enrichment factors (EF), calculated relative to Clarke concentrations in shale, are in general determined by the mobility of the chemical elements in aqueous solution, which is confirmed by the calculated chemical speciation of the elements in the inorganic subsystem of the surface waters of Onezhskii Bay.
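For reference, a minimal sketch of the two indices as conventionally defined; the abstract does not spell out the normalization, so the direct shale-Clarke form below is an assumption (the usual Al normalization is unavailable here because Al is absent from the spectra):

```latex
K_b = \frac{C_{\text{zooplankton, org}}}{C_{\text{seawater}}},
\qquad
\mathrm{EF} = \frac{C_{\text{sample}}}{C_{\text{shale Clarke}}}
```

where C denotes the concentration of a given element in the indicated medium.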
Abstract:
The large amount of data recorded daily in organizations' database systems has created the need to analyze it. However, organizations face the complexity of processing huge volumes of data with traditional methods of analysis. Moreover, in a globalized and competitive context, organizations constantly seek to improve their processes, for which they require tools that allow them to make better decisions. This means being better informed and knowing their digital history in order to describe their processes and to anticipate (predict) unforeseen events. These new data-analysis requirements have driven the growing development of data mining projects. The data mining process seeks to obtain, from a massive data set, models that describe the data or predict new instances in the set. It involves stages of data preparation and partially or fully automated processing to identify models in the data, producing as output patterns, relationships, or rules. This output should represent new knowledge for the organization, useful and understandable to end users, and capable of being integrated into its processes to support decision making. The greatest difficulty, however, is precisely that the data analyst involved in this process must be able to interpret the models, a complex task that often requires the experience not only of the data analyst but also of the expert in the problem domain. One way to support the analysis of data, models, and patterns is through their visual representation, exploiting the human capacity for visual perception, which can detect patterns more easily. Under this approach, visualization has been used in data mining mostly for the descriptive analysis of the data (input) and for the presentation of the patterns (output), leaving this paradigm underused for the analysis of models. This document describes the doctoral thesis "Nuevos Esquemas de Visualizaciones para Mejorar la Comprensibilidad de Modelos de Data Mining" (New Visualization Schemes to Improve the Understandability of Data Mining Models). This research aims to contribute a visualization approach to support the understanding of data mining models, proposing for this purpose the metaphor of visually augmented models.
Abstract:
The increase in life expectancy in developed countries (more than 80 years in 2013) is producing considerable growth in the incidence and prevalence of disabling diseases which, although they may appear at early ages, are more frequent in old age or close to it. Neurodegenerative diseases impose a great functional handicap, since some of them are associated with involuntary movements of certain parts of the body, above all the limbs. Everyday tasks such as eating, dressing, writing, or interacting with a computer can become major challenges for the people who suffer from them. Early and accurate diagnosis is fundamental for prescribing the optimal therapy or treatment, bearing in mind that in many cases, unfortunately the majority, one can only act to mitigate the symptoms, not to cure them, at least for now. Even so, a correct early diagnosis gives the patient a better quality of life for much longer, so the effort is well worth it. Patients with Parkinson's disease and essential tremor account for a significant share of the clinical caseload in movement disorders, which prevent a normal life and produce physical disability and, no less important, social exclusion. The treatment pathways differ, which makes it critical to reach the correct diagnosis as early as possible. To date, medical professionals and experts have used qualitative scales to differentiate the pathology and its degree of severity; these scales are also used for clinical follow-up and to record the patient's history. This thesis proposes a set of methods for the analysis and identification/classification of the types of tremor associated with Parkinson's disease and essential tremor, employing artificial intelligence techniques based on intelligent classifiers: neural networks (MLP and LVQ) and support vector machines (SVM), building on the development and deployment of a system for the objective measurement and analysis of tremor: DIMETER. Besides being an effective tool to aid diagnosis, this system also provides the capabilities needed for rigorous and reliable monitoring of each patient's evolution.
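A minimal sketch of the SVM branch of the classification scheme described above (scaled features, RBF kernel). The three features and the simulated two-class data are hypothetical stand-ins for DIMETER measurements, not the thesis data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature vectors (e.g., tremor amplitude, dominant
# frequency, regularity) with labels 0 = essential tremor,
# 1 = parkinsonian tremor.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([1.0, 6.0, 0.4], 0.3, (50, 3)),   # essential-like
               rng.normal([1.5, 4.5, 0.7], 0.3, (50, 3))])  # parkinsonian-like
y = np.repeat([0, 1], 50)

# Standardize features, then fit an RBF-kernel support vector classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[1.4, 4.6, 0.65]]))  # expected to fall in class 1
```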