857 results for non-parametric technique


Relevance: 90.00%

Abstract:

The linearity assumption in structural dynamics analysis is a severe practical limitation. Furthermore, the investigation of mechanisms present in fighter aircraft, for instance aeroelastic nonlinearity, friction, or gaps in wing-to-payload mounting interfaces, requires a nonlinear analysis technique. Among the different approaches to this problem, Volterra theory is an interesting strategy, since it generalizes the linear convolution: it represents the response of a nonlinear system as a sum of linear and nonlinear components. This paper therefore uses the discrete-time version of the Volterra series, expanded with Kautz filters, to characterize the nonlinear dynamics of an F-16 aircraft. To illustrate the approach, a non-parametric model is identified and characterized using data obtained during a ground vibration test performed on the F-16 wing-to-payload mounting interfaces. Inputs of several amplitudes applied through two shakers are used to reveal softening nonlinearities in the acceleration data. The results show the capability of the Volterra series to provide insight into the nonlinear dynamics of the F-16 mounting interfaces. The main advantage of this approach is that it separates the linear and nonlinear contributions through the multiple convolutions with the Volterra kernels.
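
A minimal sketch of the identification step is given below: the first- and second-order Volterra kernels are expanded on an orthonormal basis of decaying impulse responses and the coefficients are fitted by least squares. A generic basis is used here as a stand-in for the paper's Kautz filters, and all function names and parameter values are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of Volterra-kernel identification with an orthonormal filter basis.
# A simple decaying basis stands in for the Kautz filters; values are illustrative.
import numpy as np

def basis_impulse_responses(n_taps, n_basis, pole=0.7):
    """Orthonormalized decaying basis functions (illustrative stand-in for Kautz filters)."""
    n = np.arange(n_taps)
    psi = np.array([(pole ** n) * np.cos(2 * np.pi * i * n / n_taps) for i in range(n_basis)])
    q, _ = np.linalg.qr(psi.T)   # orthonormalize the basis
    return q.T                   # shape (n_basis, n_taps)

def filtered_inputs(u, psi):
    """Convolve the input with each basis function: l_i(k) = (psi_i * u)(k)."""
    return np.array([np.convolve(u, p)[: len(u)] for p in psi])

def identify_volterra(u, y, n_taps=64, n_basis=4):
    """Least-squares fit of first- and second-order kernel coefficients."""
    psi = basis_impulse_responses(n_taps, n_basis)
    L = filtered_inputs(u, psi)                              # (n_basis, N)
    lin = L.T                                                # linear regressors l_i(k)
    quad = np.column_stack([L[i] * L[j]                      # quadratic regressors l_i(k) l_j(k)
                            for i in range(n_basis) for j in range(i, n_basis)])
    X = np.hstack([lin, quad])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_lin = lin @ coeffs[:n_basis]                           # linear contribution
    y_nl = quad @ coeffs[n_basis:]                           # second-order (nonlinear) contribution
    return coeffs, y_lin, y_nl
```

With a measured shaker input u and acceleration y, comparing the energies of y_lin and y_nl gives a rough view of how much of the response is nonlinear, which is the separation the abstract highlights.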

Relevance: 90.00%

Abstract:

To compare the effect of exposure to cyclosporine 0.05% on fibroblasts from primary and recurrent pterygium. Primary cultures of fibroblasts from primary and recurrent pterygium were grown until the third passage; one group was then exposed to cyclosporine 0.05% and the other remained unexposed (control group), in triplicate. After 3, 6, 12, and 17 days of exposure, viable cells were counted with a hemocytometer. The results were analyzed using a non-parametric analysis-of-variance model for repeated measures with three factors. Fibroblast proliferation was significantly reduced in both primary and recurrent pterygium cultures exposed to cyclosporine compared with unexposed cultures (P < 0.05). Among the cultures that received the drug, there was no significant difference in cell proliferation between primary and recurrent pterygium. Cyclosporine 0.05% is effective in inhibiting fibroblast proliferation in culture, in both primary and recurrent pterygium.
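
As a simplified stand-in for that analysis (the study itself used a three-factor non-parametric repeated-measures model), the sketch below compares invented triplicate viable-cell counts between exposed and control cultures with a non-parametric test; every number in it is illustrative.

```python
# Simplified non-parametric comparison of viable-cell counts (hemocytometer)
# between cyclosporine-exposed and control cultures; synthetic triplicates.
import numpy as np
from scipy import stats

days = [3, 6, 12, 17]
# rows: replicates (triplicate), columns: days -- invented counts (x10^4 cells)
control = np.array([[12, 20, 35, 50],
                    [11, 22, 33, 48],
                    [13, 19, 36, 52]])
exposed = np.array([[10, 14, 18, 22],
                    [ 9, 13, 19, 21],
                    [11, 15, 17, 23]])

# Pool replicates across days and compare the two groups non-parametrically
u, p = stats.mannwhitneyu(control.ravel(), exposed.ravel(), alternative="greater")
print(f"Mann-Whitney U = {u:.0f}, one-sided p = {p:.4f}")

# Exploratory per-day comparison (very small n per cell)
for d, ctrl, exp in zip(days, control.T, exposed.T):
    u_d, p_d = stats.mannwhitneyu(ctrl, exp, alternative="greater")
    print(f"day {d}: U = {u_d:.0f}, p = {p_d:.3f}")
```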

Relevance: 90.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%

Abstract:

The aim of this study was to compare time-motion indicators during judo matches performed by athletes from different age groups. The following age groups were analysed: Pre-Juvenile (13-14 years, n = 522), Juvenile (15-16 years, n = 353), Junior (19 years, n = 349) and Senior (>20 years, n = 587). The time-motion indicators included: Total Combat Time, Standing Combat Time, Displacement Without Contact, Gripping Time, Groundwork Combat Time and Pause Time. One-way analysis of variance (ANOVA) with the Tukey test, as well as the Kruskal-Wallis and Mann-Whitney tests (for non-parametric data), were conducted, with P < 0.05 as the significance level. All analysed groups obtained a median of 7 (first quartile 3, third quartile 12) sequences of combat/pause cycles. For Total Combat Time, Standing Combat Time and Gripping Time, Pre-Juvenile and Senior were significantly longer than Juvenile and Junior. For Displacement Without Contact, Junior was significantly longer than all other age groups. For Groundwork Combat Time, Senior was significantly longer than all other age groups and Pre-Juvenile was longer than Junior. These results can be used to improve physiological performance in intermittent practice, as well as technical-tactical training, during judo sessions.
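
A small sketch of the non-parametric part of that comparison follows: an omnibus Kruskal-Wallis test across the four age groups and Bonferroni-adjusted pairwise Mann-Whitney tests on one indicator. The group samples are randomly generated placeholders, not the study data.

```python
# Kruskal-Wallis across age groups plus pairwise Mann-Whitney follow-ups
# on a time-motion indicator (e.g. gripping time, in seconds); synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "Pre-Juvenile": rng.gamma(4.0, 5.0, 50),
    "Juvenile": rng.gamma(3.0, 5.0, 50),
    "Junior": rng.gamma(3.0, 5.0, 50),
    "Senior": rng.gamma(4.0, 5.0, 50),
}

# Omnibus Kruskal-Wallis test across the four age groups
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise Mann-Whitney U tests with a Bonferroni-adjusted alpha
names = list(groups)
alpha = 0.05 / (len(names) * (len(names) - 1) // 2)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        u, p_ij = stats.mannwhitneyu(groups[names[i]], groups[names[j]])
        flag = "significant" if p_ij < alpha else "n.s."
        print(f"{names[i]} vs {names[j]}: U = {u:.0f}, p = {p_ij:.4f} ({flag})")
```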

Relevance: 90.00%

Abstract:

Background: With the development of DNA hybridization microarray technologies, it is now possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Because of technical biases, normalization of the intensity levels is a prerequisite to further statistical analyses, so choosing a suitable normalization approach can be critical and deserves judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of these results, Support Vector Regression is favored for microarray normalization because of its robustness in estimating the normalization curve.
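
A minimal sketch of SVR-based intensity-dependent normalization is shown below: the M-vs-A trend of a synthetic two-channel array is fitted with epsilon-insensitive SVR and subtracted. The data and hyperparameter values are illustrative assumptions, not those used in the study.

```python
# Intensity-dependent normalization with Support Vector Regression:
# fit the M-vs-A trend and subtract it. Synthetic data, illustrative parameters.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_genes = 2000
A = rng.uniform(6, 14, n_genes)                    # average log-intensity
true_bias = 0.4 * np.sin(A / 2) + 0.05 * (A - 10)  # smooth dye/intensity bias
M = true_bias + rng.normal(0, 0.3, n_genes)        # log-ratio with noise
M[rng.choice(n_genes, 20, replace=False)] += rng.normal(0, 3, 20)  # a few outliers

# Fit the normalization curve M = f(A) with epsilon-insensitive SVR (robust to outliers)
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1)
svr.fit(A.reshape(-1, 1), M)

M_normalized = M - svr.predict(A.reshape(-1, 1))   # remove the estimated bias
print("median |M| before:", np.median(np.abs(M)))
print("median |M| after :", np.median(np.abs(M_normalized)))
```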

Relevance: 90.00%

Abstract:

BACKGROUND: Peri-implantitis is common in patients with dental implants. We performed a single-blinded longitudinal randomized study to assess the effects of mechanical debridement on the peri-implant microbiota in peri-implantitis lesions. MATERIALS AND METHODS: An expanded checkerboard DNA-DNA hybridization assay encompassing 79 different microorganisms was used to study bacterial counts before and during 6 months following mechanical treatment of peri-implantitis in 17 cases treated with curettes and 14 cases treated with an ultrasonic device. Statistics included non-parametric tests and GLM multivariate analysis, with p < 0.001 indicating significance and 80% power. RESULTS: At selected implant test sites, the most prevalent bacteria were Fusobacterium nucleatum spp., Staphylococcus spp., Aggregatibacter actinomycetemcomitans, Helicobacter pylori, and Tannerella forsythia. Thirty minutes after treatment with curettes, A. actinomycetemcomitans (serotype a), Lactobacillus acidophilus, Streptococcus anginosus, and Veillonella parvula were found at lower counts (p < 0.001). No such differences were found for implants treated with the ultrasonic device. Inconsistent changes occurred following the first week. No microbiological differences between baseline and 6-month samples were found for any species or between treatment methods. CONCLUSIONS: Both methods failed to eliminate or reduce bacterial counts in peri-implantitis. No group differences were found in the ability to reduce the microbiota.

Relevance: 90.00%

Abstract:

Carotid endarterectomy (CEA) reduces the risk of stroke in patients with symptomatic (>50%) and asymptomatic (>60%) carotid artery stenosis. Here we report the midterm results of a microsurgical non-patch technique and compare these findings to those in the literature.

Relevance: 90.00%

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that averaged predictions from a set of 10 input spaces pre-selected on the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input-space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
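
The committee logic lends itself to a short sketch. Below, generic scikit-learn regressors stand in for the paper's three modeling methods, and random feature mappings stand in for the GT-Power-derived input spaces; everything here is an illustrative assumption, not the authors' toolchain.

```python
# Two committee ideas over multiple transformed input spaces: averaging, and
# per-point selection of the space where the three model types agree most.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (300, 5))
y = np.exp(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=300)   # nonlinear "smoke-like" response
X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

# Stand-ins for physics-based input-space transformations
transforms = [lambda Z, W=rng.normal(size=(5, 5)): np.tanh(Z @ W) for _ in range(10)]

def fit_models(Z, y):
    models = [LinearRegression(), KNeighborsRegressor(5), MLPRegressor(max_iter=2000, random_state=0)]
    return [m.fit(Z, y) for m in models]

committees = [fit_models(t(X_train), y_train) for t in transforms]

# Simple Committee: average the k-NN predictions over all input spaces
simple = np.mean([c[1].predict(t(X_test)) for c, t in zip(committees, transforms)], axis=0)

# Minimum Variance Committee: per test point, use the input space where the
# three modeling methods disagree the least, then average their predictions
preds = np.array([[m.predict(t(X_test)) for m in c] for c, t in zip(committees, transforms)])
best_space = preds.var(axis=1).argmin(axis=0)
min_var = preds.mean(axis=1)[best_space, np.arange(len(y_test))]

for name, p in [("Simple Committee", simple), ("Minimum Variance Committee", min_var)]:
    print(name, "RMSE:", np.sqrt(np.mean((p - y_test) ** 2)))
```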

Relevance: 90.00%

Abstract:

BACKGROUND/AIM: Parallel investigation, in a matched case-control study, of the association of different first-trimester markers with the risk of subsequent pre-eclampsia (PE). METHOD: The levels of different first-trimester serum markers and fetal nuchal translucency thickness were compared between 52 cases of PE and 104 control women by non-parametric two-group comparisons and by calculating matched odds ratios. RESULTS: In univariable analysis, increased concentrations of inhibin A and activin A were associated with subsequent PE (p < 0.02). Multivariable conditional logistic regression models revealed an association between increased risk of PE and increased inhibin A and nuchal translucency thickness, as well as reduced pregnancy-associated plasma protein A (PAPP-A) and placental lactogen. However, these associations varied with the gestational age at sample collection. For blood samples taken in pregnancy weeks 12 and 13 only, increased levels of activin A, inhibin A and nuchal translucency thickness, and lower levels of placental growth factor and PAPP-A, were associated with an increased risk of PE. CONCLUSIONS: Members of the inhibin family and, to some extent, PAPP-A and placental growth factor are superior to other serum markers, and their predictive value depends on the gestational age at blood sampling. The availability of a single, early-pregnancy 'miracle' serum marker for PE risk assessment seems unlikely in the near future.
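
To make the matched odds ratio concrete, here is a simplified worked sketch for 1:1 matched pairs with a dichotomized marker (the study itself used 1:2 matching and conditional logistic regression); all counts and the cut-off are invented.

```python
# Matched odds ratio from discordant pairs in a 1:1 matched case-control design.
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 52
# Hypothetical dichotomization: marker (e.g. inhibin A) above a cut-off or not
case_high = rng.random(n_pairs) < 0.45         # case above cut-off?
control_high = rng.random(n_pairs) < 0.25      # matched control above cut-off?

# Only pairs discordant for exposure carry information in matched data
b = np.sum(case_high & ~control_high)          # case exposed, control not
c = np.sum(~case_high & control_high)          # control exposed, case not

matched_or = b / c
se_log_or = np.sqrt(1 / b + 1 / c)
ci = np.exp(np.log(matched_or) + np.array([-1.96, 1.96]) * se_log_or)
print(f"matched OR = {matched_or:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```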

Relevance: 90.00%

Abstract:

PRINCIPLES: Cardiogoniometry is a non-invasive technique for quantitative three-dimensional vectorial analysis of myocardial depolarization and repolarization. We describe a method of surface electrophysiological cardiac assessment using cardiogoniometry performed at rest to identify variables helpful in detecting coronary artery disease. METHODS: Cardiogoniometry was performed in 793 patients prior to diagnostic coronary angiography. Using 13 variables in men and 10 in women, values from 461 patients were retrospectively analyzed to obtain a diagnostic score identifying patients with coronary artery disease. This score was then prospectively validated in 332 patients. RESULTS: Cardiogoniometry showed a prospective diagnostic sensitivity of 64% and a specificity of 82%. ECG diagnostic sensitivity was significantly lower, at 53%, with a similar specificity of 75%. CONCLUSIONS: Cardiogoniometry is a new, non-invasive, quantitative electrodiagnostic technique that is helpful in identifying patients with coronary artery disease. It can easily be performed at rest and delivers an accurate, automated diagnostic score.
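
The validation step reduces to computing sensitivity and specificity of a thresholded score against the angiographic reference standard, as in the sketch below; the score, threshold and disease labels are synthetic placeholders.

```python
# Sensitivity and specificity of a threshold-based diagnostic score; synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 332
disease = rng.random(n) < 0.5                                    # angiographic CAD (reference)
score = rng.normal(loc=np.where(disease, 1.0, 0.0), scale=1.2)   # diagnostic score
positive = score > 0.5                                           # hypothetical decision threshold

tp = np.sum(positive & disease)
tn = np.sum(~positive & ~disease)
fp = np.sum(positive & ~disease)
fn = np.sum(~positive & disease)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```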

Relevance: 90.00%

Abstract:

BACKGROUND: Cerebral revascularization may be indicated either for blood flow preservation or for flow augmentation, often in clinical situations where neither endovascular nor standard surgical intervention can be performed. Cerebral revascularization can be performed using either a temporary occlusive or a non-occlusive technique, each with its specific range of feasibility, which is why non-occlusive revascularization techniques have been developed. To further reduce the risks for patients, less time-consuming, sutureless techniques such as laser tissue soldering are currently being investigated. METHOD: In the present study, a new technique for side-to-side anastomosis was developed. Using a "sandwich technique", two vessels are kept in close contact during laser soldering. Thoraco-abdominal aortas from 24 different rabbits were analyzed for laser-irradiation-induced tensile strength. Two different irradiation modes (continuous and pulsed) were used, and the results were compared to conventional, non-contact laser soldering. Histology was performed using HE and Masson's trichrome staining. FINDINGS: The achieved tensile strengths were significantly higher using the close-contact "sandwich technique" than with the conventional adaptation technique. Furthermore, tensile strength was higher in the continuously irradiated specimens than in the specimens undergoing pulsed laser irradiation. The histology showed similar denaturation areas in both groups. The addition of a collagen membrane between the vessel components reduced the tensile strength. CONCLUSION: These first results prove the importance of close and tight contact during the laser soldering procedure, enabling the development of a "sandwich laser irradiation device" for in vivo application in the rabbit.

Relevance: 90.00%

Abstract:

INTRODUCTION: The simple bedside method for sampling undiluted distal pulmonary edema fluid through a normal suction catheter (s-Cath) has been experimentally and clinically validated. However, there are no data comparing non-bronchoscopic bronchoalveolar lavage (mini-BAL) and s-Cath for assessing lung inflammation in acute hypoxaemic respiratory failure. We conducted a prospective study in two groups of patients, those with acute lung injury (ALI)/acute respiratory distress syndrome (ARDS) and those with acute cardiogenic lung edema (ACLE), to investigate the clinical feasibility of the two techniques and to evaluate inflammation in both groups using undiluted samples obtained by s-Cath. To test the interchangeability of the two methods in the same patient for studying the inflammatory response, we further compared mini-BAL and s-Cath for agreement of protein concentration and percentage of polymorphonuclear cells (PMNs). METHODS: Mini-BAL and s-Cath sampling was assessed in 30 mechanically ventilated patients, 21 with ALI/ARDS and 9 with ACLE. To analyse agreement between the two sampling techniques, we considered only simultaneously collected mini-BAL and s-Cath paired samples. The protein concentration and PMN count comparisons were performed on undiluted samples. Bland-Altman plots were used for assessing the mean bias and the limits of agreement between the two sampling techniques; comparison between groups was performed using the non-parametric Mann-Whitney U test; continuous variables were compared using Student's t-test, the Wilcoxon signed-rank test, analysis of variance or the Student-Newman-Keuls test; and categorical variables were compared using chi-square analysis or Fisher's exact test. RESULTS: Using protein content and PMN percentage as parameters, we identified substantial variation between the two sampling techniques. When the protein concentration in the lung was high, s-Cath was the more sensitive method; by contrast, as inflammation increased, both methods provided similar estimates of neutrophil percentages in the lung. The patients with ACLE showed an increased PMN count, suggesting that hydrostatic lung edema can be associated with a concomitant inflammatory process. CONCLUSIONS: There are significant differences between the s-Cath and mini-BAL sampling techniques, indicating that these procedures cannot be used interchangeably for studying the lung inflammatory response in patients with acute hypoxaemic lung injury.
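
A minimal Bland-Altman sketch for such paired measurements is given below; the paired protein concentrations are synthetic stand-ins for the mini-BAL and s-Cath samples.

```python
# Bland-Altman analysis of paired measurements: mean bias and limits of agreement.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
protein_scath = rng.gamma(4.0, 1.0, 30)                        # mg/mL, synthetic
protein_minibal = 0.8 * protein_scath + rng.normal(0, 0.6, 30)

mean_pair = (protein_scath + protein_minibal) / 2
diff_pair = protein_scath - protein_minibal
bias = diff_pair.mean()
loa = bias + np.array([-1.96, 1.96]) * diff_pair.std(ddof=1)   # limits of agreement

plt.scatter(mean_pair, diff_pair)
plt.axhline(bias, linestyle="--", label=f"bias = {bias:.2f}")
for l in loa:
    plt.axhline(l, linestyle=":", label=f"LoA = {l:.2f}")
plt.xlabel("Mean of the two techniques (mg/mL)")
plt.ylabel("Difference (s-Cath - mini-BAL)")
plt.legend()
plt.show()
```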

Relevance: 90.00%

Abstract:

BACKGROUND: Peri-implantitis is a frequent finding in patients with dental implants. The present study compared two non-surgical mechanical debridement methods for peri-implantitis. MATERIAL AND METHODS: Thirty-seven subjects (mean age 61.5 years; SD ±12.4), each with one implant demonstrating peri-implantitis, were randomized to treatment with either titanium hand instruments or an ultrasonic device. Data were obtained before treatment and at 1, 3, and 6 months. Parametric and non-parametric statistics were used. RESULTS: Thirty-one subjects completed the study. The mean bone loss at implants in both groups was 1.5 mm (SD ±1.2 mm). No group differences in plaque or gingival indices were found at any time point. Baseline and 6-month mean probing pocket depths (PPD) at implants were 5.1 and 4.9 mm (p = 0.30) in both groups. Plaque scores at treated implants decreased from 73% to 53% (p < 0.01). Bleeding scores also decreased (p < 0.01), with no group differences. No differences in the total bacterial counts were found over time. Higher total bacterial counts were found immediately after treatment (p < 0.01) and at 1 week for ultrasonic-treated implants (p < 0.05). CONCLUSIONS: No group differences were found in the treatment outcomes. While plaque and bleeding scores improved, no effects on PPD were identified.

Relevance: 90.00%

Abstract:

OBJECTIVES: To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. METHODS: Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT datasets (pre- and post-treatment) obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for the analyses. RESULTS: There was no difference among operators or between time points in the accuracy of each superimposition technique (p > 0.05). The AC + F technique was the most accurate (D < 0.17 mm), as expected, followed by the AC and BZ superimpositions, which presented similar levels of accuracy (D < 0.5 mm). 3P and 1Z were the least accurate superimpositions (D > 0.79 mm). Although precision and reproducibility did not differ within each superimposition technique (p > 0.05), the detected structural changes differed significantly between different techniques (p < 0.05). Bland-Altman difference plots showed that the BZ superimposition was comparable to AC, though it presented slightly higher random error. CONCLUSIONS: Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.
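
The registration at the heart of these techniques can be sketched with the Kabsch algorithm: a best-fit rotation and translation between two landmark sets, followed by per-landmark distances analogous to D. The landmark coordinates below are synthetic; a real use would take reference points from, for example, the anterior cranial base or the zygomatic arches.

```python
# Rigid (rotation + translation) superimposition of two 3D landmark sets (Kabsch).
import numpy as np

def rigid_superimpose(moving, fixed):
    """Best-fit rotation R and translation t mapping moving landmarks onto fixed ones."""
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - mu_m).T @ (fixed - mu_f)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_f - R @ mu_m
    return R, t

rng = np.random.default_rng(6)
fixed = rng.normal(size=(10, 3)) * 20               # e.g. pre-treatment landmarks (mm)
angle = np.deg2rad(7)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
moving = fixed @ R_true.T + np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, (10, 3))

R, t = rigid_superimpose(moving, fixed)
registered = moving @ R.T + t
D = np.linalg.norm(registered - fixed, axis=1)      # per-landmark distance after superimposition
print("mean D (mm):", D.mean())
```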

Relevance: 90.00%

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the integration of inputs from other neurons and determines which neurons receive a neuron's output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge.

Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty, and probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data, and the continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc.

We first propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. A simulation algorithm is then used to build virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated.

Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixtures-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers.

Next, we address the problem of including directional data in Bayesian networks. These data have special properties that rule out the use of classical statistics, so distributions and statistics such as the univariate von Mises and the multivariate von Mises-Fisher distributions should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length, artificial examples are used to illustrate the behavior of the classifiers, and the proposed classifiers are empirically evaluated on real datasets.

We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features, and a web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature, and we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. A methodology for building a model that captures the opinions of all the experts is then presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network representing the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet modeling the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors among the experts when classifying the interneurons in the experiment, and a set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology.

Finally, we study a classification problem where the true class label of the training instances is not known; instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors that represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
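
Since the directional-data contribution of this dissertation centers on von Mises class-conditional densities, here is a hedged sketch of a naive Bayes classifier with one von Mises-distributed directional feature and one Gaussian feature; the data, estimators and parameter values are illustrative assumptions, not the dissertation's implementation.

```python
# Naive Bayes with a von Mises directional feature (e.g. axon direction, radians)
# and a Gaussian linear feature (e.g. dendritic length); synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def sample_class(n, mu_dir, kappa, mu_lin, sd_lin):
    theta = stats.vonmises.rvs(kappa, loc=mu_dir, size=n, random_state=rng)
    x = rng.normal(mu_lin, sd_lin, n)
    return np.column_stack([theta, x])

# Two hypothetical neuron "types" differing in preferred direction and length
X0, X1 = sample_class(100, 0.5, 4.0, 50, 10), sample_class(100, 2.5, 4.0, 80, 15)
X = np.vstack([X0, X1]); y = np.repeat([0, 1], 100)

def fit_von_mises(theta):
    """Estimate the von Mises mean direction and concentration from data."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)
    kappa = R * (2 - R**2) / (1 - R**2)      # common approximation for kappa
    return mu, kappa

params = {}
for c in (0, 1):
    theta, x = X[y == c, 0], X[y == c, 1]
    params[c] = (fit_von_mises(theta), (x.mean(), x.std(ddof=1)), np.mean(y == c))

def predict(sample):
    """Pick the class maximizing log prior + log von Mises + log Gaussian likelihood."""
    theta, x = sample
    scores = {}
    for c, ((mu, kappa), (m, s), prior) in params.items():
        scores[c] = (np.log(prior)
                     + stats.vonmises.logpdf(theta, kappa, loc=mu)
                     + stats.norm.logpdf(x, m, s))
    return max(scores, key=scores.get)

accuracy = np.mean([predict(s) == c for s, c in zip(X, y)])
print("training accuracy:", accuracy)
```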