890 results for Probability Density-function


Relevance:

30.00%

Publisher:

Abstract:

We derive explicit lower and upper bounds for the probability generating functional of a stationary locally stable Gibbs point process, which can be applied to summary statistics such as the F function. For pairwise interaction processes we obtain further estimates for the G and K functions, the intensity, and higher-order correlation functions. The proof of the main result is based on Stein's method for Poisson point process approximation.
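For readers unfamiliar with the notation, the probability generating functional and its link to the empty space function F take the standard forms below (textbook point process definitions, not reproduced from the paper):

```latex
% Probability generating functional (PGFL) of a point process X on R^d,
% for measurable v : R^d -> [0,1] with v = 1 outside a bounded set:
\[
  \Psi(v) = \mathbb{E}\Bigl[\prod_{x \in X} v(x)\Bigr].
\]
% The empty space function F is the special case v = 1 - \mathbf{1}_{B(0,r)}:
\[
  F(r) = \mathbb{P}\bigl(X \cap B(0,r) \neq \emptyset\bigr)
       = 1 - \Psi\bigl(1 - \mathbf{1}_{B(0,r)}\bigr),
\]
% so two-sided bounds on the PGFL translate directly into bounds on F.
```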

Relevance:

30.00%

Publisher:

Abstract:

Epidemiological, clinical, and experimental evidence has accumulated over recent decades suggesting that high-density lipoproteins (HDLs) may protect against atherosclerosis and its clinical consequences. However, more than 55 years after the first description of the link between HDL and heart attacks, many facets of the biochemistry, function, and clinical significance of HDL remain enigmatic. This applies particularly to the completely unexpected results that became available from recent clinical trials of nicotinic acid and of inhibitors of cholesteryl ester transfer protein (CETP). The concept that raising HDL cholesterol by pharmacological means would decrease the risk of vascular disease has therefore been challenged.

Relevance:

30.00%

Publisher:

Abstract:

Soft X-ray lasing across a Ni-like plasma gain medium requires optimum electron temperature and density for attaining the Ni-like ion stage and for population inversion in the 3d⁹4d¹ (J = 0) → 3d⁹4p¹ (J = 1) laser transition. Various scaling laws, expressed as functions of the operating parameters, were compared with respect to their predictions for optimum temperatures and densities. It is shown that the widely adopted local thermodynamic equilibrium (LTE) model underestimates the optimum plasma-lasing conditions. Non-LTE models, on the other hand, especially when complemented with dielectronic recombination, provided accurate predictions of the optimum plasma-lasing conditions. It is further shown that, for targets with Z equal to or greater than that of the rare-earth elements (e.g. Sm), the optimum electron density for plasma lasing is not accessible for pump pulses at λ(1ω) = 1 μm. This observation explains a fundamental difficulty in saturating plasma-based X-ray lasers at wavelengths below 6.8 nm unless 2ω pumping is used.
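The density ceiling mentioned for 1 μm pumping follows from the critical density of the pump light; as a reminder (standard laser-plasma scaling, not a formula quoted from the paper):

```latex
% Critical electron density for pump light of vacuum wavelength \lambda:
\[
  n_c \approx \frac{1.1 \times 10^{21}}{\bigl(\lambda\,[\mu\mathrm{m}]\bigr)^{2}}\ \mathrm{cm}^{-3},
\]
% so a 1\omega (\lambda = 1 \mu m) pump cannot deposit energy beyond
% n_e ~ 10^{21} cm^{-3}, whereas frequency-doubled 2\omega light raises
% the accessible density fourfold.
```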

Relevance:

30.00%

Publisher:

Abstract:

The upconversion quantum yield (UCQY) is one of the most significant parameters of upconverter materials. A high UCQY is essential for successful integration of upconversion in many applications, such as the harvesting of solar radiation. However, little is known about which doping level of the rare-earth ions yields the highest UCQY in the different host lattices, and what the underlying causes are. Here, we investigate which Er3+ doping yields the highest UCQY in the host lattices β-NaYF4 and Gd2O2S under 4I15/2 → 4I13/2 excitation. We show for both host lattices that the optimum Er3+ doping is not fixed but actually decreases as the irradiance of the excitation increases. To find the optimum Er3+ doping for a given irradiance, we determined the peak position of the internal UCQY as a function of the average Er−Er distance. For this purpose, we fitted experimental data in which the average Er−Er distance was calculated from the Er3+ doping of the upconverter samples and the lattice parameters of the host materials. We observe optimum average Er−Er distances for the host lattices β-NaYF4 and Gd2O2S that differ by <14% at the same irradiance levels, whereas the optimum Er3+ doping levels are around 2× higher for β-NaYF4 than for Gd2O2S. Estimates obtained by extrapolation to higher irradiances indicate that the optimum average Er−Er distance converges to values around 0.88 and 0.83 nm for β-NaYF4 and Gd2O2S, respectively. Our findings point to a fundamental relationship, and focusing on the average distance between the active rare-earth ions might be a very efficient way to optimize the doping of rare-earth ions with regard to the highest achievable UCQY.
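As an illustration of how an average Er−Er distance can be obtained from a doping level, the sketch below converts a doping fraction into an Er³⁺ number density and applies the classical mean nearest-neighbour distance for randomly placed points, ⟨r⟩ ≈ 0.554·n^(−1/3). The site densities are rough illustrative values rather than numbers from the paper, whose exact distance definition may differ:

```python
# Approximate cation (Y/Gd) site densities in nm^-3 -- illustrative values,
# not taken from the paper; derive them from the actual lattice parameters.
SITE_DENSITY = {
    "beta-NaYF4": 13.9,   # assumed Y-site density, nm^-3
    "Gd2O2S": 23.4,       # assumed Gd-site density, nm^-3
}

def mean_er_er_distance(host: str, doping_fraction: float) -> float:
    """Mean nearest-neighbour Er-Er distance (nm) for random site occupation.

    Uses Chandrasekhar's result for a homogeneous random point pattern:
    <r> = 0.55396 * n^(-1/3), with n the Er number density.
    """
    n_er = doping_fraction * SITE_DENSITY[host]  # Er ions per nm^3
    return 0.55396 * n_er ** (-1.0 / 3.0)

if __name__ == "__main__":
    for host in SITE_DENSITY:
        for x in (0.05, 0.10, 0.25):
            print(f"{host:>11}, {x:4.0%} Er3+: <r> ~ {mean_er_er_distance(host, x):.2f} nm")
```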

Relevance:

30.00%

Publisher:

Abstract:

Since European settlement, there has been a dramatic increase in the density, cover and distribution of woody plants in former grassland and open woodland. There is a widespread belief that shrub encroachment is synonymous with declines in ecosystem functions, and often it is associated with landscape degradation or desertification. Indeed, this decline in ecosystem functioning is considered to be driven largely by the presence of the shrubs themselves. This prevailing paradigm has been the basis for an extensive program of shrub removal, based on the view that it is necessary to reinstate the original open woodland or grassland structure from which shrublands are thought to have been derived. We review existing scientific evidence, particularly focussed on eastern Australia, to question the notion that shrub encroachment leads to declines in ecosystem functions. We then summarise this scientific evidence into two conceptual models aimed at optimising landscape management to maximise the services provided by shrub-encroached areas. The first model seeks to reconcile the apparent conflicts between the patch- and landscape-level effects of shrubs. The second model identifies the ecosystem services derived from different stages of shrub encroachment. We also examined six ecosystem services provided by shrublands (biodiversity, soil C, hydrology, nutrient provision, grass growth and soil fertility) by using published and unpublished data. We demonstrated the following: (1) shrub effects on ecosystems are strongly scale-, species- and environment-dependent and, therefore, no standardised management should be applied to every case; (2) overgrazing dampens the generally positive effect of shrubs, leading to the misleading relationship between encroachment and degradation; (3) woody encroachment per se does not hinder any of the functions or services described above, rather it enhances many of them; (4) no single shrub-encroachment state (including grasslands without shrubs) will maximise all services; rather, the provision of ecosystem goods and services by shrublands requires a mixture of different states; and (5) there has been little rigorous assessment of the long-term effectiveness of removal and no evidence that this improves land condition in most cases. Our review provides the basis for an improved, scientifically based understanding and management of shrublands, so as to balance the competing goals of providing functional habitats, maintaining soil processes and sustaining pastoral livelihoods.

Relevance:

30.00%

Publisher:

Abstract:

Chronic aerobic exercise has been shown to increase exercise efficiency, thus allowing less energy expenditure for a similar amount of work. The extent to which skeletal muscle mitochondria play a role in this is not fully understood, particularly in an elderly population. The purpose of this study was to determine the relationship of exercise efficiency with mitochondrial content and function. We hypothesized that the greater the mitochondrial content and/or function, the greater the efficiencies would be. Thirty-eight sedentary (S, n = 23, 10F/13M) or athletic (A, n = 15, 6F/9M) older adults (66.8 ± 0.8 years) participated in this cross-sectional study. V̇O2peak was measured with a cycle ergometer graded exercise protocol (GXT). Gross efficiency (GE, %) and net efficiency (NE, %) were estimated during a 1-h submaximal test (55% V̇O2peak). Delta efficiency (DE, %) was calculated from the GXT. Mitochondrial function was measured as ATPmax (mmol/L/s) during a PCr recovery protocol with ³¹P-MR spectroscopy. Muscle biopsies were acquired for determination of mitochondrial volume density (MitoVD, %). Efficiencies were 17% (GE), 14% (NE), and 16% (DE) higher in A than in S. MitoVD was 29% higher and ATPmax was 24% higher in A than in S. All efficiencies correlated positively with both ATPmax and MitoVD. Chronically trained older individuals had greater mitochondrial content and function, as well as greater exercise efficiencies. GE, NE, and DE were related to both mitochondrial content and function. This suggests a possible role of mitochondria in improving exercise efficiency in elderly athletic populations, allowing conservation of energy at moderate workloads.
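For context, the three efficiency measures are conventionally defined as follows (standard exercise-physiology definitions; the study may use minor variants):

```latex
\[
  \mathrm{GE} = \frac{\text{mechanical work rate}}{\text{total energy expenditure}} \times 100\%,
  \qquad
  \mathrm{NE} = \frac{\text{mechanical work rate}}{\text{total energy expenditure} - \text{resting expenditure}} \times 100\%,
\]
\[
  \mathrm{DE} = \frac{\Delta\,\text{work rate}}{\Delta\,\text{energy expenditure}} \times 100\%
  \quad \text{(slope across the graded-exercise stages).}
\]
```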

Relevance:

30.00%

Publisher:

Abstract:

Surgical robots have been demonstrated ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage to the facial nerve, either through mechanical interaction or through temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of the risk posed to the facial nerve for a given set of constant process parameters across different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time-dependent heat production function at the tip of the drill bit. Using a time-dependent moving point source Green's function, the heat equation can be solved at a given point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data were collected during minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized and the temperature elevations were measured with a thermocouple inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative μCT data were used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, temperature elevation profiles and cumulative equivalent minutes suggest that the thermal load of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high-density mastoid bones. Optimized drilling parameters need to be evaluated, and the model could be used for future risk evaluation.
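The two model ingredients mentioned above have standard closed forms; as a sketch (general heat-conduction and thermal-dose relations, not the paper's exact parameterization):

```latex
% Temperature rise at position r from a moving point heat source q(\tau)
% in a homogeneous medium (density \rho, heat capacity c, diffusivity \alpha):
\[
  \Delta T(\mathbf{r}, t) = \int_0^{t}
    \frac{q(\tau)}{\rho c\,\bigl(4\pi\alpha (t-\tau)\bigr)^{3/2}}
    \exp\!\Bigl(-\frac{\lVert \mathbf{r} - \mathbf{r}_s(\tau)\rVert^2}{4\alpha (t-\tau)}\Bigr)\, d\tau .
\]
% Thermal dose as cumulative equivalent minutes at 43 °C:
\[
  \mathrm{CEM}_{43} = \sum_i \Delta t_i \, R^{\,43 - T_i},
  \qquad R = 0.25 \ (T_i < 43\,^{\circ}\mathrm{C}),\quad R = 0.5 \ (T_i \ge 43\,^{\circ}\mathrm{C}).
\]
```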

Relevance:

30.00%

Publisher:

Abstract:

Although negative density dependence (NDD) can facilitate tree species coexistence in forests, the underlying mechanisms can differ, and rarely are the dynamics of seedlings and saplings studied together. Herein we present and discuss a novel mechanism based on our investigation of NDD predictions for the large, grove-forming ectomycorrhizal mast-fruiting tree, Microberlinia bisulcata (Caesalpiniaceae), in an 82.5-ha plot at Korup, Cameroon. We tested whether juvenile density, size, growth and survival decrease with increasing conspecific adult basal area (BA) for 3245 'new' seedlings and 540 'old' seedlings (<75 cm tall) during an approximately 4-year study period (2008–2012) and for 234 'saplings' (≥75 cm tall) during an approximately 6-year study period (2008–2014). We found that the respective densities of new seedlings, old seedlings and saplings were positively related, unrelated and negatively related to increasing BA. Maximum leaf numbers and heights of old seedlings were negatively correlated with increasing BA, as were sapling heights and stem diameters. Whereas survivorship of new seedlings decreased by more than one-half with increasing BA over its range in 2010–2012, that of old seedlings decreased by almost two-thirds, but only in 2008–2010, and was generally unrelated to conspecific seedling density. In 2010–2012, relative growth rates of new seedlings (in height and in leaf numbers) decreased with increasing BA as well as with increasing seedling density, whereas old seedlings' growth was unrelated to either conspecific density or BA. Saplings of below-average height had reduced survivorship with increasing BA (probability decreasing from approx. 0.4 to 0.05 over the BA range tested), but only sapling growth in terms of leaf numbers decreased with increasing BA. These static and dynamic results indicate that NDD is operating within this system, possibly stabilizing the M. bisulcata population. However, these NDD patterns are unlikely to be caused by symmetric competition or by consumers. Instead, an alternative mechanism for conspecific adult–juvenile negative feedback is proposed, one which involves the interaction between tree phenology and ectomycorrhizal linkages.

Relevance:

30.00%

Publisher:

Abstract:

Myxococcus xanthus is a Gram-negative soil bacterium that undergoes multicellular development when high-density cells are starved on a solid surface. Expression of the 4445 gene, predicted to encode a periplasmic protein, commences 1.5 h after the initiation of development and requires both starvation and high-density conditions. Addition of crude or boiled supernatant from starving high-density cells restored 4445 expression to starving low-density cells. Addition of L-threonine or L-isoleucine to starving low-density cells also restored 4445 expression, indicating that the high-density signaling activity present in the supernatant might be composed of extracellular amino acids or small peptides. To investigate the circuitry integrating these starvation and high-density signals, the cis- and trans-acting elements controlling 4445 expression were identified. The 4445 transcription start site was determined by primer extension analysis to be 58 bp upstream of the predicted translation start site. The promoter region contained a consensus sequence characteristic of extracytoplasmic function (ECF) sigma factor-dependent promoters, suggesting that 4445 expression might be regulated by an ECF sigma factor-dependent pathway; such pathways are known to respond to envelope stresses. The small size of the minimum regulatory region, identified by 5′-end deletion analysis as extending only 66 bp upstream of the transcription start site, suggests that RNA polymerase could be the sole direct regulator of 4445 expression. To identify trans-acting negative regulators of 4445 expression, a strain containing a 4445-lacZ fusion was mutagenized using the Himar1-tet transposon. The four transposon insertions characterized mapped to an operon encoding a putative ECF sigma factor, ecfA; an anti-sigma factor, reaA; and a negative regulator, reaB. The reaA and reaB mutants expressed 4445 during growth and development at levels almost 100-fold higher than wild type, indicating that these genes encode negative regulators. The ecfA mutant expressed 4445-lacZ at basal levels, indicating that ecfA is a positive regulator. High Mg2+ concentrations over-stimulated the ecfA pathway, possibly due to depletion of exopolysaccharides and assembled type IV pili. These data indicate that the ecfA operon encodes a new regulatory stress pathway that integrates and transduces starvation and cell-density cues during early development and is also responsive to cell-surface alterations.

Relevance:

30.00%

Publisher:

Abstract:

The large discrepancy between field and laboratory measurements of mineral reaction rates is a long-standing problem in earth sciences, often attributed to factors extrinsic to the mineral itself. Nevertheless, differences in reaction rate are also observed within laboratory measurements, raising the possibility of intrinsic variations as well. Critical insight is available from analysis of the relationship between the reaction rate and its distribution over the mineral surface. This analysis recognizes the fundamental variance of the rate. The resulting anisotropic rate distributions are completely obscured by the common practice of surface area normalization. In a simple experiment using a single crystal and its polycrystalline counterpart, we demonstrate the sensitivity of dissolution rate to grain size, a result that undermines the use of "classical" rate constants. Comparison of selected published crystal surface step retreat velocities (Jordan and Rammensee, 1998) as well as large single crystal dissolution data (Busenberg and Plummer, 1986) provides further evidence of this fundamental variability. Our key finding highlights the unsubstantiated use of a single-valued "mean" rate or rate constant as a function of environmental conditions. Reactivity predictions and long-term reservoir stability calculations based on laboratory measurements are thus not directly applicable to natural settings without a probabilistic approach. Such a probabilistic approach must incorporate both the variation of surface energy as a general range (intrinsic variation) and the constraints on this variation owing to the heterogeneity of complex material (e.g., density of domain borders). We suggest the introduction of surface energy spectra (or the resulting rate spectra) containing information about the probability of existing rate ranges and the critical modes of surface energy.
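The "classical" normalization being questioned is, schematically (standard geochemical convention, with symbols defined here rather than taken from the paper):

```latex
% Conventional surface-area-normalized dissolution rate:
\[
  r = \frac{1}{A}\,\frac{dn}{dt} \qquad [\mathrm{mol\ m^{-2}\ s^{-1}}],
\]
% which collapses the spatially variable rate over the crystal surface
% into a single value; the proposed rate spectra instead retain the
% probability distribution of local rates.
```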

Relevance:

30.00%

Publisher:

Abstract:

The high-resolution marine isotope climate record indicates pronounced global cooling during the Langhian (16–13.8 Ma), beginning with the warm middle Miocene climatic optimum and ending with significant Antarctic ice sheet expansion and the transition to "icehouse" conditions. Terrestrial paleoclimate data from this interval are sparse and sometimes conflicting. In particular, there are gaps in the terrestrial record in the Pacific Northwest during the late Langhian and early Serravallian, between about 14.5 and 12.5 Ma. New terrestrial paleoclimate data from this time and region could reconcile these conflicting records. Paleosols are particularly useful for reconstructing paleoenvironment because the rate and style of pedogenesis is primarily a function of surface environmental conditions; however, complete and well-preserved paleosols are uncommon. Most soils form in erosive environments that are not preserved, or in environments such as floodplains that accumulate in small increments; the resulting cumulic soils are usually thin, weakly developed, and subject to diagenetic overprinting from subsequent soils. The paleosol at Cricket Flat in northeastern Oregon is an unusually complete and well-preserved example from a middle Miocene volcanic sequence in the Powder River Volcanic Field. An olivine basalt flow buried the paleosol at approximately 13.8 ± 0.6 Ma, based on three 40Ar/39Ar dates on the basalt. We described the Cricket Flat paleosol and used its physical and chemical profile and micromorphology to assess pedogenesis. The Cricket Flat paleosol is an Ultisol-like paleosol, chemically consistent with a high degree of weathering. Temperature and rainfall proxies suggest that Cricket Flat received 1120 ± 180 mm of precipitation per year and experienced a mean annual temperature of 14.5 ± 2.1 °C during the formation of the paleosol, significantly warmer and wetter than today. This suggests slower cooling after the middle Miocene climatic optimum than is seen in the existing paleosol record.

Relevance:

30.00%

Publisher:

Abstract:

The wet bulk density is one of the most important physical and geological properties of marine sediments. The density is connected directly with the sedimentation history and several other sediment properties, and knowledge of the fine-scale density-depth structure is the basis for many model calculations in both sedimentological and palaeoclimatic research. A density measurement system was designed and built at the Alfred Wegener Institute in Bremerhaven for measuring the wet bulk density of sediment cores with high resolution in a non-destructive way. The density is determined by measuring the absorption of gamma rays in the sediment, a principle that has been used since the 1950s in materials research and in the geosciences. In the present case, 137Cs is used as the radioactive source and the intensity is measured by a detector system (scintillator and photomultiplier). Density values are obtainable in both longitudinal core sections and planar cross-sections (the latter as a function of the axial rotation angle). Special studies of inhomogeneity can be carried out with core rotation; this option also makes the detection of ice-rafted debris (IRD) possible. The processes that run the density measurement system are computer controlled. Besides the absorption measurement, the core diameter at every measurement point is determined with a potentiometric system. The measured values are stored on a personal computer. Before starting routine measurements on the sediment cores, a few experiments concerning the statistical aspects of the gamma-ray signal and its accuracy were carried out; these yielded, among other things, the optimum operational parameters. A high spatial resolution in the mm range is possible with the 4-mm-thin gamma-ray beam. Within five seconds the wet bulk density can be determined with an absolute accuracy of 1%. A comparison between data measured with the new system and conventional measurements on core samples after core splitting shows agreement within ±5% for most of the values. For this thesis, density determinations were carried out on ten sediment cores. A few sediment characteristics are obtainable from the standard measurement results alone, without core rotation. In addition to differences and steps in the absolute density range, variations in the "frequency" of the density-depth structure can be detected thanks to the close spatial measurement interval and high resolution. Examples from measurements with small (9°) and large (90°) angle increments show that abrupt and smoothly transitional changes of sediment layers, as well as ice-rafted debris of various sizes, can be detected and distinguished clearly. After the presentation of the wet bulk density results, a comparison with data from other investigations was made. Measurements of the electrical resistivity correlated very well with the density data because both parameters are closely related to the porosity of the sediment. Additionally, results from measurements of the magnetic susceptibility and from ultrasonic wave velocity investigations were considered for an integrative interpretation. The correlation of both these parameters with the wet bulk density data is strongly dependent on the local (environmental) conditions. Finally, the densities were compared with recordings from sediment-echographic soundings and an X-ray computer tomography analysis. The individual results of all investigations were then combined into an accurate picture of the core.
Problems of ambiguity, which exist when just one parameter is determined alone, can be reduced according to the number of parameters and sedimentary characteristics measured. The important role of the density data among the other parameters of such an integrated interpretation is evident, given the high resolution of the measurement, its excellent accuracy, and its key position among the methods and parameters concerning marine sediments.
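The underlying measurement principle is Beer–Lambert attenuation; schematically (standard relation, with symbols defined here rather than taken from the thesis):

```latex
% Gamma-ray intensity after passing through a sediment core of
% diameter d, with mass attenuation coefficient \mu_m:
\[
  I = I_0 \, e^{-\mu_m \rho d}
  \quad\Longrightarrow\quad
  \rho = \frac{1}{\mu_m d}\,\ln\frac{I_0}{I},
\]
% which is why the core diameter must be measured at every point
% (here via the potentiometric system) to recover the wet bulk density \rho.
```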

Relevance:

30.00%

Publisher:

Abstract:

The optimum quality that can be asymptotically achieved in the estimation of a probability p using inverse binomial sampling is addressed. A general definition of quality is used in terms of the risk associated with a loss function that satisfies certain assumptions. It is shown that the limit superior of the risk for p asymptotically small has a minimum over all (possibly randomized) estimators. This minimum is achieved by certain non-randomized estimators. The model includes commonly used quality criteria as particular cases. Applications to the non-asymptotic regime are discussed considering specific loss functions, for which minimax estimators are derived.
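As a concrete illustration of inverse binomial sampling (stop after a fixed number of successes r), the sketch below compares the classical unbiased Haldane estimator p̂ = (r−1)/(N−1) with the naive r/N under a normalized quadratic loss for small p. The estimators and the loss are standard choices used here for illustration, not necessarily those derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def inverse_binomial_trials(p: float, r: int, size: int) -> np.ndarray:
    """Total number of Bernoulli(p) trials N needed to observe r successes.

    numpy's negative_binomial returns the number of *failures* before
    the r-th success, so N = failures + r.
    """
    return rng.negative_binomial(r, p, size=size) + r

p_true, r, reps = 0.01, 5, 200_000
N = inverse_binomial_trials(p_true, r, reps)

p_naive = r / N                  # biased upward for small p
p_haldane = (r - 1) / (N - 1)    # classical unbiased estimator

for name, est in [("naive r/N", p_naive), ("Haldane (r-1)/(N-1)", p_haldane)]:
    bias = est.mean() - p_true
    # normalized quadratic loss E[((p_hat - p)/p)^2], one common quality measure
    risk = np.mean(((est - p_true) / p_true) ** 2)
    print(f"{name:>20}: relative bias {bias / p_true:+.3f}, risk {risk:.3f}")
```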

Relevance:

30.00%

Publisher:

Abstract:

Natural regeneration is a key ecological process that makes plant persistence possible and, consequently, constitutes an essential element of sustainable forest management. Natural regeneration in even-aged stands of Pinus pinea L. in the Spanish Northern Plateau has not always been achieved successfully despite over a century of pine-nut-based management. As a result, natural regeneration has recently become a major concern for forest managers, at a time when investment in silviculture is being rationalized. The present dissertation aims to provide forest managers with answers on this topic through the development of an integral multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural regeneration-based silviculture can be derived under present and future climate scenarios. The model structure also makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels, one for each of the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transitional probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation can be interpreted as follows. The scheduling of the shelterwood system currently applied to low-density stands leads to dispersal limitation from the initial stages of the regeneration period onwards. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, making summer a favourable period for seed survival. Outside this time interval, predators were found to deplete seed crops almost totally. Given that P. pinea dissemination occurs in summer (i.e. the period safe from predation), the likelihood that a seed escapes destruction is conditional on germination occurring before predator activity intensifies. However, the optimal conditions for germination seldom occur, restricting emergence to a few days during the fall. Thus, the window for reaching the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, so only some individuals from large cohorts will be able to persist. These facts, together with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Against this background, current management (low final stand densities resulting from intense thinning, combined with strict felling schedules) limits the occurrence of enough favourable events to achieve natural regeneration within the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the species' intermediate shade tolerance. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration of the species will not be strongly hampered in the future.
This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
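The integral model's structure (chained stage-transition probabilities) can be sketched as follows. The submodel forms and coefficients here are purely hypothetical placeholders, since the dissertation's fitted equations are not reproduced in the abstract:

```python
import math

def logistic(x: float) -> float:
    """Standard logistic link, used here for every hypothetical submodel."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical stage submodels: each maps climate/stand covariates to a
# transition probability. All coefficients are invented for illustration.
def p_seed_production(rain_mm: float) -> float:
    return logistic(-4.0 + 0.004 * rain_mm)       # masting more likely in wet years

def p_dispersal(stems_per_ha: float) -> float:
    return logistic(-1.0 + 0.01 * stems_per_ha)   # dispersal limited in open stands

def p_escape_predation(p_germ: float) -> float:
    return 0.05 + 0.6 * p_germ                    # seeds survive if germination is early

def p_germination(fall_rain_mm: float) -> float:
    return logistic(-5.0 + 0.01 * fall_rain_mm)   # narrow germination window

def p_seedling_survival(drought_index: float) -> float:
    return logistic(1.0 - 3.0 * drought_index)    # high mortality under drought

def establishment_probability(rain, stems, fall_rain, drought):
    """Integral model: product of the five stage-transition probabilities."""
    g = p_germination(fall_rain)
    return (p_seed_production(rain) * p_dispersal(stems)
            * p_escape_predation(g) * g * p_seedling_survival(drought))

print(f"P(establishment) ~ {establishment_probability(450, 150, 300, 0.6):.2e}")
```

Chaining the stages this way makes the bottleneck analysis mentioned in the abstract straightforward: whichever factor yields the smallest probability dominates the product.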

Relevance:

30.00%

Publisher:

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integration of inputs from other neurons and determines which neurons receive its output. Different parts of the neurons can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees of pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixture-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines.
The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics. Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
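As a toy illustration of the directional extension, the sketch below implements a naive Bayes classifier with a single von Mises-distributed angular feature. The moment-based estimate of κ uses a common closed-form approximation, and everything here is a minimal sketch rather than the dissertation's actual algorithms:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function I0

def fit_von_mises(theta: np.ndarray):
    """Moment estimates (mu, kappa) for angles theta (radians)."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)                     # mean resultant length
    kappa = R * (2 - R**2) / (1 - R**2)    # common approximation for kappa
    return mu, kappa

def log_vm_pdf(theta, mu, kappa):
    """Log-density of the von Mises distribution."""
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * i0(kappa))

class VonMisesNB:
    """Naive Bayes with one directional predictor per class."""
    def fit(self, theta, y):
        self.classes_ = np.unique(y)
        self.params_ = {c: fit_von_mises(theta[y == c]) for c in self.classes_}
        self.log_prior_ = {c: np.log((y == c).mean()) for c in self.classes_}
        return self

    def predict(self, theta):
        scores = np.column_stack([
            self.log_prior_[c] + log_vm_pdf(theta, *self.params_[c])
            for c in self.classes_])
        return self.classes_[scores.argmax(axis=1)]

# Toy data: two classes of axon directions concentrated around 0 and pi/2.
rng = np.random.default_rng(1)
theta = np.concatenate([rng.vonmises(0.0, 8.0, 300),
                        rng.vonmises(np.pi / 2, 8.0, 300)])
y = np.repeat([0, 1], 300)
clf = VonMisesNB().fit(theta, y)
print("training accuracy:", (clf.predict(theta) == y).mean())
```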