980 results for GAUSSIAN-BASIS SET


Abstract:

We study pathwise invariances and degeneracies of random fields with motivating applications in Gaussian process modelling. The key idea is that a number of structural properties one may wish to impose a priori on functions boil down to degeneracy properties under well-chosen linear operators. We first show in a second-order set-up that almost sure degeneracy of random field paths under some class of linear operators defined in terms of signed measures can be controlled through the first two moments. A special focus is then put on the Gaussian case, where these results are revisited and extended to further linear operators thanks to state-of-the-art representations. Several degeneracy properties are tackled, including random fields with symmetric paths, centred paths, harmonic paths, or sparse paths. The proposed approach delivers a number of promising results and perspectives in Gaussian process modelling. In a first numerical experiment, it is shown that dedicated kernels can be used to infer an axis of symmetry. Our second numerical experiment deals with conditional simulations of a solution to the heat equation, and it is found that adapted kernels notably enable improved predictions of non-linear functionals of the field such as its maximum.
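
As an illustration of the kind of structural constraint discussed above, the following sketch (illustrative only, not the paper's code; the RBF base kernel, axis location c, and grid are assumed choices) builds a covariance by summing a base kernel over the reflection group {identity, s}, so that the sampled Gaussian process paths are symmetric about the axis x = c:

```python
# Sketch of a symmetry-invariant kernel: summing a base kernel over the
# reflection group {identity, s} makes GP sample paths satisfy f(x) = f(2c - x).
import numpy as np

def rbf(x, y, lengthscale=0.3):
    # Squared-exponential base kernel on 1-D inputs.
    return np.exp(-0.5 * (x[:, None] - y[None, :])**2 / lengthscale**2)

def symmetric_kernel(x, y, c=0.5, base=rbf):
    # Sum the base kernel over all pairs of group actions (id and reflection s).
    sx, sy = 2 * c - x, 2 * c - y
    return base(x, y) + base(x, sy) + base(sx, y) + base(sx, sy)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 201)
K = symmetric_kernel(grid, grid) + 1e-8 * np.eye(grid.size)  # jitter for sampling
paths = rng.multivariate_normal(np.zeros(grid.size), K, size=3)
# Each sampled path is symmetric about c = 0.5 up to the jitter / numerical error.
print(np.max(np.abs(paths - paths[:, ::-1])))
```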

Abstract:

Although the processes involved in rational patient targeting may be obvious for certain services, for others, both the appropriate sub-populations to receive services and the procedures to be used for their identification may be unclear. This project was designed to address several research questions which arise in the attempt to deliver appropriate services to specific populations. The related difficulties are particularly evident for those interventions about which findings regarding effectiveness are conflicting. When an intervention clearly is not beneficial (or is dangerous) to a large, diverse population, consensus regarding withholding the intervention from dissemination can easily be reached. When findings are ambiguous, however, conclusions may be impossible. When characteristics of patients likely to benefit from an intervention are not obvious, and when the intervention is not significantly invasive or dangerous, the strategy proposed herein may be used to identify specific characteristics of sub-populations which may benefit from the intervention. The identification of these populations may be used both in further informing decisions regarding distribution of the intervention and for purposes of planning implementation of the intervention by identifying specific target populations for service delivery. This project explores a method for identifying such sub-populations through the use of related datasets generated from clinical trials conducted to test the effectiveness of an intervention. The method is specified in detail and tested using the example intervention of case management for outpatient treatment of populations with chronic mental illness. These analyses were applied in order to identify any characteristics which distinguish specific sub-populations who are more likely to benefit from case management service, despite conflicting findings regarding its effectiveness for the aggregate population, as reported in the body of related research. However, in addition to a limited set of characteristics associated with benefit, the findings generated a larger set of characteristics of patients likely to experience greater improvement without the intervention.

Abstract:

This study presents a differentiated carbonate budget for marine surface sediments from the Mid-Atlantic Ridge of the South Atlantic, with results based on carbonate grain-size composition. Upon separation into sand, silt, and clay sub-fractions, the silt grain-size distribution was measured using a SediGraph 5100. We found regionally characteristic grain-size distributions with an overall minimum at 8 µm equivalent spherical diameter (ESD). SEM observations reveal that the coarse particles (>8 µm ESD) are attributed to planktic foraminifers and their fragments, and the fine particles (<8 µm ESD) to coccoliths. On the basis of this division, the regional variation of the contribution of foraminifers and coccoliths to the carbonate budget of the sediments is calculated. Foraminifer carbonate dominates the sediments in mesotrophic regions, whereas coccoliths contribute most of the carbonate in oligotrophic regions. The grain size of the coccolith share is constant over water depth, indicating a lower susceptibility to carbonate dissolution compared to foraminifers. Finally, the characteristic grain-size distribution in fine silt (<8 µm ESD) is set into context with the coccolith assemblage counted and biometrically measured using a SEM. The coccoliths present in the silt fraction are predominantly large species (length > 4 µm); smaller species (length < 4 µm) belong to the clay fraction (<2 µm ESD). The average length of the most frequent coccolith species is related, via a shape factor, to prominent peaks in the grain-size distributions (ESD). The area below Gaussian distributions fitted to these peaks is suggested as a way to estimate the carbonate contribution of single coccolith species more precisely than conventional volume estimates. The quantitative division of carbonate into the fraction produced by coccoliths and that secreted by foraminifers enables a more precise estimate of source/sink relations of consumed and released CO2 in the carbon cycle. The allocation of coccolith length to grain size (ESD) suggests size windows for the separation or accumulation of distinct coccolith species in investigations that depend on unmixed or only slightly mixed signals (e.g., isotopic studies).
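
As a hedged illustration of the peak-fitting step described above (synthetic grain-size data; the peak positions, widths, and fitting window are assumed, not the study's measurements), one can fit a Gaussian to a single peak of the fine-silt spectrum and take the area under the fit as that species' share of the carbonate:

```python
# Illustrative sketch: fit a Gaussian to one grain-size peak and use its area
# as an estimate of a single coccolith species' carbonate contribution.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(esd, amplitude, mean, sigma):
    return amplitude * np.exp(-0.5 * ((esd - mean) / sigma)**2)

# Synthetic grain-size spectrum: volume percent per ESD class (µm).
esd = np.linspace(1.0, 8.0, 141)
spectrum = gaussian(esd, 2.0, 3.1, 0.5) + gaussian(esd, 1.2, 5.4, 0.7) \
           + 0.05 * np.random.default_rng(1).normal(size=esd.size)

# Fit the peak near 3 µm ESD (hypothetically produced by one coccolith species).
window = (esd > 2.0) & (esd < 4.2)
(amplitude, mean, sigma), _ = curve_fit(gaussian, esd[window], spectrum[window],
                                        p0=[1.0, 3.0, 0.5])

# Area under the fitted Gaussian relative to the total spectrum area approximates
# the species' share of the fine-fraction carbonate volume.
peak_area = amplitude * sigma * np.sqrt(2.0 * np.pi)
total_area = np.sum(spectrum) * (esd[1] - esd[0])
print(f"estimated share of the peak near {mean:.1f} µm ESD: {peak_area / total_area:.2%}")
```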

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integration of inputs from other neurons and determines which neurons receive its output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge.

Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons.

The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc.

In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated.

Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixture-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline (B-spline) interpolation, where a density is approximated as a linear combination of basis splines.
The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers.

Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics. Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets.

We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology.

Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
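
A minimal sketch of the directional-data idea from this abstract (not the dissertation's implementation; the per-class von Mises fit via scipy and the toy angles are assumptions): a naive Bayes classifier whose class-conditional densities for an angular feature are univariate von Mises distributions.

```python
# Hedged sketch: naive Bayes with a von Mises class-conditional density for a
# single directional (angular) feature, fitted per class with scipy.
import numpy as np
from scipy.stats import vonmises

class VonMisesNaiveBayes:
    def fit(self, angles, labels):
        # angles: directions in radians; labels: class labels.
        self.classes_ = np.unique(labels)
        self.priors_, self.params_ = {}, {}
        for c in self.classes_:
            theta = angles[labels == c]
            kappa, mu, _ = vonmises.fit(theta, fscale=1)  # estimate concentration and mean direction
            self.priors_[c] = theta.size / angles.size
            self.params_[c] = (kappa, mu)
        return self

    def predict(self, angles):
        # Log-posterior (up to a constant) per class; pick the maximum.
        scores = np.column_stack([
            np.log(self.priors_[c]) + vonmises.logpdf(angles, self.params_[c][0], loc=self.params_[c][1])
            for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

# Toy usage: two classes of directions concentrated around 0 and pi/2.
rng = np.random.default_rng(0)
angles = np.concatenate([vonmises.rvs(4.0, loc=0.0, size=100, random_state=rng),
                         vonmises.rvs(4.0, loc=np.pi / 2, size=100, random_state=rng)])
labels = np.repeat([0, 1], 100)
print(VonMisesNaiveBayes().fit(angles, labels).predict(np.array([0.1, 1.5])))
```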

Abstract:

Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work has focused on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function in correspondence with a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account the noncollinearity, positron range, and crystal penetration effects; these produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: This new technique provides superior image quality in terms of signal-to-noise ratio as compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing for direct control of the trade-off between speed and quality during the reconstruction.
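
The region-of-response search can be pictured with a small sketch (illustrative geometry and kernel widths only, not the paper's Monte Carlo-derived model): voxels near a line of response are retained only where an elliptical Gaussian probability model exceeds a cut-off threshold.

```python
# Sketch of a region-of-response (ROR) search: keep voxels whose probability
# under an elliptical Gaussian kernel around the LOR exceeds a threshold.
import numpy as np

def region_of_response(voxel_centers, lor_point, lor_dir,
                       sigma_u=1.0, sigma_v=2.0, threshold=1e-3):
    # Reference frame aligned with the ideal LOR: axial direction d plus two
    # transverse axes u, v with different Gaussian widths (elliptical section).
    d = lor_dir / np.linalg.norm(lor_dir)
    u = np.cross(d, [0.0, 0.0, 1.0])          # assumes the LOR is not along z
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    rel = voxel_centers - lor_point
    pu, pv = rel @ u, rel @ v                 # transverse coordinates
    prob = np.exp(-0.5 * ((pu / sigma_u)**2 + (pv / sigma_v)**2))
    mask = prob > threshold                   # dynamic contour condition
    return mask, prob

# Toy 3-D grid of voxel centers and an oblique LOR through the origin.
grid = np.stack(np.meshgrid(*[np.arange(-10, 11)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3).astype(float)
mask, prob = region_of_response(grid, lor_point=np.zeros(3),
                                lor_dir=np.array([1.0, 0.2, 0.0]))
print(mask.sum(), "of", mask.size, "voxels fall inside the region of response")
```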

Abstract:

The perceived speed of motion in one part of the visual field is influenced by the speed of motion in its surrounding fields. Little is known about the cellular mechanisms causing this phenomenon. Recordings from mammalian visual cortex revealed that the speed preference of cortical cells could be changed by presenting a contrasting speed in the field surrounding the cell's classical receptive field. The neuron's selectivity shifted to prefer faster speeds if the contextual surround motion was set at a relatively lower speed, and vice versa. These specific center–surround interactions may underlie the perceptual enhancement of speed contrast between adjacent fields.

Abstract:

The spatial data set delineates areas with similar environmental properties regarding soil, terrain morphology, climate and affiliation to the same administrative unit (NUTS3 or comparable units in size), at a minimum pixel size of 1 km2. The purpose of developing this data set is to provide a link between spatial environmental information (e.g. soil properties) and statistical data (e.g. crop distribution) available at the administrative level. Impact assessment of agricultural management on emissions of pollutants or radiatively active gases, or analysis of the influence of agricultural management on the supply of ecosystem services, requires the proper spatial coincidence of the driving factors. The HSU data set provides, e.g., the link between the agro-economic model CAPRI and biophysical assessment of environmental impacts (updating the previously defined spatial units, Leip et al. 2008) for the analysis of policy scenarios. Recently, a statistical model to disaggregate crop information available from regional statistics to the HSU has been developed (Lamboni et al. 2016). The HSU data set consists of the spatial layers, provided in vector and raster format, as well as attribute tables with information on the properties of the HSU. All input data for the delineation of the HSU are publicly available. For some parameters the attribute tables provide the link between the HSU data set and, e.g., the soil map(s) rather than the data itself. The HSU data set is closely linked to the USCIE data set.

Abstract:

Physiognomic traits of plant leaves such as size, shape or margin are decisively affected by the prevailing environmental conditions of the plant habitat. Conversely, if a relationship between environment and leaf physiognomy can be shown to exist, vegetation represents a proxy for environmental conditions. This study investigates the relationship between physiognomic traits of leaves from European hardwood vegetation and environmental parameters in order to create a calibration dataset based on high-resolution grid-cell data. The leaf data are obtained from synthetic chorologic floras; the environmental data comprise climatic and ecologic data. The high resolution of the data allows for a detailed analysis of the spatial dependencies between the investigated parameters. The comparison of environmental parameters and leaf physiognomic characters reveals a clear correlation between temperature-related parameters (e.g. mean annual temperature or ground frost frequency) and the expression of leaf characters (e.g. the type of leaf margin or the base of the lamina). Precipitation-related parameters (e.g. mean annual precipitation), however, show no correlation with the leaf physiognomic composition of the vegetation. On the basis of these results, transfer functions for several environmental parameters are calculated from the leaf physiognomic composition of the extant vegetation. In a next step, a cluster analysis is applied to the dataset in order to identify "leaf physiognomic communities". Several of these are distinguished, characterised and subsequently used for vegetation classification. Concerning leaf physiognomic diversity, there are clear differences between these "leaf physiognomic classes", and diversity increases with increasing variability of the environmental parameters: northern vegetation types are characterised by a more or less homogeneous leaf physiognomic composition, whereas southern vegetation types, such as the Mediterranean vegetation, show a considerably higher leaf physiognomic diversity. Finally, the transfer functions are used to estimate palaeo-environmental parameters of three fossil European leaf assemblages from the Late Oligocene and Middle Miocene. The results are compared with those obtained from other palaeo-environmental reconstruction methods. The estimates based on a direct linear ordination appear to be the most realistic, as they are highly consistent with the Coexistence Approach.
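
A single transfer function of the kind described above can be sketched as a simple linear calibration (synthetic calibration data and a hypothetical fossil value; not the study's grid-cell data or fitted coefficients):

```python
# Illustrative sketch: calibrate a linear transfer function between a leaf
# physiognomic character and mean annual temperature, then apply it to a
# hypothetical fossil assemblage.
import numpy as np

rng = np.random.default_rng(42)
entire_margin_share = rng.uniform(0.1, 0.9, size=200)              # per grid cell
mat = 2.0 + 25.0 * entire_margin_share + rng.normal(0, 1.5, 200)   # synthetic MAT (deg C)

slope, intercept = np.polyfit(entire_margin_share, mat, deg=1)     # transfer function

fossil_share = 0.62   # hypothetical share of entire-margined taxa in a fossil flora
print(f"estimated palaeo-MAT: {intercept + slope * fossil_share:.1f} deg C")
```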

Abstract:

We introduce a Gaussian quantum operator representation, using the most general possible multimode Gaussian operator basis. The representation unifies and substantially extends existing phase-space representations of density matrices for Bose systems and also includes generalized squeezed-state and thermal bases. It enables first-principles dynamical or equilibrium calculations in quantum many-body systems, with quantum uncertainties appearing as dynamical objects. Any quadratic Liouville equation for the density operator results in a purely deterministic time evolution. Any cubic or quartic master equation can be treated using stochastic methods.

Abstract:

The large number of protein kinases makes it impractical to determine their specificities and substrates experimentally. Using the available crystal structures, molecular modeling, and sequence analyses of kinases and substrates, we developed a set of rules governing the binding of a heptapeptide substrate motif (surrounding the phosphorylation site) to the kinase and implemented these rules in a web-interfaced program for automated prediction of optimal substrate peptides, taking only the amino acid sequence of a protein kinase as input. We show the utility of the method by analyzing yeast cell cycle control and DNA damage checkpoint pathways. Our method is the only available predictive method generally applicable for identifying possible substrate proteins for protein serine/threonine kinases and helps in silico construction of signaling pathways. The accuracy of prediction is comparable to the accuracy of data from systematic large-scale experimental approaches.

Abstract:

Background: The OARSI Standing Committee for Clinical Trials Response Criteria Initiative had developed two sets of responder criteria to present the results of changes after treatment in three symptomatic domains (pain, function, and patient's global assessment) as a single variable for clinical trials (1). For each domain, a response was defined by both a relative and an absolute change, with different cut-offs with regard to the drug, the route of administration and the OA localization. Objective: To propose a simplified set of responder criteria with a similar cut-off, whatever the drug, the route or the OA localization. Methods: Data-driven approach: (1) two databases were considered: the 'elaboration' database, with which the formal OARSI sets of responder criteria were elaborated, and the 'revisit' database; (2) six different scenarios were evaluated: the two formal OARSI sets of criteria and four proposed simplified sets of criteria. Data from randomized, blinded, placebo-controlled clinical trials were used to evaluate the performances of the two formal scenarios on the two databases ('elaboration' versus 'revisit') and those of the four proposed simplified scenarios on the 'revisit' database. The placebo effect, the active treatment effect, the treatment effect, and the sample arm size required to obtain the observed placebo and active treatment effects were the performances evaluated for each of the six scenarios. Experts' opinion approach: Results were discussed among the participants of the OMERACT VI meeting, who voted to select the definitive OMERACT-OARSI set of criteria (one of the six evaluated scenarios). Results: Data-driven approach: Fourteen trials totaling 1886 OA patients and fifteen studies involving 8164 OA patients were evaluated in the 'elaboration' and the 'revisit' databases, respectively. The variability of the performances observed in the 'revisit' database when using the different simplified scenarios was similar to that observed between the two databases ('elaboration' versus 'revisit') when using the formal scenarios. The treatment effect and the required sample arm size were similar for each set of criteria. Experts' opinion approach: According to the experts, these two performances were the most important for an optimal set of responder criteria. They chose the set of criteria considering both pain and function as evaluation domains and requiring both an absolute change and a relative change from baseline to define a response, with similar cut-offs whatever the drug, the route of administration or the OA localization. Conclusion: This data-driven and experts' opinion approach is the basis for proposing an optimal simplified set of responder criteria for OA clinical trials. Other studies, using other sets of OA patients, are required in order to further validate this proposed OMERACT-OARSI set of criteria. (C) 2004 OsteoArthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
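
The structure of a domain response, requiring both a relative and an absolute improvement, can be sketched as follows (the 50% and 20-point cut-offs and the 0-100 lower-is-better scale are placeholders for illustration, not the cut-offs adopted by OMERACT-OARSI):

```python
# Hypothetical illustration of a "response in one domain" decision rule:
# improvement must meet both a relative and an absolute cut-off.
def domain_response(baseline, follow_up, rel_cutoff=0.50, abs_cutoff=20.0):
    """Return True if improvement meets both cut-offs (illustrative values;
    scores assumed on a 0-100 scale where lower is better, e.g. pain or function)."""
    improvement = baseline - follow_up
    return improvement >= abs_cutoff and improvement >= rel_cutoff * baseline

print(domain_response(baseline=70, follow_up=30))   # True: 40-point, 57% improvement
print(domain_response(baseline=40, follow_up=30))   # False: only 10 points, 25%
```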

Abstract:

The random switching of measurement bases is commonly assumed to be a necessary step of quantum key distribution protocols. In this paper we present a no-switching protocol and show that switching is not required for coherent-state continuous-variable quantum key distribution. Further, this protocol achieves higher information rates and a simpler experimental setup compared to previous protocols that rely on switching. We propose an optimal eavesdropping attack against this protocol, assuming individual Gaussian attacks. Finally, we investigate and compare the no-switching protocol applied to the original Bennett-Brassard 1984 scheme.

Abstract:

We introduce a positive phase-space representation for fermions, using the most general possible multimode Gaussian operator basis. The representation generalizes previous bosonic quantum phase-space methods to Fermi systems. We derive equivalences between quantum and stochastic moments, as well as operator correspondences that map quantum operator evolution onto stochastic processes in phase space. The representation thus enables first-principles quantum dynamical or equilibrium calculations in many-body Fermi systems. Potential applications are to strongly interacting and correlated Fermi gases, including coherent behavior in open systems and nanostructures described by master equations. Examples of an ideal gas and the Hubbard model are given, as well as a generic open system, in order to illustrate these ideas.

Abstract:

We explore the dependence of performance measures, such as the generalization error and generalization consistency, on the structure and the parameterization of the prior on 'rules', instanced here by the noisy linear perceptron. Using a statistical mechanics framework, we show how one may assign values to the parameters of a model for a 'rule' on the basis of data instancing the rule. Information about the data, such as the input distribution, noise distribution and other 'rule' characteristics, may be embedded in the form of general Gaussian priors for improving net performance. We examine explicitly two types of general Gaussian priors which are useful in some simple cases. We calculate the optimal values for the parameters of these priors and show their effect in modifying the most probable (MAP) values for the rules.
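
A simple special case illustrates how a Gaussian prior on the weights modifies the MAP rule (this is the standard ridge-regression form, not the paper's statistical-mechanics calculation; the noise and prior variances below are assumed values):

```python
# For a noisy linear perceptron with Gaussian output noise (variance sigma2) and
# an isotropic Gaussian prior on the weights (variance sigma_w2), the MAP weights
# are the ridge-regularized least-squares solution; the prior parameters set the
# amount of shrinkage applied to the most probable weights.
import numpy as np

def map_weights(X, y, sigma2=0.25, sigma_w2=1.0):
    lam = sigma2 / sigma_w2                      # effective regularization strength
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(3)
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(50, 3))
y = X @ true_w + rng.normal(scale=0.5, size=50)  # noisy teacher ("rule") outputs
print(map_weights(X, y))                         # close to true_w; shrinks toward 0 as sigma_w2 -> 0
```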

Abstract:

Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled, and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that approximates the true variance well.
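
One conditioning step of this heteroscedastic model can be sketched as follows (the input-dependent noise variances are taken as given here, standing in for a single draw of the second Gaussian process, rather than being sampled by MCMC as in the paper; the RBF kernel and toy data are assumed choices):

```python
# Minimal sketch: GP predictive mean with input-dependent noise, i.e. the usual
# sigma^2 * I term replaced by diag(r(x)) for a per-point noise variance r(x).
import numpy as np

def rbf(a, b, lengthscale=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)

def heteroscedastic_gp_mean(x_train, y_train, x_test, noise_var):
    # noise_var: per-training-point noise variances, e.g. one draw of exp(g(x))
    # where g is the second ("noise") Gaussian process.
    K = rbf(x_train, x_train) + np.diag(noise_var)
    K_star = rbf(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 40)
noise_var = 0.01 + 0.2 * x**2                        # noise grows with the input
y = np.sin(2 * np.pi * x) + rng.normal(scale=np.sqrt(noise_var))
print(heteroscedastic_gp_mean(x, y, np.array([0.1, 0.9]), noise_var))
```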