979 results for CONSENSUS MODEL
Abstract:
Background: Selecting the highest-quality 3D model of a protein structure from a number of alternatives remains an important challenge in structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed that adopt various strategies to tackle this problem, ranging from so-called "true" MQAPs capable of producing a single energy score from a single model, to methods that rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest-accuracy models from the lowest. In this paper, a number of the top-performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods that make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection when applied as a post-filter to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top-performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available.
Conversely, the true MQAP methods tested can often be used as effective post-filters for re-ranking the few models from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
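The combination idea behind ModFOLD can be sketched in miniature: several per-model "true" MQAP scores are fed to a small neural network trained against a known quality measure, and models are ranked by the network's output. Everything below is synthetic and hand-rolled for illustration; it is not the published ModFOLD architecture or training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic benchmark: 200 candidate models, 3 MQAP scores each; the "true"
# quality is a noisy nonlinear function of the scores.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.2 * X[:, 2] + rng.normal(0, 0.02, 200)

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(0, 0.5, size=(3, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8,))
b2 = 0.0
lr = 0.05

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # consensus quality score
    err = pred - y
    # Backpropagate the squared-error gradient.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
consensus = h @ W2 + b2
# Rank candidate models by the consensus score; the top-ranked one is selected.
best = int(np.argmax(consensus))
print("selected model index:", best)
```

In practice the network would be trained on models with known accuracy (e.g. against the native structure) and applied to rank unseen models.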
Abstract:
This in-depth study of the decision-making processes of the early 2000s shows that the Swiss consensus democracy has changed considerably. Power relations have transformed, conflict has increased, coalitions have become more unstable and outputs less predictable. Yet these challenges to consensus politics provide opportunities for innovation.
Abstract:
P2Y(1) is an ADP-activated G protein-coupled receptor (GPCR). Its antagonists impede platelet aggregation in vivo and are potential antithrombotic agents. Combining ligand- and structure-based modeling, we generated a consensus model (LIST-CM) correlating antagonist structures with their potencies. We docked 45 antagonists into our rhodopsin-based human P2Y(1) homology model and calculated docking scores and free binding energies with the Linear Interaction Energy (LIE) method in continuum solvent. The resulting alignment was also used to build QSAR models based on CoMFA, CoMSIA, and molecular descriptors. To benefit from the strengths of each technique and compensate for their limitations, we generated our LIST-CM with a PLS regression based on the predictions of each methodology. A test set featuring untested substituents was synthesized and assayed for inhibition of 2-MeSADP-stimulated PLC activity and in radioligand binding. LIST-CM outperformed the internal and external predictivity of any individual model, accurately predicting the potency of 75% of the test set.
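The consensus step can be illustrated in miniature: potency predictions from the individual techniques (docking score, LIE binding energy, QSAR) become the inputs of a regression whose output is the consensus prediction. Ordinary least squares stands in here for the PLS regression used in the paper, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pIC50 = rng.uniform(5.0, 9.0, size=45)          # 45 antagonists

# Each individual model sees the truth through its own bias and noise.
docking = 0.8 * true_pIC50 + 1.0 + rng.normal(0, 0.5, 45)
lie     = 1.1 * true_pIC50 - 0.7 + rng.normal(0, 0.4, 45)
qsar    = 0.9 * true_pIC50 + 0.3 + rng.normal(0, 0.3, 45)

# Least-squares combination of the three predictions plus an intercept.
X = np.column_stack([docking, lie, qsar, np.ones(45)])
coef, *_ = np.linalg.lstsq(X, true_pIC50, rcond=None)
consensus = X @ coef

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("consensus RMSE:", rmse(consensus, true_pIC50))
print("best single-model RMSE:",
      min(rmse(m, true_pIC50) for m in (docking, lie, qsar)))
```

Because the regression both recalibrates each model's bias and averages out independent noise, the consensus error is lower than that of any single input model, which is the effect the abstract reports for LIST-CM.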
Abstract:
This paper studies the theoretical and empirical implications of monetary policy making by committee under three different voting protocols. The protocols are a consensus model, where a super-majority is required for a policy change; an agenda-setting model, where the chairman controls the agenda; and a simple majority model, where policy is determined by the median member. These protocols give preeminence to different aspects of the actual decision-making process and capture the observed heterogeneity in formal procedures across central banks. The models are estimated by Maximum Likelihood using interest rate decisions by the committees of five central banks, namely the Bank of Canada, the Bank of England, the European Central Bank, the Swedish Riksbank, and the U.S. Federal Reserve. For all central banks, results indicate that the consensus model is statistically superior to the alternative models. This suggests that despite institutional differences, committees share unwritten rules and informal procedures that deliver observationally equivalent policy decisions.
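Two of the three protocols can be sketched as simple decision rules over members' preferred interest rates. The threshold and tie-breaking choices below are illustrative only, not the estimated models from the paper.

```python
def consensus_rule(prefs, status_quo, supermajority=2 / 3):
    """Change policy only if a super-majority prefers the same direction;
    otherwise the status quo persists."""
    up = sum(p > status_quo for p in prefs)
    down = sum(p < status_quo for p in prefs)
    n = len(prefs)
    if up / n >= supermajority:
        return min(p for p in prefs if p > status_quo)   # smallest agreeable hike
    if down / n >= supermajority:
        return max(p for p in prefs if p < status_quo)   # smallest agreeable cut
    return status_quo                                    # no super-majority: no change

def median_rule(prefs, status_quo):
    """Simple majority: the median member's preference prevails."""
    s = sorted(prefs)
    return s[len(s) // 2]

prefs = [2.00, 2.25, 2.25, 2.50, 2.50, 2.75, 3.00]
print(consensus_rule(prefs, status_quo=2.50))  # no super-majority, rate unchanged
print(median_rule(prefs, status_quo=2.50))
```

The status-quo bias of the consensus rule is what makes the protocols observationally distinguishable: the same preference profile can yield a rate change under the median rule but none under the super-majority rule.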
Abstract:
Neuronal morphology is hugely variable across brain regions and species, and strategies for classifying neurons are a matter of intense debate in neuroscience. GABAergic cortical interneurons have been a particular challenge because it is difficult to find a set of morphological properties that clearly defines neuronal types. A group of 48 neuroscience experts around the world was asked to classify a set of 320 cortical GABAergic interneurons according to the main features of their three-dimensional morphological reconstructions. A methodology for building a model which captures the opinions of all the experts was proposed. First, one Bayesian network was learned for each expert, and we proposed an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts was induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts was built. A thorough analysis of the consensus model identified different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types was defined by performing inference in the Bayesian multinet. These findings were used to validate the model and to gain some insights into neuron morphology.
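The first step of the methodology, grouping experts with similar behaviors, can be sketched with plain label agreement standing in for the paper's comparison of per-expert Bayesian networks, and a greedy threshold pass standing in for its clustering algorithm. The data, thresholds, and expert "schools" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_experts, n_neurons, n_types = 6, 40, 4

# Two latent "schools of thought": experts 0-2 follow one labeling, 3-5 another,
# each with 10% idiosyncratic answers.
school_a = rng.integers(0, n_types, n_neurons)
school_b = rng.integers(0, n_types, n_neurons)
labels = np.empty((n_experts, n_neurons), dtype=int)
for e in range(n_experts):
    base = school_a if e < 3 else school_b
    noise = rng.random(n_neurons) < 0.1
    labels[e] = np.where(noise, rng.integers(0, n_types, n_neurons), base)

def agreement(a, b):
    """Fraction of neurons given the same label by two experts."""
    return float(np.mean(a == b))

# Greedy clustering: join an expert to the first cluster whose members all
# agree with it on at least 60% of the neurons.
clusters = []
for e in range(n_experts):
    for c in clusters:
        if all(agreement(labels[e], labels[m]) >= 0.6 for m in c):
            c.append(e)
            break
    else:
        clusters.append([e])

print("expert clusters:", clusters)
```

In the full methodology, each recovered group would then get its own Bayesian network, and the per-group networks would be assembled into the consensus multinet.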
Abstract:
The previous chapter uncovered important differences between decision-making structures across the 11 processes investigated by this study. As we have noted, both historically and in much contemporary literature, the Swiss political system has been described as highly consensual. And yet, when we focus on differences between decision-making structures across different policy domains, important elements appear that point toward a more conflictual style of decision-making. Both when there is a power balance between coalitions and in the presence of a dominant coalition, coalition interactions are conflictual in the majority of cases. Based on the descriptive account of these differences in Chapter 4, the present chapter studies the conditions under which given decision-making structures emerge. Under which circumstances are actors able to form a dominant coalition, and which conditions lead to a situation where power is more evenly balanced between coalitions? Which conditions lead actors to develop a conflictual rather than a consensual type of interaction? Answering these questions can give us some indication of the factors responsible for different types of decision-making structures.
Abstract:
Purpose – The purpose of this paper is to explore the similarities and differences of legal responses to older adults who may be at risk of harm or abuse in the UK, Ireland, Australia and the USA.
Design/methodology/approach – The authors draw upon a review of elder abuse and adult protection undertaken on behalf of the commissioner for older people in Northern Ireland. This paper focusses on the desktop mapping of the different legal approaches and draws upon wider literature to frame the discussion of the relative strengths and weaknesses of the different legal responses.
Findings – Arguments exist both for and against each legal approach. Differences in defining the scope and powers of adult protection legislation in the UK and internationally are highlighted.
Research limitations/implications – This review was undertaken in late 2013; while the authors have updated the mapping to take account of subsequent changes, some statutory guidance is not yet available. While the expertise of a group of experienced professionals in the field of adult safeguarding was utilized, it was not feasible to employ a formal survey or consensus model.
Practical implications – Some countries have already introduced adult protection legislation (APL) and others are considering doing so. The potential advantages and challenges of introducing APL are highlighted.
Social implications – The introduction of legislation may give professionals increased powers to prevent and reduce abuse of adults, but this would also change the dynamic of relationships within families and between families and professionals.
Originality/value – This paper provides an accessible discussion of APL across the UK and internationally, which to date has been lacking from the literature.
Abstract:
The β2 adrenergic receptor (β2AR) regulates smooth muscle relaxation in the vasculature and airways. Long- and short-acting β-agonists (LABAs/SABAs) are widely used in the treatment of chronic obstructive pulmonary disease (COPD) and asthma. Despite their widespread clinical use, we do not understand well the dominant β2AR regulatory pathways that are stimulated during therapy and bring about tachyphylaxis, the loss of drug effect. An understanding of how the β2AR responds to various β-agonists is therefore crucial to their rational use. Towards that end we have developed deterministic models that explore the mechanism of drug-induced β2AR regulation. These mathematical models fall into three classes: (i) six quantitative models of SABA-induced G protein coupled receptor kinase (GRK)-mediated β2AR regulation; (ii) three phenomenological models of salmeterol (a LABA)-induced GRK-mediated β2AR regulation; and (iii) one semi-quantitative, unified model of SABA-induced GRK-, protein kinase A (PKA)-, and phosphodiesterase (PDE)-mediated regulation of β2AR signalling. The various models were constrained with all or some of the following experimental data: (i) GRK-mediated β2AR phosphorylation in response to various LABAs/SABAs; (ii) dephosphorylation of the GRK site on the β2AR; (iii) β2AR internalisation; (iv) β2AR recycling; (v) β2AR desensitisation; (vi) β2AR resensitisation; (vii) PKA-mediated β2AR phosphorylation in response to a SABA; and (viii) LABA/SABA-induced cAMP profiles ± PDE inhibitors. The models of GRK-mediated β2AR regulation show that plasma membrane dephosphorylation and recycling of the phosphorylated β2AR are required to reconcile with the measured dephosphorylation kinetics.
We further used a consensus model to predict the consequences of rapid pulsatile agonist stimulation and found that, although resensitisation was rapid, the β2AR system retained the memory of prior stimuli and desensitised much more rapidly and strongly in response to subsequent stimuli. This could explain the tachyphylaxis of SABAs over repeated use in rescue therapy of asthma patients. The LABA models show that the long action of salmeterol can be explained by the decreased stability of the arrestin/β2AR/salmeterol complex. This could explain the long action of β-agonists used in maintenance therapy of asthma patients. Our consensus model of PKA/PDE/GRK-mediated β2AR regulation is being used to identify the dominant β2AR desensitisation pathways under different therapeutic regimens in human airway cells. In summary, our models represent a significant advance towards understanding agonist-specific β2AR regulation that will aid in a more rational use of β2AR agonists in the treatment of asthma.
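The "memory of prior stimuli" effect can be reproduced qualitatively with a minimal, hypothetical three-pool receptor model: a responsive pool R desensitises during an agonist pulse into a fast pool D (rapidly resensitised) and a slow pool I (e.g. internalised receptor that recycles slowly). The structure and rate constants below are illustrative, not the published consensus model.

```python
import numpy as np

def simulate(pulses, dt=0.01, t_end=60.0, kd=1.0, kf=0.3, kr=0.8, ks=0.02):
    """Euler integration of R (responsive), D (fast-recovering desensitised)
    and I (slowly recycling) pools; total receptor is conserved."""
    R, D, I = 1.0, 0.0, 0.0
    t = 0.0
    trace = []
    while t < t_end:
        trace.append((t, R))
        u = 1.0 if any(a <= t < b for a, b in pulses) else 0.0
        dR = -kd * u * R + kr * D + ks * I
        dD = kd * u * R - kr * D - kf * u * D
        dI = kf * u * D - ks * I
        R += dt * dR; D += dt * dD; I += dt * dI
        t += dt
    return trace

# Two identical 5-time-unit agonist pulses separated by a rest period.
trace = simulate(pulses=[(5.0, 10.0), (30.0, 35.0)])
R_at = {round(t, 2): r for t, r in trace}
print("R just before pulse 1:", R_at[4.99])
print("R just before pulse 2:", R_at[29.99])  # lower: the system remembers pulse 1
```

Because D empties back into R quickly while I drains slowly, the response recovers fast after the first pulse (rapid resensitisation), yet the slow pool leaves the second pulse acting on a depleted R, i.e. stronger apparent desensitisation on repeat stimulation.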
Abstract:
Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the integration of inputs from other neurons and determines which neurons receive a neuron's output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nerve cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon).
These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model’s ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixtures of polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics.
Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts’ classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. 
A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
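As one concrete piece of the dissertation's directional-data work, a naive Bayes classifier with a von Mises class-conditional for a single angular predictor might be sketched as follows. The mean direction is fitted by maximum likelihood and the concentration κ by a common closed-form approximation; the data are synthetic, and the dissertation treats the general hybrid (discrete/Gaussian/directional) case.

```python
import numpy as np

def fit_von_mises(theta):
    """ML mean direction plus an approximate concentration estimate."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)                       # mean resultant length
    kappa = R * (2 - R ** 2) / (1 - R ** 2)  # approximate inverse of A(kappa)
    return mu, kappa

def log_vm(theta, mu, kappa):
    """Log-density of the von Mises distribution (np.i0 is Bessel I0)."""
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * np.i0(kappa))

rng = np.random.default_rng(4)
# Two classes of directions, concentrated around 0 and pi/2.
a = rng.vonmises(0.0, 8.0, 300)
b = rng.vonmises(np.pi / 2, 8.0, 300)

params = {0: fit_von_mises(a), 1: fit_von_mises(b)}
log_prior = {0: np.log(0.5), 1: np.log(0.5)}

def classify(theta):
    scores = {c: log_prior[c] + log_vm(theta, *params[c]) for c in params}
    return max(scores, key=scores.get)

print(classify(0.1), classify(1.4))
```

The decision function is linear in (cos θ, sin θ), so the resulting decision boundary on the circle consists of at most two angles, which is the kind of property the dissertation studies analytically.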
Abstract:
Acknowledgment This research is supported by an award made by the RCUK Digital Economy program to the University of Aberdeen’s dot.rural Digital Economy Hub (ref. EP/G066051/1).
Abstract:
In modern democratic systems, usually no single collective actor is able to decisively influence political decision-making. Instead, actors with similar preferences form coalitions in order to gain more influence in the policy process. In the Swiss political system in particular, institutional veto points and the consensual culture of policy-making provide strong incentives for actors to form large coalitions. Coalitions are thus especially important in political decision-making in Switzerland, and are accordingly a central focus of this book. One of our core claims is that, to understand the actual functioning of Swiss consensus democracy, one needs to extend the analysis beyond formal institutions to also include informal procedures and practices. Coalitions of actors play a crucial role in this respect. They are a cornerstone of decision-making structures, and they inform us about patterns of conflict, collaboration and power among actors. Looking at coalitions is all the more interesting in the Swiss political system, since the coalition structure is supposed to vary across policy processes. Given the absence of a fixed government coalition, actors need to form new coalitions in each policy process.
Abstract:
The issue of European integration is of utmost importance for contemporary Swiss politics, as underscored by the presence of three decision-making processes relating to bilateral agreements with the EU, and two additional processes with a strong European dimension (the telecommunication act and the immigration law), among the 11 most important processes of the early 2000s. Previous chapters have highlighted substantial differences between domestic and Europeanized decision-making processes in terms of institutional design and decision-making structures. Chapters 2 and 3 suggest that the peculiarities of the three decision-making processes relating to bilateral agreements go along with specific power configurations among political actors. Chapter 5 draws our attention to the impact of Europeanization on the specific decision-making structure at work in a given policy process.
Abstract:
In most Western countries, the media are said to exert an increasing influence on the political game. This development, which has been variously described as a shift towards an 'audience democracy' (Manin 1995) or the 'mediatization of politics' (Mazzoleni and Schulz 1999), reflects the increasing importance of the media for political actors and political decision-making. In such a context, political actors need to communicate with both the media and the public in order to gain support for their policy plans and to influence decision-making. The media were noticeably absent from Kriesi's (1980) in-depth analysis of political decision-making in Switzerland. This suggests that in the early 1970s, the media did not matter or mattered far less than they do today.
Abstract:
The previous chapter presented the overall decision-making structure in Swiss politics at the beginning of the 21st century. This provides us with a general picture and allows for a comparison over time with the decision-making structure in the 1970s. However, the analysis of the overall decision-making structure potentially neglects important differences between policy domains (Atkinson and Coleman 1989; Knoke et al. 1996; Kriesi et al. 2006a; Sabatier 1987). Policy issues vary across policy domains, as do the political actors involved. In addition, actors may hold different policy preferences from one policy domain to the next, and they may also collaborate with other partners depending on the policy domain at stake. Examining differences between policy domains is particularly appropriate in Switzerland. Because no fixed coalitions of government and opposition exist, actors create different coalitions in each policy domain (Linder and Schwarz 2008). Whereas important parts of the institutional setting are similar across policy domains, decision-making structures might still vary. As was the case with the cross-time analysis conducted in the two previous chapters, a stability of 'rules-in-form' might hide important variations in 'rules-in-use' also across different policy domains.
Abstract:
Consensus democracies like Switzerland are generally known to have a low innovation capacity (Lijphart 1999). This is due to the high number of veto points, such as perfect bicameralism or the popular referendum. These institutions provide actors opposing a policy with several opportunities to block potential policy change (Immergut 1990; Tsebelis 2002). In order to avoid the failure of a process because opposing actors activate veto points, decision-making processes in Switzerland tend to integrate a large number of actors with different - and often diverging - preferences (Kriesi and Trechsel 2008). Including a variety of actors in a decision-making process and taking into account their preferences implies important trade-offs. Integrating a large number of actors and accommodating their preferences takes time and carries the risk of resulting in lowest-common-denominator solutions. As a result, major innovative reforms usually fail or come only as a result of strong external pressures from either the international environment, economic turmoil or the public (Kriesi 1980: 635f.; Kriesi and Trechsel 2008; Sciarini 1994). Standard decision-making processes are therefore characterized as reactive, slow and capable of only marginal adjustments (Kriesi 1980; Kriesi and Trechsel 2008; Linder 2009; Sciarini 2006). This, in turn, may be at odds with the rapid developments of international politics, the flexibility of the private sector, or the speed of technological development.