40 results for Framework Model
Abstract:
Functional neuroimaging has undergone spectacular developments in recent years. Paradoxically, its neurobiological bases have remained elusive, resulting in an intense debate around the cellular mechanisms taking place upon activation that could contribute to the measured signals. Taking advantage of a modeling approach, we propose here a coherent neurobiological framework that not only explains several in vitro and in vivo observations but also provides a physiological basis for interpreting imaging signals. First, based on a model of compartmentalized energy metabolism, we show that the complex kinetics of NADH changes observed in vitro can be accounted for by distinct metabolic responses in two cell populations reminiscent of neurons and astrocytes. Second, extended application of the model to an in vivo situation allowed us to reproduce the evolution of intraparenchymal oxygen levels upon activation, as measured experimentally, without substantially altering the initial parameter values. Finally, applying the same model to functional neuroimaging in humans, we were able to determine that the early negative component of the blood oxygenation level-dependent response recorded with functional MRI, known as the initial dip, critically depends on the oxidative response of neurons, whereas the late aspects of the signal correspond to a combination of responses from cell types with two distinct metabolic profiles that could be neurons and astrocytes. In summary, our results support the concept that both neuronal and glial metabolic responses form essential components of neuroimaging signals.
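The biphasic NADH kinetics attributed above to two cell populations can be illustrated with a deliberately minimal sketch: two linear compartments, one consuming NADH quickly during activation ("neuronal", oxidative) and one producing it slowly ("glial", glycolytic). All rate constants, the linear kinetics and the equal volume weighting below are arbitrary assumptions for illustration; this is not the authors' detailed metabolic model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, stim):
    """Toy two-compartment NADH deviations from baseline."""
    n, a = y
    s = stim(t)
    dn = -1.5 * s - 0.8 * n   # fast oxidative consumption, relaxation
    da = +0.6 * s - 0.2 * a   # slower glycolytic production
    return [dn, da]

stim = lambda t: 1.0 if 5.0 <= t <= 25.0 else 0.0  # activation window (s)
sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], args=(stim,),
                max_step=0.05, dense_output=True)

t = np.linspace(0.0, 60.0, 600)
n, a = sol.sol(t)
tissue_nadh = 0.5 * n + 0.5 * a  # arbitrary equal volume weighting
# tissue_nadh first dips, then overshoots: the dip-then-overshoot shape
# that the abstract explains by two distinct cellular metabolic responses.
```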
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. Mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the model best aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is not realistic in any of the real data sets considered: using simple models that hold these assumptions can be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
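The Kronecker-product construction at the heart of the generalized model can be sketched as follows: given a 4x4 HKY rate matrix per codon position, a Kronecker sum yields a 64x64 codon generator in which exactly one nucleotide changes per event, i.e. assumption (a) holds; relaxing (a) amounts to adding mixed terms such as kron(A, kron(A, I)). The parameter values below are placeholders, and the selection terms and stop-codon handling of the actual KCM framework are omitted.

```python
import numpy as np

def hky_rate_matrix(kappa, pi):
    """4x4 HKY generator (nucleotide order A, C, G, T); transitions
    A<->G and C<->T are scaled by kappa."""
    purines = {0, 2}
    Q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                transition = (i in purines) == (j in purines)
                Q[i, j] = (kappa if transition else 1.0) * pi[j]
        Q[i, i] = -Q[i].sum()
    return Q

pi = np.full(4, 0.25)         # placeholder equilibrium frequencies
A = hky_rate_matrix(2.0, pi)  # placeholder kappa
I = np.eye(4)

# Kronecker sum: a 64x64 codon generator allowing only single-nucleotide
# substitutions per event (assumption (a) of the simplest model).
Q_codon = (np.kron(np.kron(A, I), I)
           + np.kron(np.kron(I, A), I)
           + np.kron(np.kron(I, I), A))

# Position-specific matrices in place of the shared A relax assumption (b);
# replacing HKY with a general nucleotide model relaxes assumption (c).
```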
Abstract:
Forensic intelligence is a distinct dimension of forensic science. Forensic intelligence processes have mostly been developed to address either a specific type of trace or a specific problem. Even though these empirical developments have led to successes, they are trace-specific in nature and contribute to the creation of silos that hamper the establishment of a more general and transversal model. Forensic intelligence has opened up important perspectives, but more general developments are required to address persistent challenges; this will ensure the progress of the discipline as well as its widespread implementation in the future. This paper demonstrates that the description of forensic intelligence processes, their architectures, and the methods for building them can, at a certain level, be abstracted from the type of traces considered. A comparative analysis is made between two forensic intelligence approaches developed independently in Australia and in Europe for monitoring apparently very different kinds of problems: illicit drugs and false identity documents. An inductive effort is pursued to identify similarities and to outline a general model. Besides breaking barriers between apparently separate fields of study in forensic science and intelligence, this transversal model would assist in defining forensic intelligence, its role and place in policing, and in identifying its contributions and limitations. The model will facilitate the paradigm shift from the current case-by-case reactive attitude towards a proactive approach by serving as a guideline for the use of forensic case data in an intelligence-led perspective. A follow-up article (part II) will specifically address issues related to comparison processes, decision points and organisational aspects of forensic intelligence.
Abstract:
This article presents a brief systemic intervention method (IBS) consisting of six sessions, developed in an outpatient service for couples and families, together with two research projects carried out in collaboration with the Institute for Psychotherapy of the University of Lausanne. The first project is quantitative and aims at evaluating the effectiveness of IBS. One of its main features is that outcomes are assessed at different levels of individual and family functioning: 1) symptoms and individual functioning; 2) quality of the marital relationship; 3) parental and co-parental relationships; 4) familial relationships. The second project is a qualitative case study of a marital therapy which identifies and analyses significant moments of the therapeutic process from the patients' perspective. The methodology was largely inspired by Daniel Stern's work on "moments of meeting" in psychotherapy. Results show that patients' theories about relationships and change are important elements that deepen our understanding of the change process in couple and family therapy. The value of associating clinicians and researchers in the development and validation of a new clinical model is discussed.
Abstract:
Cell elongation during seedling development is antagonistically regulated by light and gibberellins (GAs). Light induces photomorphogenesis, leading to inhibition of hypocotyl growth, whereas GAs promote etiolated growth, characterized by increased hypocotyl elongation. The mechanism underlying this antagonistic interaction remains unclear. Here we report on the central role of the Arabidopsis thaliana nuclear transcription factor PIF4 (encoded by PHYTOCHROME INTERACTING FACTOR 4) in the positive control of genes mediating cell elongation and show that this factor is negatively regulated by the light photoreceptor phyB (ref. 4) and by DELLA proteins that have a key repressor function in GA signalling. Our results demonstrate that PIF4 is destabilized by phyB in the light and that DELLAs block PIF4 transcriptional activity by binding the DNA-recognition domain of this factor. We show that GAs abrogate such repression by promoting DELLA destabilization, and therefore cause a concomitant accumulation of free PIF4 in the nucleus. Consistent with this model, intermediate hypocotyl lengths were observed in transgenic plants over-accumulating both DELLAs and PIF4. Destabilization of this factor by phyB, together with its inactivation by DELLAs, constitutes a protein interaction framework that explains how plants integrate both light and GA signals to optimize growth and development in response to changing environments.
Abstract:
Digitalization empowers the Internet by allowing several virtual representations of reality, including that of identity. We leave an increasingly digital footprint in cyberspace, and this situation puts our identity at high risk. Privacy is a right and a fundamental social value that could play a key role as a medium for securing digital identities. Identity functionality is increasingly delivered as sets of services, rather than as monolithic applications. An identity layer in which identity and privacy management services are loosely coupled, publicly hosted and available for on-demand calls is therefore a more realistic and acceptable situation. Identity and privacy should be interoperable and distributed through the adoption of service-orientation and implementation based on open standards (technical interoperability). The objective of this project is to provide a way to implement interoperable, user-centric digital identity-related privacy that responds to the distributed nature of federated identity systems. It is recognized that technical initiatives, emerging standards and protocols are not enough to resolve the concerns surrounding the multi-faceted and complex issue of identity and privacy. For this reason, they should be apprehended within a global perspective, through an integrated and multidisciplinary approach. This approach dictates that privacy laws, policies, regulations and technologies be crafted together from the start, rather than attached to digital identity after the fact. Thus, we draw Digital Identity-Related Privacy (DigIdeRP) requirements from global, domestic and business-specific privacy policies. The requirements take the shape of business interoperability. We suggest a layered implementation framework (the DigIdeRP framework), in accordance with the model-driven architecture (MDA) approach, that would help an organization's security team to turn business interoperability into technical interoperability in the form of a set of services that accommodate a Service-Oriented Architecture (SOA): a Privacy-as-a-set-of-services (PaaSS) system. The DigIdeRP framework will serve as a basis for mutual understanding between business management and technical managers on digital identity-related privacy initiatives. The layered DigIdeRP framework presents five practical layers as an ordered sequence forming the basis of a DigIdeRP project roadmap; in practice, however, an iterative process assures that each layer effectively supports and enforces the requirements of the adjacent ones. Each layer is composed of a set of blocks, which determine a roadmap that a security team can follow to implement PaaSS successfully. Several block descriptions are based on the OMG SoaML modeling language and BPMN process descriptions. We identified, designed and implemented seven services that form the PaaSS and described their consumption. The PaaSS Java (JEE) project, WSDL, and XSD code are given and explained.
Abstract:
Species distribution model (SDM) studies suggest that, without control measures, the distribution of many alien invasive plant species (AIS) will increase under climate and land-use changes. Due to limited resources and the large areas colonised by invaders, management and monitoring efforts must be prioritised. Choices depend on the conservation value of the invaded areas and can be guided by SDM predictions. Here, we use a hierarchical SDM framework, complemented by a connectivity analysis of AIS distributions, to evaluate current and future conflicts between AIS and areas of high conservation value. We illustrate the framework with three Australian wattle (Acacia) species and patterns of conservation value in Northern Portugal. Results show that protected areas will likely suffer higher pressure from all three Acacia species under future climatic conditions. Due to this higher predicted conflict in protected areas, management might be prioritised for Acacia dealbata and Acacia melanoxylon. Connectivity of suitable AIS areas inside protected areas is currently lower than across the full study area, but this would change under future environmental conditions. Coupled SDM and connectivity analyses can support resource prioritisation for the anticipation and monitoring of AIS impacts. However, further tests of this framework over a wide range of regions and organisms are still required before wide application.
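Connectivity of suitable areas can be quantified in several ways; one simple form, sketched below under the assumption of a binary suitability raster and a protected-area mask (both arrays here are synthetic stand-ins), is to label contiguous suitable patches and compare patch sizes inside protected areas against the study area as a whole.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
suitability = rng.random((200, 200))       # stand-in for an SDM prediction
protected = np.zeros((200, 200), dtype=bool)
protected[50:120, 60:140] = True           # stand-in protected-area mask

suitable = suitability > 0.7               # binarised suitable habitat
labels, n_patches = ndimage.label(suitable)  # 4-connected patches
sizes = ndimage.sum(suitable, labels, index=np.arange(1, n_patches + 1))

# Mean size of suitable patches overlapping protected areas vs. overall:
# a crude proxy for how connected the invader's habitat is in each zone.
inside_ids = np.unique(labels[protected])
inside_ids = inside_ids[inside_ids > 0]
print(sizes[inside_ids - 1].mean(), sizes.mean())
```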
Abstract:
The empirical literature on the efficiency of measures for reducing persistent government deficits has mainly focused on explaining the deficit directly. By contrast, this paper models government revenue and expenditure within a simultaneous framework and derives the fiscal balance (surplus or deficit) equation as the difference between the two variables. This setting enables one not only to judge how relevant the explanatory variables are in explaining the fiscal balance, but also to understand their impact on revenue and/or expenditure. Our empirical results, obtained using a panel data set on Swiss cantons for the period 1980-2002, confirm the relevance of this approach by providing unambiguous evidence of a simultaneous relationship between revenue and expenditure. They also reveal strong dynamic components in revenue, expenditure, and fiscal balance. Among the significant determinants of the public fiscal balance we find not only the usual business-cycle elements, but also, and more importantly, institutional factors such as the number of administrative units and the ease with which people can resort to direct-democracy instruments such as popular initiatives and referendums.
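In a stripped-down, illustrative notation (ours, not necessarily the paper's exact specification), the simultaneous system and the derived balance equation can be written as:

```latex
\begin{aligned}
R_{it} &= \alpha_1 E_{it} + \beta_1' x_{it} + \lambda_1 R_{i,t-1} + u_{it} \\
E_{it} &= \alpha_2 R_{it} + \beta_2' z_{it} + \lambda_2 E_{i,t-1} + v_{it} \\
B_{it} &= R_{it} - E_{it}
\end{aligned}
```

Here R, E and B are the revenue, expenditure and fiscal balance of canton i in year t; x and z collect business-cycle and institutional covariates; the lagged terms capture the strong dynamic components reported, and joint estimation of the two equations accounts for their simultaneity.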
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, to modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data, which have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. As a data-driven algorithm, semi-supervised SVR is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. The uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
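The history-matching step can be sketched with a generic random-walk Metropolis sampler over the three geological parameters named above (correlation size, continuity strength, smoothness). Everything below, including the toy simulate_production stand-in for the SVR-geomodel-driven flow simulator, is a hypothetical illustration rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_production(theta):
    """Toy stand-in for the reservoir simulator: a decline curve whose
    shape depends on (correlation_size, continuity, smoothness)."""
    corr, cont, smooth = theta
    t = np.linspace(0.0, 1.0, 20)
    return cont * np.exp(-t / corr) + smooth * t

observed = simulate_production((0.5, 1.0, 0.2)) + 0.01 * rng.standard_normal(20)
bounds = [(0.05, 2.0), (0.1, 3.0), (0.0, 1.0)]

def log_posterior(theta):
    """Flat prior inside bounds plus Gaussian misfit to production history."""
    if any(not lo <= v <= hi for v, (lo, hi) in zip(theta, bounds)):
        return -np.inf
    resid = simulate_production(theta) - observed
    return -0.5 * np.sum((resid / 0.01) ** 2)

def metropolis(theta0, steps=5000, step_size=0.05):
    """Random-walk Metropolis; returns the chain of sampled parameters."""
    chain = [np.asarray(theta0, float)]
    lp = log_posterior(chain[-1])
    for _ in range(steps):
        prop = chain[-1] + step_size * rng.standard_normal(3)
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            lp = lp_prop
            chain.append(prop)
        else:
            chain.append(chain[-1])
    return np.array(chain)

posterior = metropolis([0.3, 0.8, 0.3])
# Running simulate_production over posterior samples yields the probabilistic
# production forecasts summarised by uncertainty envelopes in the abstract.
```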
Abstract:
This paper aims to provide empirical support for the use of the principal-agent framework in the analysis of the public sector and public policies. After reviewing the conditions to be met for a relevant analysis of the relationship between population and government using principal-agent theory, the paper focuses on the assumption of conflicting goals between the principal and the agent. A principal-agent analysis in effect assumes that inefficiencies may arise because the principal and the agent pursue different goals. Using data collected during an amalgamation project involving two Swiss municipalities, we show the existence of a gap between the goals of the population and those of the government. Consequently, inefficiencies as predicted by the principal-agent model may arise during the implementation of a public policy such as an amalgamation project. In a context of direct democracy, where policies are regularly subjected to referendum, the conflict of objectives may even lead to a total failure of the policy at the polls.
Abstract:
Rare species have restricted geographic ranges, habitat specialization, and/or small population sizes. Datasets on rare species distributions usually have few observations, limited spatial accuracy and no valid absences; at the same time, they provide comprehensive views of species distributions, allowing most of the realized environmental niche to be captured realistically. Rare species are the most in need of predictive distribution modelling, but also the most difficult to model. We refer to this contrast as the "rare species modelling paradox" and propose as a solution modelling approaches that handle a sufficiently large set of predictors while ensuring that the statistical models are not overfitted. Our novel approach fulfils this condition by fitting a large number of bivariate models and averaging them with a weighted ensemble approach. We further propose that this ensemble forecasting be conducted within a hierarchical multi-scale framework. We present two ensemble models for a test species, one at regional and one at local scale, each based on the combination of 630 models. In both cases, we obtained excellent spatial projections, which is unusual when modelling rare species. The model results highlight, within a statistically sound approach, the effects of multiple drivers in the same modelling framework and at two distinct scales. From this added information, regional models can support accurate forecasts of range dynamics under climate change scenarios, whereas local models allow the assessment of isolated or synergistic impacts of changes in multiple predictors. This novel framework provides a baseline for adaptive conservation, management and monitoring of rare species at distinct spatial and temporal scales.
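A stripped-down version of the bivariate-ensemble idea is sketched below: fit one small model per predictor pair and average the predictions, weighting each pair by its skill above chance. Presence/absence data, logistic models and AUC-based weights are assumptions of this sketch; the paper's exact model family, weighting and cross-validation scheme are not reproduced.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def bivariate_ensemble(X, y):
    """Fit one logistic model per predictor pair; weight each by its skill
    above chance so uninformative pairs contribute little."""
    models = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        m = LogisticRegression().fit(X[:, [i, j]], y)
        auc = roc_auc_score(y, m.predict_proba(X[:, [i, j]])[:, 1])
        models.append(((i, j), m, max(auc - 0.5, 0.0)))
    return models

def ensemble_predict(models, X_new):
    """Weighted average of the per-pair suitability predictions."""
    w = np.array([wt for _, _, wt in models])
    p = np.array([m.predict_proba(X_new[:, list(ij)])[:, 1]
                  for ij, m, _ in models])
    return (w[:, None] * p).sum(axis=0) / w.sum()

# With 36 environmental predictors, combinations(36, 2) gives the 630
# bivariate models per scale mentioned in the abstract.
rng = np.random.default_rng(1)
X = rng.random((300, 36))                       # synthetic predictors
y = (X[:, 0] + 0.8 * X[:, 1]
     + 0.3 * rng.standard_normal(300) > 1.0).astype(int)  # synthetic presences
suitability = ensemble_predict(bivariate_ensemble(X, y), X)
```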