Abstract:
The aim of this thesis was to investigate the respective contributions of prior information and sensorimotor constraints to action understanding, and to estimate their consequences for the evolution of human social learning. Although a large body of literature is dedicated to the study of action understanding and its role in social learning, these issues are still largely debated. Here, I critically describe two main perspectives. The first perspective interprets faithful social learning as an outcome of a fine-grained representation of others’ actions and intentions that requires sophisticated socio-cognitive skills. In contrast, the second perspective highlights the role of simpler decision heuristics, the recruitment of which is determined by individual and ecological constraints. The present thesis aims to show, through four experimental works, that these two contributions are not mutually exclusive. A first study investigates the role of the inferior frontal cortex (IFC), the anterior intraparietal area (AIP) and the primary somatosensory cortex (S1) in the recognition of other people’s actions, using a transcranial magnetic stimulation adaptation (TMSA) paradigm. The second work studies whether, and how, higher-order and lower-order prior information (acquired from the probabilistic sampling of past events vs. derived from an estimation of the biomechanical constraints of observed actions) interact during the prediction of other people’s intentions. Using a single-pulse TMS procedure, the third study investigates whether the interaction between these two classes of priors modulates motor system activity. The fourth study tests the extent to which behavioral and ecological constraints influence the emergence of faithful social learning strategies at the population level. The collected data help elucidate how higher-order and lower-order prior expectations interact during action prediction, and clarify the neural mechanisms underlying this interaction. Finally, these works open promising perspectives for a better understanding of social learning, with possible extensions to animal models.
Abstract:
Holding the major share of stellar mass in galaxies, and being old and passively evolving, early-type galaxies (ETGs) are the primary probes for investigating various galaxy evolution scenarios, as well as useful means of providing insights into cosmological parameters. In this thesis work I focused specifically on ETGs and on their capability to constrain galaxy formation and evolution; in particular, the principal aims were to derive some of the ETG evolutionary parameters, such as age, metallicity and star formation history (SFH), and to study their age-redshift and mass-age relations. In order to infer galaxy physical parameters, I used the public code STARLIGHT: this program provides a best fit to the observed spectrum from a combination of many theoretical models defined in user-made libraries. As a first step, the method was tested on simulated spectra: the comparison between the output and input light-weighted ages shows good agreement starting from SNRs of ∼10, with a bias of ∼2.2% and a dispersion of ∼3%. Furthermore, metallicities and SFHs are also well reproduced. In the second part of the thesis I performed an analysis on real data, starting from Sloan Digital Sky Survey (SDSS) spectra. I found that galaxies get older with cosmic time and with increasing mass (at fixed redshift bin); absolute light-weighted ages, instead, turn out to be independent of the fitting parameters and of the synthetic models used. Metallicities are very similar to each other and clearly consistent with those derived from the Lick indices. The predicted SFH indicates the presence of a double burst of star formation. Velocity dispersions and extinctions are also well constrained, following the expected behaviours. As a further step, I also fitted single SDSS spectra (with SNR ∼ 20) to verify that stacked spectra gave the same results without introducing any bias: this is an important check if one wants to apply the method at higher z, where stacked spectra are necessary to increase the SNR. Our upcoming aim is to apply this approach also to galaxy spectra obtained from higher-redshift surveys, such as BOSS (z ∼ 0.5), zCOSMOS (z ∼ 1), K20 (z ∼ 1), GMASS (z ∼ 1.5) and, eventually, Euclid (z ∼ 2). Indeed, I am currently carrying out a preliminary study to establish the applicability of the method to lower-resolution, as well as higher-redshift (z ∼ 2), spectra, such as the Euclid ones.
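For readers unfamiliar with full-spectrum fitting, the following is a minimal sketch of the idea behind codes such as STARLIGHT, not the STARLIGHT implementation itself: the observed spectrum is approximated as a non-negative combination of template model spectra, and the light-weighted age follows from the fitted weights. The templates, ages and noise level below are hypothetical.

```python
# Hedged sketch: full-spectrum fitting as a non-negative least-squares problem.
# This illustrates the general idea only; it is NOT the STARLIGHT code.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical library: 3 simple stellar population (SSP) templates on a
# common wavelength grid, with assumed ages in Gyr.
wavelengths = np.linspace(3800.0, 7000.0, 500)
ssp_ages_gyr = np.array([0.5, 3.0, 10.0])
templates = np.vstack([
    np.exp(-0.5 * ((wavelengths - 4000.0 - 300.0 * k) / 800.0) ** 2)
    for k in range(len(ssp_ages_gyr))
]).T  # shape: (n_wavelengths, n_templates)

# Mock "observed" spectrum: a known mix of templates plus noise.
true_weights = np.array([0.2, 0.3, 0.5])
observed = templates @ true_weights + rng.normal(0.0, 0.01, wavelengths.size)

# Fit: non-negative weights minimizing the residual between model and data.
weights, _ = nnls(templates, observed)

# Light-weighted age: average of SSP ages weighted by their light contribution.
light_weighted_age = np.sum(weights * ssp_ages_gyr) / np.sum(weights)
print(f"fitted weights: {weights}, light-weighted age ~ {light_weighted_age:.2f} Gyr")
```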
Abstract:
Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Because such models are only useful if they generalize, (Q)SAR model validation is essential to ensure model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving the use of such models in real-world scenarios as alternative testing methods. However, at the same time, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, help the chemist to better understand patterns and regularities and to relate the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small-molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
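The two validation schemes compared in the abstract can be sketched generically as follows. This is a hedged illustration on synthetic data with a stand-in classifier, not the thesis workflow or its datasets.

```python
# Hedged sketch of k-fold cross-validation vs. external test set validation,
# the two schemes compared in the text. Generic illustration on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Hypothetical stand-in for a chemical dataset: rows = compounds,
# columns = descriptors, y = activity class.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

model = RandomForestClassifier(random_state=0)

# Scheme 1: k-fold cross-validation (every compound is tested exactly once).
cv_scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Scheme 2: external test set validation (a single held-out split).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model.fit(X_train, y_train)
print(f"external test set accuracy: {model.score(X_test, y_test):.3f}")
```

Note how cross-validation averages over several splits, which is consistent with the reported reduction in the variance of the results relative to a single external split.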
Abstract:
Important insights into the molecular mechanisms of T cell extravasation across the blood-brain barrier (BBB) have already been obtained using immortalized mouse brain endothelioma cell lines (bEnd). However, compared with bEnd, primary brain endothelial cells have been shown to establish better barrier characteristics, including complex tight junctions and low permeability. In this study, we asked whether bEnd5 cells and primary mouse brain microvascular endothelial cells (pMBMECs) were equally suited as in vitro models with which to study the cellular and molecular mechanisms of T cell extravasation across the BBB. We found that both in vitro BBB models equally supported T cell adhesion under static and physiologic flow conditions, as well as T cell crawling on the endothelial surface against the direction of flow. In contrast, the distances of T cell crawling on pMBMECs were strikingly longer than on bEnd5, whereas diapedesis of T cells across pMBMECs was dramatically reduced compared with bEnd5. Thus, both in vitro BBB models are suited to studying T cell adhesion. However, because pMBMECs better reflect endothelial BBB specialization in vivo, we propose that more reliable information about the cellular and molecular mechanisms of T cell diapedesis across the BBB can be obtained using pMBMECs.
Abstract:
This paper summarises the discussions that took place at the Workshop on Methodology in Erosion Research, held in Zürich in 2010, and aims, where possible, to offer guidance for the development and application of both in vitro and in situ models for erosion research. The prospects for clinical trials are also discussed. All models in erosion research require a number of choices regarding experimental conditions, study design and measurement techniques, and these general aspects are discussed first. Among in vitro models, simple (single- or multiple-exposure) models can be used for screening products regarding their erosive potential, while more elaborate pH-cycling models can be used to simulate erosion in vivo. However, in vitro models provide limited information on intra-oral erosion. In situ models allow the effect of an erosive challenge to be evaluated under intra-oral conditions and are currently the method of choice for short-term testing of low-erosive products or preventive therapeutic products. In the future, clinical trials will allow longer-term testing. Possible methodologies for such trials are discussed.
Abstract:
Among the cestodes, Echinococcus granulosus, Echinococcus multilocularis and Taenia solium represent the most dangerous parasites. Their larval stages cause the diseases cystic echinococcosis (CE), alveolar echinococcosis (AE) and cysticercosis, respectively, which pose considerable medical and veterinary health concerns and have a profound economic impact. Diseases caused by other cestodes, such as species of the genera Mesocestoides and Hymenolepis, are relatively rare in humans. In this review, we focus on E. granulosus and E. multilocularis metacestode laboratory models and review the use of these models in the search for novel drugs that could be employed in the chemotherapeutic treatment of echinococcosis. Clearly, improved therapeutic drugs are needed for the treatment of AE and CE, and this can only be achieved through the development of medium-to-high-throughput screening approaches. The most recent achievements in the in vitro culture and genetic manipulation of E. multilocularis cells and metacestodes, and the accessibility of the E. multilocularis genome and EST sequence information, have rendered the E. multilocularis model uniquely suited for studies on drug efficacy and drug target identification. This could lead to the development of novel compounds for use in chemotherapy against echinococcosis, possibly against diseases caused by other cestodes, and potentially also trematodes.
Abstract:
Outcome-dependent, two-phase sampling designs can dramatically reduce the costs of observational studies by judicious selection of the most informative subjects for purposes of detailed covariate measurement. Here we derive asymptotic information bounds and the forms of the efficient score and influence functions for the semiparametric regression models studied by Lawless, Kalbfleisch, and Wild (1999) under two-phase sampling designs. We show that the maximum likelihood estimators for both the parametric and nonparametric parts of the model are asymptotically normal and efficient. The efficient influence function for the parametric part agrees with the more general information bound calculations of Robins, Hsieh, and Newey (1995). By verifying the conditions of Murphy and van der Vaart (2000) for a least favorable parametric submodel, we provide asymptotic justification for statistical inference based on the profile likelihood.
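For readers unfamiliar with the setup, here is a minimal sketch of a two-phase sampling likelihood in generic notation (mine, not the paper's): phase one records the outcome $Y_i$ for all $N$ subjects, and phase two measures the expensive covariate $X_i$ only for a subset selected on the basis of $Y_i$, indicated by $R_i = 1$. With $p_\theta(y \mid x)$ the parametric regression model and $F$ the unspecified covariate distribution, an ignorable selection mechanism gives the semiparametric likelihood

\[
L(\theta, F) = \prod_{i : R_i = 1} p_\theta(Y_i \mid X_i)\, dF(X_i) \;\times\; \prod_{i : R_i = 0} \int p_\theta(Y_i \mid x)\, dF(x),
\]

and inference for $\theta$ proceeds via the profile likelihood $\mathrm{pl}(\theta) = \sup_F L(\theta, F)$, the object to which the Murphy and van der Vaart (2000) conditions apply.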
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of the parameter standard errors. We extend this approach to data where the number of observations correlated with a given observation grows with the sample size, and we show that the parameter estimates are consistent and asymptotically normal, with a slower convergence rate than for independent data, and that an information sandwich variance estimator is consistent. We present the two problems that motivated this work: the modeling of patterns of HIV genetic variation and the behavior of clustered-data estimators when clusters are large.
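In generic notation (not necessarily the paper's), the information sandwich estimator referred to here has the standard form: with estimating equations $\sum_i U_i(\hat\beta) = 0$, where $U_i$ is the score contribution summed over the observations correlated with unit $i$,

\[
\widehat{\operatorname{Var}}(\hat\beta)
= A(\hat\beta)^{-1}\, B(\hat\beta)\, A(\hat\beta)^{-1},
\qquad
A(\beta) = -\sum_i \frac{\partial U_i(\beta)}{\partial \beta^{\top}},
\qquad
B(\beta) = \sum_i U_i(\beta)\, U_i(\beta)^{\top}.
\]

The outer factors come from the model fitted as if the data were independent; the middle factor is what makes the estimator robust to the correlation structure.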
Abstract:
Jewell and Kalbfleisch (1992) consider the use of marker processes for applications related to estimation of the survival distribution of time to failure. Marker processes were assumed to be stochastic processes that, at a given point in time, provide information about the current hazard and, consequently, about the remaining time to failure. Particular attention was paid to calculations based on a simple additive model for the relationship between the hazard function at time t and the history of the marker process up until time t. Specific applications to the analysis of AIDS data included the use of markers as surrogate responses for the onset of AIDS with censored data, and as predictors of the time elapsed since infection in prevalent individuals. Here we review recent work on the use of marker data to tackle these kinds of problems with AIDS data. The Poisson marker process with an additive model, introduced in Jewell and Kalbfleisch (1992), may be a useful "test" example for the comparison of various procedures.
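A minimal sketch of the kind of additive model meant here, in my own notation rather than that of Jewell and Kalbfleisch (1992): with $Y(t)$ the marker process and $\mathcal{H}_t$ its history up to time $t$, the simplest additive specification takes the conditional hazard of failure to be

\[
\lambda\bigl(t \mid \mathcal{H}_t\bigr) = a + b\, Y(t), \qquad a, b \ge 0,
\]

so that, for example, a Poisson marker process whose jumps each add a constant increment to the hazard yields tractable expressions for the residual time-to-failure distribution.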
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband’s disease status, as well as setting the prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to estimates from previous analyses of twin data.
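For concreteness, the classic ACE liability-threshold model underlying this approach can be written as follows (standard form, my notation): each individual has an unobserved liability

\[
L = A + C + E, \qquad
\operatorname{Var}(L) = \sigma_A^2 + \sigma_C^2 + \sigma_E^2 = 1,
\]

with $A$, $C$ and $E$ independent mean-zero normal components, and the binary outcome is $D = \mathbf{1}\{L > \tau\}$ for a threshold $\tau$ fixed by the disease prevalence (possibly covariate-specific, as in the text); the heritability estimated from the fit is $h^2 = \sigma_A^2$.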
Abstract:
In the simultaneous estimation of a large number of related quantities, multilevel models provide a formal mechanism for efficiently making use of the ensemble of information when deriving individual estimates. In this article we investigate the ability of the likelihood to identify the relationship between signal and noise in multilevel linear mixed models; specifically, we consider its ability to diagnose conjugacy or independence between the signals and the noises. Our work was motivated by the analysis of data from high-throughput experiments in genomics. The proposed model leads to a more flexible family of models. However, we further demonstrate that adequately capitalizing on the benefits of a well-fitting, fully specified likelihood in terms of gene ranking is difficult.
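One common formalization of the signal/noise distinction in this setting (my notation; the article's exact specification may differ): for unit $i$ with replicate $j$,

\[
y_{ij} = \theta_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma_i^2),
\]

where the signals $\theta_i$ and the noise variances $\sigma_i^2$ are themselves drawn from population distributions. Conjugacy corresponds to linking the two, e.g. $\theta_i \mid \sigma_i^2 \sim N(\mu, c\,\sigma_i^2)$, while independence takes $\theta_i \sim N(\mu, \tau^2)$ independently of $\sigma_i^2$; the question is whether the likelihood can distinguish these two specifications from the data.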
Abstract:
Phase space (PS) data for 6 and 15 MV photon beams, produced with the Monte Carlo code GEANT, were used to define several simple photon beam models. To create the PS data, the energy of the starting electrons hitting the target was tuned until the calculated depth dose data matched measurements. The modeling process used the full PS information within the geometrical boundaries of the beam, including all scattered radiation of the accelerator head; scattered radiation outside the boundaries was neglected. Photons and electrons were assumed to be radiated from point sources. Four different models were investigated, which involved different ways of determining the energies and locations of beam particles in the output plane. Depth dose curves, profiles, and relative output factors were calculated with these models for six field sizes from 5×5 to 40×40 cm² and compared to measurements. Model 1 uses a photon energy spectrum independent of location in the PS plane and a constant photon fluence in this plane. Model 2 takes into account the spatial particle fluence distribution in the PS plane. A constant fluence is used again in model 3, but the photon energy spectrum depends upon the off-axis position. Model 4, finally, uses both the spatial particle fluence distribution and off-axis-dependent photon energy spectra in the PS plane. Depth dose curves and profiles for field sizes up to 10×10 cm² were not model sensitive. Good agreement between measured and calculated depth dose curves and profiles for all field sizes was reached with model 4, whereas increasing deviations were found with increasing field size for models 1-3. Large deviations resulted for the profiles of models 2 and 3, because these models overestimate or underestimate the energy fluence at large off-axis distances. Relative output factors consistent with measurements resulted only for model 4.
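Summarizing the four models described above along their two design choices (taken directly from the description; no new assumptions):

Model 1: constant photon fluence in the PS plane; energy spectrum independent of location.
Model 2: spatial particle fluence distribution from the PS data; location-independent energy spectrum.
Model 3: constant photon fluence; energy spectrum dependent on off-axis position.
Model 4: spatial particle fluence distribution and off-axis-dependent energy spectra.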
Abstract:
Human experimental pain models require standardized stimulation and quantitative assessment of the evoked responses. This approach can be applied to healthy volunteers and pain patients before and after pharmacological interventions. Standardized stimuli of different modalities (i.e., mechanical, chemical, thermal or electrical) can be applied to the skin, muscles and viscera for a differentiated and comprehensive assessment of various pain pathways and mechanisms. Using a multi-modal, multi-tissue approach, new and existing analgesic drugs can be profiled by their modulation of specific biomarkers. It has been shown that biomarkers, for example those related to the central integration of repetitive nociceptive stimuli, can predict the efficacy of a given drug in neuropathic pain conditions. Human experimental pain models can bridge animal and clinical pain research, serving as translational research that provides new possibilities for designing successful clinical trials. Proof-of-concept studies provide cheap, fast and reliable information on dose-efficacy relationships and on how pain sensed in the skin, muscles and viscera is inhibited.
Abstract:
Invasive exotic plants have altered natural ecosystems across much of North America. In the Midwest, the presence of invasive plants is increasing rapidly, causing changes in ecosystem patterns and processes. Early detection has become a key component in invasive plant management and in the detection of ecosystem change. Risk assessment through predictive modeling has been a useful resource for monitoring and assisting with treatment decisions for invasive plants. Predictive models were developed to assist with early detection of ten target invasive plants in the Great Lakes Network of the National Park Service and for garlic mustard throughout the Upper Peninsula of Michigan. These multi-criteria risk models utilize geographic information system (GIS) data to predict the areas at highest risk for three phases of invasion: introduction, establishment, and spread. An accuracy assessment of the models for the ten target plants in the Great Lakes Network showed an average overall accuracy of 86.3%. The model developed for garlic mustard in the Upper Peninsula resulted in an accuracy of 99.0%. Used as one of many resources, the risk maps created from the model outputs will assist with the detection of ecosystem change, the monitoring of plant invasions, and the management of invasive plants through prioritized control efforts.
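The multi-criteria modeling described above is, in essence, a weighted overlay of GIS layers. The sketch below illustrates that general technique; the layers, weights and classification thresholds are hypothetical, not those used for the Great Lakes Network or garlic mustard models.

```python
# Hedged sketch of a multi-criteria weighted-overlay risk model, the general
# technique behind GIS-based invasion risk maps. All layers, weights and
# thresholds below are hypothetical stand-ins, not the study's values.
import numpy as np

# Hypothetical raster layers on a common grid, each rescaled to 0..1 risk.
road_proximity = np.random.default_rng(0).random((100, 100))   # closer = higher
canopy_openness = np.random.default_rng(1).random((100, 100))  # open = higher
soil_moisture = np.random.default_rng(2).random((100, 100))    # wetter = higher

# Criterion weights (hypothetical); they must sum to 1.
weights = {"roads": 0.5, "canopy": 0.3, "moisture": 0.2}

# Weighted overlay: combine the criteria into one continuous risk surface.
risk = (weights["roads"] * road_proximity
        + weights["canopy"] * canopy_openness
        + weights["moisture"] * soil_moisture)

# Classify the risk surface into priority classes for monitoring crews.
classes = np.digitize(risk, bins=[0.33, 0.66])  # 0 = low, 1 = medium, 2 = high
print(f"high-risk cells: {(classes == 2).sum()} of {classes.size}")
```

In practice, a separate overlay of this kind would be built for each invasion phase (introduction, establishment, spread), as the abstract describes.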