943 results for Sparse matrices
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been addressed through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers that are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
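As context for the aggregation and aggregation-bias concepts discussed above, here is a minimal numerical sketch in Python/NumPy. The 4-account coefficient matrix, the grouping matrix S, and the weighting matrix W are illustrative assumptions, not taken from the paper; the sketch only shows how an aggregated multiplier matrix and a first-order aggregation-bias term can be computed.

```python
import numpy as np

# Toy 4-account coefficient matrix of a SAM/input-output model
# (column sums < 1 so the multiplier matrix exists); values are illustrative.
A = np.array([
    [0.10, 0.05, 0.20, 0.00],
    [0.15, 0.10, 0.05, 0.10],
    [0.05, 0.20, 0.10, 0.05],
    [0.10, 0.05, 0.05, 0.10],
])

# Disaggregated multiplier matrix M = (I - A)^{-1}
M = np.linalg.inv(np.eye(4) - A)

# Linear aggregation: accounts {0,1} -> macro account 0, accounts {2,3} -> macro account 1.
# S sums detailed accounts into macro accounts; W distributes each macro account
# back to its members with fixed (base-year) shares, so that S @ W = I.
S = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
W = np.array([[0.6, 0.0],
              [0.4, 0.0],
              [0.0, 0.7],
              [0.0, 0.3]])

# Aggregated coefficient and multiplier matrices
A_star = S @ A @ W
M_star = np.linalg.inv(np.eye(2) - A_star)

# First-order aggregation bias operator: applied to a detailed injection vector f,
# (S @ M - M_star @ S) @ f is the error made by the aggregated model when
# predicting the macro totals.  It vanishes exactly when aggregation is consistent.
bias = S @ M - M_star @ S
f = np.array([1.0, 0.0, 0.0, 0.0])          # unit injection into detailed account 0
print("aggregated multipliers:\n", M_star)
print("aggregation bias on f:", bias @ f)
```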
Abstract:
In this paper we simulate and analyse the economic impact that sectoral productivity gains have on two regional Spanish economies (Catalonia and Extremadura). In particular, we study the quantitative effect that each sector’s productivity gain has on household welfare (real disposable income and equivalent variation), on consumption price indices and relative factor prices, on real production (GDP), and on the government’s net income (taxation revenues net of social transfers to households). The analytical approach consists of a computable general equilibrium model in which we assume perfect competition and cleared markets, including factor markets. All the parameters and exogenous variables of the model are calibrated by means of two social accounting matrices, one for each region under study. The results allow us to identify the sectors with the greatest impact on consumer welfare as the key sectors in the regional economies. Keywords: productivity gains, key sectors, computable general equilibrium.
Abstract:
In recent years, tandem mass spectrometry (MS/MS) has been gaining ground as a reference method of analysis in clinical and forensic toxicology, especially for the determination of cannabinoids. Coupled to liquid chromatography (LC) or gas chromatography (GC), it allows the definitive identification and rapid determination of THC, its acid precursor, and its major metabolites, including the glucuronides. Over the past decade, a significant number of publications have appeared on this subject. The aim of this paper is to review tandem mass spectrometry methods for the analysis of cannabinoids in various biological matrices.
Abstract:
Long-term observations of individuals with the so-called Langer-Giedion syndrome (LGS), or tricho-rhino-phalangeal syndrome type II (TRPS2), are scarce. We report here on the follow-up of four LGS individuals, including one first described by Andres Giedion in 1969, and review the sparse publications on adults with this syndrome, which comprises ectodermal dysplasia, multiple cone-shaped epiphyses prior to puberty, multiple cartilaginous exostoses, and mostly mild intellectual impairment. LGS is caused by deletion of the chromosomal segment 8q24.11-q24.13, which contains, among others, the genes EXT1 and TRPS1. Most patients with TRPS2 are only borderline or mildly cognitively delayed, and few are of normal intelligence. Their practical skills are better than their intellectual capability, and, for this reason and because of their low self-esteem, they are often underestimated. Some patients develop seizures at variable ages. Osteomas on the processes of cervical vertebrae may cause pressure on cervical nerves or dissection of cerebral arteries. Joint stiffness is observed during childhood and later changes to joint laxity, causing instability and proneness to trauma. Perthes disease is not rare. Almost all males become bald at or soon after puberty, and some develop (pseudo)gynecomastia. Growth hormone deficiency was found in a few patients, TSH deficiency so far only in one. Puberty and fertility are diminished, and no instance of transmission of the deletion from a non-mosaic parent to a child has been observed so far. Several affected females had vaginal atresia with consequent hydrometrocolpos.
Abstract:
It is generally accepted that most plant populations are locally adapted. Yet understanding how environmental forces give rise to adaptive genetic variation is a challenge in conservation genetics and crucial to the preservation of species under rapidly changing climatic conditions. Environmental variation, phylogeographic history, and population demographic processes all contribute to spatially structured genetic variation; however, few current models attempt to separate these confounding effects. To illustrate the benefits of using a spatially explicit model for identifying potentially adaptive loci, we compared outlier-locus detection methods with a recently developed landscape genetic approach. We analyzed 157 loci from samples of the alpine herb Gentiana nivalis collected across the European Alps. Principal coordinates of neighbor matrices (PCNM), eigenvectors that quantify multi-scale spatial variation present in a data set, were incorporated into a landscape genetic approach relating AFLP frequencies to 23 environmental variables. Four major findings emerged. 1) Fifteen loci were significantly correlated with at least one predictor variable (adjusted R² > 0.5). 2) Models including PCNM variables identified eight more potentially adaptive loci than models run without spatial variables. 3) When compared to outlier detection methods, the landscape genetic approach detected four of the same loci plus 11 additional loci. 4) Temperature, precipitation, and solar radiation were the three major environmental factors driving potentially adaptive genetic variation in G. nivalis. The techniques presented in this paper offer an efficient method for identifying potentially adaptive genetic variation and the associated environmental forces of selection, providing an important step forward for the conservation of non-model species under global change.
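A minimal sketch of how PCNM spatial eigenvectors of the kind used above can be computed from site coordinates (truncated distance matrix, then principal coordinate analysis). The truncation rule and the random coordinates are assumptions for illustration; the abstract's own pipeline (AFLP frequencies, 23 environmental variables) is not reproduced.

```python
import numpy as np

def pcnm(coords, truncate=None):
    """Principal coordinates of neighbor matrices from site coordinates.

    coords: (n, 2) array of site coordinates.
    Returns the spatial eigenvectors (columns) associated with positive
    eigenvalues, usable as predictors alongside environmental variables.
    """
    n = coords.shape[0]
    # Pairwise Euclidean distances between sites
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    # Truncation: keep "neighbour" distances, set larger ones to 4 * threshold.
    # Default threshold: largest nearest-neighbour distance (a simple proxy
    # for the usual minimum-spanning-tree criterion).
    if truncate is None:
        truncate = np.min(np.where(d > 0, d, np.inf), axis=1).max()
    dt = np.where(d <= truncate, d, 4.0 * truncate)
    np.fill_diagonal(dt, 0.0)
    # Principal coordinate analysis: double-centre -0.5 * D^2 and eigendecompose
    a = -0.5 * dt ** 2
    centering = np.eye(n) - np.ones((n, n)) / n
    g = centering @ a @ centering
    eigval, eigvec = np.linalg.eigh(g)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > 1e-8
    return eigvec[:, keep] * np.sqrt(eigval[keep])

# Example with random site coordinates; the returned columns would enter the
# AFLP-frequency ~ environment models as additional spatial predictors.
sites = np.random.default_rng(0).uniform(0, 100, size=(30, 2))
spatial_vars = pcnm(sites)
print(spatial_vars.shape)
```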
Abstract:
Noonan syndrome (NS) and cardio-facio-cutaneous (CFC) syndrome are autosomal dominant disorders characterized by heart defects, facial dysmorphism, ectodermal abnormalities, and mental retardation. There is significant clinical overlap between NS and CFC syndrome, but ectodermal abnormalities and mental retardation are more frequent in CFC syndrome. Mutations in PTPN11 and KRAS have been identified in patients with NS, and mutations in KRAS, BRAF, and MAP2K1/2 in patients with CFC syndrome, establishing a new role of the RAS/MAPK pathway in human development. Recently, mutations in the son of sevenless gene (SOS1) have also been identified in patients with NS. To clarify the clinical spectrum of patients with SOS1 mutations, we analyzed 24 patients with NS, including 3 patients in a three-generation family, and 30 patients with CFC syndrome without PTPN11, KRAS, HRAS, BRAF, or MAP2K1/2 (MEK1/2) mutations. We identified two SOS1 mutations in four NS patients, including the three patients in the above-mentioned three-generation family. Among the patients with a CFC phenotype, three mutations, including a novel three-amino-acid insertion, were identified: one in a CFC patient and two in patients with overlapping NS and CFC phenotypes. These three patients exhibited ectodermal abnormalities, such as curly hair, sparse eyebrows, and dry skin, and two of them showed mental retardation. Our results suggest that the phenotype of patients with SOS1 mutations ranges from NS to CFC syndrome.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive, low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity and the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
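A minimal, self-contained sketch of a gradual-deformation proposal inside a Metropolis sampler, illustrating why such a prior-preserving perturbation can replace sequential resampling. The linear forward operator, noise level, and iid Gaussian prior are toy assumptions, not the thesis' crosshole georadar setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy inverse problem (illustrative only) --------------------------------
n = 50
m_true = rng.standard_normal(n)                    # "true" standardized model
G = rng.standard_normal((20, n)) / np.sqrt(n)      # hypothetical linear forward operator
sigma = 0.1
data = G @ m_true + sigma * rng.standard_normal(20)

def log_likelihood(m):
    r = G @ m - data
    return -0.5 * np.dot(r, r) / sigma**2

def draw_prior():
    """Independent realization from the (here: iid standard normal) prior.
    In the geostatistical setting this would be an unconditional simulation
    with the prescribed spatial covariance."""
    return rng.standard_normal(n)

def gradual_deformation(m, m_indep, theta):
    """Gradual-deformation step: the linear combination preserves the mean and
    covariance of the Gaussian prior, so proposals stay in the prior model space."""
    return np.cos(theta) * m + np.sin(theta) * m_indep

# --- Metropolis sampler with gradual-deformation proposals ------------------
def mcmc(n_iter=5000, theta=0.2):
    m = draw_prior()
    ll = log_likelihood(m)
    accepted = 0
    for _ in range(n_iter):
        m_prop = gradual_deformation(m, draw_prior(), theta)
        ll_prop = log_likelihood(m_prop)
        # For this prior-preserving proposal the acceptance ratio reduces to
        # the likelihood ratio (same algebra as the pCN proposal).
        if np.log(rng.uniform()) < ll_prop - ll:
            m, ll, accepted = m_prop, ll_prop, accepted + 1
    print("acceptance rate:", accepted / n_iter)
    return m

posterior_sample = mcmc()
```

The perturbation strength theta plays the role described in the abstract: small values give small, frequently accepted steps, large values give nearly independent proposals.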
Abstract:
In the PhD thesis “Sound Texture Modeling” we deal with the statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet-tree signal decomposition and the modelling of the resulting coefficient sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter) and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, these segmentation models can help to identify textural portions of an audio recording that are useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows a database of units to be explored sonically by means of their representation in a perceptual feature space. Concatenative synthesis with “molecules” built from sparse atomic representations also allows low-level correlations in perceptual audio features to be captured, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a higher-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
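A small sketch of the wavelet-tree decomposition step that underlies the texture model, using the PyWavelets package; the hidden-Markov-tree training itself is not shown, and the filtered-noise input merely stands in for a recorded texture.

```python
import numpy as np
import pywt

def wavelet_tree(signal, wavelet="db4", levels=5):
    """Multi-level wavelet decomposition of an audio frame.

    Returns the list [cA_L, cD_L, ..., cD_1] of approximation and detail
    coefficients; in a hidden-Markov-tree texture model each detail coefficient
    becomes a node whose hidden state is linked to its parent at the next
    coarser scale.
    """
    return pywt.wavedec(signal, wavelet, level=levels)

# Illustrative "texture-like" signal: filtered noise standing in for a water/rain recording
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(2**14), np.ones(8) / 8, mode="same")
coeffs = wavelet_tree(x)
for band, c in enumerate(coeffs):
    print(f"band {band}: {c.size} coefficients, variance {c.var():.4f}")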
Abstract:
Prenatal heart valve interventions aiming at the early and systematic correction of congenital cardiac malformations represent a promising treatment option in maternal-fetal care. However, definite fetal valve replacements require growing implants adaptive to fetal and postnatal development. The presented study investigates the fetal implantation of prenatally engineered living autologous cell-based heart valves. Autologous amniotic fluid cells (AFCs) were isolated from pregnant sheep between 122 and 128 days of gestation via transuterine sonographic sampling. Stented trileaflet heart valves were fabricated from biodegradable PGA-P4HB composite matrices (n = 9) and seeded with AFCs in vitro. Within the same intervention, tissue engineered heart valves (TEHVs) and unseeded controls were implanted orthotopically into the pulmonary position using an in-utero closed-heart hybrid approach. The transapical valve deployments were successful in all animals with acute survival of 77.8% of fetuses. TEHV in-vivo functionality was assessed using echocardiography as well as angiography. Fetuses were harvested up to 1 week after implantation representing a birth-relevant gestational age. TEHVs showed in vivo functionality with intact valvular integrity and absence of thrombus formation. The presented approach may serve as an experimental basis for future human prenatal cardiac interventions using fully biodegradable autologous cell-based living materials.
Abstract:
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Q_ST-F_ST) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-population (D) and within-population (G) covariance matrices obtained by MANOVA. A simple pattern is expected under neutrality: D = [2 F_ST/(1 - F_ST)] G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G-matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient does not differ from its neutral expectation, 2 F_ST/(1 - F_ST), and (ii) that the MANOVA estimates of the mean-square matrices between and within populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Q_ST-F_ST comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is acting on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
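Restating the abstract's neutral expectation and dual test in standard notation, for reference:

```latex
\[
  D \;=\; \frac{2\,F_{\mathrm{ST}}}{1 - F_{\mathrm{ST}}}\, G
  \qquad\text{(neutral expectation)}
\]
The dual test checks (i) proportionality, $D = \rho\, G$, within Flury's CPC
framework, and (ii) that the proportionality coefficient equals its neutral
value, $\rho = 2F_{\mathrm{ST}}/(1 - F_{\mathrm{ST}})$.
```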
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
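A compact sketch of a single-scale Bayesian sequential simulation in the spirit described above: at each node, a kriging-based spatial prior is combined with a likelihood derived from the collocated geophysical datum. The 1-D grid, exponential covariance, and linear petrophysical link are assumptions for illustration, and the downscaling step is omitted (the electrical-conductivity field is treated as already available at the fine scale).

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Illustrative 1-D setting (stands in for the regional grid) -------------
n = 200
x = np.arange(n, dtype=float)
corr_len, sill = 15.0, 1.0                       # assumed exponential covariance of ln K
def cov(h):
    return sill * np.exp(-np.abs(h) / corr_len)

# Synthetic "truth": correlated ln K and ln sigma (electrical conductivity)
C = cov(x[:, None] - x[None, :])
L = np.linalg.cholesky(C + 1e-8 * np.eye(n))
lnK_true = L @ rng.standard_normal(n)
ln_sigma = 0.8 * lnK_true + 0.3 * rng.standard_normal(n)   # exhaustively known proxy

# Sparse collocated "borehole" data used to calibrate the petrophysical link
wells = rng.choice(n, size=10, replace=False)
slope, intercept = np.polyfit(ln_sigma[wells], lnK_true[wells], 1)
resid_var = np.var(lnK_true[wells] - (slope * ln_sigma[wells] + intercept))

# --- Bayesian sequential simulation ------------------------------------------
lnK_sim = np.full(n, np.nan)
for i in rng.permutation(n):
    done = np.where(~np.isnan(lnK_sim))[0]
    # (a) spatial prior from simple kriging on already-simulated nodes
    if done.size == 0:
        m_sp, v_sp = 0.0, sill
    else:
        k = cov(x[done] - x[i])
        K = cov(x[done][:, None] - x[done][None, :]) + 1e-8 * np.eye(done.size)
        w = np.linalg.solve(K, k)
        m_sp = w @ lnK_sim[done]
        v_sp = max(sill - w @ k, 1e-6)
    # (b) likelihood from the collocated geophysical datum
    m_geo = slope * ln_sigma[i] + intercept
    v_geo = max(resid_var, 1e-6)
    # (c) product of the two Gaussians, then draw the node value
    v_post = 1.0 / (1.0 / v_sp + 1.0 / v_geo)
    m_post = v_post * (m_sp / v_sp + m_geo / v_geo)
    lnK_sim[i] = m_post + np.sqrt(v_post) * rng.standard_normal()

print("correlation with true ln K:", np.corrcoef(lnK_sim, lnK_true)[0, 1])
```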
Abstract:
Inequalities in the physical and psychological health of first- and second-generation Irish subjects have been well documented. Despite the fact that the Irish alcohol misuser is subject to a number of unhelpful stereotypes, research concerning alcohol misuse among the Irish is surprisingly sparse. What little exists indicates that Irish alcohol misusers tend to fit the profile of the "chronic alcoholic." Specifically, they tend to be older (45 years and over) and to have impaired physical and psychological health. Not surprisingly, this is accompanied by poor longitudinal outcomes. Furthermore, alcohol problems worsen as a result of migration (a phenomenon not restricted to the UK). Alcohol and drug services are now frequently merged, and policy is directed towards the visible young illicit drug user. This paper argues that, as a result, Irish alcohol misusers are inadvertently discriminated against. Future avenues of research are outlined to provide services and policy makers with data to plan services that take full account of the needs of Irish alcohol misusers.
Abstract:
Introduction: In the mid-1990s, the discovery of endogenous ligands for cannabinoid receptors opened a new era in this research field. Amides and esters of arachidonic acid have been identified as these endogenous ligands. Arachidonoylethanolamide (anandamide or AEA) and 2-arachidonoylglycerol (2-AG) seem to be the most important of these lipid messengers. In addition, virodhamine (VA), noladin ether (2-AGE), and N-arachidonoyl dopamine (NADA) have been shown to bind to CB receptors with varying affinities. In recent years, it has become increasingly evident that the endocannabinoid (EC) system is part of fundamental regulatory mechanisms in many physiological processes, such as stress and anxiety responses, depression, anorexia and bulimia, schizophrenic disorders, neuroprotection, Parkinson disease, anti-proliferative effects on cancer cells, drug addiction, and atherosclerosis. Aims: This work presents the challenges of EC analysis and the contribution of information-dependent acquisition based on a hybrid triple quadrupole linear ion trap (QqQLIT) system for the profiling of these lipid mediators. Methods: The method was developed on an LC Ultimate 3000 series system (Dionex, Sunnyvale, CA, USA) coupled to a QTrap 4000 system (Applied Biosystems, Concord, ON, Canada). The ECs were separated on an XTerra C18 MS column (50 × 3.0 mm i.d., 3.5 μm) with a 5 min gradient elution. For confirmatory analysis, an information-dependent acquisition experiment was performed with selected reaction monitoring (SRM) as the survey scan and enhanced product ion (EPI) as the dependent scan. Results: The assay was found to be linear in the concentration ranges of 0.1-5 ng/mL for AEA, 0.3-5 ng/mL for VA, 2-AGE, and NADA, and 1-20 ng/mL for 2-AG using 0.5 mL of plasma. Repeatability and intermediate precision were below 15% over the tested concentration ranges. Under non-pathophysiological conditions, only AEA and 2-AG were actually detected in plasma, with concentrations ranging from 104 to 537 pg/mL and from 2160 to 3990 pg/mL, respectively. We have particularly focused on the evaluation of EC level changes in biological matrices during drug addiction and atherosclerosis processes. We will present preliminary data obtained during a pilot study after administration of cannabis to human patients. Conclusion: ECs have been shown to play a key role in the regulation of many pathophysiological processes. Medical research in these different fields continues to grow in order to understand and highlight the predominant role of ECs in CNS and peripheral tissue signalling. The profiling of these lipids requires rapid, highly sensitive, and selective analytical methods.
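A small sketch of how the quoted linearity and 15% precision criteria are typically verified from spiked calibrators, using a weighted linear fit; the concentrations and peak-area ratios below are made-up illustrative values, not data from the study.

```python
import numpy as np

# Hypothetical calibration data for AEA in spiked plasma
# (stated linear range 0.1-5 ng/mL; responses are illustrative only)
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0])                 # ng/mL
area_ratio = np.array([0.021, 0.050, 0.104, 0.198, 0.512, 1.015])  # analyte/IS peak-area ratio

# Weighted (1/x) linear fit, a common choice for bioanalytical calibration;
# numpy squares the weights internally, hence the square root.
w = np.sqrt(1.0 / conc)
slope, intercept = np.polyfit(conc, area_ratio, 1, w=w)

# Back-calculated concentrations and % relative error per calibration level
back = (area_ratio - intercept) / slope
rel_err = 100.0 * (back - conc) / conc
print("slope, intercept:", slope, intercept)
print("back-calculated accuracy (%):", np.round(rel_err, 1))
# Acceptance would typically require accuracy/precision within +/-15 %,
# matching the repeatability criterion quoted in the abstract.
```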
Abstract:
Introduction: DTI has proven to be an exquisite biomarker of tissue microstructure integrity. The technique has been successfully applied to schizophrenia, showing that fractional anisotropy (FA, a marker of white matter integrity) is diminished in several areas of the brain (Kyriakopoulos M et al (2008)). New ways of representing diffusion data have emerged recently and have been used to create structural connectivity maps of healthy brains (Hagmann P et al. (2008)). These maps make it possible to study alterations over the entire brain at the connection and network level, which is of high interest in complex disconnection diseases like schizophrenia. We report on the specific network alterations of schizophrenic patients. Methods: 13 patients with chronic schizophrenia were recruited from in-patient, day-treatment, and out-patient clinics. Comparison subjects were recruited and group-matched to patients on age, sex, handedness, and parental socio-economic status. The study was approved by the local IRB and subjects gave informed written consent. They were scanned with a 3T clinical MRI scanner; DTI and high-resolution anatomical T1-weighted imaging were performed during the same session. The path from diffusion MRI to multi-resolution structural connection matrices of the entire brain is a five-step process performed as described in Hagmann P et al. (2008): (1) DTI and T1w MRI of the brain, (2) segmentation of white and gray matter, (3) white matter tractography, (4) segmentation of the cortex into 242 ROIs of equal surface area covering the entire cortex (Fig 1), and (5) construction of the connection network by measuring, for each ROI-to-ROI connection, the average FA along the corresponding tract. Results: For every connection between two ROIs of the network we tested the hypothesis H0: "average FA along the fiber pathway in patients is larger than or equal to that in controls". H0 was rejected for connections where the average FA was significantly lower in patients than in controls. The threshold p-value was 0.01, corrected for multiple comparisons with the false discovery rate. We consistently identified altered temporal, occipito-temporal, precuneo-temporal, frontal inferior, and precuneo-cingulate connections (Fig 2: significant connections in yellow). This is in agreement with the literature, which has shown across several studies that FA is diminished in several areas of the brain; more precisely, abnormalities have been reported in the prefrontal and temporal white matter and, to some extent, in the parietal and occipital regions. The alterations reported in the literature specifically include the corpus callosum, the arcuate fasciculus, and the cingulum bundle, which was the case here as well. In addition, small-world indexes are significantly reduced in patients (p<0.01) (Fig. 3). Conclusions: Using connectome mapping to characterize differences in structural connectivity between healthy and diseased subjects, we were able to show widespread connectional alterations in schizophrenia patients and a systematic decrease in small-worldness, which is a marker of network disorganization. More generally, we described a method that can sensitively identify structural alterations in complex disconnection syndromes where lesions are widespread throughout the connectional network.
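A minimal sketch of the edge-wise group comparison described in the Results: a one-sided test per ROI-to-ROI connection followed by Benjamini-Hochberg FDR control at 0.01. The group sizes match the abstract, but the FA values are simulated placeholders.

```python
import numpy as np
from scipy import stats

def altered_connections(fa_patients, fa_controls, q=0.01):
    """Edge-wise one-sided comparison of mean-FA connection matrices.

    fa_patients, fa_controls: arrays of shape (n_subjects, n_edges) holding the
    average FA of each ROI-to-ROI connection.  For every edge we test
    H0: FA(patients) >= FA(controls) and keep edges rejected at FDR level q.
    """
    n_edges = fa_patients.shape[1]
    pvals = np.empty(n_edges)
    for e in range(n_edges):
        # two-sample t-test, converted to one-sided ("patients lower than controls")
        t, p = stats.ttest_ind(fa_patients[:, e], fa_controls[:, e])
        pvals[e] = p / 2 if t < 0 else 1 - p / 2
    # Benjamini-Hochberg step-up procedure
    order = np.argsort(pvals)
    thresh = q * np.arange(1, n_edges + 1) / n_edges
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(n_edges, dtype=bool)
    significant[order[:k]] = True
    return significant

# Toy example: 13 patients, 13 controls, 500 connections
rng = np.random.default_rng(0)
controls = rng.normal(0.45, 0.03, size=(13, 500))
patients = rng.normal(0.45, 0.03, size=(13, 500))
patients[:, :20] -= 0.05          # simulate genuinely reduced FA in 20 connections
print(altered_connections(patients, controls).sum(), "connections flagged")
```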
Abstract:
We investigated the role of the number of loci coding for a neutral trait in the release of additive variance for this trait after population bottlenecks. Different bottleneck sizes and durations were tested for various matrices of genotypic values, with initial conditions covering the allele-frequency space. We used three different types of matrices. First, we extended Cheverud and Routman's model by defining matrices of "pure" epistasis for three and four independent loci; second, we used genotypic values drawn randomly from uniform, normal, and exponential distributions; and third, we used two models of simple metabolic pathways leading to physiological epistasis. For all these matrices of genotypic values except the dominant metabolic pathway, we find that the release of additive variance increases as the number of loci increases from two to three and four. The amount of additive variance released for a given set of genotypic values is a function of the inbreeding coefficient, independently of the size and duration of the bottleneck. The level of inbreeding necessary to achieve the maximum release of additive variance increases with the number of loci. We find that additive-by-additive epistasis is the type of epistasis most easily converted into additive variance. For a wide range of models, our results show that epistasis, rather than dominance, plays a significant role in the increase of additive variance following bottlenecks.
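A small simulation sketch of the central phenomenon: with a "pure" additive-by-additive two-locus genotypic matrix, the additive variance is essentially zero at intermediate allele frequencies and is released after drift through a bottleneck. The two-locus matrix and bottleneck sizes are illustrative; the paper's three- and four-locus matrices and metabolic-pathway models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-locus genotypic-value matrix with "pure" additive-by-additive epistasis
# (rows: 0/1/2 copies of allele A at locus 1; columns: same at locus 2).
G = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 0.0,  0.0],
              [-1.0, 0.0,  1.0]])

def additive_variance(p1, p2):
    """Additive variance of the trait at allele frequencies p1, p2, assuming
    Hardy-Weinberg and linkage equilibrium.  Computed as the variance of the
    best frequency-weighted linear predictor of the genotypic value from the
    allele counts at the two loci (Fisher's definition of breeding values)."""
    counts = np.array([0.0, 1.0, 2.0])
    f1 = np.array([(1 - p1) ** 2, 2 * p1 * (1 - p1), p1 ** 2])
    f2 = np.array([(1 - p2) ** 2, 2 * p2 * (1 - p2), p2 ** 2])
    freq = np.outer(f1, f2).ravel()              # frequencies of the 9 genotypes
    x1 = np.repeat(counts, 3)
    x2 = np.tile(counts, 3)
    g = G.ravel()
    X = np.column_stack([np.ones(9), x1, x2])
    sw = np.sqrt(freq)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], g * sw, rcond=None)
    g_add = X @ beta                             # additive (breeding-value) part
    return float(freq @ (g_add - freq @ g_add) ** 2)

# Before the bottleneck (p = 0.5 at both loci) the additive variance is ~0;
# after a one-generation bottleneck of N diploids (F = 1/(2N)) drift moves the
# allele frequencies and additive variance is released from the epistatic term.
p0 = 0.5
print("V_A before bottleneck:", round(additive_variance(p0, p0), 4))
for N in (2, 5, 10, 50):
    va = np.mean([
        additive_variance(rng.binomial(2 * N, p0) / (2 * N),
                          rng.binomial(2 * N, p0) / (2 * N))
        for _ in range(2000)
    ])
    print(f"N={N:3d}  F={1 / (2 * N):.3f}  mean V_A after bottleneck={va:.3f}")
```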