982 results for Causal networks methodology
Abstract:
Recently, graph theory and complex networks have been widely used as a means to model the functionality of the brain. Among the different neuroimaging techniques available for constructing brain functional networks, electroencephalography (EEG), with its high temporal resolution, is a useful instrument for the analysis of functional interdependencies between different brain regions. Alzheimer's disease (AD) is a neurodegenerative disease which leads to substantial cognitive decline and, eventually, dementia in aged people. To achieve a deeper insight into the behavior of functional cerebral networks in AD, here we study their synchronizability in 17 newly diagnosed AD patients compared to 17 healthy control subjects in a no-task, eyes-closed condition. The cross-correlation of artifact-free EEGs was used to construct the brain functional networks. The extracted networks were then tested for their synchronization properties by calculating the eigenratio of the Laplacian matrix of the connection graph, i.e., the largest eigenvalue divided by the second smallest one. In AD patients, we found an increase in the eigenratio, i.e., a decrease in the synchronizability of brain networks, across delta, alpha, beta, and gamma EEG frequencies and over a wide range of network costs. This finding indicates the disruption of functional brain networks in early AD.
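As a hedged illustration of the synchronizability measure described above, the sketch below computes the Laplacian eigenratio (largest divided by second-smallest eigenvalue) for a network built by thresholding a correlation matrix; the threshold value, channel count, and use of NumPy are assumptions for the example, not details from the paper.

```python
import numpy as np

def laplacian_eigenratio(corr, threshold):
    """Eigenratio (largest / second-smallest Laplacian eigenvalue)
    of a graph built by thresholding a correlation matrix."""
    A = (np.abs(corr) > threshold).astype(float)  # binary adjacency matrix
    np.fill_diagonal(A, 0.0)                      # no self-loops
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian L = D - A
    eig = np.sort(np.linalg.eigvalsh(L))          # ascending eigenvalues
    return eig[-1] / eig[1]                       # lambda_N / lambda_2

# Example on random data (19 hypothetical EEG channels, 1000 samples);
# a low threshold keeps this random-graph example connected.
rng = np.random.default_rng(0)
X = rng.standard_normal((19, 1000))
ratio = laplacian_eigenratio(np.corrcoef(X), threshold=0.02)
print(f"eigenratio = {ratio:.2f}")  # larger value => harder to synchronize
```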
Abstract:
In this article we address the use and importance of the statistical tools that are mainly applied in medical studies in the fields of oncology and haematology, but which are applicable to many other fields, whether medical, experimental, or industrial. The aim of this work is to present, clearly and precisely, the statistical methodology needed to analyse the data obtained in such studies rigorously and concisely with respect to the working hypotheses posed by the investigators. The chosen measure of treatment response, together with the type of study design, determines the statistical methods used during the analysis of the study data as well as the sample size. Through the correct application of statistical analysis and adequate planning, it can be determined whether the relationship found between exposure to a treatment and an outcome is due to chance or, on the contrary, reflects a non-random association that could point to a causal relationship. We review the main designs of the most widely used medical studies, such as clinical trials and observational studies (cohort, case-control, prevalence, and ecological studies). We also present a section on the sample size of studies and how to calculate it, which statistical test should be used, measures of effect strength such as the odds ratio (OR) and relative risk (RR), and survival analysis. Examples are presented in most sections of the article, together with the most relevant references.
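As a small numeric illustration of the effect-strength measures mentioned above (the numbers are hypothetical, not taken from the article), the sketch below computes the odds ratio and relative risk from a 2x2 exposure-outcome table.

```python
# Hypothetical 2x2 table: rows = exposed / unexposed, cols = event / no event
a, b = 30, 70   # exposed:   30 events, 70 non-events
c, d = 15, 85   # unexposed: 15 events, 85 non-events

risk_exposed   = a / (a + b)          # 0.30
risk_unexposed = c / (c + d)          # 0.15
rr  = risk_exposed / risk_unexposed   # relative risk = 2.00
or_ = (a * d) / (b * c)               # odds ratio  ~= 2.43

print(f"RR = {rr:.2f}, OR = {or_:.2f}")
```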
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise the concept of spatial filtering for connectivity matrices to the weighted case. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
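A minimal sketch of one common construction of such spatial modes, assuming the symmetric normalisation D^{-1/2} E D^{-1/2} familiar from spectral clustering; the exact standardisation used in the paper may differ.

```python
import numpy as np

def spatial_modes(E):
    """Eigenvectors of a symmetrically standardised exchange matrix.

    E : symmetric exchange matrix (entries sum to 1; margins f = E @ 1).
    Returns eigenvalues and mode coefficients (columns), assuming the
    D^{-1/2} E D^{-1/2} normalisation used in spectral clustering.
    """
    f = E.sum(axis=1)                        # regional weights (margins)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(f))
    Es = D_inv_sqrt @ E @ D_inv_sqrt         # standardised exchange matrix
    vals, vecs = np.linalg.eigh(Es)          # symmetric => orthogonal eigenvectors
    return vals[::-1], vecs[:, ::-1]         # sorted from largest eigenvalue

# Toy example: three regions with uneven exchange (entries sum to 1)
E = np.array([[0.20, 0.10, 0.05],
              [0.10, 0.25, 0.05],
              [0.05, 0.05, 0.15]])
vals, modes = spatial_modes(E)
print(vals)  # leading eigenvalue is 1; its mode is the trivial sqrt(f) vector
```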
Abstract:
Two trends that presently exist in relation to the concept of Paleontology are analyzed, pointing out some of the aspects that have a negative influence. Various reflections are made based on examples of some of the principal points of paleontological method, such as the influence of point sampling, the meaning of size-frequency distributions, and subjectivity in the identification of fossils. Topics that have a marked repercussion on diverse aspects of Paleontology are discussed.
Abstract:
Computational network analysis provides new methods to analyze the brain's structural organization based on diffusion imaging tractography data. Networks are characterized by global and local metrics that have recently given promising insights into the diagnosis and further understanding of psychiatric and neurologic disorders. Most of these metrics are based on the idea that information in a network flows along the shortest paths. In contrast to this notion, communicability is a broader measure of connectivity which assumes that information could flow along all possible paths between two nodes. In our work, the features of network metrics related to communicability were explored for the first time in the healthy structural brain network. In addition, the sensitivity of such metrics was analysed using simulated lesions to specific nodes and network connections. Results showed advantages of communicability over conventional metrics in detecting densely connected nodes as well as subsets of nodes vulnerable to lesions. In addition, communicability centrality was shown to be widely affected by the lesions, and the changes were negatively correlated with the distance from the lesion site. In summary, our analysis suggests that communicability metrics may provide insight into the integrative properties of the structural brain network and that these metrics may be useful for the analysis of brain networks in the presence of lesions. Nevertheless, the interpretation of communicability is not straightforward; hence these metrics should be used as a supplement to the more standard connectivity network metrics.
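As a brief illustration of the communicability idea (following the standard Estrada-Hatano definition, in which walks of length k are weighted by 1/k!; the abstract does not state the paper's exact weighting scheme), the sketch below computes the communicability matrix of a small graph.

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a small undirected graph (5 nodes)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Communicability: G = e^A = sum_k A^k / k!  -- counts walks of all
# lengths between two nodes, weighting shorter walks more heavily.
G = expm(A)

print(G[0, 4])  # communicability between nodes 0 and 4 (no direct edge)
print(G[0, 1])  # higher: direct edge plus many short alternative walks
```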
Abstract:
This paper reports on the purpose, design, methodology and target audience of E-learning courses in forensic interpretation offered by the authors since 2010, including practical experiences gained throughout the implementation period of this project. This initiative was motivated by the fact that reporting results of forensic examinations in a logically correct and scientifically rigorous way is a daily challenge for any forensic practitioner. Indeed, interpretation of raw data and communication of findings in both written and oral statements are topics where knowledge and applied skills are needed. Although most forensic scientists hold educational records in traditional sciences, only a few have actually followed full courses that focussed on interpretation issues. Such courses should include foundational principles and methodology - including elements of forensic statistics - for the evaluation of forensic data in a way that is tailored to meet the needs of the criminal justice system. In order to help bridge this gap, the authors' initiative seeks to offer educational opportunities that allow practitioners to acquire knowledge and competence in current approaches to the evaluation and interpretation of forensic findings. These cover, among other aspects, probabilistic reasoning (including Bayesian networks and other methods of forensic statistics, tools and software), case pre-assessment, skills in the oral and written communication of uncertainty, and the development of the independence and self-confidence needed to solve practical inference problems. E-learning was chosen as the general format because it helps to form a trans-institutional online community of practitioners from varying forensic disciplines and fields of experience, such as reporting officers, (chief) scientists and forensic coordinators, but also lawyers, who can all interact directly from their personal workplaces regardless of distance, travel expenses or time schedules. In the authors' experience, the proposed learning initiative supports participants in developing their expertise and skills in forensic interpretation, but it also offers an opportunity for the associated institutions and the forensic community to reinforce the development of a harmonized view of interpretation across forensic disciplines, laboratories and judicial systems.
Abstract:
Empirical studies indicate that the transition to parenthood is influenced by an individual's peer group. To study the mechanisms creating interdependencies across individuals' transition to parenthood and its timing, we apply an agent-based simulation model. We build a one-sex model and provide agents with three different characteristics regarding age, intended education and parity. Agents endogenously form their network based on social closeness. Network members then may influence the agents' transition to higher parity levels. Our numerical simulations indicate that accounting for social interactions can explain the shift of first-birth probabilities in Austria over the period 1984 to 2004. Moreover, we apply our model to forecast age-specific fertility rates up to 2016.
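Purely as an illustrative toy (none of the parameters, rates, or network rules below come from the paper), the sketch shows the basic mechanism described above: agents form ties by social closeness in age and education, and ties to agents with children raise an agent's own transition probability.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 500

# Agent characteristics: age, intended education level, parity
age = rng.integers(18, 45, N)
edu = rng.integers(0, 3, N)            # 0 = low, 1 = medium, 2 = high
parity = np.zeros(N, dtype=int)

# Network by "social closeness": link agents similar in age and education
closeness = (-np.abs(age[:, None] - age[None, :]) / 5.0
             - np.abs(edu[:, None] - edu[None, :]))
np.fill_diagonal(closeness, -np.inf)
friends = np.argsort(closeness, axis=1)[:, -10:]  # 10 closest peers each

base_rate = 0.05    # assumed baseline yearly transition probability
peer_effect = 0.10  # assumed extra probability per share of peers with children

for year in range(20):                 # simulate 20 years
    peer_share = (parity[friends] > 0).mean(axis=1)
    p = base_rate + peer_effect * peer_share
    births = (rng.random(N) < p) & (age >= 18) & (age <= 45)
    parity += births
    age += 1

print(f"share of agents with at least one child: {(parity > 0).mean():.2f}")
```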
Abstract:
Synthetic root exudates were formulated based on the organic acid composition of root exudates derived from the rhizosphere of aseptically grown corn plants, the pH of the rhizosphere, and the background chemical matrices of the soil solutions. The synthetic root exudates, which mimic the chemical conditions of the rhizosphere environment where soil-borne metals are dissolved and absorbed by plants, were used to extract metals from sewage-sludge-treated soils 16 successive times. The concentrations of Zn, Cd, Ni, Cr, and Cu in the sludge-treated soil were 71.74, 0.21, 15.90, 58.12, and 37.44 mg kg-1, respectively. The synthetic root exudates consisted of acetic, butyric, glutaric, lactic, maleic, propionic, pyruvic, succinic, tartaric, and valeric acids. The organic acid mixtures had concentrations of 0.05 and 0.1 mol L-1 of -COOH groups. The trace elements removed by successive extractions may be considered representative of the availability of these metals to plants in these soils. The chemical speciation of the metals in the liquid phase was calculated; the results showed that metals in sludge-treated soils were dissolved and formed soluble complexes with the different organic acid-based root exudates. The most reactive organic acid ligands were lactate, maleate, tartrate, and acetate. The inorganic ligands chloride and sulfate played insignificant roles in metal dissolution. Except for Cd, free ions did not represent an important chemical species of the metals in the soil rhizosphere. As different metals formed soluble complexes with different ligands in the rhizosphere, no extractant based on a single reagent would be able to recover all of the potentially plant-available metals from soils; the root exudate-derived organic acid mixtures tested in this study may be better suited to recovering potentially plant-available metals from soils than the conventional extractants.
Abstract:
This article presents an experimental study of the classification ability of several classifiers for multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, which makes the classification of cannabis seedlings a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with drug-type cannabis being made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for either classification problem. Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both 2-class and 12-class classification, with average classification results of up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified as drug-type cannabis the independent validation set, which consists of cannabis plants coming from police seizures. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance for discriminating between cannabis chemotypes or between drug-type cannabis varieties.
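As a hedged sketch of the kind of RBF SVM pipeline the study evaluates (the feature matrix below is a random placeholder; the real study uses GC-MS measurements of eight leaf-compound proportions, which are not reproduced here), one might proceed as follows with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows = samples, columns = relative proportions of
# eight leaf compounds (stand-ins for the GC-MS features).
rng = np.random.default_rng(0)
X = rng.random((200, 8))
y = rng.integers(0, 2, 200)        # 0 = fibre type, 1 = drug type

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM; scaling first, since the RBF kernel is distance-based
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")  # near chance on random data
```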
Abstract:
Vibration-based damage identification (VBDI) techniques have been developed in part to address the problems associated with an aging civil infrastructure. To assess the potential of VBDI as it applies to highway bridges in Iowa, three applications of VBDI techniques were considered in this study: numerical simulation, laboratory structures, and field structures. VBDI techniques were found to be highly capable of locating and quantifying damage in numerical simulations. These same techniques were found to be accurate in locating various types of damage in a laboratory setting with actual structures. Although there is the potential for these techniques to quantify damage in a laboratory setting, the ability of the methods to quantify low-level damage in the laboratory is not robust. When applying these techniques to an actual bridge, it was found that some traditional applications of VBDI methods are capable of describing the global behavior of the structure but are most likely not suited for the identification of typical damage scenarios found in civil infrastructure. Measurement noise, boundary conditions, complications due to substructures and multiple material types, and transducer sensitivity make it very difficult for present VBDI techniques to identify, much less quantify, highly localized damage (such as small cracks and minor changes in thickness). However, while investigating VBDI techniques in the field, it was found that if the frequency-domain response of the structure can be generated from operating traffic load, the structural response can be animated and used to develop a holistic view of the bridge’s response to various automobile loadings. By animating the response of a field bridge, concrete cracking (in the abutment and deck) was correlated with structural motion and problem frequencies (i.e., those that cause significant torsion or tension-compression at beam ends) were identified. Furthermore, a frequency-domain study of operational traffic was used to identify both common and extreme frequencies for a given structure and loading. Common traffic frequencies can be compared to problem frequencies so that cost-effective, preventative solutions (either structural or usage-based) can be developed for a wide range of IDOT bridges. Further work should (1) perfect the process of collecting high-quality operational frequency response data; (2) expand and simplify the process of correlating frequency response animations with damage; and (3) develop efficient, economical, preemptive solutions to common damage types.
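To illustrate the frequency-domain step described above (a generic sketch, not the study's actual processing chain; the sampling rate and modal frequencies are invented), an operational acceleration record can be transformed with an FFT to expose the dominant response frequencies:

```python
import numpy as np

fs = 200.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                # 30 s of operational response

# Placeholder accelerometer signal: two structural modes plus noise
accel = (np.sin(2 * np.pi * 3.2 * t)        # e.g. first bending mode
         + 0.5 * np.sin(2 * np.pi * 11.7 * t)
         + 0.3 * np.random.default_rng(1).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

# The two strongest spectral bins recover ~3.2 Hz and ~11.7 Hz
peaks = freqs[np.argsort(spectrum)[-2:]]
print(f"dominant frequencies: {np.sort(peaks)} Hz")
```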
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network, in which the inferred functional connectivity is shaped by monosynaptic excitatory connections rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light-scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
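For readers unfamiliar with Transfer Entropy, the sketch below shows a plain histogram-based estimator with order-1 embedding; the paper's "improved" variant adds, among other things, conditioning on the network's mean activity, which this toy omits.

```python
import numpy as np

def transfer_entropy(x, y, bins=3):
    """Histogram-based Transfer Entropy from y to x (in bits), order-1 embedding:
    TE = sum p(x_t+1, x_t, y_t) * log2[ p(x_t+1 | x_t, y_t) / p(x_t+1 | x_t) ]."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    x_next, x_past, y_past = xd[1:], xd[:-1], yd[:-1]
    te = 0.0
    for xn in np.unique(x_next):
        for xp in np.unique(x_past):
            for yp in np.unique(y_past):
                joint = (x_next == xn) & (x_past == xp) & (y_past == yp)
                p_joint = joint.mean()
                if p_joint == 0:
                    continue
                p_cond_xy = p_joint / ((x_past == xp) & (y_past == yp)).mean()
                p_cond_x = (((x_next == xn) & (x_past == xp)).mean()
                            / (x_past == xp).mean())
                te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

# Quick check: x is driven by the past of y, but not vice versa
rng = np.random.default_rng(0)
y = rng.standard_normal(5000)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(5000)
print(transfer_entropy(x, y))   # clearly positive (y drives x)
print(transfer_entropy(y, x))   # near zero
```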
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
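As a compact reminder of what a GRNN computes (Specht's formulation, i.e. a Gaussian-kernel weighted average of the training targets; the paper's automatic tuning is reduced here to a simple leave-one-out grid search, and the data are made up):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

def tune_sigma(X, y, grid):
    """Pick the isotropic kernel width by leave-one-out cross-validation."""
    best, best_err = None, np.inf
    for s in grid:
        errs = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], s)
            errs.append((pred[0] - y[i]) ** 2)
        if np.mean(errs) < best_err:
            best, best_err = s, np.mean(errs)
    return best

rng = np.random.default_rng(3)
X = rng.random((60, 2))                      # e.g. spatial coordinates
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(60)
sigma = tune_sigma(X, y, grid=[0.05, 0.1, 0.2, 0.4])
print(f"selected sigma: {sigma}")
```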
Abstract:
Soil surveys are the main source of spatial information on soils and have a range of different applications, mainly in agriculture. The continuity of this activity has, however, been severely compromised, mainly due to a lack of governmental funding. The purpose of this study was to evaluate the feasibility of two different classifiers (artificial neural networks and a maximum likelihood algorithm) in the prediction of soil classes in the northwest of the state of Rio de Janeiro. Terrain attributes such as elevation, slope, aspect, plan curvature and compound topographic index (CTI), together with indices of clay minerals, iron oxide and the Normalized Difference Vegetation Index (NDVI) derived from Landsat 7 ETM+ sensor imagery, were used as discriminating variables. The two classifiers were trained and validated for each soil class using 300 and 150 samples respectively, representing the characteristics of these classes in terms of the discriminating variables. According to the statistical tests, the accuracy of the classifier based on artificial neural networks (ANNs) was greater than that of the classic Maximum Likelihood Classifier (MLC). Comparison of the results with 126 reference points showed that the resulting ANN map (73.81 %) was superior to the MLC map (57.94 %). The main errors when using the two classifiers were caused by: a) the geological heterogeneity of the area coupled with problems related to the geological map; b) the depth of lithic contact and/or rock exposure; and c) problems with the environmental correlation model used, due to the polygenetic nature of the soils. This study confirms that the use of terrain attributes together with remote sensing data in an ANN approach can be a tool to facilitate soil mapping in Brazil, primarily due to the availability of low-cost remote sensing data and the ease with which terrain attributes can be obtained.
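For context, the Maximum Likelihood Classifier used as the baseline here is typically the Gaussian per-class model standard in remote sensing; a minimal sketch (with made-up stand-ins for the terrain-attribute features) is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def mlc_fit(X, y):
    """Fit per-class mean and covariance (Gaussian maximum likelihood)."""
    return {c: (X[y == c].mean(axis=0), np.cov(X[y == c], rowvar=False))
            for c in np.unique(y)}

def mlc_predict(X, params):
    """Assign each sample to the class with the highest log-likelihood."""
    classes = sorted(params)
    ll = np.column_stack([
        multivariate_normal.logpdf(X, mean=m, cov=S, allow_singular=True)
        for m, S in (params[c] for c in classes)])
    return np.array(classes)[ll.argmax(axis=1)]

# Placeholder data: columns could be elevation, slope, CTI, NDVI, ...
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (150, 4)), rng.normal(2, 1, (150, 4))])
y = np.repeat([0, 1], 150)                 # two hypothetical soil classes
params = mlc_fit(X, y)
acc = (mlc_predict(X, params) == y).mean()
print(f"training accuracy: {acc:.2f}")
```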