904 results for Network Analysis Methods
Abstract:
Different socio-economic and environmental drivers lead local communities in mountain regions to adapt land use practices and engage in protection policies. The political system likewise has to develop new approaches in response to those drivers. Local actors are the target group of these policy approaches, and the question arises of whether and to what extent those actors are consulted or even integrated into the design of local land use and protection policies. This article addresses this question by comparing seven case studies in Swiss mountain regions. Through a formal social network analysis, the inclusion of local actors in collaborative policy networks is investigated and compared to the involvement of other stakeholders representing the next higher sub-national or national decisional levels. Results show that there is a significant difference (1) in how local actors are embedded compared to other stakeholders; and (2) between top-down and bottom-up designed policy processes.
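A minimal sketch of the kind of comparison such a social network analysis involves, using the networkx library; the actor names, edge list, decisional-level labels, and the use of degree centrality as the embeddedness measure are illustrative assumptions, not the study's actual data or metric.

```python
# Hedged sketch: comparing how "local" vs. higher-level actors are
# embedded in a collaborative policy network (illustrative data).
import networkx as nx

# Hypothetical collaboration ties among actors in one case study.
edges = [
    ("farmer_assoc", "municipality"), ("municipality", "canton_office"),
    ("farmer_assoc", "tourism_board"), ("tourism_board", "municipality"),
    ("canton_office", "federal_agency"), ("municipality", "federal_agency"),
]
level = {  # decisional level of each actor (assumed labels)
    "farmer_assoc": "local", "municipality": "local", "tourism_board": "local",
    "canton_office": "sub-national", "federal_agency": "national",
}

G = nx.Graph(edges)
centrality = nx.degree_centrality(G)

# Average embeddedness per decisional level.
for lvl in ("local", "sub-national", "national"):
    actors = [a for a in G if level[a] == lvl]
    mean_c = sum(centrality[a] for a in actors) / len(actors)
    print(f"{lvl}: mean degree centrality = {mean_c:.2f}")
```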
Abstract:
Climate adaptation policies increasingly incorporate sustainability principles into their design and implementation. Since successful adaptation, by way of adaptive capacity, is recognized as depending on progress toward sustainable development, policy design is increasingly characterized by the inclusion of state and non-state actors (horizontal actor integration), cross-sectoral collaboration, and inter-generational planning perspectives. Comparing four case studies in Swiss mountain regions, three located in the Upper Rhone region and one in western Switzerland, we investigate how sustainability is put into practice. We argue that collaboration networks and sustainability perceptions matter when assessing the implementation of sustainability in local climate change adaptation. In other words, we suggest that adaptation is successful where sustainability perceptions translate into cross-sectoral integration and collaboration on the ground. Data about perceptions and network relations are assessed through surveys and treated via cluster and social network analysis.
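As an illustration of the cluster-analysis step on survey data, a minimal sketch using scikit-learn's KMeans; the perception items, the response scale, the number of clusters, and the data itself are assumptions for demonstration only.

```python
# Hedged sketch: clustering actors by sustainability perceptions
# from survey responses (illustrative data, assumed 5-point scale).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows = actors, columns = perception items (e.g., economic,
# ecological, social, inter-generational priorities).
perceptions = rng.integers(1, 6, size=(30, 4)).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(perceptions)
for c in range(3):
    members = (kmeans.labels_ == c).sum()
    print(f"cluster {c}: {members} actors, centroid {kmeans.cluster_centers_[c].round(2)}")
```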
Abstract:
Increasing antibiotic resistance among uropathogenic Escherichia coli (UPEC) is driving interest in therapeutic targeting of nonconserved virulence factor (VF) genes. The ability to formulate efficacious combinations of antivirulence agents requires an improved understanding of how UPEC deploy these genes. To identify clinically relevant VF combinations, we applied contemporary network analysis and biclustering algorithms to VF profiles from a large, previously characterized inpatient clinical cohort. These mathematical approaches identified four stereotypical VF combinations with distinctive relationships to antibiotic resistance and patient sex that are independent of traditional phylogenetic grouping. Targeting resistance- or sex-associated VFs based upon these contemporary mathematical approaches may facilitate individualized anti-infective therapies and identify synergistic VF combinations in bacterial pathogens.
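A minimal sketch of applying a biclustering algorithm to a binary virulence-factor matrix, using scikit-learn's SpectralCoclustering; the matrix contents and the choice of four clusters (mirroring the four combinations reported) are illustrative assumptions, not the study's algorithm or data.

```python
# Hedged sketch: biclustering a binary isolate-by-virulence-factor
# matrix to find co-occurring VF combinations (illustrative data).
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(1)
# Rows = clinical isolates, columns = VF genes; 1 = gene present.
X = rng.integers(0, 2, size=(60, 12)).astype(float)
X[X.sum(axis=1) == 0, 0] = 1  # avoid all-zero rows for the solver

model = SpectralCoclustering(n_clusters=4, random_state=0).fit(X)
for k in range(4):
    genes = np.where(model.column_labels_ == k)[0]
    print(f"VF combination {k}: gene columns {genes.tolist()}")
```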
Abstract:
Information on the possible resource value of sea-floor manganese nodule deposits in the eastern North Pacific has been obtained through a study of the records and collections of the 1972 Sea Scope Expedition.
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best in each case. Aim: Compile a set of rules that software engineering (SE) researchers can use to ascertain which aggregation method is best for the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the methods. If there is, software engineers should select the method that optimizes both parameters.
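A minimal sketch of the Monte Carlo setup described, estimating the statistical power of the weighted mean difference under assumed simulation parameters (number of experiments, subjects per group, effect size); the concrete numbers and the z-test on the inverse-variance pooled estimate are illustrative choices, not the paper's exact procedure.

```python
# Hedged sketch: Monte Carlo estimate of the power of a
# weighted-mean-difference (inverse-variance) meta-analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group, true_effect = 5, 15, 0.5  # assumed parameters
n_trials, rejections = 2000, 0

for _ in range(n_trials):
    effects, variances = [], []
    for _ in range(n_experiments):
        treat = rng.normal(true_effect, 1.0, n_per_group)
        ctrl = rng.normal(0.0, 1.0, n_per_group)
        d = treat.mean() - ctrl.mean()           # raw mean difference
        v = treat.var(ddof=1) / n_per_group + ctrl.var(ddof=1) / n_per_group
        effects.append(d)
        variances.append(v)
    w = 1.0 / np.array(variances)                # inverse-variance weights
    pooled = np.sum(w * effects) / w.sum()
    se = np.sqrt(1.0 / w.sum())
    if 2 * (1 - stats.norm.cdf(abs(pooled / se))) < 0.05:  # two-sided z-test
        rejections += 1

print(f"estimated power: {rejections / n_trials:.2f}")
```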
Abstract:
Network analysis exercise with Network Analysis in ArcGIS 10
Abstract:
In Australia more than 300 vertebrates, including 43 insectivorous bat species, depend on hollows in habitat trees for shelter, and many species use a network of multiple trees as roosts. We used roost-switching data on white-striped freetail bats (Tadarida australis; Microchiroptera: Molossidae) to construct a network representation of day roosts in suburban Brisbane, Australia. Bats were caught from a communal roost tree with a roosting group of several hundred individuals and released with transmitters. Each roost used by the bats represented a node in the network, and the movements of bats between roosts formed the links between nodes. Despite differences in gender and reproductive stages, the bats exhibited the same behavior throughout three radiotelemetry periods and over 500 bat-days of radio tracking: each roosted in separate roosts, switched roosts very infrequently, and associated with other bats only at the communal roost. This network resembled a scale-free network in which the distribution of the number of links from each roost followed a power law. Despite being spread over a large geographic area (>200 km²), each roost was connected to others by fewer than three links. One roost (the hub or communal roost) defined the architecture of the network because it had the most links. That the network showed scale-free properties has profound implications for the management of the habitat trees of this roosting group. Scale-free networks provide high tolerance against stochastic events such as random roost removals but are susceptible to the selective removal of hub nodes. Network analysis is a useful tool for understanding the structural organization of habitat tree usage and allows an informed judgment of the relative importance of individual trees, and hence the derivation of appropriate management decisions. Conservation planners and managers should emphasize the differential importance of habitat trees and think of them as being analogous to vital service centers in human societies.
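A minimal sketch of the robustness property the abstract invokes, using networkx; the Barabási-Albert graph stands in for the roost network, and the component-size comparison after random versus hub removal is an illustrative assumption, not the study's analysis.

```python
# Hedged sketch: scale-free networks tolerate random node loss but
# fragment when hubs are removed (illustrative toy network).
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=100, m=2, seed=0)  # stand-in roost network

def largest_component_after_removal(graph, nodes_to_remove):
    H = graph.copy()
    H.remove_nodes_from(nodes_to_remove)
    return len(max(nx.connected_components(H), key=len))

# Remove 5 random roosts vs. the 5 most-connected roosts (hubs).
random_nodes = random.sample(list(G.nodes), 5)
hub_nodes = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]]

print("after random removal:", largest_component_after_removal(G, random_nodes))
print("after hub removal:   ", largest_component_after_removal(G, hub_nodes))
```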
Abstract:
This article explains, first, why a knowledge of statistics is necessary and describes the role that statistics plays in an experimental investigation. Second, the normal distribution is introduced, which describes the natural variability shown by many measurements in optometry and vision sciences. Third, the application of the normal distribution to some common statistical problems is described, including how to determine whether an individual observation is a typical member of a population and how to determine the confidence interval for a sample mean.
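A minimal sketch of the two problems mentioned, computed with scipy; the sample values and the new observation are invented for illustration.

```python
# Hedged sketch: (1) is an observation typical of a population?
# (2) confidence interval for a sample mean (illustrative data).
import numpy as np
from scipy import stats

sample = np.array([14.2, 15.1, 13.8, 14.9, 15.4, 14.5, 13.9, 15.0])

# (1) z-score of a new observation against the sample estimates.
obs = 17.0
z = (obs - sample.mean()) / sample.std(ddof=1)
print(f"z = {z:.2f}  (|z| > 1.96 suggests an atypical observation)")

# (2) 95% confidence interval for the mean, using the t distribution.
sem = stats.sem(sample)
lo, hi = stats.t.interval(0.95, len(sample) - 1, loc=sample.mean(), scale=sem)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```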
Abstract:
In this second article, statistical ideas are extended to the problem of testing whether there is a true difference between two samples of measurements. First, it will be shown that the difference between the means of two samples comes from a population of such differences which is normally distributed. Second, the 't' distribution, one of the most important in statistics, will be applied to a test of the difference between two means using a simple data set drawn from a clinical experiment in optometry. Third, in making a t-test, a statistical judgement is made as to whether there is a significant difference between the means of two samples. Before the widespread use of statistical software, this judgement was made with reference to a statistical table. Even if such tables are not used, it is useful to understand their logical structure and how to use them. Finally, the analysis of data which are known to depart significantly from the normal distribution will be described.
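A minimal sketch of the two-sample t-test described, plus a rank-based alternative for the non-normal case mentioned at the end; the measurements are invented for illustration.

```python
# Hedged sketch: two-sample t-test, with a non-parametric fallback
# for data that depart from normality (illustrative measurements).
from scipy import stats

group_a = [12.1, 13.4, 11.8, 12.9, 13.1, 12.5]
group_b = [14.0, 13.8, 14.6, 13.5, 14.2, 14.9]

t, p = stats.ttest_ind(group_a, group_b)   # assumes equal variances
print(f"t = {t:.2f}, p = {p:.4f}")

# If normality is doubtful, a rank-based test such as Mann-Whitney U
# avoids the distributional assumption.
u, p_u = stats.mannwhitneyu(group_a, group_b)
print(f"U = {u:.1f}, p = {p_u:.4f}")
```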
Abstract:
In some studies, the data are not measurements but comprise counts or frequencies of particular events. In such cases, an investigator may be interested in whether one specific event happens more frequently than another or whether an event occurs with a frequency predicted by a scientific model.
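A minimal sketch of a goodness-of-fit test on counts of this kind, using scipy's chi-square test; the observed frequencies and the model's expected proportions are invented for illustration.

```python
# Hedged sketch: do observed event counts match the frequencies
# predicted by a model? (illustrative counts and proportions)
from scipy import stats

observed = [48, 35, 17]              # counts of three event types
expected_prop = [0.5, 0.3, 0.2]      # model-predicted proportions
expected = [p * sum(observed) for p in expected_prop]

chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```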
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should use ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibility of making either a Type I or a Type II error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to a specific experimental design. The uses of some of the most common forms of ANOVA in optometry are described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry, since once embarked upon an experiment with an unsuitable design, there may be little a statistician can do to help.
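A minimal sketch of the single-classification (one-way) ANOVA the article covers, followed by a Tukey post-hoc comparison; the three treatment groups and their values are invented for illustration.

```python
# Hedged sketch: one-way ANOVA across three treatment groups,
# with a Tukey HSD post-hoc test (illustrative data).
from scipy import stats

treatment_1 = [10.2, 11.1, 9.8, 10.6, 10.9]
treatment_2 = [12.4, 11.9, 12.8, 12.1, 13.0]
treatment_3 = [10.5, 10.9, 11.2, 10.1, 10.8]

f, p = stats.f_oneway(treatment_1, treatment_2, treatment_3)
print(f"F = {f:.2f}, p = {p:.4f}")

# Post-hoc pairwise comparisons controlling the family-wise error
# rate (relevant to the Type I error protection discussed above).
print(stats.tukey_hsd(treatment_1, treatment_2, treatment_3))
```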
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r.
2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used.
3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables.
4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative.
A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information; the methods of regression and their application in optometry will be discussed in the next article.
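A minimal sketch computing Pearson's r, its r-squared share of variance, and the Spearman alternative mentioned in point 2; the paired measurements are invented for illustration.

```python
# Hedged sketch: Pearson's r with r-squared, plus Spearman's rank
# correlation as the non-parametric alternative (illustrative data).
from scipy import stats

x = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8, 7.0, 8.1]
y = [2.3, 2.9, 3.8, 4.1, 5.5, 5.9, 6.8, 8.0]

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, r^2 = {r**2:.3f} (share of variance in y), p = {p:.4f}")

rs, p_s = stats.spearmanr(x, y)
print(f"Spearman rs = {rs:.3f}, p = {p_s:.4f}")
```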
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, and knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that variables should be selected with great care, since including an obviously unimportant variable may have a significant impact on the sample size required.
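A minimal sketch of an exploratory multiple regression with the R-squared check discussed above, using statsmodels; the predictors, response, and the 50% threshold comment are taken from the abstract, while the data are invented for illustration.

```python
# Hedged sketch: exploratory multiple regression, checking R-squared
# before trusting individual predictors (illustrative data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 3))                  # three candidate predictors
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(f"R-squared = {model.rsquared:.2f}")   # suspect if below ~0.5
print(model.params.round(2))                 # intercept + coefficients
```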