412 results for Complex samples


Relevance: 20.00%

Abstract:

Smut fungi are important pathogens of grasses, including the cultivated crops maize, sorghum and sugarcane. Typically, smut fungi infect the inflorescence of their host plants. Three genera of smut fungi (Ustilago, Sporisorium and Macalpinomyces) form a complex with overlapping morphological characters, making species placement problematic. For example, the newly described Macalpinomyces mackinlayi possesses a combination of morphological characters such that it cannot be unambiguously accommodated in any of the three genera. Previous attempts to define Ustilago, Sporisorium and Macalpinomyces using morphology and molecular phylogenetics have highlighted the polyphyletic nature of the genera, but have failed to produce a satisfactory taxonomic resolution. A detailed systematic study of 137 smut species in the Ustilago-Sporisorium-Macalpinomyces complex was completed in the current work. Morphological and DNA sequence data from five loci were assessed with maximum likelihood and Bayesian inference to reconstruct a phylogeny of the complex. The phylogenetic hypotheses generated were used to identify morphological synapomorphies, some of which had previously been dismissed as useful characters for delimiting genera within the complex. These synapomorphic characters are the basis for a revised taxonomic classification of the Ustilago-Sporisorium-Macalpinomyces complex, which takes into account the morphological diversity of these fungi and their coevolution with their grass hosts. The new classification is based on a redescription of the type genus Sporisorium, and the establishment of four genera, described from newly recognised monophyletic groups, to accommodate species expelled from Sporisorium. Over 150 taxonomic combinations have been proposed as an outcome of this investigation, which makes a rigorous and objective contribution to the fungal systematics of these important plant pathogens.

Relevance: 20.00%

Abstract:

Modelling an environmental process involves creating a model structure and parameterising the model with appropriate values to accurately represent the process. Determining accurate parameter values for environmental systems can be challenging. Existing methods for parameter estimation typically make assumptions regarding the form of the likelihood, and will often ignore any uncertainty around estimated values. This can be problematic, particularly in complex problems where likelihoods may be intractable. In this paper we demonstrate an approximate Bayesian computation (ABC) method for the estimation of parameters of a stochastic cellular automaton (CA). As an example, we use a CA constructed to simulate a range expansion such as might occur after a biological invasion, making parameter estimates using only count data of the kind that could be gathered from field observations. We demonstrate that ABC is a highly useful method for parameter estimation, giving accurate estimates of parameters that are important for the management of invasive species, such as the intrinsic rate of increase and the point in a landscape where a species has invaded. We also show that the method is capable of estimating the probability of long-distance dispersal, a characteristic of biological invasions that is very influential in determining spread rates but has until now proved difficult to estimate accurately.
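
To make the estimation scheme concrete, the following is a minimal sketch of ABC rejection sampling with a toy one-dimensional CA standing in for the invasion model; the simulator, the uniform priors, the count summary statistic and the tolerance are all illustrative assumptions, not the implementation described in the paper.

```python
import numpy as np

def simulate_counts(rate, p_ldd, n_cells=200, n_steps=20, rng=None):
    """Toy stochastic CA standing in for the invasion model: each occupied cell
    colonises each neighbouring cell with probability `rate` (capped at 1), and
    with probability `p_ldd` also seeds a uniformly random cell (long-distance
    dispersal). Returns the count of occupied cells at each time step."""
    rng = np.random.default_rng() if rng is None else rng
    occupied = np.zeros(n_cells, dtype=bool)
    occupied[0] = True                          # invasion starts at one edge
    p_local = min(rate, 1.0)
    counts = []
    for _ in range(n_steps):
        new = occupied.copy()
        for i in np.flatnonzero(occupied):
            for j in (i - 1, i + 1):            # local spread to neighbours
                if 0 <= j < n_cells and rng.random() < p_local:
                    new[j] = True
            if rng.random() < p_ldd:            # rare long-distance dispersal
                new[rng.integers(n_cells)] = True
        occupied = new
        counts.append(occupied.sum())
    return np.array(counts)

def abc_rejection(observed, n_draws=5000, tolerance=10.0, seed=0):
    """ABC rejection: keep (rate, p_ldd) draws whose simulated counts lie
    within `tolerance` (Euclidean distance) of the observed counts."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        rate = rng.uniform(0.0, 1.0)            # prior on intrinsic rate of increase
        p_ldd = rng.uniform(0.0, 0.1)           # prior on long-distance dispersal prob.
        sim = simulate_counts(rate, p_ldd, rng=rng)
        if np.linalg.norm(sim - observed) <= tolerance:
            accepted.append((rate, p_ldd))
    return np.array(accepted)                   # approximate joint posterior sample

# Example: recover parameters from synthetic "observed" count data
obs = simulate_counts(0.4, 0.02, rng=np.random.default_rng(1))
posterior = abc_rejection(obs)
```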

Relevance: 20.00%

Abstract:

The research objective of this thesis was to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I considered two applied areas and used these applications as a springboard for developing new statistical methods, as well as undertaking analyses that might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. Error structures were modelled using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of the four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more realistic credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of those data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest, together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for each treatment reaches a level particular to that treatment at a depth of 200 cm and thereafter stays constant, albeit with variance that increases with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
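
As a rough illustration of the neighbourhood structure behind the CAR layered model (this is not the thesis code; the grid layout, the proper-CAR form and the parameter values are assumptions made for the sketch), the precision matrix can be built block-diagonally so that sites are neighbours only within the same depth layer.

```python
import numpy as np
from scipy.linalg import block_diag

def grid_adjacency(nx, ny):
    """First-order (rook) adjacency matrix for an nx-by-ny grid of plots."""
    n = nx * ny
    W = np.zeros((n, n))
    for i in range(nx):
        for j in range(ny):
            k = i * ny + j
            if i + 1 < nx:
                W[k, k + ny] = W[k + ny, k] = 1   # neighbour in the x direction
            if j + 1 < ny:
                W[k, k + 1] = W[k + 1, k] = 1     # neighbour in the y direction
    return W

def layered_car_precision(nx, ny, n_layers, tau, rho=0.9):
    """Precision matrix of a proper CAR model in which sites are neighbours only
    within the same depth layer; tau[d] lets the spatially structured precision
    differ by depth, in the spirit of the CAR layered model."""
    W = grid_adjacency(nx, ny)
    D = np.diag(W.sum(axis=1))
    blocks = [tau[d] * (D - rho * W) for d in range(n_layers)]
    return block_diag(*blocks)      # block-diagonal: no between-depth neighbours

# e.g. a 4 x 5 grid of plots measured at 3 depths, with depth-specific precisions
Q = layered_car_precision(4, 5, 3, tau=[1.0, 2.5, 4.0])
```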

Relevance: 20.00%

Abstract:

The partitioning of heavy metals between the particulate and dissolved fractions of stormwater primarily depends on the adsorption characteristics of solid particles. Moreover, the bioavailability of heavy metals is also influenced by the adsorption behaviour of solids. However, due to the lack of fundamental knowledge of the heavy metal adsorption processes of road-deposited solids, the effectiveness of stormwater management strategies can be limited. The research study focused on the investigation of the physical and chemical parameters of solids on urban road surfaces and, more specifically, on heavy metal adsorption to solids. Due to the complex nature of heavy metal interaction with solids, a substantial database was generated through a series of field investigations and laboratory experiments. The study sites for build-up pollutant sample collection were selected from four urbanised suburbs located in a major river catchment. Sixteen road sites were selected from these suburbs, representing typical industrial, commercial and residential land uses. Build-up pollutants were collected using a wet and dry vacuum collection technique specially designed to improve fine particle collection. Roadside soil samples were also collected from each suburb for comparison with the road surface solids. The collected build-up solids samples were separated into four particle size ranges and tested for a range of physical and chemical parameters. The solids build-up on road surfaces contained a high fraction (70%) of particles smaller than 150 µm, which are favourable for heavy metal adsorption. These particles predominantly consist of soil-derived minerals, including quartz, albite, microcline, muscovite and chlorite. Additionally, a high percentage of amorphous content was also identified in road-deposited solids. In comparing the mineralogical data of the surrounding soil and the road-deposited solids, it was found that about 30% of the solids consisted of particles generated by traffic-related activities on road surfaces. Significant differences in mineralogical composition were noted between different particle sizes of build-up solids. Fine particles (<150 µm) consisted of a clayey matrix and high amorphous content (in the region of 40%), while coarse particles (>150 µm) consisted of a sandy matrix at all study sites, with about 60% quartz content. Due to these differences in mineralogical components, particles larger and smaller than 150 µm had significant differences in their specific surface area (SSA) and effective cation exchange capacity (ECEC). These parameters, in turn, exert a significant influence on heavy metal adsorption. Consequently, the heavy metal content of the >150 µm particles was lower than that of the fine particles. The <75 µm particle size range had the highest heavy metal content, corresponding with its high content of clay-forming minerals and organic matter and its low quartz content, which increased the SSA, the ECEC and the presence of Fe, Al and Mn oxides. The clay-forming minerals, organic matter and Fe, Al and Mn oxides create distinct groups of charge sites on solids surfaces and exhibit different adsorption mechanisms and bond strengths between heavy metal elements and charge sites. Therefore, the predominance of these factors in different particle sizes leads to different heavy metal adsorption characteristics.
Heavy metals show a preference for association with clay-forming minerals in fine particles, whilst in coarse particles heavy metals preferentially associate with organic matter. Although heavy metal adsorption to amorphous material is very low, the heavy metals embedded in traffic-related materials have a potential impact on stormwater quality. Adsorption of heavy metals is not confined to a single type of charge site in solids; rather, specific heavy metal elements show a preference for adsorption to several different types of charge sites. This is attributed to the dearth of preferred binding sites and the inability to reach the preferred binding sites due to competition between different heavy metal species. This confirms that heavy metal adsorption is significantly influenced by the physical and chemical parameters of solids that lead to heterogeneity of surface charge sites. The research study highlighted the importance of removing solid particles from stormwater runoff before they enter receiving waters, in order to reduce the potential risk posed by the bioavailability of heavy metals. The bioavailability of heavy metals not only results from the easily mobile fraction bound to the solid particles, but can also occur as a result of the dissolution of other forms of bonds through chemical changes in stormwater or microbial activity. Due to the diversity in the composition of the different particle sizes of solids, and in the characteristics and amount of charge sites on the particle surfaces, investigations using bulk solids are not adequate to gain an understanding of the heavy metal adsorption processes of solid particles. Therefore, the investigation of different particle size ranges is recommended for enhancing stormwater quality management practices.

Relevance: 20.00%

Abstract:

The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by continuing education as usual (Katehi, Pearson, & Feder, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualisation. These technologies have led to significant changes in the forms of mathematical and scientific thinking that are required beyond the classroom. Modelling, in its various forms, can develop and broaden children’s mathematical and scientific thinking beyond the standard curriculum. This paper first considers future competencies in the mathematical sciences within an increasingly complex world. Next, consideration is given to interdisciplinary problem solving and models and modelling. Examples of complex, interdisciplinary modelling activities across grades are presented, with data modelling in 1st grade, model-eliciting in 4th grade, and engineering-based modelling in 7th-9th grades.

Relevance: 20.00%

Abstract:

The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by “continuing education as usual” (The National Academies, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualization. These technologies have led to significant changes in the forms of mathematical thinking that are required beyond the classroom. This paper argues for the need to incorporate future-oriented understandings and competencies within the mathematics curriculum, through intellectually stimulating activities that draw upon multidisciplinary content and contexts. The paper also argues for greater recognition of children’s learning potential, as increasingly complex learners capable of dealing with cognitively demanding tasks.

Relevance: 20.00%

Abstract:

There is unprecedented worldwide demand for mathematical solutions to complex problems. That demand has generated a further call to update mathematics education in a way that develops students’ abilities to deal with complex systems.

Relevance: 20.00%

Abstract:

Differential pulse stripping voltammetry (DPSV) was applied to the determination of three herbicides: ametryn, cyanatryn and dimethametryn. It was found that their voltammograms overlapped strongly, making it difficult to determine these compounds individually in mixtures. With the aid of the chemometric methods classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS), voltammogram resolution and quantitative analysis of synthetic mixtures of the three compounds were successfully performed. The proposed method was also applied to the analysis of some real samples, with satisfactory results.
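
As a hedged sketch of the multivariate calibration idea (not the paper's data or software), the PLS step can be reproduced on synthetic overlapping peaks with scikit-learn; the peak positions, widths, concentrations and noise level below are invented purely for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
potentials = np.linspace(-1.2, -0.6, 200)              # potential axis (V), illustrative

def peak(centre, width=0.05):
    """Gaussian stand-in for one compound's stripping peak."""
    return np.exp(-0.5 * ((potentials - centre) / width) ** 2)

# Three strongly overlapping pure-component signals (unit concentration)
S = np.vstack([peak(-0.95), peak(-0.91), peak(-0.87)])

# Calibration set: known-concentration mixtures and their (noisy) voltammograms
C_train = rng.uniform(0.1, 1.0, size=(30, 3))           # concentrations of 3 herbicides
X_train = C_train @ S + rng.normal(0, 0.01, (30, 200))  # mixture voltammograms

pls = PLSRegression(n_components=3)
pls.fit(X_train, C_train)

# Predict the composition of an "unknown" mixture from its voltammogram alone
c_true = np.array([[0.4, 0.7, 0.2]])
x_unknown = c_true @ S + rng.normal(0, 0.01, (1, 200))
print(pls.predict(x_unknown))                            # approximately [0.4, 0.7, 0.2]
```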

Relevance: 20.00%

Abstract:

This paper discusses exploratory research to identify the leadership challenges reported by leaders in the public sector in Australia, and the specific leadership practices they engage in to deal with these challenges. Emerging is a sense that leadership in these complex work environments is not about controlling or mandating action but about engaging in conversation, building relationships and empowering staff to engage in innovative ways to solve complex problems. In addition, leaders provide a strong sense of purpose and identity to guide behaviour and decisions, and to avoid being overwhelmed by the sheer volume of demands in an unpredictable and often unsupportive environment. Questions are raised as to the core competencies leaders need to develop to drive and underpin these leadership practices, and the implications this has for the focus of future leadership development programmes. The possible direction of a future research programme is put forward for further discussion.

Relevance: 20.00%

Abstract:

Detecting query reformulations within a session by a Web searcher is an important area of research for designing more helpful searching systems and targeting content to particular users. Methods explored by other researchers include both qualitative approaches (i.e., the use of human judges to manually analyze query patterns, usually on small samples) and nondeterministic algorithms, typically using large amounts of training data to predict query modification during sessions. In this article, we explore three alternative methods for the detection of session boundaries. All three methods are computationally straightforward and therefore easily implemented for the detection of session changes. We examine 2,465,145 interactions from 534,507 users of Dogpile.com on May 6, 2005. We compare session analysis using (a) Internet Protocol address and cookie; (b) Internet Protocol address, cookie, and a temporal limit on intrasession interactions; and (c) Internet Protocol address, cookie, and query reformulation patterns. Overall, our analysis shows that defining sessions by query reformulation along with Internet Protocol address and cookie provides the best measure, resulting in an 82% increase in the count of sessions. Regardless of the method used, the mean session length was fewer than three queries, and the mean session duration was less than 30 minutes. Searchers most often modified their query by changing query terms (nearly 23% of all query modifications) rather than by adding or deleting terms. The implications are that for measuring searching traffic, unique sessions may be a better indicator than the common metric of unique visitors. This research also sheds light on the more complex aspects of Web searching involving query modifications and may lead to advances in searching tools.
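
The second of the three definitions (Internet Protocol address and cookie plus a temporal limit on within-session gaps) is straightforward to implement. A minimal sketch follows; the record field names and the 30-minute cut-off are assumptions made for illustration, not the settings used in the study.

```python
from datetime import timedelta

def assign_sessions(interactions, gap=timedelta(minutes=30)):
    """Split a log of {"ip", "cookie", "time", "query"} records into sessions
    using method (b): same IP address and cookie, with a new session started
    whenever the gap between consecutive interactions exceeds `gap`."""
    interactions = sorted(interactions, key=lambda r: (r["ip"], r["cookie"], r["time"]))
    sessions, last_key, last_time, sid = [], None, None, 0
    for rec in interactions:
        key = (rec["ip"], rec["cookie"])
        if key != last_key or (rec["time"] - last_time) > gap:
            sid += 1                          # new user/agent or long pause: new session
        sessions.append({**rec, "session_id": sid})
        last_key, last_time = key, rec["time"]
    return sessions
```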

Relevance: 20.00%

Abstract:

This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one such visual artifact, widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms traditional timelines and acts as a 'multi-level and poly-temporal boundary object' that visually represents the CAS. We report the findings of a case study project on the way the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object by eliciting commitment, supporting the development of shared mental models, and facilitating negotiation through the layers of multiple interpretations from stakeholders.

Relevance: 20.00%

Abstract:

Management (or perceived mismanagement) of large-scale, complex projects poses special problems and often results in spectacular failures, cost overruns, time blowouts and stakeholder dissatisfaction. While traditional project management responds with increasingly administrative constraints, we argue that leaders of such projects also need to display adaptive and enabling behaviours to foster adaptive processes such as opportunity recognition, which requires an interaction of cognitive and affective processes across individual, project, and team-leader attributes and behaviours. At the core of the model we propose is an interaction of cognitive flexibility, affect and emotional intelligence. The result of this interaction is enhanced leader opportunity recognition that, in turn, facilitates multilevel outcomes.

Relevance: 20.00%

Abstract:

In this paper, we develop a conceptual model to explore the perceived complementary congruence between complex project leaders and the demands of the complex project environment to understand how leaders’ affective and behavioural performance at work might be impacted by this fit. We propose that complex project leaders high in emotional intelligence and cognitive flexibility should report a higher level of fit between themselves and the complex project environment. This abilities-demands measure of fit should then relate to affective and behavioural performance outcomes, such that leaders who perceive a higher level of fit should establish and maintain more effective, higher quality project stakeholder relationships than leaders who perceive a lower level of fit.

Relevance: 20.00%

Abstract:

Complex networks have been studied extensively owing to their relevance to many real-world systems such as the World Wide Web, the Internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding is useful for our treatment of the networks in the third part of the thesis.

The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure that repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a kind of real network, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification.

The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length extracted from the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence a larger Hurst exponent, tend to have a smaller fractal dimension and hence smoother sample paths. We then construct networks via the horizontal visibility graph (HVG) technique, which has been widely used in recent years. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
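
For the horizontal visibility graph step, the standard HVG criterion (two observations are connected if every observation between them is strictly lower than both) can be sketched as follows; the networkx dependency and the crude Brownian-motion stand-in for fractional Brownian motion are assumptions for illustration only, not the thesis implementation.

```python
import numpy as np
import networkx as nx

def horizontal_visibility_graph(x):
    """Build the HVG of a series x: nodes are time indices, and i and j are
    connected iff every intermediate value is strictly lower than both x[i]
    and x[j] (the standard HVG criterion)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i in range(len(x) - 1):
        g.add_edge(i, i + 1)                  # consecutive points always see each other
        blocker = x[i + 1]                    # running maximum of intermediate values
        for j in range(i + 2, len(x)):
            if blocker < min(x[i], x[j]):
                g.add_edge(i, j)
            blocker = max(blocker, x[j])
            if blocker >= x[i]:
                break                         # nothing further right is visible from i
    return g

# Illustration on a rough stand-in for fractional Brownian motion
rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(500))  # ordinary Brownian motion (H = 0.5)
hvg = horizontal_visibility_graph(series)
degrees = [d for _, d in hvg.degree()]        # exponential-tailed for HVG networks
```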