902 results for Matrix of complex negotiation
Abstract:
Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect, and self-similarity. The search for additional meaningful properties and for the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks shown in the existing literature to possess the self-similarity property are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behavior of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since they consist of a geometrical figure that repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary over different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of one kind of real network, namely the PPI networks of different species.
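A minimal sketch of the random sequential box-covering procedure described above, assuming an undirected networkx graph; the box-radius convention, the function names, and the log-log fitting step are illustrative choices rather than the thesis's exact implementation.

```python
import random

import networkx as nx
import numpy as np

def random_sequential_box_covering(G, box_radius, seed=None):
    """Cover G with boxes: grow each box from a randomly chosen
    uncovered centre and absorb every still-uncovered node within
    box_radius hops of it; return the number of boxes used."""
    rng = random.Random(seed)
    uncovered = set(G.nodes())
    n_boxes = 0
    while uncovered:
        centre = rng.choice(tuple(uncovered))
        # All nodes within box_radius hops of the centre (centre included).
        ball = nx.single_source_shortest_path_length(G, centre, cutoff=box_radius)
        uncovered -= set(ball)
        n_boxes += 1
    return n_boxes

def box_counting_dimension(G, radii=(1, 2, 3, 4), n_trials=20):
    """Estimate the fractal dimension d_B from the scaling N_B ~ l_B^(-d_B),
    averaging the (random) covering over n_trials runs per radius."""
    box_sizes = [2 * r + 1 for r in radii]  # one common l_B convention
    counts = [np.mean([random_sequential_box_covering(G, r, seed=t)
                       for t in range(n_trials)])
              for r in radii]
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope
```

For a disconnected graph each component would be covered separately, and the quality of the estimate depends on the range of box sizes over which the power-law scaling actually holds.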
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behavior is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. Multifractal analysis thus provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool for analyzing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a given length extracted from the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterized by the Hurst exponent. We verify the validity of the method by showing that time series with stronger correlation, hence a larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that a large percentage of nodes must be destroyed before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
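The two constructions mentioned above can be illustrated with a short sketch, assuming numpy and networkx; `sliding_window_network` follows the abstract's description (nodes are fixed-length vectors from the series, edges weighted by Euclidean distance), while `horizontal_visibility_graph` implements the standard HVG criterion. The names and the quadratic loops are illustrative simplifications, not the thesis code.

```python
import numpy as np
import networkx as nx

def sliding_window_network(x, m):
    """Nodes are consecutive length-m windows of the series x; every
    pair of nodes is joined by an edge weighted with the Euclidean
    distance between the corresponding windows."""
    windows = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    G = nx.Graph()
    for i in range(len(windows)):
        for j in range(i + 1, len(windows)):
            G.add_edge(i, j,
                       weight=float(np.linalg.norm(windows[i] - windows[j])))
    return G

def horizontal_visibility_graph(x):
    """Standard HVG: points i and j (i < j) are connected iff every
    intermediate value lies strictly below min(x[i], x[j]); adjacent
    points are therefore always connected."""
    G = nx.Graph()
    G.add_nodes_from(range(len(x)))
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                G.add_edge(i, j)
    return G
```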
Abstract:
At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models in which an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought and a highly contextual associative mode of thought, conducive to combining concepts in new ways and ‘breaking out of a rut’. When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts which highlights the manner in which different contexts can lead modern humans to attribute very different meanings to one concept.
Abstract:
Child abuse and neglect is prevalent and entails significant costs to children, families and society. Teachers are responsible for a significant proportion of official notifications to statutory child protection agencies. Hence, their accurate and appropriate reporting is crucial for well-functioning child protection systems. Approximately one-quarter of Australian teachers indicate never having detected a case of child maltreatment across their careers, while a further 13-15% admit to not reporting suspected cases in some circumstances. The detection and reporting of child abuse and neglect are complex decision-making behaviors, influenced by the nature of the maltreatment itself, the characteristics of the teacher, the school environment, and the broader legislative and policy environment. In this chapter, the authors provide a background to teachers’ involvement in detecting and reporting child abuse and neglect, together with an overview of the role of teachers. Results are presented from three Australian studies that examine the unique contributions of case, teacher, and contextual characteristics to detection and reporting behaviors. The authors conclude by highlighting the key implications for enhancing teacher training in child abuse and neglect, and outline future research directions.
Abstract:
Complex flow datasets are often difficult to represent in detail using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel techniques for visualising such complex, time-dependent flows. In this paper, we review two popular texture-based techniques and their application to flow datasets sourced from real research projects. The techniques investigated were Line Integral Convolution (LIC) and Image-Based Flow Visualisation (IBFV). We evaluated these techniques and report on their visualisation effectiveness (compared with traditional techniques), their ease of implementation, and their computational overhead.
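As an illustration of the first of these techniques, here is a minimal LIC sketch over a 2D vector field, assuming numpy; the forward Euler streamline tracing, box filter, and nearest-pixel sampling are deliberate simplifications of a production implementation such as those the paper evaluates.

```python
import numpy as np

def lic(u, v, noise, kernel_len=15, step=0.5):
    """Line Integral Convolution: for each pixel, average a white-noise
    texture along the local streamline of the field (u, v)."""
    h, w = noise.shape
    out = np.zeros_like(noise, dtype=float)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (+1.0, -1.0):  # trace down- and upstream
                px, py = float(x), float(y)
                for _ in range(kernel_len):
                    ix, iy = int(round(px)), int(round(py))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    # Nearest-pixel sampling of the noise texture
                    # (the centre pixel is sampled once per direction;
                    # a refinement would weight it only once).
                    total += noise[iy, ix]
                    count += 1
                    mag = np.hypot(u[iy, ix], v[iy, ix])
                    if mag < 1e-12:
                        break  # stagnation point: stop tracing
                    # Euler step along the normalised velocity field.
                    px += direction * step * u[iy, ix] / mag
                    py += direction * step * v[iy, ix] / mag
            out[y, x] = total / max(count, 1)
    return out
```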
Abstract:
Detailed representations of complex flow datasets are often difficult to generate using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel techniques for visualising such flows. We review two popular texture-based techniques and their application to flow datasets sourced from active research projects. The techniques investigated were Line Integral Convolution (LIC) [1] and Image-Based Flow Visualisation (IBFV) [18]. We evaluated these and report on their effectiveness from a visualisation perspective. We also report on their ease of implementation and computational overheads.
Abstract:
A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various intrusion detection systems (IDSs), but which as a whole constitute a powerful attack. IDSs fall short of detecting and modeling complex attacks; therefore, new methods are required. This paper presents a formal methodology for modeling and detecting complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and the expiration of an attack; (2) using the enhanced AT, we build a tree automaton that accepts a sequence of actions from input message streams from various sources if there is a traversal of the AT from leaves to root; and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods, and provide a case study of representing attacks in WLANs.
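The flavour of phase (1) can be conveyed with a small sketch; the LEAF/AND/OR/SEQ node types, the earliest-completion-time semantics, and the expiration window below are illustrative guesses at the enhanced-AT constructs, not the paper's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class ATNode:
    """Attack-tree node. Leaves match named actions; inner nodes combine
    children with AND, OR, or SEQ (AND whose children must finish in order)."""
    op: str                      # 'LEAF', 'AND', 'OR', or 'SEQ'
    action: str = ""             # action name, for leaves only
    children: list = field(default_factory=list)

    def satisfied_at(self, events):
        """events: time-sorted list of (timestamp, action) pairs.
        Returns the earliest completion time of this subtree, or None.
        Simplification: each leaf uses only its earliest occurrence."""
        if self.op == 'LEAF':
            times = [t for t, a in events if a == self.action]
            return min(times) if times else None
        times = [c.satisfied_at(events) for c in self.children]
        if self.op == 'OR':
            hits = [t for t in times if t is not None]
            return min(hits) if hits else None
        if None in times:
            return None          # AND/SEQ need every child
        if self.op == 'SEQ' and times != sorted(times):
            return None          # children completed out of order
        return max(times)

def detect(root, events, expiration):
    """Flag the attack if the whole tree completes within the expiration
    window measured from the earliest observed event."""
    if not events:
        return False
    done = root.satisfied_at(events)
    return done is not None and done - events[0][0] <= expiration
```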
Abstract:
We applied a texture-based flow visualisation technique to a numerical hydrodynamic model of the Pumicestone Passage in southeast Queensland, Australia. The quality of the visualisations produced by our flow visualisation tool is compared with animations generated using more traditional drogue-release plots and velocity contour and vector techniques. The texture-based method is found to be far more effective in visualising advective flow within the model domain. In some instances, it also makes it easier for the researcher to identify specific hydrodynamic features within the complex flow regimes of this shallow tidal barrier estuary than the direct and geometric-based methods do.
Abstract:
Increases in the functionality, power and intelligence of modern engineered systems have led to complex systems with large numbers of interconnected dynamic subsystems. In such machines, faults in one subsystem can cascade and affect the behavior of numerous other subsystems. This complicates traditional fault monitoring procedures because of the need to train models of every fault that the monitoring system needs to detect and recognize. Unavoidable design defects, quality variations and different usage patterns make it infeasible to foresee all possible faults, resulting in limited diagnostic coverage that can only deal with previously anticipated and modeled failures. This leads to missed detections and to the costly blind swapping of acceptable components because of one's inability to accurately isolate the source of previously unseen anomalies. To circumvent these difficulties, a new paradigm for diagnostic systems is proposed and discussed in this paper. Its feasibility is demonstrated through application examples in automotive engine diagnostics.
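One generic way to realise such a paradigm, modelling normal behavior and flagging departures from it instead of matching pre-trained fault signatures, is a simple novelty detector; the Gaussian/Mahalanobis baseline below is a hedged sketch of the idea, not the method proposed in the paper.

```python
import numpy as np

class NoveltyDetector:
    """Learn the envelope of healthy operation from reference data, then
    flag observations that deviate from it, so previously unmodeled
    faults can still be detected (though not yet named)."""

    def fit(self, healthy):
        # healthy: (n_samples, n_features) array of normal operation
        self.mean = healthy.mean(axis=0)
        self.cov_inv = np.linalg.pinv(np.cov(healthy, rowvar=False))
        d2 = [self._mahalanobis2(x) for x in healthy]
        self.threshold = np.percentile(d2, 99)  # tolerate ~1% outliers
        return self

    def _mahalanobis2(self, x):
        diff = x - self.mean
        return float(diff @ self.cov_inv @ diff)

    def is_anomalous(self, x):
        return self._mahalanobis2(x) > self.threshold
```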
Abstract:
This paper presents the outcome of a study that investigated the relationships between prior experience with technology, self-efficacy, technology anxiety, interface complexity (nested versus flat) and intuitive use in older people. The findings show that, as expected, older people took less time to complete the task on the interface that used a flat structure than on the interface that used a complex nested structure. All age groups also used the flat interface more intuitively. However, contrary to what was hypothesised, older age groups did better under anxious conditions. Interestingly, older participants did not make significantly more errors than younger age groups on either interface structure.
Abstract:
The complex systems approach offers an opportunity to replace the extant predominant mechanistic view of sport-related phenomena. The emphasis on the environment-system relationship, the application of complexity principles, and the use of nonlinear dynamics as mathematical tools propose a deep change in sport science. Coordination dynamics, ecological dynamics, and network approaches have been successfully applied to the study of different sport-related behaviors, from movement patterns that emerge at different scales, constrained by specific sport contexts, to game dynamics. Sport benefits from the use of such approaches in the understanding of technical, tactical, and physical conditioning aspects, which change their meaning and dilute their frontiers. The creation of new learning and training strategies for teams and individual athletes is a main practical consequence. Challenges for the future include investigating the influence of key control parameters on the nonlinear behavior of athlete-environment systems and the possible relatedness of the dynamics and constraints acting at different spatio-temporal scales in team sports. Modelling sport-related phenomena can make useful contributions to a better understanding of complex systems, and vice versa.
Abstract:
Quantum-inspired models have recently attracted increasing attention in Information Retrieval. An intriguing characteristic of the mathematical framework of quantum theory is the presence of complex numbers. However, it is unclear what such numbers would actually represent or mean in Information Retrieval. The goal of this paper is to discuss the role of complex numbers within the context of Information Retrieval. First, we introduce how complex numbers are used in quantum probability theory. Then, we examine van Rijsbergen’s proposal of evoking complex-valued representations of information objects. We empirically show that such a representation is unlikely to be effective in practice (refuting its usefulness in Information Retrieval). We then explore alternative proposals which may be more successful at realising the power of complex numbers.
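To make the discussion concrete, here is a minimal numpy illustration of how complex numbers enter quantum probability: probabilities come from squared magnitudes of amplitudes (the Born rule), so the relative phase between amplitudes, invisible in each term alone, changes the probability of their superposition. This two-amplitude setup is a textbook toy example, not the paper's experiment.

```python
import numpy as np

# Two complex probability amplitudes for reaching the same outcome via
# two indistinguishable paths; Born rule: p = |amplitude|^2.
a = 0.6 + 0.0j
for phase in (0.0, np.pi / 2, np.pi):
    b = 0.8 * np.exp(1j * phase)
    p_classical = abs(a) ** 2 + abs(b) ** 2   # add probabilities: always 1.0
    p_quantum = abs(a + b) ** 2               # add amplitudes first
    interference = p_quantum - p_classical    # equals 2*Re(a*conj(b))
    print(f"phase={phase:.2f}  classical={p_classical:.2f}  "
          f"quantum={p_quantum:.2f}  interference={interference:+.2f}")
```

Running this shows the interference term swinging from +0.96 through 0 to -0.96 as the phase moves from 0 to pi, which is exactly the degree of freedom a purely real-valued representation cannot express.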
Abstract:
Contemporary lipidomics protocols are dependent on conventional tandem mass spectrometry for lipid identification. This approach is extremely powerful for determining lipid class and for identifying the number of carbons and the degree of unsaturation of any acyl-chain substituents. Such analyses are, however, blind to isomeric variants arising from different carbon-carbon bonding motifs within these chains, including double-bond position, chain branching, and cyclic structures. This limitation arises from the fact that conventional, low-energy collision-induced dissociation of even-electron lipid ions does not give rise to product ions from intrachain fragmentation of the fatty acyl moieties. To overcome this limitation, we have applied radical-directed dissociation (RDD) to the study of lipids for the first time. In this approach, bifunctional molecules that contain a photocaged radical initiator and a lipid-adducting group, such as 4-iodoaniline and 4-iodobenzoic acid, are used to form noncovalent complexes (i.e., adduct ions) with a lipid during electrospray ionization. Laser irradiation of these complexes at UV wavelengths (266 nm) cleaves the carbon-iodine bond to liberate a highly reactive phenyl radical. Subsequent activation of the nascent radical ions results in RDD with significant intrachain fragmentation of acyl moieties. This approach provides diagnostic fragments that are associated with the double-bond position and the positions of chain branching in glycerophospholipids, sphingomyelins and triacylglycerols, and thus can be used to differentiate isomeric lipids differing only in such motifs. RDD is demonstrated for well-defined lipid standards and also reveals lipid structural diversity in olive oil and human very-low-density lipoprotein.
Abstract:
In Service-oriented Architectures, business processes can be realized by composing loosely coupled services. The problem of QoS-aware service composition is widely recognized in the literature. Existing approaches to computing an optimal solution to this problem tackle structured business processes, i.e., business processes composed of XOR-block, AND-block, and repeat-loop orchestration components. As yet, OR-blocks and unstructured orchestration components have not been sufficiently considered in the context of QoS-aware service composition. The work at hand addresses this shortcoming: an approach for computing an optimal solution to the service composition problem is proposed that considers structured orchestration components, such as AND-, XOR-, and OR-blocks and repeat loops, as well as unstructured orchestration components.
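A minimal sketch of how a QoS value aggregates over the orchestration components named above, assuming a latency-style additive metric and worst-case semantics; treating OR-blocks like AND-blocks in the worst case (any non-empty subset of branches may run, so the slowest branch bounds it) is one common interpretation, not necessarily the paper's model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    """Orchestration component; kind is one of
    'task', 'seq', 'and', 'xor', 'or', 'loop'."""
    kind: str
    latency: float = 0.0                      # for 'task'
    children: List["Block"] = field(default_factory=list)
    max_iterations: int = 1                   # for 'loop'

def worst_case_latency(b: Block) -> float:
    if b.kind == 'task':
        return b.latency
    child = [worst_case_latency(c) for c in b.children]
    if b.kind == 'seq':
        return sum(child)                     # components run back-to-back
    if b.kind in ('and', 'or'):
        return max(child)                     # slowest parallel branch bounds it
    if b.kind == 'xor':
        return max(child)                     # worst exclusive branch
    if b.kind == 'loop':
        return b.max_iterations * child[0]    # bounded repeat of the body
    raise ValueError(b.kind)

# Usage: a sequence of a task, an AND-block, and a bounded loop.
process = Block('seq', children=[
    Block('task', latency=20),
    Block('and', children=[Block('task', latency=50),
                           Block('task', latency=30)]),
    Block('loop', max_iterations=3, children=[Block('task', latency=10)]),
])
print(worst_case_latency(process))  # 20 + 50 + 3*10 = 100
```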
Abstract:
A global, or averaged, model for complex low-pressure argon discharge plasmas containing dust grains is presented. The model consists of particle and power balance equations that take into account the power lost to the dust grains and to the discharge wall. The electron energy distribution is determined by a Boltzmann equation. The effects of the dust and of the external conditions, such as the input power and the neutral gas pressure, on the electron energy distribution, the electron temperature, the electron and ion number densities, and the dust charge are investigated. It is found that the dust subsystem can strongly affect the stationary state of the discharge by dynamically modifying the electron energy distribution, the electron temperature, the creation and loss of plasma particles, and the power deposition. In particular, the power loss to the dust grains can take up a significant portion of the input power, often even exceeding the loss to the wall.
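The generic shape of such balance equations, in the standard global-model form with a schematic dust term, can be sketched as follows; the notation (ionization rate coefficient $K_{iz}$, Bohm speed $u_B$, effective loss area $A_\mathrm{eff}$, energy cost per electron-ion pair $\varepsilon_T$, dust density $n_d$, per-grain ion collection rate $I_{i,d}$) is illustrative and not the paper's exact system.

```latex
% Particle balance: volume ionization vs. losses to the wall and to dust
n_g\, n_e\, K_{iz}(T_e)\, V \;=\; n_e\, u_B\, A_{\mathrm{eff}} \;+\; n_d\, I_{i,d}\, V

% Power balance: absorbed power vs. collisional/wall losses and dust heating
P_{\mathrm{abs}} \;=\; n_e\, u_B\, A_{\mathrm{eff}}\, \varepsilon_T(T_e) \;+\; P_{\mathrm{dust}}
```

In this schematic form, the abstract's central observation corresponds to the regime where $P_{\mathrm{dust}}$ becomes comparable to, or exceeds, the wall-loss term $n_e u_B A_{\mathrm{eff}}\, \varepsilon_T(T_e)$.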