361 results for Topographic map complex
Abstract:
This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one such visual, widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms traditional timelines and acts as a ‘multi-level and poly-temporal boundary object’ that visually represents the CAS. We report the findings of a case study project on how the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object by eliciting commitment, supporting the development of shared mental models, and facilitating negotiation across the layers of multiple stakeholder interpretations.
Abstract:
Management (or perceived mismanagement) of large-scale, complex projects poses special problems and often results in spectacular failures, cost overruns, time blowouts and stakeholder dissatisfaction. While traditional project management responds with increasing administrative constraints, we argue that leaders of such projects also need to display adaptive and enabling behaviours to foster adaptive processes such as opportunity recognition, which requires an interaction of cognitive and affective processes across individual, project, and team leader attributes and behaviours. At the core of the model we propose is an interaction of cognitive flexibility, affect and emotional intelligence. The result of this interaction is enhanced leader opportunity recognition that, in turn, facilitates multilevel outcomes.
Abstract:
In this paper, we develop a conceptual model to explore the perceived complementary congruence between complex project leaders and the demands of the complex project environment to understand how leaders’ affective and behavioural performance at work might be impacted by this fit. We propose that complex project leaders high in emotional intelligence and cognitive flexibility should report a higher level of fit between themselves and the complex project environment. This abilities-demands measure of fit should then relate to affective and behavioural performance outcomes, such that leaders who perceive a higher level of fit should establish and maintain more effective, higher quality project stakeholder relationships than leaders who perceive a lower level of fit.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein–protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This finding will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since they consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary across different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of several theoretical networks, namely scale-free networks, small-world networks and random networks, and of one kind of real network, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length extracted from the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
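To make the horizontal visibility graph construction described above concrete, the following Python sketch builds an HVG from a time series. The brute-force scan and the ordinary Brownian motion used as a stand-in for a fractional Brownian motion sample path are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def horizontal_visibility_graph(series):
    """Build the horizontal visibility graph (HVG) of a 1-D time series.

    Nodes are the time indices; nodes i and j (i < j) are linked iff every
    intermediate value lies strictly below min(series[i], series[j]).
    Returns the graph as an adjacency set per node.
    """
    n = len(series)
    adjacency = {i: set() for i in range(n)}
    for i in range(n - 1):
        # Each point always sees its immediate successor.
        adjacency[i].add(i + 1)
        adjacency[i + 1].add(i)
        blocking = series[i + 1]          # tallest intermediate value seen so far
        for j in range(i + 2, n):
            if blocking < min(series[i], series[j]):
                adjacency[i].add(j)
                adjacency[j].add(i)
            blocking = max(blocking, series[j])
            if blocking >= series[i]:     # nothing further right can be visible from i
                break
    return adjacency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in sample path: ordinary Brownian motion (Hurst exponent 0.5),
    # used purely for illustration in place of fractional Brownian motion.
    x = np.cumsum(rng.standard_normal(500))
    hvg = horizontal_visibility_graph(x)
    degrees = np.array([len(v) for v in hvg.values()])
    print("mean degree:", degrees.mean())  # HVG degree distributions have exponential tails
```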
Abstract:
In topological mapping, perceptual aliasing can cause different places to appear indistinguishable to the robot. When odometry information is severely corrupted or unavailable, topological mapping is difficult because the robot is challenged with the loop-closing problem; that is, determining whether it has visited a particular place before. In this article we propose to use neighbourhood information to disambiguate otherwise indistinguishable places. Using neighbourhood information for place disambiguation is an approach that neither depends on a specific choice of sensors nor requires geometric information such as odometry. Local neighbourhood information is extracted from a sequence of observations of visited places. Experiments using either sonar or visual observations from an indoor environment demonstrate the benefits of using neighbourhood cues to disambiguate otherwise identical vertices. Over 90% of the maps we obtain are isomorphic with the ground truth. The choice of the robot’s sensors has little impact on the experimental results.
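The following toy Python sketch only illustrates the general idea of telling perceptually aliased places apart by their neighbourhoods; the graph structure, the labels and the `neighbourhood_signature` helper are hypothetical and are not the method proposed in the article.

```python
from collections import Counter

def neighbourhood_signature(graph, place, radius=1):
    """Summarise a place by the multiset of observation labels of its neighbours.

    `graph` maps each place id to (observation_label, list_of_adjacent_place_ids).
    Two places with identical own labels (perceptual aliasing) can still be told
    apart when their neighbourhood signatures differ.
    """
    seen = {place}
    frontier = {place}
    for _ in range(radius):
        frontier = {n for p in frontier for n in graph[p][1]} - seen
        seen |= frontier
    label = graph[place][0]
    return (label, Counter(graph[p][0] for p in seen - {place}))

if __name__ == "__main__":
    # Toy topological map: two junctions look identical ("door"),
    # but their neighbourhoods differ ("lab" vs "kitchen").
    graph = {
        0: ("door", [1, 2]),
        1: ("lab", [0]),
        2: ("corridor", [0, 3]),
        3: ("door", [2, 4]),
        4: ("kitchen", [3]),
    }
    sig_a = neighbourhood_signature(graph, 0)
    sig_b = neighbourhood_signature(graph, 3)
    print(sig_a == sig_b)  # False: neighbourhood cues disambiguate the two "door" places
```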
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data concerns symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials, or "spikes", in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
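As a minimal illustration of a Bayesian mixture with an uncertain number of clusters, the sketch below fits a (truncated) Dirichlet Process Gaussian mixture using scikit-learn's variational implementation. The synthetic data and all settings are assumptions for illustration; they do not reproduce the models or the patient data analysed in the thesis.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for patient scores: three latent subgroups in 2-D.
data = np.vstack([
    rng.normal([0, 0], 0.5, size=(100, 2)),
    rng.normal([3, 3], 0.7, size=(80, 2)),
    rng.normal([0, 4], 0.6, size=(60, 2)),
])

# Truncated Dirichlet Process Gaussian mixture fitted by variational inference.
# n_components is only an upper bound; unneeded components receive negligible
# weight, so the effective number of clusters is inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(data)

labels = dpgmm.predict(data)                  # hard cluster assignments
responsibilities = dpgmm.predict_proba(data)  # per-observation membership uncertainty
print("effective clusters:", int(np.sum(dpgmm.weights_ > 0.01)))
```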
Abstract:
The paper explores the results of an ongoing research project to identify factors influencing the success of international and non-English speaking background (NESB) graduate students in the fields of Engineering and IT at three Australian universities: the Queensland University of Technology (QUT), the University of Western Australia (UWA), and Curtin University (CU). While the larger study explores the influence of factors from both sides of the supervision equation (e.g., students and supervisors), this paper focuses primarily on the results of an online survey involving 227 international and/or NESB graduate students in the areas of Engineering and IT at the three universities. The study reveals cross-cultural differences in perceptions of student and supervisor roles, as well as differences in the understanding of the requirements of graduate study within the Australian Higher Education context. We argue that in order to assist international and NESB research students to overcome such culturally embedded challenges, it is important to develop a model which recognizes the complex interactions of factors from both sides of the supervision relationship, so as to understand this cohort’s unique pedagogical needs and develop intercultural sensitivity within postgraduate research supervision.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability for displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
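A very reduced example of quantifying visual correctness is a per-pixel comparison of a rendered frame against a stored golden reference. The sketch below is only a baseline illustration of such a measurement and is far simpler than the model-based and connectionist approaches developed in the thesis; the function name and tolerance are hypothetical.

```python
import numpy as np

def frame_mismatch_ratio(rendered, reference, tolerance=4):
    """Fraction of pixels whose channel-wise difference exceeds `tolerance`.

    A crude consistency check: render a scene, compare against a stored golden
    image, and flag the frame if too many pixels disagree.
    """
    diff = np.abs(rendered.astype(np.int16) - reference.astype(np.int16))
    bad = np.any(diff > tolerance, axis=-1)
    return bad.mean()

if __name__ == "__main__":
    golden = np.zeros((64, 64, 3), dtype=np.uint8)
    frame = golden.copy()
    frame[10:20, 10:20] = 255          # simulate a small rendering defect
    ratio = frame_mismatch_ratio(frame, golden)
    print(f"{ratio:.3%} of pixels differ")  # exceeds a 1% threshold, so flag for review
```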
Abstract:
We present an iterative hierarchical algorithm for multi-view stereo. The algorithm attempts to utilise as much contextual information as is available to compute highly accurate and robust depth maps. There are three novel aspects to the approach: 1) we incrementally improve the depth fidelity as the algorithm progresses through the image pyramid; 2) we show how to incorporate visual hull information (when available) to constrain depth searches; and 3) we show how to simultaneously enforce the consistency of each depth map by continual comparison with neighbouring depth maps. We show that this approach produces highly accurate depth maps and, since it is essentially a local method, is both extremely fast and simple to implement.
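The sketch below illustrates only the coarse-to-fine pyramid idea behind aspect 1), using simple two-view block matching on a rectified pair; it omits the multi-view setting, the visual hull constraints and the neighbouring depth-map consistency checks, and all function names and parameters are assumptions for illustration.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (one image-pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_disparity(left, right, levels=3, search=2, max_disp_coarse=8):
    """Hierarchical block-matching disparity for a rectified image pair.

    A full search up to `max_disp_coarse` is done only at the coarsest level;
    finer levels merely refine the upsampled estimate within +/- `search` pixels,
    which is the main saving of coarse-to-fine schemes. Image sides are assumed
    divisible by 2 ** (levels - 1).
    """
    pyr_l = [np.asarray(left, dtype=float)]
    pyr_r = [np.asarray(right, dtype=float)]
    for _ in range(levels - 1):
        pyr_l.append(downsample(pyr_l[-1]))
        pyr_r.append(downsample(pyr_r[-1]))

    disp = np.zeros_like(pyr_l[-1])
    half = 2  # half-width of the 5x5 matching window
    for lvl in range(levels - 1, -1, -1):
        L, R = pyr_l[lvl], pyr_r[lvl]
        if lvl < levels - 1:
            # Upsample the coarser estimate and double the disparity values.
            disp = np.kron(disp, np.ones((2, 2))) * 2
        new_disp = np.zeros_like(disp)
        h, w = L.shape
        for y in range(half, h - half):
            for x in range(half, w - half):
                centre = int(disp[y, x])
                lo = 0 if lvl == levels - 1 else max(0, centre - search)
                hi = max_disp_coarse if lvl == levels - 1 else centre + search
                ref = L[y - half:y + half + 1, x - half:x + half + 1]
                best, best_cost = centre, np.inf
                for d in range(lo, hi + 1):
                    if x - d - half < 0:
                        continue
                    cand = R[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.sum((ref - cand) ** 2)  # SSD photo-consistency
                    if cost < best_cost:
                        best, best_cost = d, cost
                new_disp[y, x] = best
        disp = new_disp
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    texture = rng.random((64, 68))
    left, right = texture[:, :-4], texture[:, 4:]   # constant disparity of 4 pixels
    d = coarse_to_fine_disparity(left, right)
    print("median disparity:", np.median(d[8:-8, 8:-8]))  # expect ~4
```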
Abstract:
Eukaryotic cell cycle progression is mediated by phosphorylation of protein substrates by cyclin-dependent kinases (CDKs). A critical substrate of CDKs is the product of the retinoblastoma tumor suppressor gene, pRb, which inhibits G1-S phase cell cycle progression by binding and repressing E2F transcription factors. CDK-mediated phosphorylation of pRb alleviates this inhibitory effect to promote G1-S phase cell cycle progression. pRb represses transcription by binding to the E2F transactivation domain and recruiting the mSin3·histone deacetylase (HDAC) transcriptional repressor complex via the retinoblastoma-binding protein 1 (RBP1). RBP1 binds to the pocket region of pRb via an LXCXE motif and to the SAP30 subunit of the mSin3·HDAC complex and, thus, acts as a bridging protein in this multisubunit complex. In the present study we identified RBP1 as a novel CDK substrate. RBP1 is phosphorylated by CDK2 on serines 864 and 1007, which are N- and C-terminal to the LXCXE motif, respectively. CDK2-mediated phosphorylation of RBP1 or pRb destabilizes their interaction in vitro, with concurrent phosphorylation of both proteins leading to their dissociation. Consistent with these findings, RBP1 phosphorylation is increased during progression from G1 into S-phase, with a concurrent decrease in its association with pRb in MCF-7 breast cancer cells. These studies provide new mechanistic insights into CDK-mediated regulation of the pRb tumor suppressor during cell cycle progression, demonstrating that CDK-mediated phosphorylation of both RBP1 and pRb induces their dissociation to mediate release of the mSin3·HDAC transcriptional repressor complex from pRb to alleviate transcriptional repression of E2F.
Abstract:
Seat pressure is known to be a major factor in vehicle seat comfort. In passenger vehicles, research into the seat comfort of rear seat occupants is lacking. As accurate seat pressure measurement requires significant effort, simulation of seat pressure is evolving as a preferred method. However, analytic methods are based on complex finite element modeling and are therefore time consuming and involve high investment. Based on accurate anthropometric measurements of 64 male subjects and outboard rear seat pressure measurements in three different passenger vehicles, this study investigates whether a set of parameters derived from seat pressure mapping is sensitive enough to differentiate between different seats and whether these parameters correlate with anthropometry in linear models. In addition to the pressure map analysis, H-Points were measured with a coordinate measurement system based on palpated body landmarks, and the range of H-Point locations in the three seats is provided. It was found that cushion contact area and cushion front area/force could be modeled from subject anthropometry, while only seatback contact area could be modeled from anthropometry for all three vehicles. Major differences were found between the vehicles for the other parameters.
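A minimal sketch of the kind of linear modelling referred to above, fitting a pressure-map parameter from anthropometric predictors by ordinary least squares; the predictors, coefficients and data below are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64  # stand-in for the 64 male subjects

# Hypothetical anthropometric predictors (units in comments are illustrative only).
stature = rng.normal(178, 7, n)        # cm
body_mass = rng.normal(82, 11, n)      # kg
hip_breadth = rng.normal(36, 3, n)     # cm

# Synthetic response: cushion contact area (cm^2), generated so that a linear
# relationship with anthropometry exists by construction.
contact_area = 400 + 4.5 * body_mass + 11.0 * hip_breadth + rng.normal(0, 30, n)

# Ordinary least squares fit of contact area on anthropometry.
X = np.column_stack([np.ones(n), stature, body_mass, hip_breadth])
coef, *_ = np.linalg.lstsq(X, contact_area, rcond=None)

predictions = X @ coef
r2 = 1 - np.sum((contact_area - predictions) ** 2) / np.sum((contact_area - contact_area.mean()) ** 2)
print("coefficients (intercept, stature, mass, hip breadth):", np.round(coef, 2))
print("R^2:", round(r2, 3))
```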
Abstract:
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot–shoe complex and lower leg during walking gait. To achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was shown to be good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than for the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90 = 2.17–9.36°; tarsometatarsal joint, MDD90 = 1.03–9.29°; and metatarsophalangeal joint, MDD90 = 1.75–9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear.
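For reference, the minimal detectable difference is commonly derived from test-retest variability and the ICC. The sketch below uses the standard SEM-based formulation, which may differ from the exact procedure used in the paper, and the input numbers are illustrative only.

```python
import math

def minimal_detectable_difference(sd, icc, confidence_z=1.645):
    """Minimal detectable difference from test-retest data (standard formulation).

    SEM = SD * sqrt(1 - ICC)
    MDD = z * sqrt(2) * SEM   (z = 1.645 gives the 90% level, i.e. MDD90)
    """
    sem = sd * math.sqrt(1 - icc)
    return confidence_z * math.sqrt(2) * sem

# Illustrative numbers only (degrees of joint rotation, not taken from the paper):
sd_between_sessions = 3.0
icc = 0.85
print(f"MDD90 = {minimal_detectable_difference(sd_between_sessions, icc):.2f} deg")
```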
Abstract:
Flow-oriented process modeling languages have a long tradition in the area of Business Process Management and are widely used for capturing activities together with their behavioral and data dependencies. Individual events were introduced for triggering process instantiation and activities. However, real-world business cases drive the need to also cover complex event patterns as they are known in the field of Complex Event Processing. This paper therefore puts forward a catalog of requirements for handling complex events in process models, which can be used as a reference framework for assessing process definition languages and systems. An assessment of BPEL and BPMN is provided.
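As a small illustration of the kind of complex event pattern meant here, the sketch below detects a "sequence within a time window" pattern over an event stream; the event types and the matching logic are hypothetical examples, not part of the paper's requirements catalog.

```python
from dataclasses import dataclass

@dataclass
class Event:
    type: str
    timestamp: float  # seconds

def sequence_within_window(events, first, then, window):
    """Detect the CEP pattern '`first` followed by `then` within `window` seconds'.

    Returns matching (first_event, then_event) pairs; in a process engine such a
    match could trigger process instantiation or activate a waiting activity.
    """
    matches = []
    pending = []  # occurrences of `first` still inside the window
    for ev in sorted(events, key=lambda e: e.timestamp):
        pending = [p for p in pending if ev.timestamp - p.timestamp <= window]
        if ev.type == then:
            matches.extend((p, ev) for p in pending)
            pending = []
        if ev.type == first:
            pending.append(ev)
    return matches

stream = [Event("OrderPlaced", 0.0), Event("PaymentReceived", 30.0),
          Event("OrderPlaced", 100.0), Event("PaymentReceived", 700.0)]
print(len(sequence_within_window(stream, "OrderPlaced", "PaymentReceived", window=300)))  # 1
```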
Abstract:
To ensure infrastructure assets are procured and maintained by government on behalf of citizens, appropriate policy and institutional architecture is needed, particularly if a fundamental shift to more sustainable infrastructure is the goal. The shift in recent years from competitive and resource-intensive procurement to more collaborative and sustainable approaches to infrastructure governance is considered a major transition in infrastructure procurement systems. In order to better understand this transition in infrastructure procurement arrangements, the concept of emergence from Complex Adaptive Systems (CAS) theory is offered as a key construct. Emergence holds that micro interactions can result in emergent macro order. Applying the concept of emergence to infrastructure procurement, this research examines how the interaction of agents in individual projects can result in different industry structural characteristics. The paper concludes that CAS theory, and particularly the concept of ‘emergence’, provides a useful construct for understanding infrastructure procurement dynamics and progress towards sustainability.
Abstract:
This paper aims to clarify the foundations of the discipline of project management (PM). Historically, PM has evolved from a conceptual approach based on a positivist paradigm. The author questions the appropriateness of such foundations for the kind of project management which claims to deal with complex problems. To answer this question, a brief history of project management is presented, emphasizing key concepts useful to the discussion. Comprehensive definitions of knowledge, competencies, performance and knowledge management are reviewed to provide a better understanding of the project environment in terms of its present positivist epistemological position. This paper explores the tensions and paradoxes encountered in PM practice when set within the boundaries of a normative approach; it also highlights the polysemic nature of PM, for which an extended framework is proposed. Dialectic, qualitative and interpretative aspects of PM are presented alongside its quantitative body of knowledge. The author finally introduces an innovative overview of project management, set in the greater context of the learning organization. Implications and applications of this perspective are discussed and lead to the presentation of the MAP metamethod, a systemic practical approach.