999 results for Complex collaborations

Relevance: 20.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, both in risk assessment and in spatial and spatio-temporal methods, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and use these applications as a springboard for developing new statistical methods, as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
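The first objective, credible intervals for the point estimates of a Bayesian network, can be illustrated with a minimal sketch. This is not the thesis's MCMC implementation: it uses plain Monte Carlo over a hypothetical two-node network (cause C, effect E), with elicited uncertainty on each conditional probability expressed as a Beta distribution. Function and parameter names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

def credible_interval_bn(alpha_c, beta_c, alpha_e1, beta_e1,
                         alpha_e0, beta_e0, n=100_000, level=0.95):
    """Monte Carlo credible interval for P(E) in a two-node net C -> E,
    with Beta-elicited uncertainty on each conditional probability."""
    p_c  = rng.beta(alpha_c, beta_c, n)     # P(C = true) per draw
    p_e1 = rng.beta(alpha_e1, beta_e1, n)   # P(E | C = true)
    p_e0 = rng.beta(alpha_e0, beta_e0, n)   # P(E | C = false)
    p_e = p_c * p_e1 + (1 - p_c) * p_e0     # marginal P(E) per draw
    lo, hi = np.quantile(p_e, [(1 - level) / 2, 1 - (1 - level) / 2])
    return p_e.mean(), (lo, hi)
```

Rather than a single point estimate of P(E), the elicited uncertainty propagates through the network and the quantiles of the resulting draws give the credible interval.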
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave the model flexibility, allowing both the spatially structured and unstructured variances to differ at each depth. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so affords little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
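The layered-neighbour idea can be sketched as a prior precision matrix. This is a hedged illustration, not the thesis's WinBUGS code: it assembles an intrinsic-CAR precision block per depth layer (neighbours only within the layer, each layer with its own precision tau) into a block-diagonal joint precision. All names are invented for illustration.

```python
import numpy as np

def car_layered_precision(adjacency_per_layer, taus):
    """Precision matrix for a 'CAR layered' prior: each depth layer has
    its own within-layer adjacency and its own precision tau, giving a
    block-diagonal joint precision with no between-layer neighbours."""
    blocks = []
    for W, tau in zip(adjacency_per_layer, taus):
        W = np.asarray(W, dtype=float)
        D = np.diag(W.sum(axis=1))     # number of neighbours per site
        blocks.append(tau * (D - W))   # intrinsic CAR precision for this layer
    n = sum(b.shape[0] for b in blocks)
    Q = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        Q[i:i + m, i:i + m] = b        # place layer block on the diagonal
        i += m
    return Q
```

Because each layer contributes its own tau, the spatially structured variance is free to differ at every depth, which is the flexibility described above.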
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper addresses the fact that with large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that soil moisture for each treatment reaches a level particular to that treatment at a depth of 200 cm and thereafter stays constant, albeit with variance increasing with depth. In an analysis across three spatial dimensions and time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility; however, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
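The block update that makes such a Gibbs sampler mix well can be sketched generically (this is not the pyMCMC implementation): for a Gaussian Markov random field such as the CAR layered term, the whole field is drawn jointly from its Gaussian full conditional N(Q⁻¹b, Q⁻¹) via a Cholesky factorisation, instead of one node at a time.

```python
import numpy as np

def sample_gmrf_block(Q, b, rng):
    """One block update: draw x ~ N(Q^{-1} b, Q^{-1}) in a single joint
    step, avoiding the slow mixing of term-by-term updates under strong
    spatial correlation."""
    L = np.linalg.cholesky(Q)                            # Q = L L^T
    mu = np.linalg.solve(L.T, np.linalg.solve(L, b))     # mean Q^{-1} b
    z = rng.standard_normal(Q.shape[0])
    return mu + np.linalg.solve(L.T, z)                  # add N(0, Q^{-1}) noise
```

Solving L^T x = z gives a draw with covariance Q⁻¹, so the returned vector has exactly the full-conditional distribution.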


The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by continuing education as usual (Katehi, Pearson, & Feder, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualisation. These technologies have led to significant changes in the forms of mathematical and scientific thinking that are required beyond the classroom. Modelling, in its various forms, can develop and broaden children’s mathematical and scientific thinking beyond the standard curriculum. This paper first considers future competencies in the mathematical sciences within an increasingly complex world. Next, consideration is given to interdisciplinary problem solving and models and modelling. Examples of complex, interdisciplinary modelling activities across grades are presented, with data modelling in 1st grade, model-eliciting in 4th grade, and engineering-based modelling in 7th-9th grades.


The world’s increasing complexity, competitiveness, interconnectivity, and dependence on technology generate new challenges for nations and individuals that cannot be met by “continuing education as usual” (The National Academies, 2009). With the proliferation of complex systems have come new technologies for communication, collaboration, and conceptualization. These technologies have led to significant changes in the forms of mathematical thinking that are required beyond the classroom. This paper argues for the need to incorporate future-oriented understandings and competencies within the mathematics curriculum, through intellectually stimulating activities that draw upon multidisciplinary content and contexts. The paper also argues for greater recognition of children’s learning potential, as increasingly complex learners capable of dealing with cognitively demanding tasks.


There is unprecedented worldwide demand for mathematical solutions to complex problems. That demand has generated a further call to update mathematics education in a way that develops students’ abilities to deal with complex systems.


Relational governance arrangements across agencies and sectors have become prevalent as a means for government to become more responsive and effective in addressing complex, large-scale or 'wicked' problems. The primary characteristic of such 'collaborative' arrangements is the utilisation of the joint capacities of multiple organisations to achieve collaborative advantage, which Huxham (1993) defines as the attainment of creative outcomes that are beyond the ability of single agencies to achieve. Attaining collaborative advantage requires organisations to develop collaborative capabilities that prepare them for collaborative practice (Huxham, 1993b). Further, collaborations require considerable investment of staff effort that could potentially be used beneficially elsewhere by both the government and non-government organisations involved (Keast and Mandell, 2010). Collaborative arrangements to deliver services therefore require a reconsideration of the way in which resources, including human resources, are conceptualised and deployed, as well as changes to both the structure of public service agencies and the systems and processes by which they operate (Keast, forthcoming). A main aim of academic research and theorising has been to explore and define the requisite characteristics for achieving collaborative advantage. Such research has tended to focus on definitional, structural (Turrini, Cristofoli, Frosini, & Nasi, 2009) and organisational (Huxham, 1993) aspects, and less on the roles government plays within cross-organisational or cross-sectoral arrangements. Ferlie and Steane (2002) note that there has been a general trend towards management-led reforms of public agencies, including the HRM practices utilised. Such trends have been significantly influenced by New Public Management (NPM) ideology, with limited consideration given to the implications for HRM practice in collaborative, rather than market, contexts.
Utilising case study data on a suite of collaborative efforts in Queensland, Australia, collected over a decade, this paper presents an examination of the network roles government agencies undertake. Implications for HRM in public sector agencies working within networked arrangements are drawn, and implications for job design, recruitment, deployment and staff development are presented. The paper also makes theoretical advances in our understanding of Strategic Human Resource Management (SHRM) in network settings. While networks form part of the strategic armoury of government, networks operate to achieve collaborative advantage. SHRM, with its focus on competitive advantage, is argued to be appropriate in market situations; however, it is not an ideal conceptualisation in network situations. Commencing with an overview of the literature on networks and network effectiveness, the paper presents the case studies and methodology, provides findings from the case studies on the roles government plays in achieving collaborative advantage, and presents the implications for HRM practice. Implications for SHRM are then considered.


This paper discusses exploratory research to identify the leadership challenges reported by leaders in the public sector in Australia, and the specific leadership practices they engage in to deal with these challenges. What emerges is a sense that leadership in these complex work environments is not about controlling or mandating action but about engaging in conversation, building relationships and empowering staff to engage in innovative ways of solving complex problems. In addition, leaders provide a strong sense of purpose and identity to guide behaviour and decisions, and to avoid being overwhelmed by the sheer volume of demands in an unpredictable and often unsupportive environment. Questions are raised as to the core competencies leaders need to develop to drive and underpin these leadership practices, and the implications this has for the focus of future leadership development programmes. The possible direction of a future research programme is put forward for further discussion.


This paper investigates the use of visual artifacts to represent a complex adaptive system (CAS). The integrated master schedule (IMS) is one of those visuals widely used in complex projects for scheduling, budgeting, and project management. In this paper, we discuss how the IMS outperforms the traditional timelines and acts as a ‘multi-level and poly-temporal boundary object’ that visually represents the CAS. We report the findings of a case study project on the way the IMS mapped interactions, interdependencies, constraints and fractal patterns in a complex project. Finally, we discuss how the IMS was utilised as a complex boundary object by eliciting commitment and development of shared mental models, and facilitating negotiation through the layers of multiple interpretations from stakeholders.


Management (or perceived mismanagement) of large-scale, complex projects poses special problems and often results in spectacular failures, cost overruns, time blowouts and stakeholder dissatisfaction. While traditional project management responds with increasingly administrative constraints, we argue that leaders of such projects also need to display adaptive and enabling behaviours to foster adaptive processes, such as opportunity recognition, which requires an interaction of cognitive and affective processes of individual, project, and team leader attributes and behaviours. At the core of this model we propose is an interaction of cognitive flexibility, affect and emotional intelligence. The result of this interaction is enhanced leader opportunity recognition that, in turn, facilitates multilevel outcomes.


In this paper, we develop a conceptual model to explore the perceived complementary congruence between complex project leaders and the demands of the complex project environment to understand how leaders’ affective and behavioural performance at work might be impacted by this fit. We propose that complex project leaders high in emotional intelligence and cognitive flexibility should report a higher level of fit between themselves and the complex project environment. This abilities-demands measure of fit should then relate to affective and behavioural performance outcomes, such that leaders who perceive a higher level of fit should establish and maintain more effective, higher quality project stakeholder relationships than leaders who perceive a lower level of fit.


Complex networks have been studied extensively owing to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use the protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks for five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
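The random sequential box-covering idea can be sketched in a few lines. This is an illustrative radius-based variant (each box covers the uncovered nodes within graph distance l_B − 1 of a randomly chosen centre), not necessarily the exact variant used in the thesis; the adjacency-dict graph representation is an assumption for the sketch.

```python
import random
from collections import deque

def box_covering(adj, l_b, rng=random.Random(0)):
    """Random sequential box covering: repeatedly pick an uncovered node
    as a box centre and cover every uncovered node within distance
    l_b - 1 of it.  Returns the number of boxes N_B(l_b)."""
    uncovered = set(adj)
    n_boxes = 0
    while uncovered:
        centre = rng.choice(sorted(uncovered))
        # BFS out to radius l_b - 1 (paths may pass through covered nodes)
        dist = {centre: 0}
        q = deque([centre])
        while q:
            u = q.popleft()
            if dist[u] == l_b - 1:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        uncovered -= dist.keys()
        n_boxes += 1
    return n_boxes
```

The fractal dimension is then estimated from the slope of log N_B(l_B) against log l_B over a range of box sizes.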
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale, and fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent work indicates that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions, as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
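The horizontal visibility graph mapping mentioned above has a compact standard definition that can be written directly. The following is a naive O(n²) sketch of that definition, not the thesis's code: indices i < j are linked iff every intermediate value lies strictly below min(x_i, x_j).

```python
def horizontal_visibility_graph(series):
    """Horizontal visibility graph of a time series: nodes are time
    indices; i and j (i < j) are linked iff all intermediate values are
    strictly below min(series[i], series[j]).  Consecutive points are
    always linked (the intermediate range is empty)."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

For example, in the series [3, 1, 2] the dip at index 1 leaves indices 0 and 2 horizontally visible to each other, so (0, 2) is an edge.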
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.


Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means of capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects of uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient's true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson's disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered.
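One aspect of clustering uncertainty described above, a patient's uncertain cluster membership, can be illustrated for a simple Gaussian mixture: given fitted weights, means and standard deviations, each observation receives a posterior probability of belonging to each cluster rather than a hard label. This is a minimal numpy sketch under those assumptions, not the thesis's models.

```python
import numpy as np

def membership_probs(x, weights, means, sds):
    """Posterior cluster-membership probabilities for each observation
    under a fitted univariate Gaussian mixture -- the per-patient
    uncertainty a mixture model naturally reports."""
    x  = np.asarray(x, dtype=float)[:, None]    # observations as a column
    w  = np.asarray(weights, dtype=float)[None, :]
    mu = np.asarray(means, dtype=float)[None, :]
    sd = np.asarray(sds, dtype=float)[None, :]
    # weighted normal densities, one column per mixture component
    dens = w * np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return dens / dens.sum(axis=1, keepdims=True)  # normalise rows
```

An observation midway between two equally weighted components gets probability 0.5 for each, making the ambiguity of its assignment explicit.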
The first source comprises data on symptoms associated with PD, recorded using the Unified Parkinson's Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.


The paper explores the results of an ongoing research project to identify factors influencing the success of international and non-English-speaking-background (NESB) graduate students in the fields of Engineering and IT at three Australian universities: the Queensland University of Technology (QUT), the University of Western Australia (UWA), and Curtin University (CU). While the larger study explores the influence of factors from both sides of the supervision equation (i.e., students and supervisors), this paper focuses primarily on the results of an online survey of 227 international and/or NESB graduate students in the areas of Engineering and IT at the three universities. The study reveals cross-cultural differences in perceptions of student and supervisor roles, as well as differences in the understanding of the requirements of graduate study within the Australian Higher Education context. We argue that in order to assist international and NESB research students to overcome such culturally embedded challenges, it is important to develop a model which recognizes the complex interactions of factors from both sides of the supervision relationship, in order to understand this cohort's unique pedagogical needs and develop intercultural sensitivity within postgraduate research supervision.


Eukaryotic cell cycle progression is mediated by phosphorylation of protein substrates by cyclin-dependent kinases (CDKs). A critical substrate of CDKs is the product of the retinoblastoma tumor suppressor gene, pRb, which inhibits G1-S phase cell cycle progression by binding and repressing E2F transcription factors. CDK-mediated phosphorylation of pRb alleviates this inhibitory effect to promote G1-S phase cell cycle progression. pRb represses transcription by binding to the E2F transactivation domain and recruiting the mSin3·histone deacetylase (HDAC) transcriptional repressor complex via retinoblastoma-binding protein 1 (RBP1). RBP1 binds to the pocket region of pRb via an LXCXE motif and to the SAP30 subunit of the mSin3·HDAC complex and thus acts as a bridging protein in this multisubunit complex. In the present study we identified RBP1 as a novel CDK substrate. RBP1 is phosphorylated by CDK2 on serines 864 and 1007, which are N- and C-terminal to the LXCXE motif, respectively. CDK2-mediated phosphorylation of RBP1 or pRb destabilizes their interaction in vitro, with concurrent phosphorylation of both proteins leading to their dissociation. Consistent with these findings, RBP1 phosphorylation is increased during progression from G1 into S phase, with a concurrent decrease in its association with pRb in MCF-7 breast cancer cells. These studies provide new mechanistic insights into CDK-mediated regulation of the pRb tumor suppressor during cell cycle progression, demonstrating that CDK-mediated phosphorylation of both RBP1 and pRb induces their dissociation, thereby mediating release of the mSin3·HDAC transcriptional repressor complex from pRb to alleviate transcriptional repression of E2F.


Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot–shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied to the shoe and skin surface, which informed a four-segment kinematic model of the foot–shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC = 0.75–0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC = 0.68–0.99) than for the inexperienced rater (ICC = 0.38–0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90 = 2.17–9.36°; tarsometatarsal joint, MDD90 = 1.03–9.29°; and metatarsophalangeal joint, MDD90 = 1.75–9.12°. The thresholds proposed are specific to the description of shod motion and can be used in future research aimed at comparing different footwear.
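MDD90 thresholds like those above are conventionally derived from the standard error of measurement. The abstract does not give the paper's exact computation, so the sketch below assumes the common formulae SEM = SD·√(1 − ICC) and MDD90 = 1.645·√2·SEM; treat it as an illustration of the convention, not the paper's method.

```python
import math

def mdd90(sd, icc):
    """Minimal detectable difference at 90% confidence from a
    between-trial standard deviation and a reliability coefficient:
    SEM = SD * sqrt(1 - ICC);  MDD90 = 1.645 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    return 1.645 * math.sqrt(2.0) * sem    # 90% CI, difference of two trials
```

Perfect reliability (ICC = 1) gives an MDD90 of zero, and lower reliability inflates the threshold a real between-condition difference must exceed.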