479 results for Topological Construct
Abstract:
In this paper an existing method for indoor Simultaneous Localisation and Mapping (SLAM) is extended to operate in large outdoor environments using an omnidirectional camera as its principal external sensor. The method, RatSLAM, is based upon computational models of the area in the rat brain that maintains the rodent’s idea of its position in the world. The system uses the visual appearance of different locations to build hybrid spatial-topological maps of places it has experienced that facilitate relocalisation and path planning. A large dataset was acquired from a dynamic campus environment and used to verify the system’s ability to construct representations of the world and simultaneously use these representations to maintain localisation.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. Those real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use the protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
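The random sequential box-covering procedure referred to above can be illustrated with a short sketch. This is a minimal, definition-level illustration, not the thesis's implementation; the adjacency-dict representation and the function name are my own:

```python
import random
from collections import deque

def random_sequential_box_covering(adj, box_size):
    """Estimate the number of boxes needed to cover a network.

    adj maps each node to a set of neighbours.  A box centred on a node
    contains every node within shortest-path distance < box_size of the
    centre; centres are drawn at random from the still-uncovered nodes.
    """
    uncovered = set(adj)
    n_boxes = 0
    while uncovered:
        centre = random.choice(sorted(uncovered))
        # breadth-first search out to distance box_size - 1 from the centre
        box, queue = {centre}, deque([(centre, 0)])
        while queue:
            node, dist = queue.popleft()
            if dist == box_size - 1:
                continue
            for neighbour in adj[node]:
                if neighbour not in box:
                    box.add(neighbour)
                    queue.append((neighbour, dist + 1))
        uncovered -= box
        n_boxes += 1
    return n_boxes
```

The fractal dimension is then read off from the slope of log(number of boxes) against log(box size) over a range of box sizes; averaging over many random centre orderings reduces the variance of the estimate.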
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterise the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
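For reference, the generalized (Rényi) dimensions that such a multifractal box-covering analysis estimates are conventionally obtained from the partition sum Z_q(l) = Σ p_i^q of the box masses p_i at box size l, via D_q = (1/(q−1)) · slope of log Z_q against log l. A minimal sketch of that final regression step, with a data layout and function name that are illustrative assumptions rather than the thesis's own code:

```python
import math

def generalized_dimension_slope(box_masses_by_size, q):
    """Estimate the generalized dimension D_q (q != 1) by linear regression
    of log(partition sum) against log(box size).

    box_masses_by_size maps a box size l to the list of probability
    masses p_i of the boxes covering the object at that size.
    """
    xs, ys = [], []
    for l, masses in box_masses_by_size.items():
        z_q = sum(p ** q for p in masses if p > 0)  # partition sum Z_q(l)
        xs.append(math.log(l))
        ys.append(math.log(z_q))
    # least-squares slope of log Z_q(l) versus log l gives tau(q)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope / (q - 1)  # D_q = tau(q) / (q - 1)
```

For a homogeneous (monofractal) measure all D_q coincide; a q-dependent D_q is the signature of multifractality reported above.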
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterising complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and the weight of the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, and hence a larger Hurst exponent, tend to have smaller fractal dimension and hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
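The proposed time-series-to-network construction (nodes as fixed-length vectors of consecutive values, edge weights as Euclidean distances between those vectors) can be sketched directly from its description; the function name and the dict-of-weights representation below are illustrative assumptions, not taken from the thesis:

```python
import math

def time_series_network(series, window):
    """Build a weighted network from a time series.

    Each node is a length-`window` vector of consecutive values taken
    from the series; the weight of the edge between two nodes is the
    Euclidean distance between the corresponding vectors.  Returns a
    dict mapping node-index pairs (i, j), i < j, to edge weights.
    """
    vectors = [series[i:i + window] for i in range(len(series) - window + 1)]
    weights = {}
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(vectors[i], vectors[j])))
            weights[(i, j)] = d
    return weights
```

The result is a fully connected weighted graph, to which weighted variants of the box-covering analysis above can then be applied.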
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, by contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
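The horizontal visibility graph construction used throughout the later parts of this abstract has a simple definition-level form: two points of the series are linked when every value between them lies strictly below both. A minimal O(n²) sketch (the function name is illustrative; more efficient stack-based constructions exist):

```python
def horizontal_visibility_graph(series):
    """Return the edge set of the HVG of a time series.

    Points i and j (i < j) are linked when every intermediate value lies
    strictly below both series[i] and series[j], i.e. when a horizontal
    line can be drawn between the two points without intersecting the
    series in between.  Adjacent points are always linked.
    """
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```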
Abstract:
In topological mapping, perceptual aliasing can cause different places to appear indistinguishable to the robot. When odometry information is severely corrupted or unavailable, topological mapping is difficult because the robot faces the loop-closing problem: determining whether it has visited a particular place before. In this article we propose using neighbourhood information to disambiguate otherwise indistinguishable places. Using neighbourhood information for place disambiguation is an approach that neither depends on a specific choice of sensors nor requires geometric information such as odometry. Local neighbourhood information is extracted from a sequence of observations of visited places. In experiments using either sonar or visual observations from an indoor environment, the benefits of using neighbourhood clues for the disambiguation of otherwise identical vertices are demonstrated. Over 90% of the maps we obtain are isomorphic with the ground truth, and the choice of the robot’s sensors has little impact on the results.
Abstract:
This paper presents a general, global approach to the problem of robot exploration, utilizing a topological data structure to guide an underlying Simultaneous Localization and Mapping (SLAM) process. A Gap Navigation Tree (GNT) is used to motivate global target selection, and occluded regions of the environment (called “gaps”) are tracked probabilistically. The process of map construction and the motion of the vehicle alter both the shape and location of these regions. The use of online mapping is shown to reduce the difficulties in implementing the GNT.
Less but more: weaving disparate disciplines together for learners to construct their own knowledge
Abstract:
This paper reports on a Professional Learning Program conducted in China with 140 general technology teachers. It aimed to integrate robotics technology across and within the disciplines of science, technology, engineering and mathematics. With the help of university facilitators, teachers developed General Technology lessons that seamlessly integrated rich learning content across disciplines. Teachers engaged in seminars and workshops, which provided opportunities for them to actively couch sound principles of learning in their daily work. They gained first-hand experience in applying an aligned system of assessments, standards and quality learning experiences geared to the needs of each student. Teachers worked collaboratively in teams to create inquiry, design and collaborative learning activities that aligned with their curriculum and which dealt with real-world problems, issues and challenges. They continually discussed and reflected deeply on the activities and shared the newly developed resources online with teachers across the entire country. It is evident from the preliminary analysis of data that teachers are beginning to apply rich pedagogical practices and are becoming ‘adaptive’ in their approach when using LEGO® robotic tools to design, redesign, create and re-create learning activities for their students.
Abstract:
Purpose: The purpose of this work was to explore how men and women construct their experiences of living with lymphoedema following treatment for any cancer in the context of everyday life. Methods: The design and conduct of this qualitative study were guided by Charmaz’s social constructivist grounded theory. To collect data, focus groups and telephone interviews were conducted. Audiotapes were transcribed verbatim and imported into NVivo8 to organise data and codes. Data were analysed using the key grounded theory principles of constant comparison, data saturation and initial, focused and theoretical coding. Results: Participants were 3 men and 26 women who had developed upper- or lower-limb lymphoedema following cancer treatment. Three conceptual categories were developed during data analysis, labelled ‘accidental journey’, ‘altered normalcy’ and ‘ebb and flow of control’. ‘Altered normalcy’ reflects the physical and psychosocial consequences of lymphoedema and its relationship to everyday life. ‘Accidental journey’ explains the participants’ experiences with the health care system, including the prevention, treatment and management of their lymphoedema. ‘Ebb and flow of control’ draws upon a range of individual and social elements that influenced the participants’ perceived control over lymphoedema. These conceptual categories were inter-related and contributed to the core category of ‘sense of self’, which describes the participants’ perceptions of their identity and roles. Conclusions: Results highlight the need for greater clinical and public awareness of lymphoedema as a chronic condition requiring prevention and treatment, and one that has far-reaching effects on physical and psychosocial well-being as well as overall quality of life.
Abstract:
The research aimed to identify positive behavioural changes that people may make as a result of negotiating the aftermath of a traumatic experience, thereby extending the current cognitive model of posttraumatic growth (PTG). It was hypothesised that significant others would corroborate survivors’ cognitive and behavioural reports of PTG. The sample comprised 176 participants: 88 trauma survivors and 88 significant others. University students accounted for 64% of the sample and 36% were from the broader community. Approximately one third were male. All participants completed the Posttraumatic Growth Inventory (PTGI) and open-ended questions regarding behavioural changes. PTGI scores in the survivor sample were corroborated by the significant others, with only the Appreciation of Life factor of the PTGI differing between the two groups (e.g., total PTGI scores between groups explained 33.64% of variance). Nearly all of the survivors also reported positive changes in their behaviour, and these changes were likewise corroborated by the significant others. Results provide validation of the posttraumatic growth construct and of the PTGI as an instrument of measurement. Findings may also influence therapeutic practice, for example through the potential usefulness of corroboration by others.
Abstract:
This paper reports a study that explored a new construct: ‘climate of fear’. We hypothesised that climate of fear would vary across work sites within organisations, but not across organisations. This is in contrast to measures of organisational culture, which were expected to vary both within and across organisations. To test our hypotheses, we developed a new 13-item measure of perceived fear in organisations and tested it in 20 sites across two organisations (N = 209). Culture variables measured were innovative leadership culture and communication culture. As anticipated, climate of fear varied across sites in both organisations, while differences across organisations were not significant. Organisational culture, however, varied between the organisations, and within one of the organisations. The climate of fear scale exhibited acceptable psychometric properties.
Abstract:
This paper addresses the ambiguous relationship of internal, organizational social capital and external social capital with corporate entrepreneurship performance. Drawing on social construction theory, we argue that bricolage can mitigate some of the negative effects associated with social capital by recombining and redefining the purpose of available resources. We investigated our hypotheses through a random sample of 206 corporate entrepreneurship projects. We found that neither internal nor external social capital has a direct effect on the performance of corporate entrepreneurship projects. The results indicate that bricolage mediates the relationship between social capital and the performance of corporate entrepreneurship projects. Bricolage thrives particularly when social capital internal and external to the organization is widely available. The implications are that bricolage is a critical behavior in allowing corporate entrepreneurship projects to benefit from resources available through their network of social relations inside and outside the company.
Abstract:
We support Shane and Venkataraman’s (2000) basic idea of an “entrepreneurship nexus” where characteristics of the actor as well as those of the “opportunity” they work on influence action and outcomes in the creation of new economic activities. However, a review of the literature reveals that minimal progress has been made on the core issues pertaining to the nexus idea. We argue that this is rooted in fundamental and insurmountable problems with the “opportunity” construct itself, and demonstrate the state of confusion in the literature caused by inconsistent use of the construct within and across works and authors. As an alternative, we suggest the admittedly subjective notion of New Venture as a more workable alternative. We provide a comprehensive definition and explanation of this construct, and take steps towards improved conceptualization and operationalization of its subdimensions. With some further work on these conceptualizations and operationalizations it will be possible to implement a comprehensive research program that can finally deliver on the promise outlined by Shane and Venkataraman (2000).
Abstract:
Conceptual modelling supports developers and users of information systems in areas of documentation, analysis or system redesign. The ongoing interest in the modelling of business processes has led to a variety of different grammars, raising the question of the quality of these grammars for modelling. An established way of evaluating the quality of a modelling grammar is by means of an ontological analysis, which can determine the extent to which grammars contain construct deficit, overload, excess or redundancy. While several studies have shown the relevance of most of these criteria, predictions about construct redundancy have yielded inconsistent results in the past, with some studies suggesting that redundancy may even be beneficial for modelling in practice. In this paper we seek to contribute to clarifying the concept of construct redundancy by introducing a revision to the ontological analysis method. Based on the concept of inheritance we propose an approach that distinguishes between specialized and distinct construct redundancy. We demonstrate the potential explanatory power of the revised method by reviewing and clarifying previous results found in the literature.
Abstract:
Finite Element modelling of bone fracture fixation systems allows computational investigation of the deformation response of the bone to load. Once validated, these models can be easily adapted to explore changes in the design or configuration of a fixator. The deformation of the tissue within the fracture gap determines its healing and is often summarised as the stiffness of the construct. FE models capable of reproducing this behaviour would provide valuable insight into the healing potential of different fixation systems. Current model validation techniques lack depth in 6D load and deformation measurements, and other aspects of FE model creation, such as the definition of interfaces between components, have also not been explored. This project investigated the mechanical testing and FE modelling of a bone–plate construct for the determination of stiffness. In-depth 6D measurement and analysis of the generated forces, moments and movements showed large out-of-plane behaviours which had not previously been characterised. Stiffness calculated from the interfragmentary movement was found to be an unsuitable summary parameter because the error propagation is too large. Current FE modelling techniques were applied in compression and torsion, mimicking the experimental setup. Compressive stiffness was well replicated, though torsional stiffness was not, and the out-of-plane behaviours prevalent in the experimental work were not replicated in the model. The interfaces between the components were investigated experimentally and through modification of the FE model. Incorporation of the interface modelling techniques into the full construct models had no effect in compression, but did act to reduce torsional stiffness, bringing it closer to that of the experiment. The interface definitions had no effect on out-of-plane behaviours, which were still not replicated.
Neither current nor novel FE modelling techniques were able to replicate the out of plane behaviours evident in the experimental work. New techniques for modelling loads and boundary conditions need to be developed to mimic the effects of the entire experimental system.
Abstract:
Post-transcriptional silencing of plant genes using anti-sense or co-suppression constructs usually results in only a modest proportion of silenced individuals. Recent work has demonstrated the potential for constructs encoding self-complementary 'hairpin' RNA (hpRNA) to efficiently silence genes. In this study we examine design rules for efficient gene silencing, in terms of both the proportion of independent transgenic plants showing silencing and the degree of silencing. hpRNA constructs containing sense/anti-sense arms ranging from 98 to 853 nt gave efficient silencing in a wide range of plant species, and inclusion of an intron in these constructs had a consistently enhancing effect. Intron-containing constructs (ihpRNA) generally gave 90-100% of independent transgenic plants showing silencing. The degree of silencing with these constructs was much greater than that obtained using either co-suppression or anti-sense constructs. We have made a generic vector, pHANNIBAL, that allows a simple, single PCR product from a gene of interest to be easily converted into a highly effective ihpRNA silencing construct. We have also created a high-throughput vector, pHELLSGATE, that should facilitate the cloning of gene libraries or large numbers of defined genes, such as those in EST collections, using an in vitro recombinase system. This system may facilitate the large-scale determination and discovery of plant gene functions in the same way as RNAi is being used to examine gene function in Caenorhabditis elegans.