918 results for "Scattered trees"


Relevance: 10.00%

Abstract:

Purpose This thesis is about liveability, place and ageing in the high density urban landscape of Brisbane, Australia. As with other major developed cities around the globe, Brisbane has adopted policies to increase urban residential densities to meet the main liveability and sustainability aim of decreasing car dependence and therefore pollution, as well as to minimise the loss of greenfield areas and habitats to developers. This objective hinges on urban neighbourhoods/communities being liveable places, which residents do not have to leave for everyday living. Community/neighbourhood liveability is an essential ingredient in healthy ageing in place and has a substantial impact upon the safety, independence and well-being of older adults. It is generally accepted that ageing in place is optimal for both older people and the state. The optimality of ageing in place generally assumes that there is a particular quality to environments or standard of liveability in which people successfully age in place. The aim of this thesis was to examine if there are particular environmental qualities or aspects of liveability that test optimality and to better understand the key liveability factors that contribute to successful ageing in place. Method A strength of this thesis is that it draws on two separate studies to address the research question of what makes high density liveable for older people. In Chapter 3, the two methods are identified and differentiated as Method 1 (used in Paper 1) and Method 2 (used in Papers 2, 3, 4 and 5). Method 1 involved qualitative interviews with 24 inner city high density Brisbane residents. The major strength of this thesis is the innovative methodology outlined in the thesis as Method 2. Method 2 involved a case study approach employing qualitative and quantitative methods. Qualitative data was collected using semi-structured, in-depth interviews and time-use diaries completed by participants during the week of tracking. 
The quantitative data was gathered using Global Positioning Systems for tracking and Geographical Information Systems for mapping and analysis of participants’ activities. The combination of quantitative and qualitative analysis captured both participants’ subjective perceptions of their neighbourhoods and their patterns of movement. This enhanced understanding of how neighbourhoods and communities function and of the various liveability dimensions that contribute to active ageing and ageing in place for older people living in high density environments. Both studies’ participants were inner-city high density residents of Brisbane. The study based on Method 1 drew on a wider age demographic than the study based on Method 2. Findings The five papers presented in this thesis by publication indicate a complex inter-relationship of the factors that make a place liveable. The first three papers identify what is comparable and different between the physical and social factors of high density communities/neighbourhoods. The last two papers explore relationships between social engagement and broader community variables such as infrastructure and the physical built environments that are risk or protective factors relevant to community liveability, active ageing and ageing in place in high density. The research highlights the importance of creating and/or maintaining a barrier-free environment and liveable community for ageing adults. Together, the papers promote liveability, social engagement and active ageing in high density neighbourhoods by identifying factors that constitute liveability and strategies that foster active ageing and ageing in place, social connections and well-being. Recommendations There is a strong need to offer more support for active ageing and ageing in place. While the data analyses of this research provide insight into the lived experience of high density residents, further research is warranted. 
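As a concrete illustration of the kind of analysis Method 2 enables, the sketch below computes the length of a GPS-tracked walk from raw (latitude, longitude) fixes using the haversine formula. It is a minimal stand-in, not the thesis software, and the coordinates are hypothetical inner-city Brisbane points.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def track_length_m(fixes):
    """Total distance along a sequence of (lat, lon) GPS fixes."""
    return sum(haversine_m(*a, *b) for a, b in zip(fixes, fixes[1:]))

# A short hypothetical walk through inner-city Brisbane.
walk = [(-27.4698, 153.0251), (-27.4689, 153.0260), (-27.4675, 153.0262)]
print(round(track_length_m(walk)))  # a few hundred metres
```

From tracks like this, time-use diary entries can be matched to visited locations before mapping in a GIS.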
Further qualitative and quantitative research is needed to explore in more depth the urban experience and opinions of older people living in urban environments. In particular, more empirical research and theory-building is needed to expand understanding of the particular environmental qualities that enable successful ageing in place in our cities and to guide efforts aimed at meeting this objective. The results suggest that encouraging the presence of more inner-city retail outlets, particularly services that are utilised frequently in people’s daily lives such as supermarkets, medical services and pharmacies, would potentially help ensure residents fully engage in their local community. The connectivity of streets and footpaths, and their role in facilitating the reaching of destinations, is well understood as an important dimension of liveability. To encourage uptake of sustainable transport, the built environment must provide easy, accessible connections between buildings, walkways, cycle paths and public transport nodes. Wider streets, given that they take more time to cross than narrow streets, tend to compromise safety, especially for older people. Similarly, the width of footpaths, the level of buffering, the presence of trees, lighting, seating and the design of and distance between pedestrian crossings significantly affect the pedestrian experience for older people and influence their choice of transportation. High density neighbourhoods also require greater levels of street fixtures and furniture to make places more useable and comfortable for regular use. The importance of making the public realm useful and habitable for older people cannot be over-emphasised. Originality/value While older people are attracted to high density settings, there has been little empirical evidence linking liveability satisfaction with older people’s use of urban neighbourhoods.
The current study examined the relationships between community/neighbourhood liveability, place and ageing to better understand the implications for those adults who age in place. The five papers presented in this thesis add to the understanding of what high density liveable age-friendly communities/neighbourhoods are and what makes them so for older Australians. Neighbourhood liveability for older people is about being able to age in place and remain active. Issues of ageing in Australia and other areas of the developed world will become more critical in the coming decades. Creating liveable communities for all ages calls for partnerships across all levels of government and among different sectors within communities. The increasing percentage of older people in the community will carry increasing political influence, and a government that ignores the needs of an older society does so at its peril.

Relevance: 10.00%

Abstract:

The benefits of applying tree-based methods to the purpose of modelling financial assets as opposed to linear factor analysis are increasingly being understood by market practitioners. Tree-based models such as CART (classification and regression trees) are particularly well suited to analysing stock market data which is noisy and often contains non-linear relationships and high-order interactions. CART was originally developed in the 1980s by medical researchers disheartened by the stringent assumptions applied by traditional regression analysis (Breiman et al. [1984]). In the intervening years, CART has been successfully applied to many areas of finance such as the classification of financial distress of firms (see Frydman, Altman and Kao [1985]), asset allocation (see Sorensen, Mezrich and Miller [1996]), equity style timing (see Kao and Shumaker [1999]) and stock selection (see Sorensen, Miller and Ooi [2000])...
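The greedy splitting step at the heart of CART can be sketched in a few lines. The data below are hypothetical, with P/E ratio and momentum as features and next-period direction as the class label; this illustrates the Gini-based split search itself, not any model from the papers cited.

```python
def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels):
    """Exhaustively search single-feature thresholds for the split that most
    reduces weighted Gini impurity: the greedy step CART repeats recursively."""
    best = None  # (weighted impurity, feature index, threshold)
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] <= t]
            right = [y for r, y in zip(rows, labels) if r[j] > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or w < best[0]:
                best = (w, j, t)
    return best

# Hypothetical stock data: (P/E ratio, momentum) -> next-period direction.
X = [(8, 0.2), (10, 0.1), (25, -0.3), (30, -0.1)]
y = ['up', 'up', 'down', 'down']
print(best_split(X, y))  # a pure split on the first (P/E) feature
```

A full CART implementation applies this search recursively to each resulting partition and then prunes the tree.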

Relevance: 10.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in Bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation made was with regard to the relationship between transcription factors grouped by their regulatory role and corresponding promoter strength.
Our study of E.coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E.coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E.coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
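The spectrum kernel itself is simple to state: it is the inner product of the k-mer count vectors of two sequences. A minimal sketch, with sequences and k chosen purely for illustration:

```python
from collections import Counter

def spectrum(seq, k):
    """k-mer count vector of a DNA sequence (its 'k-spectrum')."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k):
    """Spectrum kernel: inner product of the two k-mer count vectors."""
    cs, ct = spectrum(s, k), spectrum(t, k)
    return sum(cs[w] * ct[w] for w in cs)

# Two hypothetical binding-site candidates; k = 3 as an illustrative choice.
print(spectrum_kernel("TGTGA", "TGTGACACAG", 3))  # 3 shared 3-mer occurrences
```

In an SVM, this kernel value plays the role of a similarity score between candidate sites, without ever forming the exponentially large k-mer feature vectors explicitly.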
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites as represented by our E.coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept.
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y.pestis and P.aeruginosa respectively, but were not present in either E.coli or B.subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques, which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.

Relevance: 10.00%

Abstract:

Typical flow fields in a stormwater gross pollutant trap (GPT) with blocked retaining screens were experimentally captured and visualised. Particle image velocimetry (PIV) software was used to capture the flow field data by tracking neutrally buoyant particles with a high speed camera. A technique was developed to apply the Image Based Flow Visualization (IBFV) algorithm to the experimental raw dataset generated by the PIV software. The dataset consisted of scattered 2D point velocity vectors and the IBFV visualisation facilitates flow feature characterisation within the GPT. The flow features played a pivotal role in understanding gross pollutant capture and retention within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate specific areas and identify the flow features within the GPT.
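Texture-advection methods such as IBFV need a velocity value everywhere on a regular grid, whereas PIV output is a set of scattered point vectors. The sketch below shows the simplest possible resampling, a nearest-neighbour lookup; it is an illustrative stand-in for that preprocessing step, not the technique developed in the paper, and the coordinates and velocities are hypothetical.

```python
import math

def grid_velocity(samples, nx, ny):
    """Resample scattered 2D velocity vectors onto an nx-by-ny grid by
    nearest-neighbour lookup over normalised [0, 1] coordinates."""
    grid = []
    for gy in range(ny):
        row = []
        for gx in range(nx):
            x, y = gx / (nx - 1), gy / (ny - 1)
            _, vel = min(samples, key=lambda s: math.dist((x, y), s[0]))
            row.append(vel)
        grid.append(row)
    return grid

# Hypothetical scattered measurements: ((x, y), (u, v)) pairs.
samples = [((0.1, 0.1), (1.0, 0.0)), ((0.9, 0.9), (0.0, -1.0))]
field = grid_velocity(samples, 3, 3)
print(field[0][0], field[2][2])  # each corner takes its nearest sample's velocity
```

Real pipelines would use smoother interpolation (e.g. inverse-distance weighting), but the gridding role is the same.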

Relevance: 10.00%

Abstract:

This paper proposes a technique that supports process participants in making risk-informed decisions, with the aim of reducing process risks. Risk reduction involves decreasing the likelihood that a process fault will occur, and its severity. Given a process exposed to risks, e.g. a financial process exposed to a risk of reputation loss, we enact this process, and whenever a process participant needs to provide input to the process, e.g. by selecting the next task to execute or by filling out a form, we prompt the participant with the expected risk that a given fault will occur for the particular input. These risks are predicted by traversing decision trees generated from the logs of past process executions, considering process data, involved resources, task durations and contextual information such as task frequencies. The approach has been implemented in the YAWL system and its effectiveness evaluated. The results show that the process instances executed in the tests complete with substantially fewer faults and with lower fault severities when taking into account the recommendations provided by our technique.
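The prediction step can be pictured as a walk down a decision tree mined from execution logs. The tree, attributes and probabilities below are hypothetical; the actual technique learns such trees from YAWL logs and considers richer features.

```python
def predict_risk(node, case):
    """Traverse a decision tree and return the estimated fault likelihood
    for the attribute values a participant is about to submit."""
    while "fault_probability" not in node:
        branch = "low" if case[node["attr"]] <= node["threshold"] else "high"
        node = node[branch]
    return node["fault_probability"]

# Toy tree for a financial process: large claims handled by few resources are risky.
risk_tree = {
    "attr": "claim_amount", "threshold": 10000,
    "low": {"fault_probability": 0.05},
    "high": {
        "attr": "available_resources", "threshold": 2,
        "low": {"fault_probability": 0.60},
        "high": {"fault_probability": 0.15},
    },
}
print(predict_risk(risk_tree, {"claim_amount": 25000, "available_resources": 1}))
```

At decision points, the enacted process would evaluate each candidate input this way and surface the predicted risk to the participant.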

Relevance: 10.00%

Abstract:

Secure communications in wireless sensor networks operating under adversarial conditions require providing pairwise (symmetric) keys to sensor nodes. In large scale deployment scenarios, there is no prior knowledge of the post-deployment network configuration, since nodes may be randomly scattered over a hostile territory. Thus, shared keys must be distributed before deployment to provide each node with a key-chain. For large sensor networks it is infeasible to store a unique key for every other node in the key-chain of a sensor node. Consequently, for secure communication, either two nodes have a key in common in their key-chains and have a wireless link between them, or there is a path, called a key-path, between the two nodes where each pair of neighboring nodes on the path has a key in common. The length of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for deciding how many and which keys to assign to each key-chain before sensor network deployment. In particular, Balanced Incomplete Block Designs (BIBD) and Generalized Quadrangles (GQ) are mapped to obtain efficient key distribution schemes. Performance and security properties of the proposed schemes are studied both analytically and computationally. Comparison to related work shows that the combinatorial approach produces better connectivity with smaller key-chain sizes.
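The appeal of the BIBD mapping is easy to demonstrate on the smallest non-trivial example, the Fano plane: a (7, 3, 1)-design in which any two blocks intersect in exactly one point. Mapping points to keys and blocks to key-chains guarantees every pair of nodes a shared key, so the example below is a sanity check of that property rather than the paper's full construction.

```python
from itertools import combinations

# Key-chains from the Fano plane, a (7, 3, 1)-BIBD: 7 keys, 7 nodes,
# each node stores 3 keys, and any two key-chains share exactly one key.
fano_blocks = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

for a, b in combinations(fano_blocks, 2):
    assert len(a & b) == 1  # every pair of nodes shares exactly one key

print("all", len(fano_blocks), "key-chains pairwise share one key")
```

With this design the key-path length is always 1, at the cost of a pool size tied to the design parameters; larger projective planes scale the same guarantee to more nodes.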

Relevance: 10.00%

Abstract:

For interactive systems, recognition, reproduction, and generalization of observed motion data are crucial for successful interaction. In this paper, we present a novel method for analysis of motion data that we refer to as K-OMM-trees. K-OMM-trees combine Ordered Means Models (OMMs), a model-based machine learning approach for time series, with a hierarchical analysis technique for very large data sets, the K-tree algorithm. The proposed K-OMM-trees enable unsupervised prototype extraction of motion time series data with hierarchical data representation. After introducing the algorithmic details, we apply the proposed method to a gesture data set that includes substantial inter-class variations. Results from our studies show that K-OMM-trees are able to substantially increase the recognition performance and to learn an inherent data hierarchy with meaningful gesture abstractions.
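As a rough intuition for unsupervised prototype extraction, the sketch below runs plain k-means on fixed-length sequences; the actual method replaces the Euclidean centroids with OMMs and nests the clustering inside a K-tree. Data and seeding are illustrative only.

```python
def kmeans(series, k, iters=20):
    """Plain k-means on fixed-length sequences: a stand-in for the
    prototype-extraction step of the full model-based approach."""
    centroids = series[:k]  # deterministic seed for this sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in series:
            d = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centroids]
            clusters[d.index(min(d))].append(s)
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two hypothetical gesture trajectories per class: 'rising' vs 'falling'.
data = [[0, 1, 2, 3], [0, 1, 3, 4], [3, 2, 1, 0], [4, 3, 1, 0]]
protos = kmeans(data, 2)
print(protos)  # one rising and one falling prototype
```

The hierarchical variant applies this clustering recursively, yielding coarser prototypes near the root and finer ones at the leaves.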

Relevance: 10.00%

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there will be no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before the deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighboring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighboring nodes on this path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key directly or through a path with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel, deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools.
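For the random key-pool approach described above, the probability that two nodes share at least one key follows directly from counting disjoint draws: it is one minus the chance that the second chain avoids the first entirely. A sketch with hypothetical pool and chain sizes:

```python
from math import comb

def share_probability(pool_size, chain_size):
    """Probability that two key-chains drawn at random from the pool
    share at least one key."""
    return 1 - comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)

# Hypothetical sizing: a pool of 1000 keys, 50 keys stored per node.
p = share_probability(1000, 50)
print(round(p, 3))  # better than 0.9
```

Sweeping this function over candidate pool and chain sizes is exactly the trade-off the abstract describes: larger chains raise connectivity but cost memory and resilience.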

Relevance: 10.00%

Abstract:

Secure communications between the large numbers of sensor nodes that are randomly scattered over a hostile territory necessitate efficient key distribution schemes. However, due to limited resources at sensor nodes, such schemes cannot be based on post-deployment computations. Instead, pairwise (symmetric) keys are required to be pre-distributed by assigning a list of keys (a.k.a. a key-chain) to each sensor node. If a pair of nodes does not have a common key after deployment, then they must find a key-path with secured links. The objective is to minimize the key-chain size while (i) maximizing pairwise key sharing probability and resilience, and (ii) minimizing average key-path length. This paper presents a deterministic key distribution scheme based on Expander Graphs. It shows how to map the parameters (e.g., degree, expansion, and diameter) of a Ramanujan Expander Graph to the desired properties of a key distribution scheme for a physical network topology.
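The defining property being exploited is that a d-regular Ramanujan graph has every non-trivial adjacency eigenvalue bounded in magnitude by 2√(d−1), which is what yields small diameter and hence short key-paths. The sketch below verifies that bound for the Petersen graph via power iteration; it only illustrates the spectral condition on a small example, not the paper's mapping to key distribution.

```python
import math

# Petersen graph: 3-regular, 10 vertices (outer 5-cycle, inner pentagram, spokes).
edges = (
    [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
    + [(i, i + 5) for i in range(5)]                # spokes
    + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]  # inner pentagram
)
adj = [[0] * 10 for _ in range(10)]
for a, b in edges:
    adj[a][b] = adj[b][a] = 1

def second_eigenvalue_magnitude(adj, steps=200):
    """Power iteration on the component orthogonal to the all-ones vector,
    which carries the trivial eigenvalue d of a d-regular graph."""
    n = len(adj)
    x = [math.sin(i + 1) for i in range(n)]  # deterministic start vector
    for _ in range(steps):
        m = sum(x) / n
        x = [v - m for v in x]               # project out the all-ones direction
        x = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in x))
        x = [v / norm for v in x]
    ax = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
    return abs(sum(xi * ai for xi, ai in zip(x, ax)))

lam = second_eigenvalue_magnitude(adj)
print(lam <= 2 * math.sqrt(3 - 1))  # Ramanujan bound holds (lambda = 2 here)
```

In the key distribution setting, vertices play the role of nodes and edges the role of shared keys, so a small second eigenvalue translates into high connectivity with short key-paths.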

Relevance: 10.00%

Abstract:

A complex attack is a sequence of temporally and spatially separated legal and illegal actions, each of which can be detected by various IDSs, but which as a whole constitute a powerful attack. IDSs fall short of detecting and modeling complex attacks; therefore new methods are required. This paper presents a formal methodology for modeling and detection of complex attacks in three phases: (1) we extend the basic attack tree (AT) approach to capture temporal dependencies between components and the expiration of an attack; (2) using the enhanced AT we build a tree automaton which accepts a sequence of actions from input message streams from various sources if there is a traversal of an AT from leaves to root; and (3) we show how to construct an enhanced parallel automaton that has each tree automaton as a subroutine. We use simulation to test our methods, and provide a case study of representing attacks in WLANs.
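A stripped-down version of phases (1) and (2) can be sketched as a recursive evaluation of an attack tree against a timestamped event stream, where AND-nodes additionally demand left-to-right temporal order. The tree, actions and timestamps are hypothetical, and expiration is omitted for brevity.

```python
def satisfied(node, events):
    """Evaluate a simplified enhanced attack tree against timestamped events.
    Returns the completion time of the (sub-)attack, or None if unsatisfied.
    AND-nodes also require children to complete in left-to-right order."""
    kind = node[0]
    if kind == "leaf":
        times = [t for action, t in events if action == node[1]]
        return min(times) if times else None
    done = [satisfied(child, events) for child in node[1]]
    if kind == "OR":
        hits = [t for t in done if t is not None]
        return min(hits) if hits else None
    # AND: all children satisfied, with non-decreasing completion times
    if None in done or done != sorted(done):
        return None
    return done[-1]

# Hypothetical WLAN scenario: a deauthentication flood followed by a rogue AP,
# or a direct key-cracking action.
tree = ("OR", [
    ("AND", [("leaf", "deauth"), ("leaf", "rogue_ap")]),
    ("leaf", "key_crack"),
])
print(satisfied(tree, [("deauth", 1), ("rogue_ap", 5)]))  # 5: attack completed
```

The tree automaton of the paper effectively performs this bottom-up acceptance incrementally as events arrive, rather than over a complete stream.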

Relevance: 10.00%

Abstract:

The larvae of particular Ogmograptis spp. produce distinctive scribbles on some smooth-barked Eucalyptus spp., a common feature of many ornamental and forest trees in Australia. However, although they are conspicuous in the environment, the systematics and biology of the genus have been poorly studied. This has been addressed through detailed field and laboratory studies of the biology of three species (O. racemosa Horak sp. nov., O. fraxinoides Horak sp. nov., O. scribula Meyrick), in conjunction with a comprehensive taxonomic revision supported by a molecular phylogeny utilising the mitochondrial Cox1 and nuclear 18S genes. In brief, eggs are laid in bark depressions and the first instar larvae bore into the bark to the level where the future cork cambium (the phellogen) forms. Early instar larvae bore wide, arcing tracks in this layer before forming a tighter zig-zag shaped pattern. The second last instar turns and bores either closely parallel to the initial mine or doubles its width, along the zig-zag shaped mine. The final instar possesses legs and a spinneret (unlike the earlier instars) and feeds exclusively on callus tissue which forms within the zig-zag shaped mine formed by the previous instar, before emerging from the bark to pupate at the base of the tree. The scars of these mines then become visible as scribbles following the shedding of the bark. Sequence data confirm the placement of Ogmograptis within the Bucculatricidae, suggest that the larvae responsible for the 'ghost scribbles' (unpigmented, raised scars found on smooth-barked eucalypts) are members of the genus Tritymba, and support the morphology-based species groups proposed for Ogmograptis. The formerly monotypic genus Ogmograptis Meyrick is revised and divided into three species groups. Eleven new species are described: Ogmograptis fraxinoides Horak sp. nov., Ogmograptis racemosa Horak sp. nov. and Ogmograptis pilularis Horak sp. nov.
forming the scribula group with Ogmograptis scribula Meyrick; Ogmograptis maxdayi Horak sp. nov., Ogmograptis barloworum Horak sp. nov., Ogmograptis paucidentatus Horak sp. nov., Ogmograptis rodens Horak sp. nov., Ogmograptis bignathifer Horak sp. nov. and Ogmograptis inornatus Horak sp. nov. as the maxdayi group; Ogmograptis bipunctatus Horak sp. nov., Ogmograptis pulcher Horak sp. nov., Ogmograptis triradiata (Turner) comb. nov. and Ogmograptis centrospila (Turner) comb. nov. as the triradiata group. Ogmograptis notosema (Meyrick) cannot be assigned to a species group as the holotype has not been located. Three unique synapomorphies, all derived from immatures, redefine the family Bucculatricidae, uniting Ogmograptis, Tritymba Meyrick (both Australian) and Leucoedemia Scoble & Scholtz (African) with Bucculatrix Zeller, which is the sister group of the southern hemisphere genera. The systematic history of Ogmograptis and the Bucculatricidae is discussed.

Relevance: 10.00%

Abstract:

Threats against computer networks evolve very fast and require more and more complex countermeasures. We argue that teams or groups with a common purpose for intrusion detection and prevention improve the measures against rapidly propagating attacks, similar to the concept of teams solving complex tasks known from the sociology of work. Collaboration in this sense is not an easy task, especially in heterarchical environments. We propose CIMD (collaborative intrusion and malware detection) as a security overlay framework to enable cooperative intrusion detection approaches. Objectives and associated interests are used to create detection groups for the exchange of security-related data. In this work, we contribute a tree-oriented data model for device representation in the scope of security. We introduce an algorithm for the formation of detection groups, show realization strategies for the system, and conduct a vulnerability analysis. We evaluate the benefit of CIMD by simulation and probabilistic analysis.
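The group-formation idea can be illustrated with a deliberately simplified, flat device model (the actual CIMD model is tree-oriented): devices whose attributes match a detection interest are placed in the same group so they can exchange observations. All names and attributes below are hypothetical.

```python
# Devices described by a flat attribute dictionary, a simplified stand-in
# for the tree-oriented device model.
devices = {
    "web01": {"os": "linux", "service": "http", "zone": "dmz"},
    "web02": {"os": "linux", "service": "http", "zone": "dmz"},
    "db01": {"os": "linux", "service": "sql", "zone": "internal"},
}

def form_groups(devices, interest):
    """Return the devices whose attributes match a detection interest;
    members would then exchange security-related data."""
    return sorted(
        name for name, attrs in devices.items()
        if all(attrs.get(k) == v for k, v in interest.items())
    )

# Objective: collaborate on detecting attacks against DMZ web servers.
print(form_groups(devices, {"service": "http", "zone": "dmz"}))
```

In the full framework, matching against subtrees of the device model lets interests range from very broad (all Linux hosts) to very specific (one service in one zone).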

Relevance: 10.00%

Abstract:

This paper presents a formal methodology for attack modeling and detection for networks. Our approach has three phases. First, we extend the basic attack tree approach [1] to capture (i) the temporal dependencies between components, and (ii) the expiration of an attack. Second, using the enhanced attack trees (EAT) we build a tree automaton that accepts a sequence of actions from the input stream if there is a traversal of an attack tree from leaves to the root node. Finally, we show how to construct an enhanced parallel automaton (EPA) that has each tree automaton as a subroutine and can process the input stream by considering multiple trees simultaneously. As a case study, we show how to represent the attacks in IEEE 802.11 and construct an EPA for them.

Relevance: 10.00%

Abstract:

Key distribution is one of the most challenging security issues in wireless sensor networks where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there will be no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before the deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighbouring nodes on this path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key directly or through a path with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel, deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.

Relevance: 10.00%

Abstract:

Data structures such as k-D trees and hierarchical k-means trees perform very well in approximate k-nearest-neighbour matching, but are only marginally more effective than linear search when performing exact matching in high-dimensional image descriptor data. This paper presents several improvements to linear search that allow it to outperform existing methods, and recommends two approaches to exact matching. The first method reduces the number of operations by evaluating the distance measure in order of significance of the query dimensions and terminating when the partial distance exceeds the search threshold. This method does not require preprocessing and significantly outperforms existing methods. The second method improves query speed further by presorting the data using a data structure called d-D sort. The order information is used as a priority queue to reduce the time taken to find the exact match and to restrict the range of data searched. Construction of the d-D sort structure is very simple to implement, does not require any parameter tuning, requires significantly less time than the best-performing tree structure, and allows data to be added relatively efficiently.
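The first method's early-termination idea can be sketched directly: accumulate the squared distance one dimension at a time, visiting the most significant query dimensions first, and abandon a candidate as soon as its partial distance exceeds the best complete distance so far. The descriptors and dimension ordering below are hypothetical.

```python
def partial_distance_match(query, data, order=None):
    """Linear exact nearest-neighbour search with early termination on
    the partial squared distance."""
    dims = order or range(len(query))  # visit significant dimensions first
    best, best_idx = float("inf"), -1
    for idx, point in enumerate(data):
        d = 0.0
        for j in dims:
            d += (query[j] - point[j]) ** 2
            if d >= best:
                break  # this candidate can no longer win
        else:
            best, best_idx = d, idx
    return best_idx, best

# Hypothetical 4-D descriptors; dimension 2 varies most, so test it first.
data = [(0, 0, 9, 0), (1, 0, 0, 1), (0, 1, 8, 0)]
idx, dist = partial_distance_match((1, 0, 0, 0), data, order=[2, 0, 1, 3])
print(idx, dist)
```

Because most candidates are rejected after only a few dimensions, the average cost per candidate drops well below the full dimensionality, which is the source of the reported speed-up.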