987 results for Graph Design
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed Artificial Immune System (AIS) based IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm to select detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but this may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, carries the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets in which each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan expander graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how the properties of these deterministic techniques help reduce the attack spread rate.
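The SIR model mentioned above can be sketched as a minimal discrete-time simulation; the function name and parameter values below are illustrative assumptions, not taken from the paper. Here `beta` stands in for the attack spread rate between peers and `gamma` for the detection/recovery rate.

```python
def sir(n_nodes, beta, gamma, i0, steps):
    """Discrete-time SIR sketch: returns the (S, I, R) trajectory
    for a population of n_nodes with i0 initially infected."""
    s, i, r = n_nodes - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        new_infections = beta * s * i / n_nodes   # mass-action contact term
        new_recoveries = gamma * i                # constant recovery rate
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history
```

In this framing, a detector-exchange scheme that raises the effective detection rate (larger `gamma`) or lowers peer-to-peer infectivity (smaller `beta`) flattens the infected curve.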
Abstract:
We consider the problem of how to maximize the secure connectivity of multi-hop wireless ad hoc networks after deployment. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first is based on establishing a secret key using only the links that are already secured by secret keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second is based on increasing the power level between a pair of nodes that share a secret key to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We show that both problems are NP-hard and MAX-SNP-hard (i.e., it is NP-hard to approximate them within a factor of 1 + ε for any ε > 0) via a reduction from the MAX3SAT problem. Thus, we design and implement a fully distributed algorithm for authenticated key establishment in wireless sensor networks where each sensor knows only its one-hop neighborhood. Our witness-based approaches find witnesses in the multi-hop neighborhood to authenticate the key establishment between two sensor nodes which do not share a key and which are not connected through a secure path.
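The witness idea above can be sketched as follows; the function names and callback signatures are hypothetical illustrations, not the paper's API. A witness is any node in the multi-hop neighborhood that already shares a key with both endpoints and can therefore vouch for the new key.

```python
def find_witnesses(u, v, shares_key, neighbourhood):
    """Return nodes that can authenticate a key established between u and v.

    shares_key(a, b) -> bool: whether a and b already share a secret key.
    neighbourhood(x)  -> iterable of nodes in x's multi-hop neighbourhood.
    """
    candidates = set(neighbourhood(u)) & set(neighbourhood(v))
    return [w for w in candidates
            if w not in (u, v) and shares_key(u, w) and shares_key(v, w)]
```

A sketch only: the real protocol is distributed, so each sensor would evaluate these predicates from local key-chain and routing state rather than global lookups.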
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path, called a key-path, between these two nodes where each pair of neighbouring nodes on the path has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key directly or through a path with high probability. The length of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on Combinatorial Design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.
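One classical block design used for such deterministic key-chains is the symmetric design given by a projective plane of order q (for prime q): a pool of q² + q + 1 keys yields q² + q + 1 key-chains of size q + 1, and any two chains share exactly one key, so any two nodes can always establish a session key directly. The construction below is a standard one and is offered as an illustration; the abstract does not specify which block designs the paper uses.

```python
def projective_plane_chains(q):
    """Key-chains from the projective plane PG(2, q), q prime.

    Keys 0 .. q*q-1 are affine points (x, y) -> x*q + y; keys
    q*q .. q*q+q are the q+1 points at infinity (one per slope,
    plus one for vertical lines).
    """
    chains = []
    for m in range(q):                       # lines y = m*x + c (mod q)
        for c in range(q):
            chains.append({x * q + (m * x + c) % q for x in range(q)}
                          | {q * q + m})
    for c in range(q):                       # vertical lines x = c
        chains.append({c * q + y for y in range(q)} | {q * q + q})
    chains.append({q * q + i for i in range(q + 1)})   # line at infinity
    return chains
```

With q = 3, for instance, 13 keys produce 13 chains of 4 keys each, every pair of chains intersecting in exactly one key.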
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among current building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. New computer software for daylighting analysis contains more advanced daylighting metrics (Climate-Based Daylight Metrics, CBDM). Yet these tools (both the new metrics and the simulation tools) are not currently understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most used by industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file formats or can be integrated into current 3D modelling software or packages. Such software needs to be able to calculate both point-in-time simulations and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Development of plug-in based software is attempting to meet this need through third-party analysis; however, some of these packages are heavily reliant on their host program.
Programs that allow dynamic daylighting simulation will make it easier to calculate accurate daylighting regardless of which modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
Abstract:
A recent comment in the Journal of Sports Sciences (MacNamara & Collins, 2011) highlighted some major concerns with the current structure of talent identification and development (TID) programmes of Olympic athletes (e.g. Gulbin, 2008; Vaeyens, Gullich, Warr, & Philippaerts, 2009). In a cogent commentary, MacNamara and Collins (2011) provided a short review of the extant literature, which was both timely and insightful. Specifically, they criticised the ubiquitous one-dimensional ‘physically-biased’ attempts to produce world class performers, emphasising the need to consider a number of key environmental variables in a more multi-disciplinary perspective. They also lamented the wastage of talent, and alluded to the operational and opportunistic nature of current talent transfer programmes. A particularly compelling aspect of the comment was their allusion to high profile athletes who had ‘failed’ performance evaluation tests and then proceeded to succeed in that sport. This issue identifies a problem with current protocols for evaluating performance and is a line of research that is sorely needed in the area of talent development. To understand the nature of talent wastage that might be occurring in high performance programmes in sport, future empirical work should seek to follow the career paths of ‘successful’ and ‘unsuccessful’ products of TID programmes, in comparative analyses. Pertinent to the insights of MacNamara and Collins (2011), it remains clear that a number of questions have not received enough attention from sport scientists interested in talent development, including: (i) why is there so much wastage of talent in such programmes? And (ii), why are there so few reported examples of successful talent transfer programmes? These questions highlight critical areas for future investigation. 
The aim of this short correspondence is to discuss these and other issues researchers and practitioners might consider, and to propose how an ecological dynamics underpinning to such investigations may help the development of existing protocols...
Abstract:
Russell, Benton and Kingsley (2010) recently suggested a new association football test comprising three different tasks for the evaluation of players' passing, dribbling and shooting skills. Their stated intention was to enhance ‘ecological validity’ of current association football skills tests allowing generalisation of results from the new protocols to performance constraints that were ‘representative’ of experiences during competitive game situations. However, in this comment we raise some concerns with their use of the term ‘ecological validity’ to allude to aspects of ‘representative task design’. We propose that in their paper the authors confused understanding of environmental properties, performance achievement and generalisability of the test and its outcomes. Here, we argue that the tests designed by Russell and colleagues did not include critical sources of environmental information, such as the active role of opponents, which players typically use to organise their actions during performance. Static tasks which are not representative of the competitive performance environment may lead to different emerging patterns of movement organisation and performance outcomes, failing to effectively evaluate skills performance in sport.
Abstract:
As a result of growing evidence regarding the effects of environmental characteristics on the health and wellbeing of people in healthcare facilities (HCFs), more emphasis is being placed on, and more attention being paid to, the consequences of design choices in HCFs. Therefore, we have critically reviewed the implications of key indoor physical design parameters, in relation to their potential impact on human health and wellbeing. In addition, we discussed these findings within the context of the relevant guidelines and standards for the design of HCFs. A total of 810 abstracts, which met the inclusion criteria, were identified through a Pubmed search, and these covered journal articles, guidelines, books, reports and monographs in the studied area. Of these, 231 full publications were selected for this review. According to the literature, the most beneficial design elements were: single-bed patient rooms, safe and easily cleaned surface materials, sound-absorbing ceiling tiles, adequate and sufficient ventilation, thermal comfort, natural daylight, control over temperature and lighting, views, exposure and access to nature, and appropriate equipment, tools and furniture. The effects of some design elements, such as lighting (e.g. artificial lighting levels) and layout (e.g. decentralized versus centralized nurses’ stations), on staff and patients vary, and “the best design practice” for each HCF should always be formulated in co-operation with different user groups and a multi-professional design team. The relevant guidelines and standards should also be considered in future design, construction and renovations, in order to produce more favourable physical indoor environments in HCFs.
Abstract:
This paper characterises nitrogen and phosphorus wash-off processes on urban road surfaces to create fundamental knowledge to strengthen stormwater treatment design. The study outcomes confirmed that the composition of initially available nutrients, in terms of their physical association with solids and chemical speciation, determines the wash-off characteristics. Nitrogen and phosphorus wash-off processes are independent of land use, but there are notable differences between them. Nitrogen wash-off is a "source limiting" process, while phosphorus wash-off is "transport limiting". Additionally, a clear separation between nitrogen and phosphorus wash-off processes based on dissolved and particulate forms confirmed that the common approach of replicating nutrient wash-off based on solids wash-off could lead to misleading outcomes, particularly in the case of nitrogen. Nitrogen is present primarily in dissolved and organic form and is readily removed even by low-intensity rainfall events, which is an important consideration for nitrogen-removal-targeted treatment design. In the case of phosphorus, phosphate constitutes the primary species in wash-off for the particle size fraction <75 µm, while other species are predominant in the particle size range >75 µm. This means that phosphorus-removal-targeted treatment design should consider both phosphorus speciation and particle size.
Abstract:
Traffic congestion has a significant impact on the economy and environment. Encouraging the use of multimodal transport (public transport, bicycle, park'n'ride, etc.) has been identified by traffic operators as a good strategy to tackle congestion and its detrimental environmental impacts. A multi-modal and multi-objective trip planner provides users with various multi-modal options optimised on the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both a temporal and spatial scale. The computation of multi-modal and multi-objective trips is a complicated mathematical problem, as it must integrate and utilise a diverse range of large data sets, including both road network information and public transport schedules, as well as optimising for a number of competing objectives, where fully optimising for one objective, such as travel time, can adversely affect other objectives, such as cost. The relationship between these objectives can also be quite subjective, as their priorities will vary from user to user. This paper will first outline the various data requirements and formats needed for the multi-modal multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data to predict traffic flow on the road network and the status of public transport. It will then present information on the graph data structures representing the road and public transport networks within Brisbane that are used in the trip planner to calculate optimal routes. This will allow for an investigation into the various shortest path algorithms that have been researched over the last few decades, and provide a foundation for the construction of the Multi-modal Multi-objective Trip Planner through the development of innovative new algorithms that can operate on the large, diverse data sets and competing objectives.
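Multi-objective routing of the kind described above is commonly handled by keeping a set of Pareto-optimal labels per node instead of a single distance. A minimal sketch, under the assumption of two additive objectives (time and cost) on a simple adjacency-list graph; this is a generic label-correcting approach, not the paper's specific algorithm.

```python
import heapq

def pareto_paths(graph, source, target):
    """graph[u] -> list of (v, time, cost) edges. Returns the sorted
    Pareto-optimal (time, cost) labels at target: no returned label
    is dominated (worse or equal in both objectives) by another."""
    labels = {source: [(0, 0)]}
    heap = [(0, 0, source)]
    while heap:
        t, c, u = heapq.heappop(heap)
        if (t, c) not in labels.get(u, []):
            continue                      # label was pruned after being pushed
        for v, dt, dc in graph.get(u, []):
            cand = (t + dt, c + dc)
            existing = labels.setdefault(v, [])
            if any(et <= cand[0] and ec <= cand[1] for et, ec in existing):
                continue                  # dominated by a known label
            existing[:] = [(et, ec) for et, ec in existing
                           if not (cand[0] <= et and cand[1] <= ec)]
            existing.append(cand)         # keep only non-dominated labels
            heapq.heappush(heap, (cand[0], cand[1], v))
    return sorted(labels.get(target, []))
```

The returned Pareto frontier is exactly the set of trade-off options (fastest, cheapest, and everything non-dominated in between) a trip planner would offer the user.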
Abstract:
Graphene promises many novel applications in nanoscale electronics and sustainable energy owing to its distinctive electronic properties. Computational exploration of electronic functionality, and of how it varies with architecture and doping, presently runs ahead of experimental synthesis, yet it provides insights into the types of structures that may prove profitable for targeted experimental synthesis and characterization. We present here a summary of our understanding of the important aspects of dimension, band gap, defect, and interfacial engineering of graphene based on state-of-the-art ab initio approaches. Some of the most recent experimental achievements relevant to future theoretical exploration are also covered.
Abstract:
These notes were compiled from several authorities to be used for teaching and learning purposes here at QUT, with the focus on first and second year landscape architecture design studio units.