959 results for ORGANIZATION DESIGN
Abstract:
The Queensland Government released its new Environmental Offset Policy in July 2008. This policy creates a set of overarching principles that are to be incorporated into existing environmental offset policy. This article is the final article in a set of three interrelated articles discussing the operation and implementation of environmental offsets in Queensland. The first article discussed the Environmental Offsets Discussion Paper and the existing environmental offset requirements; no significant changes have been made to these existing offset requirements under the new Environmental Offset Policy. It also touched briefly on the legal issues associated with the design and implementation of environmental offset and trading frameworks. The second article considered the compatibility of different land tenure arrangements in Queensland against the requirements for the creation and trade of environmental offsets. The third article, the present one, discusses the application of the new Environmental Offset Policy while analysing the legal issues associated with environmental offsets in further detail.
Abstract:
Threats against computer networks evolve very fast and require ever more complex countermeasures. We argue that teams, or groups with a common purpose, for intrusion detection and prevention improve the measures against rapidly propagating attacks, similar to the concept of teams solving complex tasks known from the sociology of work. Collaboration in this sense is not an easy task, especially in heterarchical environments. We propose CIMD (Collaborative Intrusion and Malware Detection) as a security-overlay framework to enable cooperative intrusion detection approaches. Objectives and associated interests are used to create detection groups for the exchange of security-related data. In this work, we contribute a tree-oriented data model for device representation in the scope of security. We introduce an algorithm for the formation of detection groups, show realization strategies for the system and conduct a vulnerability analysis. We evaluate the benefit of CIMD by simulation and probabilistic analysis.
Abstract:
Reasoning with uncertain knowledge and belief has long been recognized as an important research issue in Artificial Intelligence (AI). Several methodologies have been proposed in the past, including knowledge-based systems, fuzzy sets, and probability theory. The probabilistic approach became popular mainly due to a knowledge representation framework called Bayesian networks. Bayesian networks have earned a reputation as powerful tools for modeling complex problems involving uncertain knowledge. Uncertain knowledge exists in domains such as medicine, law, geographical information systems and design, as it is difficult to retrieve all knowledge and experience from experts. In the design domain, experts believe that design style is an intangible concept and that knowledge of it is difficult to present in a formal way. The aim of this research is to find ways to represent design style knowledge in Bayesian networks. We show that these networks can be used for diagnosis (inference) and classification of design style. Furniture design style is selected as the example domain; however, the method can be applied to any other domain.
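The diagnosis step described above can be sketched with exact inference by enumeration over a tiny Bayesian network. The variables (a hidden Period style influencing observed Material and LegShape) and all probabilities below are illustrative assumptions, not values from the article:

```python
# A minimal sketch of style classification with a Bayesian network.
# Structure: Period -> Material, Period -> LegShape (assumed for illustration).
# All probability values are made-up assumptions, not from the article.

# P(Period): prior over two hypothetical furniture-design styles
p_period = {"baroque": 0.4, "modern": 0.6}

# P(Material | Period) and P(LegShape | Period): illustrative CPTs
p_material = {
    "baroque": {"oak": 0.7, "steel": 0.3},
    "modern":  {"oak": 0.2, "steel": 0.8},
}
p_legshape = {
    "baroque": {"curved": 0.8, "straight": 0.2},
    "modern":  {"curved": 0.1, "straight": 0.9},
}

def posterior_period(material, legshape):
    """P(Period | Material, LegShape) by enumeration and normalization."""
    unnorm = {
        period: p_period[period]
                * p_material[period][material]
                * p_legshape[period][legshape]
        for period in p_period
    }
    z = sum(unnorm.values())
    return {period: value / z for period, value in unnorm.items()}

# Observing an oak, curved-leg piece strongly favours the "baroque" style.
post = posterior_period("oak", "curved")
```

Real systems would use a dedicated library rather than hand-rolled enumeration, but the normalize-the-joint computation is the same idea at any scale.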
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, carries the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets in which each IDS is assigned to a region of the feature space. We investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help us reduce the attack spread rate.
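The symmetric-BIBD idea can be illustrated with the smallest symmetric design, the Fano plane, a (7, 3, 1)-design: 7 detectors, 7 nodes, 3 detectors per node, and any two nodes share exactly one detector, giving every pair of IDS nodes a common point for controlled redundancy. The concrete construction below (cyclic shifts of the difference set {1, 2, 4} mod 7) is a standard textbook example, used here purely for illustration, not the paper's actual assignment:

```python
# A minimal sketch of detector-set assignment via a symmetric BIBD.
# The Fano plane is the symmetric (7, 3, 1)-design: blocks are cyclic
# shifts of the difference set {1, 2, 4} mod 7 (illustrative example).

FANO_BASE = {1, 2, 4}  # a (7, 3, 1) difference set modulo 7

def fano_blocks():
    """Detector sets for 7 nodes: block i = {(d + i) mod 7 for d in base}."""
    return [frozenset((d + i) % 7 for d in FANO_BASE) for i in range(7)]

blocks = fano_blocks()

# Design property lambda = 1: any two distinct nodes share exactly one
# detector, so each pair has one well-defined detector to cross-check.
overlaps = [len(blocks[i] & blocks[j])
            for i in range(7) for j in range(i + 1, 7)]
```

The appeal of such deterministic constructions is that the overlap between any two nodes is guaranteed by the design parameters rather than left to chance.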
Abstract:
Key distribution is one of the most challenging security issues in wireless sensor networks, where sensor nodes are randomly scattered over a hostile territory. In such a sensor deployment scenario, there is no prior knowledge of the post-deployment configuration. For security solutions requiring pairwise keys, it is impossible to decide how to distribute key pairs to sensor nodes before deployment. Existing approaches to this problem assign more than one key, namely a key-chain, to each node. Key-chains are randomly drawn from a key-pool. Either two neighbouring nodes have a key in common in their key-chains, or there is a path between them, called a key-path, on which each pair of neighbouring nodes has a key in common. The problem in such a solution is to decide on the key-chain size and key-pool size so that every pair of nodes can establish a session key directly or through a path with high probability. The size of the key-path is the key factor for the efficiency of the design. This paper presents novel deterministic and hybrid approaches based on combinatorial design for key distribution. In particular, several block design techniques are considered for generating the key-chains and the key-pools. Comparison to probabilistic schemes shows that our combinatorial approach produces better connectivity with smaller key-chain sizes.
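For the probabilistic baseline the abstract compares against, the chance that two random key-chains of size k drawn from a pool of P keys share at least one key follows directly from counting disjoint draws: p = 1 - C(P-k, k) / C(P, k). The pool and chain sizes below are illustrative assumptions, not parameters from the paper:

```python
# A minimal sketch of the connectivity of random key predistribution:
# the probability that two nodes with k-key chains drawn uniformly
# without replacement from a P-key pool share at least one key.
# P = 1000 and k = 50 below are illustrative, not from the paper.

from math import comb

def p_share(pool_size: int, chain_size: int) -> float:
    """Probability that two random key-chains have a key in common."""
    disjoint = comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)
    return 1.0 - disjoint

# Example: a pool of 1000 keys and chains of 50 keys each give a high
# (but not certain) probability of a direct shared key.
p = p_share(1000, 50)
```

The deterministic designs in the paper trade this probabilistic guarantee for a structural one, which is why they can achieve comparable connectivity with smaller chains.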
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure-path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify their trade-offs, concluding that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pairwise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. The performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, which requires that key agreement algorithms without authentication be executed over a secure-path. The length of the secure-path impacts the power consumption and the initialization delay of a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that both are NP-Hard and MAX-SNP-Hard. Having established these inapproximability results, we focus on the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm in which each pair of nodes can establish an authenticated key by using their neighbors as witnesses.
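The secure-path notion can be sketched as shortest-path search over a graph whose edges connect nodes with intersecting key-chains; since path length drives delay and energy cost, breadth-first search finds the cheapest key-path. The key-chains below are illustrative assumptions, not the thesis's algorithm:

```python
# A minimal sketch of secure-path discovery: two nodes with no common
# key relay key establishment along a path where every hop shares a key.
# The key-chain contents are illustrative assumptions.

from collections import deque

key_chains = {          # node -> set of key identifiers
    "a": {1, 2},
    "b": {2, 3},
    "c": {3, 4},
    "d": {9},           # isolated: shares no key with anyone
}

def shortest_key_path(src, dst):
    """BFS over the 'shares a key' graph; returns a node list or None."""
    frontier, parents = deque([src]), {src: None}
    while frontier:
        node = frontier.popleft()
        if node == dst:                      # reconstruct path back to src
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for other in key_chains:
            if other not in parents and key_chains[node] & key_chains[other]:
                parents[other] = node
                frontier.append(other)
    return None

# a and c share no key, but a-b (key 2) and b-c (key 3) form a key-path.
path = shortest_key_path("a", "c")
```

In a real deployment each hop would wrap the session key under the shared link key; the sketch only shows the path-length aspect that the optimization problem targets.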
Abstract:
Good daylighting design in buildings not only provides a comfortable luminous environment, but also delivers energy savings and comfortable and healthy environments for building occupants. Yet there is still no consensus on how to assess what constitutes good daylighting design. Among current building performance guidelines, daylight factors (DF) or minimum illuminance values are the standard; however, previous research has shown the shortcomings of these metrics. Newer computer software for daylighting analysis supports more advanced, climate-based daylight metrics (CBDM). Yet these tools (new metrics and simulation tools) are not currently well understood by architects and are not used within architectural firms in Australia. A survey of architectural firms in Brisbane identified the tools most relevant to industry. The purpose of this paper is to assess and compare these computer simulation tools and the new tools available to architects and designers for daylighting. The tools are assessed in terms of their ease of use (e.g. previous knowledge required, complexity of geometry input), efficiency (e.g. speed, render capabilities) and outcomes (e.g. presentation of results). The study shows that the tools most accessible to architects are those that import a wide variety of file formats or can be integrated into current 3D modelling software or packages. These packages need to be able to calculate both point-in-time and annual analyses. There is a current need for an open-source program able to read raw data (in the form of spreadsheets) and display it graphically within a 3D medium. Plug-in based software is being developed to address this need through third-party analysis, although some of these packages are heavily reliant on their host program. Such programs allow dynamic daylighting simulation, making it easier to calculate accurate daylighting regardless of the modelling platform the designer uses, while producing more tangible analysis without the need to process raw data.
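The daylight factor metric the guidelines rely on is a simple ratio: indoor illuminance at a point expressed as a percentage of simultaneous unobstructed outdoor illuminance under an overcast sky. The illuminance values below are illustrative assumptions, not measurements from the paper:

```python
# A minimal sketch of the daylight factor (DF) metric:
#   DF = 100 * E_indoor / E_outdoor  (percent, CIE overcast sky).
# The 200 lx / 10,000 lx example values are illustrative assumptions.

def daylight_factor(indoor_lux: float, outdoor_lux: float) -> float:
    """Daylight factor in percent at a single measurement point."""
    return 100.0 * indoor_lux / outdoor_lux

# Example: 200 lx indoors under a 10,000 lx overcast sky gives DF = 2 %.
df = daylight_factor(200.0, 10_000.0)
```

Its simplicity is exactly the shortcoming the paper raises: the ratio ignores climate, orientation and time of day, which is what climate-based daylight metrics (CBDM) address.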
Abstract:
Purpose – The purpose of this paper is to provide a new type of entry mode decision-making model for construction enterprises involved in international business. Design/methodology/approach – A hybrid method combining the analytic hierarchy process (AHP) with the preference ranking organization method for enrichment evaluations (PROMETHEE) is used to aid entry mode decisions. The AHP is used to decompose the entry mode problem into several dimensions and determine the weight of each criterion. The PROMETHEE method is then used to rank candidate entry modes and carry out sensitivity analyses. Findings – The proposed decision-making method is demonstrated to be a suitable approach for resolving the entry mode selection problem. Practical implications – The research provides practitioners with a more systematic decision framework and a more precise decision method. Originality/value – The paper sheds light on the further development of entry strategies for international construction markets. It not only introduces a new decision-making model for entry mode decision making, but also provides a conceptual framework with five determinants for a construction company's entry mode selection based on the unique properties of the construction industry.
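The AHP weighting step can be sketched as deriving criterion weights from a pairwise-comparison matrix, here using the row geometric-mean approximation to the principal eigenvector. The three criteria and the Saaty-scale comparison values below are illustrative assumptions, not the paper's model:

```python
# A minimal sketch of AHP criterion weighting via row geometric means
# (an approximation to the principal-eigenvector weights). The criteria
# and comparison values are illustrative assumptions.

import math

# comparisons[i][j] = how many times more important criterion i is than j
# (Saaty 1-9 scale); hypothetical criteria: risk, cost, control.
comparisons = [
    [1.0,   3.0, 5.0],   # risk
    [1/3.0, 1.0, 2.0],   # cost
    [1/5.0, 0.5, 1.0],   # control
]

def ahp_weights(a):
    """Normalized row geometric means of a pairwise-comparison matrix."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in a]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(comparisons)  # risk dominates in this example
```

A full AHP application would also check the consistency ratio of the matrix before accepting the weights; PROMETHEE then consumes these weights to rank the entry-mode alternatives.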
Abstract:
A recent comment in the Journal of Sports Sciences (MacNamara & Collins, 2011) highlighted some major concerns with the current structure of talent identification and development (TID) programmes for Olympic athletes (e.g. Gulbin, 2008; Vaeyens, Gullich, Warr, & Philippaerts, 2009). In a cogent commentary, MacNamara and Collins (2011) provided a short review of the extant literature, which was both timely and insightful. Specifically, they criticised the ubiquitous one-dimensional, ‘physically-biased’ attempts to produce world-class performers, emphasising the need to consider a number of key environmental variables in a more multi-disciplinary perspective. They also lamented the wastage of talent, and alluded to the operational and opportunistic nature of current talent transfer programmes. A particularly compelling aspect of the comment was their allusion to high-profile athletes who had ‘failed’ performance evaluation tests and then proceeded to succeed in that sport. This identifies a problem with current protocols for evaluating performance and points to a line of research that is sorely needed in the area of talent development. To understand the nature of the talent wastage that might be occurring in high-performance programmes in sport, future empirical work should follow the career paths of ‘successful’ and ‘unsuccessful’ products of TID programmes in comparative analyses. Pertinent to the insights of MacNamara and Collins (2011), it remains clear that a number of questions have not received enough attention from sport scientists interested in talent development, including: (i) why is there so much wastage of talent in such programmes? And (ii) why are there so few reported examples of successful talent transfer programmes? These questions highlight critical areas for future investigation. The aim of this short correspondence is to discuss these and other issues researchers and practitioners might consider, and to propose how an ecological dynamics underpinning to such investigations may help the development of existing protocols...
Abstract:
Russell, Benton and Kingsley (2010) recently suggested a new association football test comprising three different tasks for the evaluation of players' passing, dribbling and shooting skills. Their stated intention was to enhance the ‘ecological validity’ of current association football skills tests, allowing generalisation of results from the new protocols to performance constraints that were ‘representative’ of experiences during competitive game situations. However, in this comment we raise some concerns with their use of the term ‘ecological validity’ to allude to aspects of ‘representative task design’. We propose that in their paper the authors conflated understanding of environmental properties, performance achievement and generalisability of the test and its outcomes. Here, we argue that the tests designed by Russell and colleagues did not include critical sources of environmental information, such as the active role of opponents, which players typically use to organise their actions during performance. Static tasks that are not representative of the competitive performance environment may lead to different emergent patterns of movement organisation and performance outcomes, failing to effectively evaluate skill performance in sport.
Abstract:
As a result of growing evidence regarding the effects of environmental characteristics on the health and wellbeing of people in healthcare facilities (HCFs), more emphasis is being placed on, and more attention paid to, the consequences of design choices in HCFs. We have therefore critically reviewed the implications of key indoor physical design parameters in relation to their potential impact on human health and wellbeing, and discussed these findings within the context of the relevant guidelines and standards for the design of HCFs. A total of 810 abstracts meeting the inclusion criteria were identified through a PubMed search, covering journal articles, guidelines, books, reports and monographs in the studied area. Of these, 231 full publications were selected for this review. According to the literature, the most beneficial design elements were: single-bed patient rooms, safe and easily cleaned surface materials, sound-absorbing ceiling tiles, adequate and sufficient ventilation, thermal comfort, natural daylight, control over temperature and lighting, views, exposure and access to nature, and appropriate equipment, tools and furniture. The effects of some design elements, such as lighting (e.g. artificial lighting levels) and layout (e.g. decentralized versus centralized nurses’ stations), on staff and patients vary, and “the best design practice” for each HCF should always be formulated in co-operation with different user groups and a multi-professional design team. The relevant guidelines and standards should also be considered in future design, construction and renovations, in order to produce more favourable physical indoor environments in HCFs.
Abstract:
This paper characterises nitrogen and phosphorus wash-off processes on urban road surfaces to create fundamental knowledge to strengthen stormwater treatment design. The study outcomes confirmed that the composition of initially available nutrients, in terms of their physical association with solids and their chemical speciation, determines the wash-off characteristics. Nitrogen and phosphorus wash-off processes are independent of land use, but there are notable differences between them: nitrogen wash-off is a “source limiting” process, while phosphorus wash-off is “transport limiting”. Additionally, a clear separation between nitrogen and phosphorus wash-off processes based on dissolved and particulate forms confirmed that the common approach of replicating nutrient wash-off based on solids wash-off could lead to misleading outcomes, particularly in the case of nitrogen. Nitrogen is present primarily in dissolved and organic forms and is readily removed even by low-intensity rainfall events, which is an important consideration for nitrogen-removal-targeted treatment design. In the case of phosphorus, phosphate constitutes the primary species in wash-off for the particle size fraction <75 µm, while other species are predominant in the particle size range >75 µm. This means that phosphorus-removal-targeted treatment design should consider both phosphorus speciation and particle size.
Abstract:
Compression ignition (CI) engine design is subject to many constraints, which presents a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life cycle greenhouse gas emissions so that its impact on urban air quality, human health and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of (1) an ethanol fumigation system, (2) alternative fuels (20 % biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection), and (3) various biodiesel fuels made from three feedstocks (soy, tallow and canola) tested at several blend percentages (20-100 %) on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20 % by energy) at moderate load, high-percentage soy blends (60-100 %), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most “preferred” solutions to this multi-criteria engine design problem. Further research is, however, required to reduce Reactive Oxygen Species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
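The PROMETHEE ranking step used in such a multi-criteria analysis can be sketched as pairwise comparisons per criterion with the "usual" preference function (1 if strictly better, else 0), aggregated with criterion weights and ranked by net outranking flow. The fuels, criteria, weights and performance numbers below are illustrative assumptions, not the study's data:

```python
# A minimal sketch of PROMETHEE II net-flow ranking with the "usual"
# preference function. Alternatives, criteria and numbers are
# illustrative assumptions, not results from the study.

alternatives = {             # criterion order: (efficiency, PM, NOx)
    "diesel":    (0.42, 0.9, 0.9),
    "biodiesel": (0.41, 0.3, 0.8),
    "ethanol":   (0.40, 0.4, 0.7),
}
weights = (0.5, 0.3, 0.2)        # criterion weights (sum to 1)
maximize = (True, False, False)  # maximise efficiency, minimise emissions

def preference(a, b):
    """Weighted 'usual' preference degree pi(a, b): 1 per criterion won."""
    return sum(w for va, vb, w, up in zip(a, b, weights, maximize)
               if va != vb and (va > vb) == up)

def net_flows(alts):
    """Net outranking flow phi(a) = phi_plus(a) - phi_minus(a)."""
    names = list(alts)
    n = len(names)
    return {x: sum(preference(alts[x], alts[y]) - preference(alts[y], alts[x])
                   for y in names if y != x) / (n - 1)
            for x in names}

flows = net_flows(alternatives)  # rank alternatives by descending flow
```

GAIA then projects these per-criterion flows into a plane for visual inspection; the net flow alone already yields the complete PROMETHEE II ranking.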