295 results for Inside-Outside Algorithm
Abstract:
Alcohol-related harms are disproportionately represented in licensed premises. This study aimed to investigate the practices and perceived capabilities of a group of police officers who engage in policing activities in and around licensed premises in a capital-city policing district in an Australian jurisdiction. Analysis of the self-reported data revealed that the 254 participants were much more likely to attend alcohol-related incidents outside rather than inside licensed premises. Policing an alcohol-related event on licensed premises was perceived as the most difficult task compared with other forms of police activity, which was mirrored by low levels of perceived knowledge of effective intervention strategies for incidents inside licensed premises. The findings have direct implications for police training, particularly for increasing officers' perceived knowledge and skill in dealing with incidents inside licensed premises.
Abstract:
Recently, Software as a Service (SaaS) in Cloud computing has become increasingly significant among software users and providers. To offer a SaaS with flexible functions at low cost, SaaS providers have focused on decomposing SaaS functionality, an approach known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of these challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components. In a composite SaaS it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for clustering the components of multiple composite SaaS applications in Cloud computing that addresses this gap. To the best of our knowledge, this is the first attempt to handle placement reconfiguration of multiple composite SaaS in a dynamic Cloud environment. The experimental results demonstrate the feasibility and scalability of the GGA.
Abstract:
A composite SaaS (Software as a Service) is software comprised of several software components and data components. The composite SaaS placement problem is to determine where each component should be deployed in a cloud computing environment such that the performance of the composite SaaS is optimal. From a computational point of view, the composite SaaS placement problem is a large-scale combinatorial optimization problem, and an Iterative Cooperative Co-evolutionary Genetic Algorithm (ICCGA) was previously proposed for it. The ICCGA can find solutions of reasonable quality, but its computation time is noticeably slow. Aiming to improve the computation time, this paper proposes an unsynchronized Parallel Cooperative Co-evolutionary Genetic Algorithm (PCCGA). Experimental results have shown that the PCCGA not only runs faster but also generates better-quality solutions than the ICCGA.
Abstract:
A numerical investigation of natural convection within a differentially heated modified square enclosure with sinusoidally corrugated side walls has been performed for different values of the Rayleigh number. The fluid inside the enclosure is air, initially quiescent. The top and bottom surfaces are flat and considered adiabatic. Results reveal three main stages in the development of natural convection flow inside the corrugated cavity: an initial stage, a transitory or oscillatory stage and a steady stage. The numerical scheme is based on the finite element method on a non-uniform triangular mesh with a non-linear parametric solution algorithm. The investigation covers Rayleigh numbers Ra ranging from 10⁵ to 10⁸ with variation of corrugation amplitude and frequency. Constant physical properties of the fluid medium have been assumed. Results are presented in terms of isotherms, streamlines, temperature plots, average Nusselt numbers, traveling waves, thermal boundary layer thickness plots, and temperature and velocity profiles. The effects of sudden differential heating and the consequent transient behavior on fluid flow and heat transfer characteristics have been observed over the range of governing parameters. The present results show that the transient phenomena are strongly influenced by the variation of the Rayleigh number with corrugation amplitude and frequency.
Abstract:
The 2010 LAGI competition was held on three underutilized sites in the United Arab Emirates. By choosing Staten Island, New York in 2012, the competition organisers have again brought into question new roles for public open space in the contemporary city. In the case of the UAE sites, the competition produced many entries that aimed to create a sculpture and, by doing so, attract people to the selected empty spaces in an arid climate. In a way these proposals were the incubators and new characters of these empty spaces. The competition was thus successful at advancing understanding of the expanded role of public open spaces in the UAE and elsewhere. LAGI 2012 differs significantly from the UAE program because Fresh Kills Park has already been planned as a public open space for New Yorkers - with or without these clean energy sculptures. Furthermore, Fresh Kills Park is already a (gas) energy-generating site in its own right. We believe Fresh Kills Park, as a site, presents a problem which somewhat transcends the aims of the competition brief. Advancing a sustainable urban design proposition for the site therefore requires a fundamental reconsideration of the established paradigms of public open space. Hence our strategy is not only to create an energy-generating, site-specific artwork, but to create synergy between public engagement and the site, while at the same time complementing the idiosyncrasies of the pre-existing engineered landscape. Current PhD research on energy generation in public open spaces informs this work.
Abstract:
Software as a Service (SaaS) in the Cloud has recently become increasingly significant among software users and providers. A SaaS delivered as a composite application has many benefits, including reduced delivery costs, flexible offerings of SaaS functions and decreased subscription costs for users. However, this approach introduces a new problem in managing the resources allocated to the composite SaaS. The resource allocation made at the initial stage may become overloaded or wasted due to the dynamic environment of a Cloud. A typical data center resource manager triggers a placement reconfiguration for the SaaS in order to maintain its performance as well as to minimize the resources used. Existing approaches to this problem often ignore the underlying dependencies between SaaS components. In addition, the reconfiguration has to comply with SaaS constraints in terms of resource requirements, placement requirements and SLAs. To tackle the problem, this paper proposes a penalty-based Grouping Genetic Algorithm for clustering the components of multiple composite SaaS in the Cloud. The main objective is to minimize the resources used by the SaaS by clustering its components without violating any constraint. Experimental results demonstrate the feasibility and scalability of the proposed algorithm.
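The abstract does not give the GGA's encoding or penalty weights, so the sketch below illustrates only the penalty idea under assumed numbers: a chromosome assigns each component a cluster id, and fitness counts the clusters used plus a heavy penalty per violated capacity constraint. A simple mutation-based descent stands in for the full grouping crossover of a real GGA.

```python
import random

# Hypothetical component resource demands and a per-cluster capacity
# (illustrative numbers, not taken from the paper).
demands = [4, 3, 3, 2, 2, 2, 1, 1]
CAPACITY = 8
PENALTY = 100  # heavy weight so any infeasible clustering always loses

def fitness(groups):
    """Number of clusters used, plus a penalty per capacity violation.
    groups[i] is the cluster id assigned to component i."""
    loads = {}
    for demand, g in zip(demands, groups):
        loads[g] = loads.get(g, 0) + demand
    violations = sum(1 for load in loads.values() if load > CAPACITY)
    return len(loads) + PENALTY * violations

def mutate(groups, rng):
    """Move one component to a random (possibly new) cluster."""
    child = groups[:]
    i = rng.randrange(len(child))
    child[i] = rng.randrange(len(child))
    return child

# Mutation-based descent from the trivial one-component-per-cluster
# solution; a real GGA would evolve a population with group crossover.
rng = random.Random(42)
best = list(range(len(demands)))
for _ in range(5000):
    candidate = mutate(best, rng)
    if fitness(candidate) <= fitness(best):
        best = candidate

print(fitness(best))  # clusters used; final solution is feasible, so no penalty term
```

Because the penalty term dominates the cluster count, any feasible clustering always beats any infeasible one, which is the standard way such constraints are folded into a single objective.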
Abstract:
Improving energy efficiency has become increasingly important in data centers in recent years, to curb their rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristics such as the First Fit and Best Fit algorithms have often been used, with generally good results. However, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims at further improvement from any feasible placement. This is the first published attempt to use SA to solve the VM placement problem for power optimization. Experimental results show that this SA algorithm can generate better results, saving up to 25 percent more energy than First Fit Decreasing in an acceptable time frame.
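The abstract gives no details of the SA formulation, so the following is a minimal sketch under illustrative assumptions: a single one-dimensional resource per server, a toy linear power model, and a neighbourhood move that relocates one VM. First Fit Decreasing is included as the baseline the paper compares against; all demands and coefficients are invented for the example.

```python
import math
import random

def first_fit_decreasing(vms, capacity):
    """Baseline: sort VM demands in descending order and place each
    into the first server with enough remaining capacity."""
    servers = []
    for demand in sorted(vms, reverse=True):
        for server in servers:
            if sum(server) + demand <= capacity:
                server.append(demand)
                break
        else:
            servers.append([demand])
    return servers

def power(assignment, vms):
    """Toy power model: every active server draws a fixed idle cost
    plus a load-proportional cost (illustrative coefficients)."""
    loads = {}
    for demand, server in zip(vms, assignment):
        loads[server] = loads.get(server, 0) + demand
    return sum(100.0 + 1.5 * load for load in loads.values())

def anneal(vms, capacity, steps=20000, t0=50.0, seed=1):
    """Refine a feasible placement: repeatedly move one VM to another
    server, accepting worse placements with a temperature-dependent
    probability so the search can escape local minima."""
    rng = random.Random(seed)
    assignment = list(range(len(vms)))  # start: one VM per server
    current = best = power(assignment, vms)
    best_assignment = assignment[:]
    for step in range(steps):
        temperature = t0 * (1 - step / steps) + 1e-9
        i = rng.randrange(len(vms))
        old, new = assignment[i], rng.randrange(len(vms))
        load_new = sum(d for d, s in zip(vms, assignment) if s == new)
        if new == old or load_new + vms[i] > capacity:
            continue  # null move, or target server would overflow
        assignment[i] = new
        candidate = power(assignment, vms)
        if candidate <= current or rng.random() < math.exp((current - candidate) / temperature):
            current = candidate
            if current < best:
                best, best_assignment = current, assignment[:]
        else:
            assignment[i] = old  # reject and undo the move
    return best, best_assignment

vms = [6, 5, 4, 4, 3, 2, 2, 1]
ffd = first_fit_decreasing(vms, capacity=10)
watts, placement = anneal(vms, capacity=10)
print(len(ffd), "servers via FFD;", len(set(placement)), "active servers via SA")
```

Because the total load is fixed, minimising this power model is equivalent to minimising the number of active servers, which is how consolidation reduces to bin packing.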
Abstract:
Server consolidation using virtualization technology has become an important way to improve the energy efficiency of data centers, and virtual machine placement is the key step in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center, not the energy consumed by its communication network. The energy consumption of the communication network in a data center is not trivial, and should therefore also be considered in virtual machine placement in order to make the data center more energy-efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the servers and the communication network in the data center. Experimental results show that the genetic algorithm performs well on test problems of different kinds and scales up well as the problem size increases.
Abstract:
Potential adverse effects on children's health may result from exposure to airborne particles at school. To address this issue, particle number concentrations, particle size distributions and black carbon (BC) concentrations were measured in three school buildings in Cassino (Italy) and its suburbs, outside and inside the classrooms during normal occupancy and use. Additional time-resolved information was gathered on ventilation conditions and classroom activity, and traffic count data around the schools were obtained using a video camera. Across the three school buildings investigated, the outdoor and indoor particle number concentrations, monitored from 4 nm up to 3 μm, ranged from 2.8×10⁴ to 4.7×10⁴ particles cm⁻³ and from 2.0×10⁴ to 3.5×10⁴ particles cm⁻³, respectively. Total particle concentrations were usually higher outdoors than indoors, because no indoor sources were detected. The measured I/O ratio was less than 1 (varying in the relatively narrow range from 0.63 to 0.74); however, one school exhibited indoor concentrations higher than outdoor concentrations during the morning rush hours. The particle size distribution at the outdoor site showed high particle concentrations in different size ranges, varying during the day; two modes were found in relation to the start and end of the school day. BC concentrations were 5 times higher at the urban school than at the suburban ones, and the urban-to-suburban differences were larger than the corresponding differences in ultrafine particle concentrations.
Abstract:
Epidemiological research has consistently shown an association between fine and ultrafine particle concentrations and increases in both respiratory and cardiovascular morbidity and mortality. These particles, often found in vehicle emissions outside buildings, can penetrate indoors through the building envelope and mechanical ventilation systems. Indoor activities such as printing, cooking and cleaning, as well as the movement of building occupants, are an additional source of these particles. In this context, the filtration systems of mechanically ventilated buildings can reduce indoor particle concentrations. Several studies have quantified the efficiency of dry-media and electrostatic filters, but they mainly focused on particles larger than 300 nm. Some others studied ultrafine particles, but their investigations were conducted in laboratories. At this point, there is still only limited information on in situ filter efficiency and an incomplete understanding of the influence of filtration on indoor-to-outdoor (I/O) ratios of particle concentrations. To help address these gaps in knowledge and provide new information for the selection of appropriate filter types in office building HVAC systems, we aimed to: (1) measure particle concentrations upstream and downstream of filter devices, as well as outdoors and indoors at office buildings; (2) quantify the efficiency of different filter types at different buildings; and (3) assess the impact of these filters on I/O ratios under different indoor and outdoor source operation scenarios.
Abstract:
A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency Condition Monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four data sets. One was extracted from the condition monitoring signal of a practical industry application; another was acquired from a low-speed machine test rig in the laboratory; the other two were computer-simulated bearing defect signals containing either a single or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even at a large down-sample ratio (i.e., 500 times down-sampled). In contrast, a conventional down-sampling technique from signal processing eliminates useful and critical information, such as bearing defect frequencies, when the same down-sample ratio is employed; it also induces noise and artificial frequency components, which limits its usefulness for machine condition monitoring applications.
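The paper's exact PHDS procedure is not given in the abstract; the sketch below captures the peak-hold idea suggested by the name: down-sample by keeping the largest-magnitude sample in each block, so that short defect impulses survive even a 500-fold reduction. The signal and all numbers are illustrative.

```python
import math

def peak_hold_downsample(signal, ratio):
    """Down-sample by `ratio`, keeping the largest-magnitude sample in
    each block of `ratio` samples, so that short, sharp defect impulses
    survive the reduction."""
    return [max(signal[i:i + ratio], key=abs)
            for i in range(0, len(signal) - ratio + 1, ratio)]

# One second of a slow 5 Hz carrier sampled at 50 kHz, with ten sharp
# impulses standing in for bearing defect impacts.
fs = 50_000
x = [0.1 * math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]
for n in range(250, fs, 5_000):
    x[n] += 5.0

y = peak_hold_downsample(x, 500)  # 500x reduction, as in the paper
print(len(y), sum(1 for v in y if v > 1))  # 100 samples, all 10 impulses kept
```

Plain decimation of the same signal (`x[::500]`) samples between the impulses and loses them entirely, which mirrors the comparison with conventional down-sampling reported above.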
Abstract:
In Australia, sentencing researchers have generally focussed on whether there is statistical equality or inequality in outcomes by reference to Indigenous status. However, contextualising the sentencing process requires us to move away from a reliance on statistical analyses alone, as this approach cannot tell us whether sentencing is an equitable process for Indigenous people. Consultation with those working at the sentencing ‘coal face’ provides valuable insight into the nexus between Indigenous status and sentencing. This article reports the main themes from surveys of the judiciary and prosecutors, and from focus groups of Community Justice Groups, undertaken in Queensland. The aim is to better understand the sentencing process for Indigenous Queenslanders. Results suggest that while there have been some positive developments in sentencing (e.g. the Murri Court, Community Justice Groups), Indigenous offenders still face a number of inequities.
Abstract:
Airports and cities inevitably recognise the value that each brings to the other; however, the separation in decision-making authority over what to build, where, when and how presents a conundrum for both parties. Airports often want a say in what is developed outside the airport fence, and cities often want a say in what is developed inside it. Defining how much of a say airports and cities have in decisions beyond their jurisdictional control is likely to remain a live topic so long as airports and cities maintain separate formal decision-making processes for what to build, where, when and how. However, the recent Green and White Papers for a new National Aviation Policy have made early inroads into formalising relationships between Australia’s major airports and their host cities. At present, there is no clear indication (within practice or literature) of the appropriateness of different governance arrangements for development decisions that bring together the opposing strategic interests of airports and cities, leaving decisions on infrastructure development as complex decision-making spaces that hold airport and city/regional interests at stake. The line of enquiry is motivated by a lack of empirical research on networked decision-making domains outside the realm of institutional theorists (Agranoff & McGuire, 2001; Provan, Fish & Sydow, 2007). That is, the governance literature has remained focused on abstract conceptualisations of organisation, without attending to the minutiae of how organisation influences action in real-world applications. A recent study by Black (2008) has provided an initial foothold for governance researchers into networked decision-making domains. This study builds upon Black’s (2008) work by aiming to explore and understand the problem space of making decisions subject to complex jurisdictional and relational interdependencies.
That is, the research examines the formal and informal structures, relationships and forums that operationalise debates and interactions between decision-making actors as they vie for influence over deciding what to build, where, when and how in airport-proximal development projects. The research mobilises a mixture of qualitative and quantitative methods to examine three embedded cases of airport-proximal development from a network governance perspective. Findings from the research provide a new understanding of the ways in which informal actor networks underpin and combine with formal decision-making networks to create new (or realigned) governance spaces that facilitate decision-making during complex phases of development planning. The research is timely, and responds to Isett, Mergel, LeRoux, Mischen and Rethemeyer’s (2011) recent critique of limitations within the current network governance literature, specifically their noted absence of empirical studies that acknowledge and interrogate the simultaneity of formal and informal network structures within network governance arrangements (Isett et al., 2011, pp. 162-166). The combination of social network analysis (SNA) techniques and thematic enquiry has enabled the findings to document and interpret the ways in which decision-making actors organise to overcome complex problems in planning infrastructure. An innovative use of association networks provides insights into the importance of the different ways actors interact with one another, a simple yet valuable addition to the increasingly popular discipline of SNA. The research also identifies when and how different types of networks (i.e. formal and informal) are able to overcome currently known limitations of network governance (see McGuire & Agranoff, 2011), adding depth to the emerging body of network governance literature on the limitations of network ways of working (i.e. Rhodes, 1997a; Keast & Brown, 2002; Rethemeyer & Hatmaker, 2008; McGuire & Agranoff, 2011). Contributions are made to practice via the provision of a timely understanding of how horizontal fora between airports and their regions are used, particularly in the context of how they reframe the governance of decision-making for airport-proximal infrastructure development. This new understanding will enable government and industry actors to better understand the structural impacts of governance arrangements before they design or adopt them, particularly with respect to factors such as efficiency of information, oversight, and responsiveness to change.
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between the two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, the algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing a significant proportion of invalid matches. The accuracy of matching in the vicinity of edges is also improved.
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing most invalid matches. The accuracy of matching in the vicinity of edges is also improved.
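The abstracts above do not detail the edge-based stage or the rank constraint, so the sketch below shows only the underlying rank transform and a basic area-based match over the rank images; image sizes, window radius and disparity range are chosen purely for illustration.

```python
import random

def rank_transform(img, radius=1):
    """Replace each pixel by the number of neighbours in a square
    window whose intensity is below the centre pixel's, making the
    matching cost robust to brightness and contrast differences
    between the two cameras."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            centre = img[y][x]
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                        out[y][x] += img[ny][nx] < centre
    return out

# Synthetic stereo pair: the right image is the left shifted 3 px.
rng = random.Random(0)
left = [[rng.randrange(256) for _ in range(30)] for _ in range(12)]
right = [row[3:] for row in left]

rl, rr = rank_transform(left), rank_transform(right)

# Area-based matching: pick the disparity minimising the summed
# absolute rank difference over an interior region (borders excluded).
def cost(d):
    return sum(abs(rl[y][x + d] - rr[y][x])
               for y in range(1, 11) for x in range(1, 22))

disparities = [cost(d) for d in range(6)]
print(disparities.index(min(disparities)))  # the 3 px shift is recovered
```

Away from the image borders the rank transform commutes with the horizontal shift, so the cost at the true disparity is exactly zero here; the transform's practical value is precisely this robustness to radiometric differences between the two views.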