944 results for Thick Level-Set
Abstract:
Association rule mining has made many advances in the area of knowledge discovery. However, the quality of the discovered association rules is a serious concern and has drawn increasing attention recently. One problem is the huge size of the extracted rule set: for a given dataset, an enormous number of rules can be extracted, but many of them are redundant with respect to other rules and thus useless in practice. Mining non-redundant rules is a promising approach to this problem. In this paper, we first propose a definition of redundancy; we then propose a concise representation, called the Reliable basis, for representing non-redundant association rules, covering both exact and approximate rules. An important contribution of this paper is the use of the certainty factor as the criterion for measuring the strength of the discovered association rules. With this criterion, we can determine the boundary between redundancy and non-redundancy, eliminating as many redundant rules as possible without reducing either the inference capacity of, or the belief in, the remaining non-redundant rules. We prove that redundancy elimination based on the proposed Reliable basis does not reduce the belief in the extracted rules. We also prove that all association rules can be deduced from the Reliable basis; the Reliable basis is therefore a lossless representation of association rules. Experimental results show that the proposed Reliable basis can significantly reduce the number of extracted rules.
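The certainty factor mentioned above has a standard definition in terms of a rule's confidence and its consequent's support. As an illustration only (the paper's exact formulation may differ), a minimal sketch:

```python
def certainty_factor(conf_ab: float, supp_b: float) -> float:
    """Certainty factor of a rule A -> B, given the rule's confidence
    and the support of the consequent B (standard definition)."""
    if conf_ab > supp_b:
        # Positive evidence: how much the rule improves on B's base rate
        return (conf_ab - supp_b) / (1.0 - supp_b)
    if conf_ab < supp_b:
        # Negative evidence: the rule predicts B worse than chance
        return (conf_ab - supp_b) / supp_b
    return 0.0

# Example: confidence 0.9, consequent support 0.6 -> CF = 0.75
cf = certainty_factor(0.9, 0.6)
```

A fixed CF threshold then gives the boundary the abstract describes: rules below it are discarded as too weak, and among the rest only non-redundant ones are kept.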
Abstract:
In the emerging literature related to destination branding, little has been reported about performance metrics. The focus of most research reported to date has been concerned with the development of destination brand identities and the implementation of campaigns (see for example, Crockett & Wood 1999, Hall 1999, May 2001, Morgan et al 2002). One area requiring increased attention is that of tracking the performance of destination brands over time. This is an important gap in the tourism literature, given: i) the increasing level of investment by destination marketing organisations (DMO) in branding since the 1990s, ii) the complex political nature of DMO brand decision-making and increasing accountability to stakeholders (see Pike, 2005), and iii) the long-term nature of repositioning a destination's image in the marketplace (see Gartner & Hunt, 1987). Indeed, a number of researchers in various parts of the world have pointed to a lack of market research monitoring destination marketing objectives, such as in Australia (see Prosser et al. 2000, Carson, Beattie and Gove 2003), North America (Sheehan & Ritchie 1997, Masberg 1999), and Europe (Dolnicar & Schoesser 2003)...
Abstract:
Managing project-based learning is becoming an increasingly important part of project management. This article presents a comparative case study of 12 cases of knowledge transfer between temporary inter-organizational projects and permanent parent organizations. Our set-theoretic analysis of these data yields two major findings. First, a high level of absorptive capacity of the project owner is a necessary condition for successful project knowledge transfer, which implies that the responsibility for knowledge transfer lies, in the first place, with the project parent organization rather than with the project manager. Second, no single factor is sufficient by itself. This implies that successful project knowledge transfer is a complex process always involving configurations of multiple factors. We link these implications with the view of projects as complex temporary organizational forms in which successful project managers need to cope with complexity by simultaneously paying attention to both relational and organizational processes.
Abstract:
Purpose – As a consequence of rapid urbanisation and globalisation, cities have become the engines of population and economic growth. Hence, natural resources in and around cities have been exposed to the externalities of urban development processes. This paper introduces a new sustainability assessment approach that is tested in a pilot study. It aims to assist policy-makers and planners in investigating the impacts of development on environmental systems and in producing effective policies for sustainable urban development. Design/methodology/approach – The paper introduces an indicator-based indexing model entitled "Indexing Model for the Assessment of Sustainable Urban Ecosystems" (ASSURE). The ASSURE indexing model produces a set of micro-level environmental sustainability indices intended for use in evaluating and monitoring the interaction between human activities and urban ecosystems. The model is an innovative approach designed to assess the resilience of ecosystems to the impacts of current development plans, and its results serve as a guide for policy-makers in taking action towards achieving sustainability. Findings – The indexing model has been tested in a pilot case study within the Gold Coast City, Queensland, Australia. This paper presents the methodology of the model and outlines the preliminary findings of the pilot study. It concludes with a discussion of the findings and recommendations for future development and implementation of the model. Originality/value – Presently, only a few sustainability indices have been developed to measure sustainability at the local, regional, national and international levels. Moreover, owing to difficulties in data collection and the limited availability of local data, there is no effective assessment model that accurately assesses urban ecosystem sustainability at the micro-level.
The model introduced in this paper fills this gap by focusing on the parcel scale and benchmarking environmental performance at the micro-level.
Abstract:
The purpose of this study was to determine factors (internal and external) that influenced Canadian provincial (state) politicians when making funding decisions about public libraries. Using the case study methodology, Canadian provincial/state level funding for public libraries in the 2009-10 fiscal year was examined. After reviewing funding levels across the country, three jurisdictions were chosen for the case: British Columbia's budget revealed dramatically decreased funding, Alberta's budget showed dramatically increased funding, and Ontario's budget was unchanged from the previous year. The primary source of data for the case was a series of semi-structured interviews with elected officials and senior bureaucrats from the three jurisdictions. An examination of primary and secondary documents was also undertaken to help set the political and economic context as well as to provide triangulation for the case interviews. The data were analysed to determine whether Cialdini's theory of influence (2001) and specifically any of the six tactics of influence (i.e., commitment and consistency, authority, liking, social proof, scarcity and reciprocity) were instrumental in these budget processes. Findings show the principles of "authority", "consistency and commitment" and "liking" were relevant, and that "liking" was especially important to these decisions. When these decision makers were considering funding for public libraries, they most often used three distinct lenses: the consistency lens (what are my values? what would my party do?), the authority lens (is someone with hierarchical power telling me to do this? are the requests legitimate?), and most importantly, the liking lens (how much do I like and know about the requester?). These findings are consistent with Cialdini's theory, which suggests the quality of some relationships is one of six factors that can most influence a decision maker.
The small number of prior research studies exploring the reasons for increases or decreases in public library funding allocation decisions has given little insight into the factors that motivate the politicians involved in the process and the variables that contribute to these decisions. No prior studies have examined the construct of influence in decision making about funding for Canadian public libraries at any level of government. Additionally, no prior studies have examined the construct of influence in decision making within the context of Canadian provincial politics. While many public libraries are facing difficult decisions in the face of uncertain funding futures, the ability of the sector to obtain favourable responses to requests for increases may require a less simplistic approach than previously thought. The ability to create meaningful connections with individuals in many communities and across all levels of government should be emphasised as a key factor in influencing funding decisions.
Abstract:
During the last several decades, the quality of natural resources and their services has been exposed to significant degradation from increased urban populations combined with the sprawl of settlements, the development of transportation networks and industrial activities (Dorsey, 2003; Pauleit et al., 2005). As a result of this environmental degradation, a sustainable framework for urban development is required to preserve the resilience of natural resources and ecosystems. Sustainable urban development refers to the management of cities with adequate infrastructure to support the needs of their populations for present and future generations while maintaining the sustainability of their ecosystems (UNEP/IETC, 2002; Yigitcanlar, 2010). One of the important strategic approaches for planning sustainable cities is 'ecological planning'. Ecological planning is a multi-dimensional concept that aims to preserve biodiversity richness and ecosystem productivity through the sustainable management of natural resources (Barnes et al., 2005). As stated by Baldwin (1985, p.4), ecological planning is the initiation and operation of activities to direct and control the acquisition, transformation, disruption and disposal of resources in a manner capable of sustaining human activities with a minimum disruption of ecosystem processes. Therefore, ecological planning is a powerful method for creating sustainable urban ecosystems. In order to explore the city as an ecosystem and investigate the interaction between the urban ecosystem and human activities, a holistic urban ecosystem sustainability assessment approach is required. Urban ecosystem sustainability assessment serves as a tool that helps policy- and decision-makers improve their actions towards sustainable urban development.
There are several methods used in urban ecosystem sustainability assessment, among which sustainability indicators and composite indices are the most commonly used tools for assessing progress towards sustainable land use and urban management. Currently, a variety of composite indices are available to measure sustainability at the local, national and international levels. However, the main conclusion drawn from the literature review is that they are too broad to be applied to assessing local and micro-level sustainability, and no benchmark value exists for most of the indicators owing to limited data availability and non-comparable data across countries. Mayer (2008, p. 280) supports this by stating "as different as the indices may seem, many of them incorporate the same underlying data because of the small number of available sustainability datasets". Mori and Christodoulou (2011) also argue that this relative evaluation and comparison leads to biased assessments, as data only exist for some entities, which also means excluding many nations from evaluation and comparison. Thus, there is a need to develop an accurate and comprehensive micro-level urban ecosystem sustainability assessment method. In order to develop such a model, it is practical to adopt an approach that utilises indicators to collect data, designates certain threshold values or ranges, performs a comparative sustainability assessment via indices at the micro-level, and aggregates these assessment findings to the local level. Through this approach and model, it is possible to produce sufficient and reliable data to enable comparison at the local level, and to provide useful results to inform the local planning, conservation and development decision-making process so as to secure sustainable ecosystems and urban futures.
To advance research in this area, this study investigated the environmental impacts of an existing urban context by using a composite index, with the aim of identifying the interaction between urban ecosystems and human activities in the context of environmental sustainability. In this respect, this study developed a new comprehensive urban ecosystem sustainability assessment tool entitled the 'Micro-level Urban-ecosystem Sustainability IndeX' (MUSIX). The MUSIX model is an indicator-based indexing model that investigates the factors affecting urban sustainability in a local context. The model outputs provide local and micro-level sustainability reporting guidance to help policy-making concerning environmental issues. A multi-method research approach, based on both quantitative and qualitative analysis, was employed in the construction of the MUSIX model. First, qualitative research was conducted through an interpretive and critical literature review to develop the theoretical framework and select indicators. Afterwards, quantitative research was conducted through statistical and spatial analyses for data collection, processing and model application. The MUSIX model was tested in four pilot study sites selected from the Gold Coast City, Queensland, Australia. The model results assessed the sustainability performance of current urban settings with reference to six main issues of urban development: (1) hydrology, (2) ecology, (3) pollution, (4) location, (5) design, and; (6) efficiency. For each category, a set of core indicators was assigned which are intended to: (1) benchmark the current situation, strengths and weaknesses, (2) evaluate the efficiency of implemented plans, and; (3) measure progress towards sustainable development.
While the indicator set of the model provided specific information about the environmental impacts in the area at the parcel scale, the composite index score provided general information about the sustainability of the area at the neighbourhood scale. Finally, in light of the model findings, integrated ecological planning strategies were developed to guide the preparation and assessment of development and local area plans in conjunction with the Gold Coast Planning Scheme, which establishes regulatory provisions to achieve ecological sustainability through the formulation of place codes, development codes, constraint codes and other assessment criteria that provide guidance for best practice development solutions. These strategies can be summarised as follows:
• Establishing hydrological conservation through sustainable stormwater management in order to preserve the Earth's water cycle and aquatic ecosystems;
• Providing ecological conservation through sustainable ecosystem management in order to protect biological diversity and maintain the integrity of natural ecosystems;
• Improving environmental quality by developing pollution prevention regulations and policies in order to promote high-quality water resources, clean air and enhanced ecosystem health;
• Creating sustainable mobility and accessibility by designing better local services and walkable neighbourhoods in order to promote safe environments and healthy communities;
• Sustainable design of the urban environment through climate-responsive design in order to increase the efficient use of solar energy and provide thermal comfort, and;
• Use of renewable resources through creating efficient communities in order to provide long-term management of natural resources for the sustainability of future generations.
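The aggregation the abstracts describe (normalised parcel-level indicator scores combined into a neighbourhood-scale composite index) can be sketched as follows. The equal weighting, the function names and the example scores are illustrative assumptions, not the published MUSIX specification:

```python
def normalise(value: float, lo: float, hi: float) -> float:
    # Min-max normalise an indicator to [0, 1]; clamp out-of-range values.
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def composite_index(indicators, weights=None) -> float:
    """Aggregate normalised indicator scores into one sustainability
    index via a weighted mean (equal weights by default)."""
    if weights is None:
        weights = [1.0] * len(indicators)
    return sum(w * s for w, s in zip(weights, indicators)) / sum(weights)

# Hypothetical parcel scores for the six categories listed above
# (hydrology, ecology, pollution, location, design, efficiency)
scores = [0.8, 0.6, 0.4, 0.7, 0.5, 0.9]
index = composite_index(scores)
```

Averaging the parcel indices over a neighbourhood would then yield the neighbourhood-scale score the text refers to.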
Abstract:
Collisions between trains and cars at road/rail level crossings (LXs) can have severe consequences, including fatalities, injuries and significant financial losses. As communication and positioning technologies have advanced significantly, implementing vehicular ad hoc networks (VANETs) in the vicinity of unmanned LXs (generally LXs without barriers) is seen as an efficient and effective approach to mitigating or even eliminating collisions without imposing huge infrastructure costs. VANETs necessitate unique communication strategies, in which routing protocols play a prominent part in scalability and overall performance by finding optimised routes quickly and with low bandwidth overheads. This article studies a novel geo-multicast framework that incorporates a set of models for communication, message flow and geo-determination of endangered vehicles, together with a reliable receiver-based geo-multicast protocol, to support cooperative level crossings (CLXs), which provide collision warnings to endangered motorists approaching road/rail LXs without barriers. This framework was designed and studied as part of a $5.5 m government and industry funded project entitled 'Intelligent-Transport-Systems to improve safety at road/rail crossings'. Combined simulation and experimental studies of the proposed geo-multicast framework have demonstrated promising outcomes, as cooperative awareness messages provide actionable critical information to the endangered drivers identified by CLXs.
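A minimal sketch of one way "geo-determination of endangered vehicles" could work, assuming GPS positions and a simple time-to-crossing test; the function names, the ETA margin and the decision rule are hypothetical illustrations, not the project's actual models:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two WGS-84 points.
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def is_endangered(vehicle, crossing, speed_mps, train_eta_s, margin_s=5.0):
    """Flag a vehicle as endangered if, at its current speed, it could
    reach the crossing within the train's arrival window (hypothetical
    rule for illustration only)."""
    dist = haversine_m(vehicle[0], vehicle[1], crossing[0], crossing[1])
    if speed_mps <= 0:
        return False  # stationary vehicles are treated as safe here
    return dist / speed_mps <= train_eta_s + margin_s
```

A geo-multicast protocol would then address warning messages only to the subset of vehicles this predicate selects.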
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 has estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing publicly available Histograms of Oriented Gradients (HOG) based object detector algorithm is used to detect various types of vehicles in each video frame. As vehicles are usually seen in a side-on view from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that can detect the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird's eye view. A Line Segment Detector is employed as the feature extractor, and a sliding window approach is developed to track a pair of rails. Localisation of the vehicles is done by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between the vehicle and rail to be calculated. The resultant vehicle positions and distances are logged to a database for further analysis.
We present preliminary results regarding the performance of a prototype video analytics module on a data set of videos containing more than 30 different railway level crossings. The video data is captured from a journey of a train that has passed through these level crossings.
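The localisation step above (projecting detections onto a bird's-eye ground plane and measuring clearance to the rail) reduces to applying a plane homography. A minimal sketch: the 3x3 matrix H would in practice come from camera calibration (e.g. OpenCV's getPerspectiveTransform), and the assumption that the tracked rail runs vertically in the transformed view is illustrative:

```python
import numpy as np

def project_to_ground(H: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 3x3 homography H (image plane -> bird's-eye ground plane)
    to an (N, 2) array of pixel coordinates, returning (N, 2) ground
    coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # de-homogenise

def lateral_distance_to_rail(vehicle_xy, rail_x: float) -> float:
    # If the rail is (approximately) vertical in the bird's-eye view,
    # the clearance is just the horizontal offset to the rail line.
    return abs(vehicle_xy[0] - rail_x)
```

With a calibrated H, the bottom edge of each vehicle bounding box can be projected this way and the resulting distance logged, as the abstract describes.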
Abstract:
Electricity network investment and asset management require accurate estimation of future demand in energy consumption within specified service areas. For this purpose, simple models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict electricity consumption in the residential sector at the Census Collection District (CCD) level over the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74 % of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20 % is used to independently test the accuracy of model prediction against actual consumption. In 90 % of cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of actual values, with an overall state accuracy of -1.15 %. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas that are most likely to have increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; having such a model gives a quantifiable method for selecting the most appropriate system when a review or upgrade of the network infrastructure is required.
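The workflow described (random 80/20 split, regression fit, share of test predictions within 5 kWh per dwelling per day) can be sketched as follows. The feature names, coefficients and noise level are invented stand-ins for the ABS/CCD data, which are not public here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CCD-level features (illustrative only):
# dwellings per CCD, persons per household, mean floor area (m^2)
n = 1000
X = np.column_stack([
    rng.integers(50, 400, n).astype(float),
    rng.uniform(1.5, 4.0, n),
    rng.uniform(80, 250, n),
])
true_coef = np.array([0.02, 3.0, 0.01])        # invented ground truth
y = X @ true_coef + 2.0 + rng.normal(0, 0.5, n)  # kWh/dwelling/day

# 80/20 split, as in the paper
split = int(0.8 * n)
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(Xtr)), Xtr])
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.column_stack([np.ones(len(Xte)), Xte]) @ coef

# Share of test CCDs predicted within 5 kWh/dwelling/day of actual
within_5 = np.mean(np.abs(pred - yte) <= 5.0)
```

On this synthetic data the within-5 share is essentially 1.0; the paper's 90 % figure reflects the much noisier real consumption data.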
Abstract:
Outdoor air pollution is a killer. A recent report from the World Health Organization estimated that 3.7 million deaths per year are due to outdoor air pollution. Most of these deaths are in low- and middle-income countries, with China being the country that often springs to mind. However, Australia still has a relatively big air pollution problem, with an estimated 3,000 deaths per year. Traffic pollution is the major contributor to urban air pollution in Australia. Extreme events, such as dust storms, bushfires and the recent coal fire in Morwell, dramatically increase pollution levels (for days or weeks) and are also very hazardous to health. Australian governments in the last 30 years have committed to improving air quality, and policies have been discussed and implemented with the aim of creating cleaner air. One key policy measure is the National Environment Protection Measures for air quality. These set standards for six important outdoor pollutants. Their key goal is to create "ambient air quality that allows for the adequate protection of human health and wellbeing".
Abstract:
A precise representation of the spatial distribution of hydrophobicity, hydrophilicity and charges on the molecular surface of proteins is critical for understanding the interaction with small molecules and larger systems. The representation of hydrophobicity is rarely done at atom level, as this property is generally assigned to residues. A new methodology for the derivation of atomic hydrophobicity from any amino acid-based hydrophobicity scale was used to derive 8 sets of atomic hydrophobicities, one of which was used to generate the molecular surfaces for 35 proteins with convex structures, 5 of which, i.e., lysozyme, ribonuclease, hemoglobin, albumin and IgG, have been analyzed in more detail. Sets of the molecular surfaces of the model proteins have been constructed using spherical probes with increasingly large radii, from 1.4 to 20 Å, followed by the quantification of (i) the surface hydrophobicity; (ii) their respective molecular surface areas, i.e., total, hydrophilic and hydrophobic area; and (iii) their relative densities, i.e., divided by the total molecular area, or specific densities, i.e., divided by the property-specific area. Compared with the amino acid-based formalism, the atom-level description reveals molecular surfaces which (i) present approximately two times more hydrophilic areas; with (ii) less extended, but 2 to 5 times more intense, hydrophilic patches; and (iii) 3 to 20 times more extended hydrophobic areas. The hydrophobic areas are also approximately 2 times more hydrophobicity-intense. This more pronounced, "leopard skin"-like design of the protein molecular surface has been confirmed by comparing the results for a restricted set of homologous proteins, i.e., hemoglobins diverging by only one residue (Trp37).
These results suggest that the representation of hydrophobicity on protein molecular surfaces at atom-level resolution, coupled with probing of the molecular surface at different geometric resolutions, can capture processes that are otherwise obscured in the amino acid-based formalism.
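The relative and specific densities quantified above are simple ratios. A minimal sketch under the definitions given in the abstract (area divided by total molecular area; a summed property divided by its property-specific area), with invented example numbers:

```python
def relative_densities(total_area: float, areas: dict) -> dict:
    """Relative density of each surface property: its area divided by
    the total molecular surface area."""
    return {name: a / total_area for name, a in areas.items()}

def specific_density(property_sum: float, property_area: float) -> float:
    # E.g. total hydrophobicity integrated over the hydrophobic patches,
    # divided by the hydrophobic area ("hydrophobicity intensity").
    return property_sum / property_area

# Hypothetical surface: 100 A^2 total, split 60/40 hydrophilic/hydrophobic
d = relative_densities(100.0, {"hydrophilic": 60.0, "hydrophobic": 40.0})
```

Repeating these ratios for each probe radius (1.4 to 20 Å) gives the resolution-dependent profiles the abstract compares between the atom-level and residue-level formalisms.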
Abstract:
Porosity is one of the key parameters of the macroscopic structure of porous media, generally defined as the ratio of the free space (the volume of air) within the material to the total volume of the material. Porosity is determined by measuring the skeletal volume and the envelope volume. The solid displacement method is an inexpensive and straightforward way to determine the envelope volume of a sample with an irregular shape. In this method, glass beads are generally used as the displacement solid owing to their uniform size, compactness and fluidity. The small size of the glass beads means that they enter any open pores whose diameter is larger than the beads. Although extensive research has been carried out on porosity determination using the displacement method, no study exists which adequately reports micro-level observation of the sample during measurement. This study set out to assess the accuracy of the solid displacement method of bulk density measurement of dried foods by micro-level observation. Solid displacement porosity determination was conducted using a cylindrical vial (a cylindrical plastic container) and 57 µm glass beads in order to measure the bulk density of apple slices at different moisture contents. A scanning electron microscope (SEM), a profilometer and ImageJ software were used to investigate the penetration of glass beads into the surface pores during the determination of the porosity of dried food. A helium pycnometer was used to measure the particle density of the sample. Results show that a significant number of pores were large enough to allow the glass beads to enter, thereby causing some erroneous results. It was also found that coating the dried sample with an appropriate coating material prior to measurement can resolve this problem.
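The quantities involved follow from standard definitions: the envelope volume is the vial volume minus the volume occupied by the beads, and porosity follows from the bulk and particle densities. A minimal sketch with hypothetical numbers (the masses, volumes and bead density below are invented for illustration):

```python
def bulk_density(sample_mass, vial_volume, beads_mass, beads_density):
    """Bulk (envelope) density by solid displacement: the sample's
    envelope volume is the vial volume minus the volume taken up by
    the glass beads poured around it."""
    beads_volume = beads_mass / beads_density
    envelope_volume = vial_volume - beads_volume
    return sample_mass / envelope_volume

def porosity(rho_bulk, rho_particle):
    # Total porosity from bulk density and particle (pycnometer) density.
    return 1.0 - rho_bulk / rho_particle

# Hypothetical run: 2 g sample, 10 cm^3 vial, 12 g beads at 2.5 g/cm^3,
# particle density 1.4 g/cm^3 from the helium pycnometer
rho_b = bulk_density(2.0, 10.0, 12.0, 2.5)
eps = porosity(rho_b, 1.4)
```

Beads intruding into open surface pores shrink the measured envelope volume, inflating rho_b and thus underestimating porosity, which is exactly the error mode the SEM observations revealed.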
Abstract:
A three-dimensional linear, small deformation theory of elasticity solution by the direct method is developed for the free vibration of simply-supported, homogeneous, isotropic, thick rectangular plates. The solution is exact and involves determining a triply infinite sequence of eigenvalues from a doubly infinite set of closed form transcendental equations. As no restrictions are placed on the thickness variation of stresses or displacements, this formulation yields a triply infinite spectrum of frequencies, instead of only one doubly infinite spectrum by thin plate theory and three doubly infinite spectra by Mindlin's thick plate theory. Further, the present analysis yields symmetric thickness modes which neither of the approximate theories can identify. Some numerical results from the two approximate theories are compared with those from the present solution and some important conclusions regarding the effect of the assumptions made in the approximate theories are drawn. The thickness variations of stresses and displacements are also discussed. The analysis is readily extended for laminated plates of isotropic materials. Numerical results are also given for three-ply laminates, and are used to assess the accuracy of thin plate theory predictions for laminates. Extension to general lateral surface conditions and forced vibrations is indicated.