965 results for Appropriate Selection Processes Are Available For Choosing Hospitality Texts
Abstract:
Innovation processes are rarely smooth, and disruptions often occur at transition points where one knowledge domain passes the technology on to another domain. At these transition points, communication is a key component in assisting the smooth handover of technologies. However, for smooth transitions to occur we argue that appropriate structures have to be in place and boundary-spanning activities need to be facilitated. This paper presents three case studies of innovation processes, and the findings support the view that structures and boundary spanning are essential for smooth transitions. We have explained the need to pass primary responsibility between agents to successfully bring an innovation to market. We have also shown the need to combine knowledge through effective communication so that absorptive capacity is built into processes throughout the organisation rather than residing in one or two key individuals.
Abstract:
Sourcing appropriate funding for the provision of new urban infrastructure has been a policy dilemma for governments around the world for decades. This is particularly relevant in high growth areas where new services are required to support swelling populations. The Australian infrastructure funding policy dilemmas are reflective of similar matters in many countries, particularly the United States of America, where infrastructure cost recovery policies have been in place since the 1970s. There is an extensive body of both theoretical and empirical literature from these countries that discusses the passing on (to home buyers) of these infrastructure charges, and the corresponding impact on housing prices. The theoretical evidence is consistent in its findings that infrastructure charges are passed on to home buyers by way of higher house prices. The empirical evidence is also consistent in its findings, with "overshifting" of these charges evident in all models since the 1980s, i.e. a $1 infrastructure charge results in a greater than $1 increase in house prices. However, despite over a dozen separate studies over two decades in the US on this topic, no empirical works have been carried out in Australia to test whether similar shifting or overshifting occurs here. The purpose of this research is to conduct a preliminary analysis of the more recent models used in these US empirical studies in order to identify the key study area selection criteria and success factors. The paper concludes that many of the study area selection criteria are implicit rather than explicit. By collecting data across the models, some implicit criteria become apparent, whilst others remain elusive. This data will inform future research on whether an existing model can be adopted or adapted for use in Australia.
Abstract:
As civil infrastructures such as bridges age, there is a concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequent analysis of the recorded signals, which then convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and joints with bolted connections, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active crack activity, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still exist behind the successful application of the AE technique. One is the generation of a large amount of data during testing; hence effective data analysis and management are necessary, especially for long-term monitoring uses. Complications also arise as a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of the damage level by appropriate analysis of the data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods and will be discussed and applied to the analysis of laboratory experimental data in this paper.
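As a hedged illustration of the b-value analysis mentioned in this abstract, the sketch below fits the slope of the cumulative amplitude distribution of AE hits, following the Gutenberg-Richter-style relation log10 N(>=A) = a - b*(A/20) commonly used for AE amplitudes in decibels. The amplitude data and the least-squares fitting approach are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def ae_b_value(amplitudes_db):
    """Estimate the AE b-value from hit amplitudes (in dB).

    Fits log10 N(>=A) = a - b * (A / 20) by least squares, where
    N(>=A) is the cumulative number of hits with amplitude >= A.
    Illustrative sketch only, not the authors' exact method.
    """
    amps = np.sort(np.asarray(amplitudes_db, dtype=float))
    # Cumulative count of hits at or above each amplitude value
    n_cum = np.arange(len(amps), 0, -1)
    # Linear least-squares fit in (A/20, log10 N) space
    slope, _intercept = np.polyfit(amps / 20.0, np.log10(n_cum), 1)
    return -slope  # b-value is the negative of the fitted slope

# Hypothetical amplitude data (dB) for demonstration only
rng = np.random.default_rng(0)
demo_amps = 40.0 + rng.exponential(scale=8.0, size=500)
print(f"estimated b-value: {ae_b_value(demo_amps):.2f}")
```

In AE monitoring a falling b-value is typically read as a shift from many small events to fewer large ones, i.e. localised macro-cracking, which is why the trend over time rather than the absolute value is usually of interest.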
Abstract:
Current conceptualizations of organizational processes consider them as internally optimized yet static systems. Yet turbulence in the contextual environment of a firm often leads to adaptation requirements that these processes are unable to fulfil. Based on a multiple case study of the core processes of two large organizations, we offer an extended conceptualization of business processes as complex adaptive systems. This conceptualization can enable firms to optimize business processes by analysing operations in different contexts and by examining the complex interaction between external, contextual elements and internal agent schemata. From this analysis, we discuss how information technology can play a vital role in achieving this goal by providing discovery, analysis, and automation support. We detail implications for research and practice.
Abstract:
Ocean processes are complex and have high variability in both time and space. Thus, ocean scientists must collect data over long time periods to obtain a synoptic view of ocean processes and resolve their spatiotemporal variability. One way to perform these persistent observations is to utilise an autonomous vehicle that can remain on deployment for long time periods. However, such vehicles are generally underactuated and slow moving. A challenge for persistent monitoring with these vehicles is dealing with currents while executing a prescribed path or mission. Here we present a path planning method for persistent monitoring that exploits ocean currents to increase navigational accuracy and reduce energy consumption.
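A minimal sketch of the current-aware path planning idea described above: a Dijkstra search over a gridded current field in which the edge cost approximates the energy an underactuated vehicle must spend to hold a desired ground-relative velocity against the local current. The grid, current field, vehicle speed, and quadratic energy model are all illustrative assumptions, not the method presented in the paper.

```python
import heapq
import numpy as np

def plan_current_aware_path(cur_r, cur_c, start, goal, v_ground=1.0):
    """Dijkstra over a grid of ocean-current vectors.

    cur_r, cur_c: current components along the row and column axes (m/s).
    Edge cost approximates energy: squared through-water speed needed to
    hold a unit ground-relative velocity against the local current.
    Illustrative sketch only.
    """
    rows, cols = cur_r.shape
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            # Desired ground-relative velocity for this grid move
            g = v_ground * np.array([dr, dc], dtype=float)
            # Through-water velocity the vehicle must supply = ground velocity - current
            w = g - np.array([cur_r[nr, nc], cur_c[nr, nc]])
            nd = d + float(w @ w)  # energy proxy
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = node
                heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:  # assumes the goal is reachable on this grid
        node = prev[node]
        path.append(node)
    return path[::-1]

# Hypothetical current field: 0.3 m/s flowing in the +column direction
cur_r = np.zeros((20, 20))
cur_c = np.full((20, 20), 0.3)
print(plan_current_aware_path(cur_r, cur_c, (0, 0), (19, 19)))
```

With this cost model the planner drifts with favourable currents and avoids fighting opposing ones, which is the intuition behind exploiting currents to reduce energy consumption.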
Abstract:
The demand for Business Process Management (BPM) is rising rapidly and, with it, the need for capable BPM professionals. Yet only very few structured BPM training/education programs are available across universities and professional trainers globally. The ‘lack of appropriate teaching resources’ has been identified as a critical issue for BPM educators in prior studies. Case-based teaching can be an effective means of educating future BPM professionals. A main reason is that cases create an authentic learning environment where the complexities and challenges of the ‘real world’ can be presented in a narrative, enabling students to develop crucial skills such as problem solving, analysis and creativity-within-constraints, and to apply the tools and techniques within a richer and real (or proxy to real) context. However, well-documented BPM teaching cases are so far scarce. This article aims to contribute to addressing this gap by providing a comprehensive teaching case and teaching notes that facilitate the education of selected process improvement phases, namely identification, modelling, analysis, and improvement. The article is divided into three main parts: (i) introductory teaching notes, (ii) the case narrative, and (iii) student activities from the case and teaching notes.
Abstract:
Rapid urbanisation and the resulting continuous increase in traffic have been recognised as key factors contributing to increased pollutant loads in urban stormwater and in turn to receiving waters. Urbanisation primarily increases anthropogenic activities and the percentage of impervious surfaces in urban areas. These processes are collectively responsible for urban stormwater pollution. In this regard, urban traffic and land use related activities have been recognised as the primary pollutant sources. This is primarily due to the generation of a range of key pollutants such as solids, heavy metals and PAHs. Appropriate treatment system design is the most viable approach to mitigate stormwater pollution. However, limited understanding of pollutant processes and transport pathways constrains effective treatment design. This highlights the necessity for a detailed understanding of traffic and other land use related pollutant processes and pathways in relation to urban stormwater pollution. This study has created new knowledge in relation to pollutant processes and transport pathways encompassing atmospheric pollutants, atmospheric deposition and build-up on ground surfaces of traffic generated key pollutants. The research study was primarily based on in-depth experimental investigations. This thesis describes the extensive knowledge created relating to the processes of atmospheric pollutant build-up, atmospheric deposition and road surface build-up, and establishes their relationships as a chain of processes. The analysis of atmospheric deposition revealed that both traffic and land use related sources contribute total suspended particulate matter (TSP) to the atmosphere. Traffic sources become dominant during weekdays whereas land use related sources become dominant during weekends due to the reduction in traffic sources. The analysis further concluded that atmospheric TSP, polycyclic aromatic hydrocarbon (PAH) and heavy metal (HM) concentrations are highly influenced by total average daily heavy duty traffic, traffic congestion and the fraction of commercial and industrial land uses. A set of mathematical equations was developed to predict TSP, PAH and HM concentrations in the atmosphere based on the influential traffic and land use related parameters. Dry deposition samples were collected for different antecedent dry days and wet deposition samples were collected immediately after rainfall events. The dry deposition was found to increase with the antecedent dry days and consisted of relatively coarser particles (greater than 1.4 µm) when compared to wet deposition. The wet deposition showed a strong affinity to rainfall depth, but was not related to the antecedent dry period. It was also found that smaller size particles (less than 1.4 µm) travel much longer distances from the source and deposit mainly with the wet deposition. Pollutants in wet deposition are less sensitive to the source characteristics compared to dry deposition. Atmospheric deposition of HMs is not directly influenced by land use but rather by proximity to high emission sources such as highways. Therefore, it is important to consider atmospheric deposition as a key pollutant source to urban stormwater in the vicinity of these types of sources. Build-up was analysed for five different particle size fractions, namely <1 µm, 1-75 µm, 75-150 µm, 150-300 µm and >300 µm, for solids, PAHs and HMs.
The outcomes of the study indicated that PAHs and HMs in the <75 µm size fraction are generated mainly by traffic related activities whereas the >150 µm size fraction is generated by both traffic and land use related sources. Atmospheric deposition is an important source of HM build-up on roads, whereas the contribution of PAHs from atmospheric sources is limited. A comprehensive approach was developed to predict traffic and other land use related pollutants in urban stormwater based on traffic and other land use characteristics. This approach primarily included the development of a set of mathematical equations to predict traffic generated pollutants by linking traffic and land use characteristics to stormwater quality through mathematical modelling. The outcomes of this research will contribute to the design of appropriate treatment systems to safeguard urban receiving water quality for future traffic growth scenarios. The ‘real world’ application of the knowledge generated was demonstrated through mathematical modelling of solids in urban stormwater, accounting for the variability in traffic and land use characteristics.
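As a hedged illustration of the kind of predictive equation this thesis describes (linking pollutant concentrations to traffic and land use parameters), the sketch below fits a multiple linear regression of TSP concentration on hypothetical predictors: heavy-duty average daily traffic, a congestion index, and the commercial/industrial land use fraction. The variable names, data, and linear form are illustrative assumptions, not the equations actually developed in the study.

```python
import numpy as np

# Hypothetical observations: [heavy-duty ADT, congestion index, comm./ind. land use fraction]
X = np.array([
    [1200, 0.35, 0.20],
    [3400, 0.60, 0.55],
    [800,  0.20, 0.10],
    [2500, 0.50, 0.40],
    [4100, 0.75, 0.65],
])
tsp = np.array([48.0, 95.0, 35.0, 78.0, 110.0])  # hypothetical TSP, µg/m^3

# Ordinary least squares: TSP = b0 + b1*ADT_hd + b2*congestion + b3*land_use
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, tsp, rcond=None)
print("fitted coefficients (b0..b3):", np.round(coeffs, 3))

# Predict TSP for a new hypothetical site
new_site = np.array([1.0, 2000, 0.45, 0.30])
print("predicted TSP:", round(float(new_site @ coeffs), 1), "µg/m^3")
```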
Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will help prevent failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques, such as vibration based ones, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate sources, its passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of performing real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. But complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localisation. A major problem in the practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence discrimination of signals to identify their sources is very important. This work developed a model that uses different signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal acceptance or been shown suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification in ductile materials such as steel. It was found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of its use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructures such as bridges.
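A minimal sketch of the arrival-time source localisation idea discussed in this abstract, for the simplest linear (one-dimensional) case: two sensors a known distance apart, a known mode velocity, and the difference in arrival times give the crack position along the line. The numbers and the single-mode, one-dimensional simplification are illustrative assumptions, not the study's multi-mode procedure.

```python
def locate_ae_source_1d(sensor_spacing_m, t1_s, t2_s, wave_speed_mps):
    """Locate an AE source on a line between two sensors.

    With the source at distance x from sensor 1:
        t1 = t0 + x / v,   t2 = t0 + (D - x) / v
    so  x = (D - v * (t2 - t1)) / 2.
    One mode, one dimension; real structures need mode identification
    and more sensors, as described in the abstract.
    """
    return (sensor_spacing_m - wave_speed_mps * (t2_s - t1_s)) / 2.0

# Hypothetical example: sensors 2 m apart, a fast plate mode at ~5000 m/s
x = locate_ae_source_1d(sensor_spacing_m=2.0, t1_s=100e-6, t2_s=300e-6,
                        wave_speed_mps=5000.0)
print(f"estimated source position from sensor 1: {x:.3f} m")
```

The accuracy of the estimate depends directly on using the velocity of the mode that actually triggered the arrival picks, which is why the mode identification step via time-frequency analysis matters.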
Abstract:
Butterfly long-wavelength (L) photopigments are interesting for comparative studies of adaptive evolution because of the tremendous phenotypic variation that exists in their wavelength of peak absorbance (lambda(max) value). Here we present a comprehensive survey of L photopigment variation by measuring lambda(max) in 12 nymphalid and 1 riodinid species using epi-microspectrophotometry. Together with previous data, we find that L photopigment lambda(max) varies from 510-565 nm in 22 nymphalids, with an even broader 505- to 600-nm range in riodinids. We then surveyed the L opsin genes for which lambda(max) values are available as well as from related taxa and found 2 instances of L opsin gene duplication within nymphalids, in Hermeuptychia hermes and Amathusia phidippus, and 1 instance within riodinids, in the metalmark butterfly Apodemia mormo. Using maximum parsimony and maximum likelihood ancestral state reconstructions to map the evolution of spectral shifts within the L photopigments of nymphalids, we estimate the ancestral pigment had a lambda(max) = 540 nm +/- 10 nm standard error and that blueshifts in wavelength have occurred at least 4 times within the family. We used ancestral state reconstructions to investigate the importance of several amino acid substitutions (Ile17Met, Ala64Ser, Asn70Ser, and Ser137Ala) previously shown to have evolved under positive selection that are correlated with blue spectral shifts. These reconstructions suggest that the Ala64Ser substitution has indeed occurred along the newly identified blueshifted L photopigment lineages. Substitutions at the other 3 sites may also be involved in the functional diversification of L photopigments. Our data strongly suggest that there are limits to the evolution of L photopigment spectral shifts among species with only one L opsin gene and that opsin gene duplication broadens the potential range of lambda(max) values.
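As a hedged illustration of continuous-character ancestral state reconstruction of the kind referred to above, the sketch below computes the maximum-likelihood estimate of an ancestral lambda-max under a Brownian-motion model for the simplified case of a star phylogeny, where the estimate reduces to the branch-length-weighted mean of the tip values. The species values, branch lengths, and star-tree simplification are illustrative assumptions, not the data or tree used in the study.

```python
def ml_ancestral_state_star(tip_values_nm, branch_lengths):
    """ML ancestral state under Brownian motion for a star phylogeny.

    For tips x_i connected to the root by branches of length v_i, the
    ML (generalised least squares) root estimate is the weighted mean
    sum(x_i / v_i) / sum(1 / v_i). Real reconstructions use the full
    tree (e.g. Felsenstein's pruning algorithm); this is a sketch.
    """
    weights = [1.0 / v for v in branch_lengths]
    return sum(w * x for w, x in zip(weights, tip_values_nm)) / sum(weights)

# Hypothetical lambda-max values (nm) and branch lengths for four taxa
lam = [530.0, 545.0, 555.0, 540.0]
brl = [1.0, 2.0, 1.5, 0.5]
print(f"ancestral lambda-max estimate: {ml_ancestral_state_star(lam, brl):.1f} nm")
```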
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and corresponding promoter strength. Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters where corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in ‘moderately’ conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled ‘regulatory trees’, inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to ‘hardware’, the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to ‘software’. In this context, we explored the ‘pan-regulatory network’ for the Fur system, the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the ‘core-regulatory-set’, and interactions found only in a subset of the genomes explored, the ‘sub-regulatory-set’. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in both E. coli and B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
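A hedged sketch of the spectrum-kernel idea named in this abstract: sequences are mapped to k-mer count vectors, whose inner products define the kernel, and a support vector machine is trained on those features to separate putative binding sites from background. The toy sequences, k = 3, and the use of scikit-learn's LinearSVC are illustrative assumptions, not the thesis's actual pipeline or data.

```python
from itertools import product
import numpy as np
from sklearn.svm import LinearSVC

def spectrum_features(seqs, k=3, alphabet="ACGT"):
    """Map DNA sequences to k-mer count vectors (the spectrum kernel
    is the inner product of these vectors)."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            X[row, index[s[i:i + k]]] += 1
    return X

# Toy, hypothetical training data: putative binding sites vs background
pos = ["TGTGATCTAGATCACA", "TGTGATCGAGATCACT", "TGTGACCTAGATCACA"]
neg = ["ACGGCTTACGGATCCG", "GGCATTCGAGCCTAAC", "CCGTATGGGCACGTTA"]
X = spectrum_features(pos + neg, k=3)
y = np.array([1, 1, 1, 0, 0, 0])

clf = LinearSVC(C=1.0).fit(X, y)
test = spectrum_features(["TGTGATCTAGATCACG"], k=3)
print("predicted class for test sequence:", clf.predict(test)[0])
```

Additional features such as the position and conservation scores mentioned in the abstract would simply be concatenated to the k-mer count vector before training.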
Abstract:
In a study of assuring learning in Australian Business Schools, 25 Teaching and Learning Associate Deans were interviewed to identify current issues in developing and measuring the quality of teaching and learning outcomes. Results indicate that, for most institutions, developing a perspective on graduate attributes and mapping assessments to measure outcomes across an entire program required knowledge creation and the building of new inclusive processes. Common elements of effective practice, namely those which offered consistently superior outcomes, included inclusive processes, graduate attributes embedded throughout a program, and consistent and appropriate assessment. Results also indicate that assurance of learning processes are proliferating nationally, and that the quality of teaching and learning outcomes, and of the processes for assuring them, is increasing as a result.
Abstract:
This is the protocol for a review and there is no abstract. The objectives are as follows:
Primary research objective
To determine the effects of community-wide, multi-strategic interventions upon community levels of physical activity.
Secondary research objectives
1. To explore whether any effects of the intervention are different within and between populations, and whether these differences form an equity gradient.
2. To describe other health (e.g. cardiovascular disease morbidity) and behavioural effects (e.g. diet) where appropriate outcomes are available.
3. To explore the influence of context in the design, delivery, and outcomes of the interventions.
4. To explore the relationship between the number of components, duration, and effects of the interventions.
5. To highlight implications for further research and research methods to improve knowledge of the interventions in relation to the primary research objective.
Abstract:
Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties in some time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. The aim is to limit the amount of routing overhead with a two-tier routing architecture and to introduce a combination of piggyback and trigger updates to replace the periodic update process, which is the main source of unnecessary routing overhead. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total amount of routing overhead compared with existing popular routing protocols.
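A hedged sketch of the update strategy described above: instead of broadcasting routing state on a fixed timer, a node piggybacks its routing summary onto data packets it already sends, and emits a dedicated (triggered) update only when its routing state actually changes. The class structure, field names, and change-detection rule are illustrative assumptions, not the protocol specified in the paper.

```python
import time

class RouteUpdater:
    """Replace periodic routing updates with piggybacked and triggered ones.

    Illustrative sketch: attach the current routing summary to outgoing
    data packets, and send a standalone update only when routing state
    changes, avoiding fixed-interval broadcasts.
    """

    def __init__(self, node_id, send):
        self.node_id = node_id
        self.send = send              # callable that transmits a packet (assumed)
        self.routes = {}              # destination -> next hop
        self.last_advertised = {}

    def piggyback(self, data_packet):
        """Attach the routing summary to a data packet already being sent."""
        data_packet["routing_info"] = dict(self.routes)
        self.last_advertised = dict(self.routes)
        self.send(data_packet)

    def update_route(self, destination, next_hop):
        """Update a route; emit a triggered update only if the state changed."""
        if self.routes.get(destination) == next_hop:
            return  # no change, no update traffic
        self.routes[destination] = next_hop
        self.send({
            "type": "triggered_update",
            "src": self.node_id,
            "routes": dict(self.routes),
            "time": time.time(),
        })

# Minimal usage with a stub transmitter
sent = []
node = RouteUpdater("n1", send=sent.append)
node.update_route("sink", next_hop="n7")               # triggered update
node.piggyback({"type": "data", "payload": b"\x01"})   # piggybacked update
print(len(sent), "packets sent")
```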
Abstract:
A pressing cost issue facing construction is the procurement of off-site pre-manufactured assemblies. In order to encourage Australian adoption of off-site manufacture (OSM), a new approach to the underlying processes is required. The advent of object-oriented digital models for construction design assumes intelligent use of data. However, the construction production system relies on traditional methods and data sources and is expected to benefit from the application of well-established business process management techniques. The integration of the old and new data sources allows for the development of business process models which, by capturing typical construction processes involving OSM, provide insights into such processes. This integrative approach is the foundation of research into the use of OSM to increase construction productivity in Australia. The purpose of this study is to develop business process models capturing the procurement, resources and information flow of construction projects. For each stage of the construction value chain, a number of sub-processes are identified. Business Process Modelling Notation (BPMN), a mainstream business process modelling standard, is used to create baseline generic construction process models. These models identify OSM decision-making points that could provide cost reductions in procurement workflow and management systems. This paper reports on phase one of ongoing research aiming to develop a prototype workflow application that can provide semi-automated support to construction processes involving OSM and assist in decision-making in the adoption of OSM, thus contributing to a sustainable built environment.
Abstract:
The management and improvement of business processes are a core topic of the information systems discipline. The persistent demand in corporations across all industry sectors for increased operational efficiency and innovation, an emerging set of established and evaluated methods, tools, and techniques, as well as the quickly growing body of academic and professional knowledge are indicative of the standing that Business Process Management (BPM) enjoys nowadays. During the last decades, intensive research has been conducted with respect to the design, implementation, execution, and monitoring of business processes. Comparatively little attention, however, has been paid to questions related to organizational issues such as the adoption, usage, implications, and overall success of BPM approaches, technologies, and initiatives. This research gap motivated us to edit a corresponding special focus issue for the journal BISE/WIRTSCHAFTSINFORMATIK. We are happy to present a selection of three research papers and a state-of-the-art paper in the scientific section of the issue at hand. As these papers differ in the topics they investigate, the research methods they apply, and the theoretical foundations they build on, the diversity within the BPM field becomes evident. The academic papers are complemented by an interview with Phil Gilbert, IBM's Vice President for Business Process and Decision Management, who reflects on the relationship between business processes and the data flowing through them, the need to establish a process context for decision making, and the calibration of BPM efforts toward executives who see processes as a means to an end, rather than a first-order concept in its own right.