914 results for Rating Bridges
Abstract:
From one view of composition—let us call it the inspired or “Mozartian” view—musical compositions arrive fully formed in the mind of the composer and simply require transcription. In reality, however, it seems that very few people are so inspired, and composition is often more akin to a gradual clarification and refinement of partially formed ideas on the musical landscape. Particular landmarks in the compositional landscape tend to become clear before others, such that the incomplete piece is a patchwork of disconnected musical islands. An interactive evolutionary morphing system may provide some assistance for composers, to help build bridges between musical islands by generating hybrid musical transitions.
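As a rough illustration of the kind of evolutionary morphing described above, the sketch below evolves a short connecting passage between two fixed melodic fragments. It is not the authors' system: the representation (lists of MIDI pitches), the automatic smoothness fitness standing in for interactive selection, and all names are assumptions made for illustration only.

```python
import random

# Melodic "islands" as lists of MIDI pitches (illustrative values only).
island_a = [60, 62, 64, 65, 67]
island_b = [72, 71, 69, 67, 65]

def fitness(bridge):
    """Reward bridges whose melodic steps are small and whose ends meet the islands."""
    path = [island_a[-1]] + bridge + [island_b[0]]
    step_cost = sum(abs(b - a) for a, b in zip(path, path[1:]))
    return -step_cost  # smaller melodic leaps -> higher fitness

def crossover(p1, p2):
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:]

def mutate(bridge, rate=0.2):
    return [p + random.choice([-2, -1, 1, 2]) if random.random() < rate else p
            for p in bridge]

# Evolve a four-note transitional passage between the two islands.
population = [[random.randint(55, 75) for _ in range(4)] for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]

print("candidate bridge:", max(population, key=fitness))
```

In an interactive system the fitness function would be replaced or supplemented by the composer's own ratings of candidate transitions.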
Abstract:
In 2009 the Australian Federal and State governments are expected to have spent some AU$30 billion procuring infrastructure projects. For governments with finite resources but many competing projects, formal capital rationing is achieved through the use of Business Cases. These Business Cases articulate the merits of investing in particular projects along with the estimated costs and risks of each project. Despite the sheer size and impact of infrastructure projects, there is very little research in Australia, or internationally, on the performance of these projects against Business Case assumptions when the decision to invest is made. If such assumptions (particularly cost assumptions) are not met, then there is serious potential for the misallocation of Australia’s finite financial resources. This research addresses this important gap in the literature by using combined quantitative and qualitative research methods to examine the actual performance of 14 major Australian government infrastructure projects. The research findings are controversial as they challenge widely held perceptions of the effectiveness of certain infrastructure delivery practices. Despite this controversy, the research has had a significant impact on the field and has been described as ‘outstanding’ and ‘definitive’ (Alliancing Association of Australasia), "one of the first of its kind" (Infrastructure Partnerships of Australia) and "making a critical difference to infrastructure procurement" (Victorian Department of Treasury). The implications for practice of the research have been profound and included the withdrawal by Government of various infrastructure procurement guidelines, the formulation of new infrastructure policies by several state governments and the preparation of new infrastructure guidelines that substantially reflect the research findings. Building on the practical research, a more rigorous academic investigation focussed on the comparative cost uplift of various project delivery strategies was submitted to Australia’s premier academic management conference, the Australian and New Zealand Academy of Management (ANZAM) Annual Conference. This paper has been accepted for the 2010 ANZAM National Conference following a process of double-blind peer review, with reviewers rating the paper’s overall contribution as "Excellent" and "Good".
Abstract:
The availability of bridges is crucial to people’s daily life and the national economy. Bridge health prediction plays an important role in bridge management because maintenance optimization is implemented based on prediction results of bridge deterioration. Conventional bridge deterioration models can be categorised into two groups, namely condition state models and structural reliability models. An optimal maintenance strategy should be based on both the condition states and the structural reliability of a bridge. However, none of the existing deterioration models considers both condition states and structural reliability. This study thus proposes a Dynamic Object Oriented Bayesian Network (DOOBN) based method to overcome the limitations of the existing methods. This methodology can act as a flexible unifying tool, which can integrate a variety of approaches and information for better bridge deterioration prediction. Two demonstrative case studies are conducted to preliminarily justify the feasibility of the methodology.
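A DOOBN integrates condition-state and reliability information within one Bayesian network. As a much simplified stand-in for that idea, the sketch below propagates a discrete condition-state belief through an assumed annual Markov transition matrix and maps it onto an assumed per-state reliability index; all numbers are illustrative, not values from the study.

```python
import numpy as np

# Hypothetical annual transition matrix over four condition states (1 = good ... 4 = poor);
# the values are illustrative assumptions, not figures from the study.
P = np.array([[0.90, 0.08, 0.02, 0.00],
              [0.00, 0.85, 0.12, 0.03],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])

# Assumed reliability index associated with each condition state.
beta_per_state = np.array([4.5, 3.8, 3.0, 2.2])

state = np.array([1.0, 0.0, 0.0, 0.0])    # the bridge starts in state 1
for year in range(1, 31):
    state = state @ P                      # propagate the condition-state belief one year
    expected_beta = state @ beta_per_state # map belief onto an expected reliability index
    if year % 10 == 0:
        print(f"year {year:2d}: P(state) = {np.round(state, 3)}, "
              f"expected reliability index = {expected_beta:.2f}")
```

A full DOOBN would additionally condition these quantities on inspection data and environmental factors within the network structure.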
Abstract:
The case study 3 team viewed the mitigation of noise and air pollution generated in the transport corridor that borders the study site as a paramount driver of the urban design solution. The following key urban planning strategies were adopted:
* Spatial separation from the transport corridor pollution source. A linear green zone and environmental buffer was proposed adjacent to the transport corridor to mitigate the environmental noise and air quality impacts of the corridor, and to offer residents opportunities for recreation.
* Open space forming the key structural principle for neighbourhood design. A significant open space system underpins the planning and manages surface water flows.
* Urban blocks running on an east-west axis. The open space rationale emphasises an east-west pattern for local streets. Street alignment allows for predominantly north-south facing terrace-type buildings which both face the street and overlook the green courtyard formed by the perimeter buildings.
The results of the ESD assessment of the typologies conclude that the design will achieve good outcomes through:
* Lower than average construction costs compared with other similar projects.
* Thermal comfort: a good balance between daylight access and solar gains is achieved.
* An energy rating of 8.5 stars for the units.
Abstract:
While it is generally accepted in the learning and teaching literature that assessment is the single biggest influence on how students approach their learning, assessment methods within higher education are generally conservative and inflexible. Constrained by policy and accreditation requirements and the need for the explicit articulation of assessment standards for public accountability purposes, assessment tasks can fail to engage students or reflect the tasks students will face in the world of practice. Innovative assessment design can simultaneously deliver program objectives and active learning through a knowledge transfer process which increases student participation. This social constructivist view highlights that acquiring an understanding of assessment processes, criteria and standards needs active student participation. Within this context, a peer-assessed, weekly assessment task was introduced in the first “serious” accounting subject offered as part of an undergraduate degree. The positive outcomes of this assessment innovation were that student failure rates declined by 15%, tutorial participation increased fourfold, tutorial engagement increased sixfold and there was a 100% approval rating for the retention of the assessment task. In contributing to the core conference theme of “seismic” shifts within higher education, in stark contrast to the positive student response, staff-related issues of assessment conservatism and the necessity of meeting increasing research commitments threatened the assessment task’s survival. These forces opposing change have the potential to weaken the ability of higher education assessment arrangements to adequately serve either a new generation of students or the sector's community stakeholders.
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) in an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among those reactions that necessitate a large amount of instantaneous power. In addition to the decomposition process, there have recently been requests for pulsed power in other areas, such as the combination of molecules (i.e. fusion, material joining), radiation sources (i.e. electron beams, laser and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gas, fluid and solid), in some cases to form the plasma and consequently accelerate the associated process. Due to this fast growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in different areas that need a flexible pulse modulator that can vary pulse features in order to investigate the influence of these variations on the application. In addition, there is the need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage. Different pulse modulators, which utilise different accumulation techniques including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as the spark gap and hydrogen thyratron) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks such as low efficiency, reliability and repetition rate, and a short life span. Being bulky, heavy and expensive are further disadvantages of these devices. Recently developed solid-state switching technology is an appropriate substitute for these switching devices due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application’s requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation.
Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been considered the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives:
• To innovate and develop novel solid-state based topologies for pulsed power generation
• To improve available technologies that have the potential to accommodate solid-state technology by revising, reconfiguring and adjusting their structure and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study suggests that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential to be utilised in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. The evidence acquired from this examination therefore rules out transient effects on pulse rise time. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle. To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop supplying the load at any stage (and consequently to save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. Component selection and energy-exchange calculations are accomplished with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also executed on the implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (i.e. insulated gate bipolar transistors) instead of magnetic/gas switching devices. Resistive insulators previously used in their structures are substituted by diode rectifiers to adjust MGs for proper voltage sharing.
However, despite utilising solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Considering a number of charging techniques, the resonant phenomenon is adopted in a proposal to charge the capacitors. In addition to charging the capacitors at twice the input voltage, triggering the switches at the moment at which the current conducted through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors. According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. The insulation of the input side from the load side is provided in this topology by disconnecting the load from the current source during the supply process. Furthermore, the number of fast switching devices required in both designs is reduced to half the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, which leads to a reduction in conduction losses. Associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all relevant analysis and approaches for these topologies.
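For orientation, the sketch below computes the idealised output of a conventional n-stage Marx generator: capacitors charged in parallel (here to roughly twice the input voltage, as resonant charging allows) and then discharged in series into a resistive load. It is a first-order textbook model with assumed component values, not one of the thesis's proposed topologies, and it ignores stray inductance and real switch behaviour.

```python
import numpy as np

# Idealised n-stage Marx generator; all values are illustrative assumptions.
n_stages = 10
C_stage = 100e-9          # F, per-stage capacitance
V_charge = 2 * 1_000.0    # V; resonant charging can reach roughly twice the input voltage
R_load = 50.0             # ohm, resistive load

V_peak = n_stages * V_charge      # erected voltage when all stages stack in series
C_erected = C_stage / n_stages    # series combination of the stage capacitors
tau = R_load * C_erected          # discharge time constant into the load

t = np.linspace(0, 5 * tau, 500)
v_out = V_peak * np.exp(-t / tau) # idealised pulse decay (ideal switches, no stray L)

print(f"peak output: {V_peak/1e3:.1f} kV, time constant: {tau*1e9:.0f} ns")
print(f"output after ~one time constant: {v_out[100]/1e3:.1f} kV")
```

The proposed topologies in Chapters 4, 5 and 7 modify how the stages are charged, commutated and discharged, which this simple stack-and-decay model does not capture.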
Abstract:
The primary objective of the experiments reported here was to demonstrate the effects of opening up the design envelope for auditory alarms on the ability of people to learn the meanings of a set of alarms. Two sets of alarms were tested, one already extant and one newly-designed set for the same set of functions, designed according to a rationale set out by the authors aimed at increasing the heterogeneity of the alarm set and incorporating some well-established principles of alarm design. For both sets of alarms, a similarity-rating experiment was followed by a learning experiment. The results showed that the newly-designed set was judged to be more internally dissimilar, and easier to learn, than the extant set. The design rationale outlined in the paper is useful for design purposes in a variety of practical domains and shows how alarm designers, even at a relatively late stage in the design process, can improve the efficacy of an alarm set.
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely, finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered. The first source of data is on symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real time neural activity in the brain.
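As a small illustration of a Bayesian mixture with an inferred number of clusters and per-patient membership uncertainty, the sketch below fits a truncated Dirichlet-process Gaussian mixture to synthetic data using scikit-learn's variational implementation. This is only a stand-in for the fully Bayesian models developed in the thesis, and the data are simulated, not UPDRS scores or DBS recordings.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for patient measurements (e.g. a few rating-scale items); not real data.
X = np.vstack([rng.normal(0, 1, (100, 4)),
               rng.normal(3, 1, (100, 4))])

# Truncated Dirichlet-process mixture: surplus components receive negligible weight,
# so the effective number of clusters is inferred rather than fixed in advance.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

print("component weights:", np.round(dpm.weights_, 3))
# Per-patient membership probabilities quantify uncertainty in cluster assignment.
print("first patient's membership probabilities:", np.round(dpm.predict_proba(X[:1])[0], 3))
```

A fully Bayesian treatment (e.g. MCMC over the mixture parameters) would additionally propagate posterior uncertainty in the component parameters themselves, which the variational point estimates here only approximate.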
Abstract:
The existing Collaborative Filtering (CF) technique that has been widely applied by e-commerce sites requires a large amount of ratings data to make meaningful recommendations. It is not directly applicable for recommending products that are not frequently purchased by users, such as cars and houses, as it is difficult to collect rating data for such products from the users. Many of the e-commerce sites for infrequently purchased products are still using basic search-based techniques whereby the products that match the attributes given in the target user's query are retrieved and recommended to the user. However, search-based recommenders cannot provide personalized recommendations. For different users, the recommendations will be the same if they provide the same query, regardless of any difference in their online navigation behaviour. This paper proposes to integrate collaborative filtering and search-based techniques to provide personalized recommendations for infrequently purchased products. Two different techniques are proposed, namely CFRRobin and CFAg Query. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products by merging and ranking the returned products using the Round Robin method. The CFAg Query technique uses the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques CFRRobin and CFAg Query perform better than the standard Collaborative Filtering (CF) and the Basic Search (BS) approaches, which are widely applied by current e-commerce applications. The CFRRobin and CFAg Query approaches also outperform the existing query expansion (QE) technique that was proposed for recommending infrequently purchased products.
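Based on the abstract's description of CFRRobin, a plausible sketch of the retrieve-and-merge step is given below: each neighbour's items of interest form a query, and the ranked result lists are interleaved round-robin with duplicates removed. The search function, toy data and exact merging details are assumptions; the paper's implementation may differ.

```python
from itertools import zip_longest

def cfrrobin_recommend(neighbour_queries, search, top_n=10):
    """Round-robin merge of result lists retrieved with each neighbour's items as the query.

    neighbour_queries: one query per neighbour (e.g. attribute sets of products the
    neighbour showed interest in); search() is any attribute-based retrieval function
    returning a ranked list of product ids.
    """
    result_lists = [search(q) for q in neighbour_queries]
    merged, seen = [], set()
    # Take the 1st result of each list, then the 2nd of each, and so on.
    for tier in zip_longest(*result_lists):
        for product in tier:
            if product is not None and product not in seen:
                seen.add(product)
                merged.append(product)
            if len(merged) == top_n:
                return merged
    return merged

# Toy usage with a stubbed search engine.
catalogue = {"suv": ["car1", "car2", "car3"], "sedan": ["car4", "car2", "car5"]}
recs = cfrrobin_recommend([{"body": "suv"}, {"body": "sedan"}],
                          search=lambda q: catalogue[q["body"]], top_n=4)
print(recs)   # ['car1', 'car4', 'car2', 'car3']: interleaved, duplicates removed
```

CFAg Query would instead aggregate the neighbours' items into a single query before a single retrieval pass.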
Abstract:
In response to the need to leverage private finance and the lack of competition in some parts of the Australian public sector infrastructure market, especially in the very large economic infrastructure sector procured using Public Private Partnerships, the Australian Federal government has demonstrated its desire to attract new sources of in-bound foreign direct investment (FDI). This paper aims to report on progress towards an investigation into the determinants of multinational contractors’ willingness to bid for Australian public sector major infrastructure projects. This research deploys Dunning’s eclectic theory for the first time in terms of in-bound FDI by multinational contractors into Australia. Elsewhere, the authors have developed Dunning’s principal hypothesis to suit the context of this research and to address a weakness arising in this hypothesis, which is based on a nominal approach to the factors in Dunning's eclectic framework and which fails to speak to the relative explanatory power of these factors. In this paper, a first stage test of the authors' development of Dunning's hypothesis is presented by way of an initial review of secondary data vis-à-vis the selected sector (roads and bridges) in Australia (as the host location) and with respect to four selected home countries (China; Japan; Spain; and US). In doing so, the next stage in the research method concerning sampling and case studies is also further developed and described in this paper. In conclusion, the extent to which the initial review of secondary data suggests the relative importance of the factors in the eclectic framework is considered. It is noted that more robust conclusions are expected following the future planned stages of the research, including primary data from the case studies and a global survey of the world’s largest contractors, which is briefly previewed. Finally, and beyond the theoretical contributions expected from the overall approach taken to developing and testing Dunning’s framework, other expected contributions concerning research method and practical implications are mentioned.
Abstract:
In this paper, we describe the main processes and operations in mining industries and present a comprehensive survey of operations research methodologies that have been applied over the last several decades. The literature review is classified into four main categories: mine design; mine production; mine transportation; and mine evaluation. Mine design models are further separated according to two main mining methods: open-pit and underground. Moreover, mine production models are subcategorised into two groups: ore mining and coal mining. Mine transportation models are further partitioned in accordance with fleet management, truck haulage and train scheduling. Mine evaluation models are further subdivided into four clusters in terms of mining method selection, quality control, financial risks and environmental protection. The main characteristics of four Australian commercial mining software packages are addressed and compared. This paper bridges the gaps in the literature and motivates researchers to develop more applicable, realistic and comprehensive operations research models and solution techniques that are directly linked with mining industries.
Abstract:
At the international level, the higher education sector is currently being subjected to increased calls for public accountability and the current move by the OECD to rank universities based on the quality of their teaching and learning outcomes. At the national level, Australian universities and their teaching staff face numerous challenges including financial restrictions, increasing student numbers and the reality of an increasingly diverse student population. The Australian higher education response to these competing policy and accreditation demands focuses on precise, explicit systems and procedures which are inflexible and conservative and which ignore the fact that assessment is the single biggest influence on how students approach their learning. By seriously neglecting the quality of student learning outcomes, assessment tasks often fail to engage students or reflect the tasks students will face in the world of practice. Innovative assessment design, which includes new paradigms of student engagement and learning and pedagogically based technologies, has the capacity to provide some measure of relief from these internal and external tensions by significantly enhancing the learning experience for an increasingly time-poor population of students. That is, the assessment process has the ability to deliver program objectives and active learning through a knowledge transfer process which increases student participation and engagement. This social constructivist view highlights the importance of peer review in assisting students to participate and collaborate as equal members of a community of scholars with both their peers and academic staff members. As a result of increasing the student’s desire to learn, peer review leads to more confident, independent and reflective learners who also become more skilled at making independent judgements of their own and others' work. Within this context, in Case Study One of this project, a summative, peer-assessed, weekly assessment task was introduced in the first “serious” accounting subject offered as part of an undergraduate degree. The positive outcomes achieved included: student failure rates declined by 15%; tutorial participation increased fourfold; tutorial engagement increased sixfold; and there was a 100% student-based approval rating for the retention of the assessment task. However, in stark contrast to the positive student response, staff issues related to the loss of research time associated with the administration of the peer-review process threatened its survival. This paper contributes to the core conference topics of new trends and experiences in undergraduate assessment education, and in terms of innovative, on-line learning and teaching practices, by elaborating on the Case Study Two “solution” generated for this dilemma. At the heart of the resolution is an e-Learning, peer-review process conducted in conjunction with the University of Melbourne which seeks both to create a virtual sense of belonging and to efficiently and effectively meet academic learning objectives with minimum staff involvement. In outlining the significant level of success achieved, student-based qualitative and quantitative data will be highlighted, along with staff views, in a comparative analysis of the advantages and disadvantages to both students and staff of the staff-led peer review process versus its on-line counterpart.
Abstract:
Recently an innovative composite panel system was developed, where a thin insulation layer was used externally between two plasterboards to improve the fire performance of light gauge cold-formed steel frame walls. In this research, finite-element thermal models of both the traditional light gauge cold-formed steel frame wall panels with cavity insulation and the new light gauge cold-formed steel frame composite wall panels were developed to simulate their thermal behaviour under standard and realistic fire conditions. Suitable apparent thermal properties of gypsum plasterboard, insulation materials and steel were proposed and used. The developed models were then validated by comparing their results with available fire test results. This article presents the details of the developed finite-element models of small-scale non-load-bearing light gauge cold-formed steel frame wall panels and the results of the thermal analysis. It has been shown that accurate finite-element models can be used to simulate the thermal behaviour of small-scale light gauge cold-formed steel frame walls with varying configurations of insulations and plasterboards. The numerical results show that the use of cavity insulation was detrimental to the fire rating of light gauge cold-formed steel frame walls, while the use of external insulation offered superior thermal protection to them. The effects of real fire conditions are also presented.
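As a rough indication of how such thermal behaviour can be simulated, the sketch below runs a one-dimensional explicit finite-difference model of a plasterboard-insulation-plasterboard section exposed to the ISO 834 standard fire curve. It is far simpler than the article's validated finite-element models: the material properties are indicative constants rather than the temperature-dependent apparent properties the article proposes, and the boundary conditions (fixed fire-side temperature, adiabatic unexposed face) are crude assumptions.

```python
import numpy as np

def iso834(t_min):
    """Standard fire curve: gas temperature in deg C at time t (minutes)."""
    return 20.0 + 345.0 * np.log10(8.0 * t_min + 1.0)

# Plasterboard / insulation / plasterboard section on a 1 mm grid.
dx, dt, sim_time = 0.001, 0.25, 60 * 60            # m, s, s
layers = [(16, 0.25, 800, 950),                     # (thickness mm, k, rho, cp) plasterboard
          (15, 0.04, 100, 840),                     # insulation layer
          (16, 0.25, 800, 950)]                     # plasterboard
k, rho_cp = [], []
for n_mm, k_l, rho, cp in layers:
    k += [k_l] * n_mm
    rho_cp += [rho * cp] * n_mm
k, rho_cp = np.array(k), np.array(rho_cp)

T = np.full(k.size, 20.0)                           # initial temperature, deg C
for step in range(int(sim_time / dt)):
    T[0] = iso834((step * dt) / 60.0)               # crude fixed-temperature fire-side boundary
    k_face = 0.5 * (k[:-1] + k[1:])                 # interface conductivities
    flux = k_face * (T[1:] - T[:-1]) / dx           # conduction between neighbouring nodes
    T[1:-1] += dt / (rho_cp[1:-1] * dx) * (flux[1:] - flux[:-1])
    T[-1] += dt / (rho_cp[-1] * dx) * (-flux[-1])   # adiabatic unexposed face

print(f"unexposed-face temperature after 60 min: {T[-1]:.0f} deg C")
```

A model of the kind developed in the article would additionally represent the steel studs, cavities, realistic fire curves and heat-transfer boundary conditions, and would update the material properties with temperature.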
Abstract:
Recommender systems are one of the recent inventions to deal with the ever growing information overload in relation to the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users known as the neighbours, generated from a database made up of the preferences of past users. With sufficient background information of item ratings, its performance is promising enough, but research shows that it performs very poorly in a cold start situation where there is not enough previous rating data. As an alternative to ratings, trust between the users could be used to choose the neighbours for recommendation making. Better recommendations can be achieved using an inferred trust network which mimics the real world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests. This is often assumed in the recommender system. By conducting a survey, we confirm that interest similarity has a positive relationship with trust, and this can be used to generate a trust network for recommendation. In this research, we also propose a new method called SimTrust for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify the interest similarity, we use users' personalised tagging information. However, we are interested in what resources the user chooses to tag, rather than the text of the tag applied. The commonalities of the resources being tagged by the users can be used to form the neighbours used in the automated recommender system. Our experimental results show that our proposed tag-similarity based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
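The SimTrust idea of measuring interest similarity by which resources users tag (rather than the tag text) might be sketched as below, using a simple Jaccard overlap of tagged resources to rank and select neighbours. The similarity measure, data and function names are illustrative assumptions; the thesis's actual formulation may differ.

```python
def resource_similarity(tags_by_user, u, v):
    """Jaccard overlap of the *resources* two users tagged (the tag text is ignored)."""
    a, b = set(tags_by_user[u]), set(tags_by_user[v])
    return len(a & b) / len(a | b) if a | b else 0.0

def neighbours(tags_by_user, target, k=2):
    """Rank other users by tagged-resource overlap and keep the top k as neighbours."""
    others = (u for u in tags_by_user if u != target)
    scored = sorted(others, key=lambda u: resource_similarity(tags_by_user, target, u),
                    reverse=True)
    return scored[:k]

# Toy data: each user maps to the set of resource ids they have tagged.
tags_by_user = {
    "alice": {"r1", "r2", "r3"},
    "bob":   {"r2", "r3", "r4"},
    "carol": {"r7", "r8"},
    "dave":  {"r1", "r3"},
}
print(neighbours(tags_by_user, "alice"))   # ['dave', 'bob'] share the most tagged resources
```

In a full system these tag-derived neighbours would then supply the preferences used to generate recommendations in place of a rating-based or explicit-trust neighbourhood.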