919 results for Superiority and Inferiority Multi-criteria Ranking (SIR) Method


Relevance:

100.00%

Publisher:

Abstract:

Dwell times at stations and inter-station run times are the two major operational parameters used to maintain the train schedule in railway service. Current dwell-time and run-time control practices are optimal only with respect to certain nominal traffic conditions, not necessarily the current service demand, so the advantages of dwell-time and run-time control on trains are not fully realised. The application of a dynamic programming approach, with the aid of an event-based model, to devise an optimal set of dwell times and run times for trains under given operational constraints at a regional level is presented. Since train operation is interactive and multi-attribute, dwell-time and run-time coordination among trains is a multi-dimensional problem, and the computational demand of devising train instructions, a prime concern in real-time applications, is excessively high. To reduce this computational demand in the provision of appropriate dwell times and run times for trains, a DC railway line is divided into a number of regions, each controlled by a dwell-time and run-time controller. The performance and feasibility of the controller in formulating dwell-time and run-time solutions for real-time applications are demonstrated through simulations.
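The dynamic-programming idea in this abstract can be sketched in miniature. The station targets, run times and admissible dwell times below are invented for illustration; the paper's event-based model and regional controller are far richer.

```python
# Illustrative sketch (not the paper's implementation): a dynamic-programming
# pass that picks a dwell time at each station to minimise total squared
# deviation from a target timetable. All numbers are hypothetical.

TARGET_ARRIVALS = [0, 150, 310, 480]   # target arrival times (s) at stations 0..3
RUN_TIMES = [140, 145, 150]            # fixed inter-station run times (s)
DWELL_OPTIONS = [20, 30, 40]           # admissible dwell times (s)

def optimal_dwells(depart_time=0):
    """Return (total cost, dwell plan) minimising squared schedule deviation."""
    memo = {}  # (station, clock) -> (cost, plan)

    def solve(station, clock):
        if station == len(RUN_TIMES):
            return 0.0, []
        key = (station, clock)
        if key in memo:
            return memo[key]
        result = None
        for dwell in DWELL_OPTIONS:
            arrive_next = clock + dwell + RUN_TIMES[station]
            dev = (arrive_next - TARGET_ARRIVALS[station + 1]) ** 2
            tail_cost, tail_plan = solve(station + 1, arrive_next)
            cand = (dev + tail_cost, [dwell] + tail_plan)
            if result is None or cand[0] < result[0]:
                result = cand
        memo[key] = result
        return result

    return solve(0, depart_time)
```

With these toy numbers the cheapest plan is the shortest dwell at every station; in a real network the event-based interactions between trains make the trade-off much less trivial.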

Relevance:

100.00%

Publisher:

Abstract:

Background / context: The ALTC WIL Scoping Study identified a need to develop innovative assessment methods for work integrated learning (WIL) that encourage reflection and integration of theory and practice within the constraints that result from the level of engagement of workplace supervisors and the ability of academic supervisors to become involved in the workplace. Aims: The aim of this paper is to examine how poster presentations can be used to authentically assess student learning during WIL. Method / Approach: The paper uses a case study approach to evaluate the use of poster presentations for assessment in two internship units at the Queensland University of Technology. The first is a unit in the Faculty of Business where students majoring in advertising, marketing and public relations are placed in a variety of organisations. The second unit is a law unit where students complete placements in government legal offices. Results / Discussion: While poster presentations are commonly used for assessment in the sciences, they are an innovative approach to assessment in the humanities. This paper argues that posters are one way that universities can overcome the substantial challenges of assessing work integrated learning. The two units involved in the case study adopt different approaches to the poster assessment; the Business unit is non-graded and the poster assessment task requires students to reflect on their learning during the internship. The Law unit is graded and requires students to present on a research topic that relates to their internship. In both units the posters were presented during a poster showcase which was attended by students, workplace supervisors and members of faculty. The paper evaluates the benefits of poster presentations for students, workplace supervisors and faculty and proposes some criteria for poster assessment in WIL. 
Conclusions / Implications: The paper concludes that posters can effectively and authentically assess various learning outcomes in WIL in different disciplines, while at the same time offering a means to engage workplace supervisors with academic staff and with the other students and supervisors participating in the unit. Posters can demonstrate reflection in learning and are an excellent vehicle for experiential learning and authentic assessment. Keywords: Work integrated learning, assessment, poster presentations, industry engagement.

Relevance:

100.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
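A minimal sketch of the kind of specification examined above (not the study's actual Georgia models, priors, or MCMC machinery) is a negative binomial log-likelihood whose dispersion parameter is itself a log-linear function of traffic flow. All data and coefficients below are invented for illustration.

```python
# Negative binomial crash-count log-likelihood with a covariate-dependent
# dispersion parameter. mu and alpha are both log-linear in ln(AADT);
# a fixed-dispersion model is the special case gamma1 = 0.
import math

def nb_loglik(y, aadt, beta0, beta1, gamma0, gamma1):
    """y: observed crash counts; aadt: traffic flows (one per site)."""
    ll = 0.0
    for yi, flow in zip(y, aadt):
        x = math.log(flow)
        mu = math.exp(beta0 + beta1 * x)        # expected crash count
        alpha = math.exp(gamma0 + gamma1 * x)   # site-specific dispersion
        r = 1.0 / alpha                          # NB "size" parameter
        ll += (math.lgamma(yi + r) - math.lgamma(r) - math.lgamma(yi + 1)
               + r * math.log(r / (r + mu)) + yi * math.log(mu / (r + mu)))
    return ll
```

In a Bayesian treatment this likelihood would be combined with non-informative priors and sampled via MCMC; testing whether gamma1 differs from zero is the "extra-variation as a function of covariates" question the study raises.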

Relevance:

100.00%

Publisher:

Abstract:

Purpose – The growing debate in the literature indicates that implementing Knowledge Based Urban Development (KBUD) approaches in the urban development process is neither simple nor quick. Many research efforts have therefore been put into developing appropriate KBUD frameworks and practical KBUD approaches, but this has led to a fragmented and incoherent methodological landscape. This paper outlines and compares several of the most popular KBUD frameworks selected from the literature. It aims to identify key and common features in the effort to achieve a unified KBUD framework. Design/methodology/approach – This paper reviews, examines and identifies popular KBUD frameworks discussed in the literature from urban planners' viewpoint. It employs content analysis, a research technique used to determine the presence of certain words or concepts within texts or sets of texts. Originality/value – The paper reports on the key and common features of several of the most popular KBUD frameworks. The synthesis of the results is based on the perspective of urban planners. The findings, which encompass a new KBUD framework incorporating the key and common features, will be valuable in setting a platform for a unified KBUD method. Practical implications – The discussion and results presented in this paper should be significant to researchers and practitioners and to any cities and countries aiming for KBUD. Keywords – Knowledge based urban development, Knowledge based urban development framework, Urban development and knowledge economy
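The content-analysis step can be illustrated with a toy sketch. The feature keywords and framework descriptions below are invented; real content analysis codes concepts far more carefully than simple substring matching.

```python
# Toy content analysis: count in how many framework descriptions each
# candidate feature appears, to surface "key and common" features.

FEATURES = ["knowledge worker", "infrastructure", "governance", "quality of life"]

DOCUMENTS = [
    "framework A stresses governance, infrastructure and quality of life",
    "framework B centres on the knowledge worker and governance",
    "framework C links infrastructure with governance",
]

def feature_counts(documents=DOCUMENTS, features=FEATURES):
    """Map each feature to the number of documents mentioning it."""
    return {f: sum(f in doc for doc in documents) for f in features}
```

Features present in every (or nearly every) framework would be candidates for the "key and common" core of a unified KBUD framework.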

Relevance:

100.00%

Publisher:

Abstract:

Several key issues need to be resolved before an efficient and reproducible Agrobacterium-mediated sugarcane transformation method can be developed for a wider range of sugarcane cultivars. These include loss of morphogenetic potential in sugarcane cells after Agrobacterium-mediated transformation, effect of exposure to abiotic stresses during in vitro selection, and most importantly the hypersensitive cell death response of sugarcane (and other nonhost plants) to Agrobacterium tumefaciens. Eight sugarcane cultivars (Q117, Q151, Q177, Q200, Q208, KQ228, QS94-2329, and QS94-2174) were evaluated for loss of morphogenetic potential in response to the age of the culture, exposure to Agrobacterium strains, and exposure to abiotic stresses during selection. Corresponding changes in the polyamine profiles of these cultures were also assessed. Strategies were then designed to minimize the negative effects of these factors on the cell survival and callus proliferation following Agrobacterium-mediated transformation. Some of these strategies, including the use of cell death protector genes and regulation of intracellular polyamine levels, will be discussed.

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the genealogies of bio-power that cut across punitive state interventions aimed at regulating or normalising several distinctive ‘problem’ or ‘suspect’ deviant populations, such as state wards, non-lawful citizens and Indigenous youth. It begins by making some general comments about the theoretical approach to bio-power taken in this paper. It then outlines the distinctive features of bio-power in Australia and how these intersected with the emergence of penal welfarism to govern the unruly, unchaste, unlawful, and the primitive. The paper draws on three examples to illustrate the argument – the gargantuan criminalisation rates of Aboriginal youth, the history of incarcerating state wards in state institutions, and the mandatory detention of unlawful non-citizens and their children. The construction of Indigenous people as a dangerous presence, alongside the construction of the unruly, neglected children of the colony (the larrikin descendants of convicts) as necessitating special regimes of internal controls and institutions, found a counterpart in the racial and other exclusionary criteria operating through immigration controls for much of the twentieth century. In each case the problem child or population was expelled from the social body through forms of bio-power, rationalised as strengthening, protecting or purifying the Australian population.

Relevance:

100.00%

Publisher:

Abstract:

Reliable infrastructure assets impact significantly on quality of life and provide a stable foundation for economic growth and competitiveness. Decisions about the way assets are managed are of utmost importance in achieving this. Timely renewal of infrastructure assets supports reliability and maximum utilisation of infrastructure and enables business and community to grow and prosper. This research initially examined a framework for asset management decisions and then focused on asset renewal optimisation and renewal engineering optimisation in depth. This study had four primary objectives. The first was to develop a new Asset Management Decision Framework (AMDF) for identifying and classifying asset management decisions. The AMDF was developed by applying multi-criteria decision theory, classical management theory and life cycle management. The AMDF is an original and innovative contribution to asset management in that:
· it is the first framework to provide guidance for developing asset management decision criteria based on fundamental business objectives;
· it is the first framework to provide a decision context identification and analysis process for asset management decisions; and
· it is the only comprehensive listing of asset management decision types developed from first principles.
The second objective of this research was to develop a novel multi-attribute Asset Renewal Decision Model (ARDM) that takes account of financial, customer service, health and safety, environmental and socio-economic objectives. The unique feature of this ARDM is that it is the only model to optimise timing of asset renewal with respect to fundamental business objectives. The third objective of this research was to develop a novel Renewal Engineering Decision Model (REDM) that uses multiple criteria to determine the optimal timing for renewal engineering.
The unique features of this model are that:
· it is a novel extension of existing real options valuation models in that it uses overall utility rather than the present value of cash flows to model engineering value; and
· it is the only REDM that optimises the timing of renewal engineering with respect to fundamental business objectives.
The final objective was to develop and validate an Asset Renewal Engineering Philosophy (AREP) consisting of three principles of asset renewal engineering. The principles were validated using a novel application of real options theory. The AREP is the only renewal engineering philosophy in existence. The original contributions of this research are expected to enrich the body of knowledge in asset management by effectively addressing the need for an asset management decision framework, asset renewal and renewal engineering optimisation based on fundamental business objectives, and a novel renewal engineering philosophy.
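The multi-attribute flavour of such renewal decision models can be illustrated with a toy weighted-utility sketch. The criteria, weights, scores and candidate years below are all invented; the thesis models are built on fundamental business objectives and real options theory, not this simple additive form.

```python
# Toy multi-attribute renewal timing: score candidate renewal years by a
# weighted sum of normalised criterion scores and pick the best year.

WEIGHTS = {"financial": 0.4, "safety": 0.3, "service": 0.2, "environment": 0.1}

# criterion scores (0..1, higher is better) per candidate renewal year
CANDIDATES = {
    2026: {"financial": 0.55, "safety": 0.90, "service": 0.70, "environment": 0.60},
    2028: {"financial": 0.80, "safety": 0.75, "service": 0.65, "environment": 0.60},
    2030: {"financial": 0.90, "safety": 0.40, "service": 0.50, "environment": 0.55},
}

def best_renewal_year(candidates=CANDIDATES, weights=WEIGHTS):
    """Return the candidate year with the highest overall weighted utility."""
    def utility(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return max(candidates, key=lambda yr: utility(candidates[yr]))
```

Deferring renewal improves the financial score here but erodes safety and service, so the middle year wins; the point is that timing is chosen against several objectives at once rather than cost alone.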

Relevance:

100.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; because of random transmission delays and packet losses, the control performance of a control system may be badly deteriorated and the control system rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS, which requires communication and control methodologies to be co-designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these networks suitable for control systems in industrial environments. From the networking perspective, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the control perspective, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled with a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; the Markov chain model accurately captures the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using the hybrid NCS simulation environment.
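The general idea behind packet-based predictive compensation can be sketched as follows (an illustrative toy, not the thesis design): the controller transmits a short horizon of predicted commands with each packet, and the actuator falls back on stored predictions when packets are delayed or lost.

```python
# Actuator-side fallback logic: apply the freshest received command when a
# packet arrives, otherwise step through the stored prediction horizon and
# finally hold the last value. All behaviour here is a simplified sketch.

class Actuator:
    def __init__(self):
        self.buffer = []   # prediction horizon from the newest packet received
        self.offset = 0    # how far into that horizon we have advanced

    def receive(self, predictions):
        """Called when a control packet arrives (may not happen every step)."""
        self.buffer = list(predictions)
        self.offset = 0

    def step(self):
        """Return the command to apply in this control period."""
        if self.offset < len(self.buffer):
            u = self.buffer[self.offset]
        else:
            u = self.buffer[-1] if self.buffer else 0.0  # hold last value
        self.offset += 1
        return u
```

When the network behaves, the actuator always uses the first (freshest) prediction; during a burst of losses it degrades gracefully through the predicted sequence instead of freezing on a stale command.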

Relevance:

100.00%

Publisher:

Abstract:

In asset-intensive industries such as mining, oil & gas and utilities, most capital expenditure goes on acquiring engineering assets. The process of acquiring assets is called "procurement" or "acquisition". An asset procurement decision should take into consideration the installation, commissioning, operational, maintenance and disposal needs of an asset or spare. However, such cross-functional collaboration and communication does not appear to happen between the engineering, maintenance, warehousing and procurement functions in many asset-intensive industries. Acquisition planning and execution are two distinct parts of the asset acquisition process. Acquisition planning, or procurement planning, is responsible for determining exactly what is required to be purchased. It is important that an asset acquisition decision is the result of a cross-functional decision-making process; an acquisition decision leads to a formal purchase order. The most costly asset decisions occur even before assets are acquired. Therefore, an acquisition decision should be the outcome of an integrated planning and decision-making process. Asset-intensive organisations in Australia, both government and non-government, spent AUD 102.5 billion on asset acquisition in 2008–09. There is widespread evidence of many assets and spares not being used or utilised and in the end being written off. This clearly shows that many organisations end up buying assets or spares that were not required or did not conform to the needs of user functions. This is because strategic, software-driven procurement processes do not consider all the requirements from the various functions within the organisation that contribute to the operation and maintenance of the asset over its life cycle. Much research has been done on how to implement an effective procurement process, and numerous software solutions are available for executing one.
However, little research has been done on how to arrive at a cross-functional procurement planning process, and it is also important to link the procurement planning process to the procurement execution process. This research discusses the "Acquisition Engineering Model" (AEM) framework, which aims to assist acquisition decision making based on various criteria so as to satisfy cross-functional organisational requirements. The AEM considers inputs from corporate asset management strategy, production management, maintenance management, warehousing, finance and HSE. It is therefore essential that the multi-criteria-driven acquisition planning process is carried out and its output fed to the asset acquisition (procurement execution) process. An effective procurement decision-making framework for performing acquisition planning that considers various functional criteria is discussed in this paper.

Relevance:

100.00%

Publisher:

Abstract:

Organoclays were synthesised through ion exchange of a single surfactant for sodium ions and characterised by a range of methods including X-ray diffraction (XRD), BET surface analysis, X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FT-IR) and transmission electron microscopy (TEM). The change in surface properties of montmorillonite and of organoclays intercalated with the surfactant tetradecyltrimethylammonium bromide (TDTMA) was determined using XRD through the change in basal spacing and the expansion caused by adsorbed p-nitrophenol. The changes in interlayer spacing were observed by TEM. In addition, surface parameters such as specific surface area and pore volume were measured and calculated using the BET method, suggesting that the surfactant loading is highly important in determining the sorption mechanism onto organoclays. XPS provided the chemical composition of the montmorillonite and organoclays, and the high-resolution XPS spectra provided the chemical states of the prepared organoclays with binding energies. TGA and FT-IR were used to confirm the intercalated surfactant. The data collected from these techniques enable an understanding of the changes in structure and surface properties. This study is important for elucidating mechanisms for the adsorption of organic molecules, especially in contaminated environmental sites and polluted waters.

Relevance:

100.00%

Publisher:

Abstract:

Many traffic situations require drivers to cross or merge into a stream having higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrates that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on each driver's largest rejected gap and accepted gap. This method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is particularly analytically transparent. The method is considered not to bias estimation of the critical gap on account of very small or very large rejected gaps. However, it requires a sample large enough that there is reasonable representation of largest-rejected-gap/accepted-gap pairs within a fairly narrow highest-likelihood search band.
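The numerical search described above can be sketched as follows. The (largest rejected gap, accepted gap) pairs are invented, and the likelihood measure is reduced to a simple count of pairs consistent with each candidate critical gap; the actual method fine-tunes this with statistical analysis rather than a plain count.

```python
# Scan candidate critical-gap values t and count, for each, how many drivers
# satisfy largest_rejected < t <= accepted. The value consistent with the
# most drivers is taken as the most likely critical gap for the sample.

# (largest rejected gap, accepted gap) in seconds per driver -- invented data
PAIRS = [(2.8, 4.1), (3.5, 4.9), (3.0, 3.9), (4.2, 5.6), (3.3, 4.4)]

def most_likely_critical_gap(pairs=PAIRS, lo=2.0, hi=6.0, step=0.1):
    """Return (critical gap estimate, number of consistent drivers)."""
    best_t, best_count = lo, -1
    n_steps = int(round((hi - lo) / step))
    for i in range(n_steps + 1):
        t = lo + i * step           # integer stepping avoids float drift
        count = sum(1 for r, a in pairs if r < t <= a)
        if count > best_count:
            best_t, best_count = t, count
    return round(best_t, 1), best_count
```

The same scan is easy to reproduce in a spreadsheet, which is part of the transparency the discussion emphasises.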

Relevance:

100.00%

Publisher:

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify a procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore deliver superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as developing a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition) and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: When the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), then actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method that is developed in this paper combines a survey and case study approach. More specifically, data collection instruments for the surveys to collect data on actual procurement, actual competition and potential competition are outlined. 
Finally, plans for analysing these survey data are briefly outlined, along with the planned use of analytical pattern matching in deploying the new procurement model to develop the predicted (theoretical) procurement mode.

Relevance:

100.00%

Publisher:

Abstract:

This research investigates the home literacy education practices of Taiwanese families in Australia. As Taiwanese immigrants represent the largest "Chinese Australian" subgroup to have settled in the state of Queensland, teachers in this state often face the challenges of cultural differences between Australian schools and Taiwanese homes. Extensive work by previous researchers suggests that understanding the cultural and linguistic differences that influence how an immigrant child views and interacts with his/her environment is a possible way to minimise the challenges. Cultural practices start from infancy and at home. Therefore, this study is focused on young children who are around the age of four to five. It is a study that examines the form of literacy education that is enacted and valued by Taiwanese parents in Australia. Specifically, this study analyses "what literacy knowledge and skill is taught at home?", "how is it taught?" and "why is it taught?" The study is framed in Pierre Bourdieu's theory of social practice, which defines literacy from a sociological perspective. The aim is to understand the practices through which literacy is taught in the Taiwanese homes. Practices of literacy education are culturally embedded. Accordingly, the study shows the culturally specialised ways of learning and knowing that are enacted in the study homes. The study entailed four case studies that draw on: observations and recordings of the interactions between the study parent and child in their literacy events; interviews and dialogues with the parents involved; and a collection of photographs of the children's linguistic resources and artefacts. The methodological arguments and design addressed the complexity of home literacy education, where Taiwanese parents raise children in their own cultural ways while adapting to a new country in an immigrant context.
In other words, the methodology not only involves cultural practices, but also involves change and continuity in home literacy practices. Bernstein's theory of pedagogic discourse was used to undertake a detailed analysis of parents' selection and organisation of content for home literacy education, and the evaluative criteria they established for the selected literacy knowledge and skill. This analysis showed how parents selected and controlled the interactions in their child's literacy learning. Bernstein's theory of pedagogic discourse, specifically the concepts of "classification" and "framing", was also used to analyse change and continuity in home literacy practice. The design of this study aimed to gain an understanding of parents' literacy teaching in an immigrant context. The study found that parents tended to value and enact traditional practices, yet most of the parents were also searching for innovative ideas for their adult-structured learning. Home literacy education of Taiwanese families in this study was found to be complex, multi-faceted and influenced in an ongoing way by external factors. Implications for educators and recommendations for future study are provided. The findings of this study offer early childhood teachers in Australia understandings that will help them build knowledge about the home literacy education of Taiwanese Australian families.

Relevance:

100.00%

Publisher:

Abstract:

This study investigates the application of two advanced optimization methods to an active flow control (AFC) device shape design problem and compares their optimization efficiency in terms of computational cost and design quality. The first method uses a hierarchical asynchronous parallel multi-objective evolutionary algorithm, and the second uses an evolutionary algorithm hybridized with Nash-game strategies (Hybrid-Game). Both methods are based on a canonical evolution strategy and incorporate the concepts of parallel computing and asynchronous evaluation. One type of AFC device, the shock control bump (SCB), is considered and applied to a natural laminar flow (NLF) aerofoil. The SCB is used to decelerate supersonic flow on the suction/pressure side of a transonic aerofoil, which delays shock occurrence. Such an active flow control technique reduces total drag at transonic speeds, which is of special interest for commercial aircraft. Numerical results show that the Hybrid-Game approach helps an EA to accelerate the optimization process. From a practical point of view, applying an SCB on both the suction and pressure sides significantly reduces transonic total drag and improves the lift-to-drag (L/D) ratio compared to the baseline design.
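The canonical evolution strategy underlying both optimizers can be illustrated with a minimal (1+1)-ES on an invented two-parameter objective. This is not the SCB/CFD problem, and there is no Nash-game, multi-objective, or parallel asynchronous layer here; it only shows the mutate-evaluate-select loop both methods build on.

```python
# Minimal (1+1) evolution strategy: one parent, one Gaussian-mutated child
# per generation, and the child replaces the parent only if it is no worse.
import random

def toy_drag(x):
    # Invented smooth objective with its minimum at (0.3, 0.7);
    # stands in for an expensive CFD drag evaluation.
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def one_plus_one_es(f=toy_drag, steps=2000, sigma=0.1, seed=1):
    rng = random.Random(seed)
    parent = [rng.random(), rng.random()]
    best = f(parent)
    for _ in range(steps):
        child = [p + rng.gauss(0, sigma) for p in parent]
        fc = f(child)
        if fc <= best:            # elitist selection
            parent, best = child, fc
    return parent, best
```

In the study's setting each `f` evaluation is a CFD run, which is why asynchronous parallel evaluation and the Hybrid-Game acceleration matter so much.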