198 results for Production lot-scheduling models
Abstract:
A review of the literature related to issues involved in irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD have not been fully analysed; (2) the impacts of excessive pesticide use on farmers’ health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and the overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues using primary data, along with secondary data, from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in consumer surplus is Rs. 48,236 million, while the change in producer surplus is Rs. 14,274 million between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector’s ability to compete with the modern sector depends on productivity improvements, reducing production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used for agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. The value of the average loss in earnings per farmer for the ‘hospitalised’ sample is Rs. 475 per month, while it is approximately Rs. 345 per month for the ‘general’ farmers group during a typical cultivation season. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the ‘hospitalised’ and ‘general’ farmers’ samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent respectively for ‘hospitalised’ farmers, and 32, 55, 8 and 5 per cent respectively for ‘general’ farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides in the expectation of higher future returns. This has led to an increase in inefficiency in farming practices which is not recognised by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction levels. This is due to a failure to incorporate all production costs in the relevant models. It is only by incorporating quality changes alongside quantity deterioration that socially optimal extraction levels can be derived. Empirical results clearly show that the benefit per hectare per month, considering both the avoided cost of deepening agro-wells by five feet from the existing average and the avoided cost of maintaining the water salinity level at 1.8 mmhos/cm, is approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers and one poster presentation; five of the papers have been published and the other two are under review. This project is financially supported by the QUTPRA Grant. The twenty-first century started with the resurrection of lignocellulosic biomass as a potential substitute for petrochemicals. Petrochemicals, which enjoyed sustained growth during the past century, have begun to reach or have already reached their peak. The world energy situation is complicated by political uncertainty and by the environmental impact associated with petrochemical import and usage. In particular, greenhouse gases and toxic emissions produced by petrochemicals have been implicated as a significant cause of climate change. Lignocellulosic biomass (e.g. sugarcane biomass and bagasse), which potentially enjoys a more abundant, widely distributed and cost-effective resource base, can play an indispensable role in the paradigm transition from a fossil-based to a carbohydrate-based economy. Poly(3-hydroxybutyrate) (PHB) has attracted much commercial interest as a biodegradable plastic because some of its physical properties are similar to those of polypropylene (PP), even though the two polymers have quite different chemical structures. PHB exhibits a high degree of crystallinity, has a high melting point of approximately 180°C and, most importantly, unlike PP, PHB is rapidly biodegradable. Two major factors currently inhibit the widespread use of PHB: its high cost and its poor mechanical properties. The production costs of PHB are significantly higher than those of plastics produced from petrochemical resources (e.g. PP costs US$1 kg-1, whereas PHB costs US$8 kg-1), and its stiff and brittle nature makes processing difficult and impedes its ability to handle high impact. Lignin, together with cellulose and hemicellulose, is one of the three main components of every lignocellulosic biomass. It is a natural polymer occurring in the plant cell wall and, after cellulose, is the second most abundant polymer in nature. It is extracted mainly as a by-product in the pulp and paper industry. Although lignin is traditionally burnt in industry for energy, it has many value-adding properties. Lignin, which to date has not been fully exploited, is an amorphous polymer with hydrophobic behaviour. These characteristics make it a good candidate for blending with PHB; technically, blending can be a viable solution for price reduction and for enhancing the properties of the resulting material. Theoretically, lignin and PHB affect each other’s physicochemical properties when they become miscible in a composite. A comprehensive study of the structural, thermal, rheological and environmental properties of lignin/PHB blends, together with those of neat lignin and PHB, is the scope of this thesis. An introduction to this research, including a description of the research problem, a literature review and an account of the research progress linking the research papers, is presented in Chapter 1. In this research, lignin was obtained from bagasse through extraction with sodium hydroxide. A novel two-step pH precipitation procedure was used to recover soda lignin with a purity of 96.3 wt% from the black liquor (i.e. the spent sodium hydroxide solution).
The precipitation process is presented in Chapter 2. A sequential solvent extraction process was used to fractionate the soda lignin into three fractions. These fractions, together with the soda lignin, were characterised to determine elemental composition, purity, carbohydrate content, molecular weight and functional group content. The thermal properties of the lignins were also determined. The results are presented and discussed in Chapter 2. On the basis of the type and quantity of functional groups, attempts were made to identify potential applications for each of the individual lignins. As an addendum to the general section on the development of composite materials of lignin, which includes Chapters 1 and 2, studies on the kinetics of bagasse thermal degradation are presented in Appendix 1. The work showed that distinct stages of mass loss depend on residual sucrose. As the development of value-added products from lignin will improve the economics of cellulosic ethanol, a review on lignin applications, which includes lignin/PHB composites, is presented in Appendix 2. Chapters 3, 4 and 5 are dedicated to investigations of the properties of soda lignin/PHB composites. Chapter 3 reports on the thermal stability and miscibility of the blends. Although the addition of soda lignin shifts the onset of PHB decomposition to lower temperatures, the lignin/PHB blends are thermally more stable over a wider temperature range. The results from the thermal study also indicated that blends containing up to 40 wt% soda lignin were miscible. The Tg data for these blends fitted well to the Gordon-Taylor and Kwei models. Fourier transform infrared spectroscopy (FT-IR) evaluation showed that the miscibility of the blends was due to specific hydrogen bonding (and similar interactions) between reactive phenolic hydroxyl groups of lignin and the carbonyl group of PHB. The thermophysical and rheological properties of soda lignin/PHB blends are presented in Chapter 4. In this chapter, the kinetics of thermal degradation of the blends is studied using thermogravimetric analysis (TGA). This preliminary investigation is limited to the processing temperature range used in blend manufacturing. Of significance in the study is the drop in the apparent activation energy, Ea, from 112 kJ mol-1 for pure PHB to half that value for the blends. This means that the addition of lignin to PHB reduces the thermal stability of PHB, and that the comparatively reduced weight loss observed in the TGA data is associated with the slower rate of lignin degradation in the composite. The Tg of PHB, as well as its melting temperature, melting enthalpy and crystallinity, decreases with increasing lignin content. Results from the rheological investigation showed that at low lignin content (≤30 wt%), lignin acts as a plasticiser for PHB, while at high lignin content it acts as a filler. Chapter 5 is dedicated to the environmental study of soda lignin/PHB blends. The biodegradability of the lignin/PHB blends is compared to that of PHB using the standard soil burial test. To obtain acceptable biodegradation data, samples were buried for 12 months under controlled conditions. Gravimetric analysis, TGA, optical microscopy, scanning electron microscopy (SEM), differential scanning calorimetry (DSC), FT-IR and X-ray photoelectron spectroscopy (XPS) were used in the study.
The results clearly demonstrated that lignin retards the biodegradation of PHB, and that the miscible blends were more resistant to degradation than the immiscible blends. To understand the relationship between the structure of lignin and the properties of the blends, a methanol-soluble lignin, which contains about three times fewer phenolic hydroxyl groups than the parent soda lignin used in preparing the blends reported in Chapters 3 and 4, was blended with PHB and the properties of the blends were investigated. The results are reported in Chapter 6. At up to 40 wt% methanol-soluble lignin, the experimental data fitted the Gordon-Taylor and Kwei models, similar to the results obtained for the soda lignin-based blends. However, the values obtained for the interaction parameters of the methanol-soluble lignin blends were slightly lower than those of the soda lignin blends, indicating a weaker association between methanol-soluble lignin and PHB. FT-IR data confirmed that hydrogen bonding is the main interactive force between the reactive functional groups of lignin and the carbonyl group of PHB. In summary, the structural differences between the two lignins did not manifest themselves in the properties of their blends.
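The Gordon-Taylor and Kwei fits referred to in Chapters 3 and 6 follow standard forms. A minimal Python sketch of those two equations is given below; the Tg values and the fitting parameters k and q are placeholders for illustration, not the thesis's fitted values.

```python
# Standard Gordon-Taylor and Kwei equations for the Tg of a miscible binary blend.
# Component 1 = PHB, component 2 = lignin; all numbers below are placeholders.
def gordon_taylor(w2, tg1, tg2, k):
    """Blend Tg from the weight fraction w2 of component 2 (Gordon-Taylor)."""
    w1 = 1.0 - w2
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

def kwei(w2, tg1, tg2, k, q):
    """Kwei equation: Gordon-Taylor plus an interaction term q*w1*w2,
    commonly attributed to specific interactions such as hydrogen bonding."""
    w1 = 1.0 - w2
    return gordon_taylor(w2, tg1, tg2, k) + q * w1 * w2

# Example: Tg (in degrees C) of a 30 wt% lignin / 70 wt% PHB blend with illustrative parameters
print(kwei(w2=0.30, tg1=4.0, tg2=140.0, k=0.4, q=20.0))
```

The q term is what distinguishes the Kwei fit: a positive q is usually read as evidence of interactions (e.g. hydrogen bonding) stronger than simple mixing would predict.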
Abstract:
Independent television production is recognised for its capacity to generate new kinds of program content, as well as deliver innovation in formats. Globally, the television industry is entering the post-broadcasting era, in which audiences are fragmented and content is distributed across multiple platforms. The effects of this convergence are now being felt in China, as it both challenges old statist models and presents new opportunities for content innovation. This thesis discusses the status of independent production in China, making relevant comparisons with independent production in other countries. Independent television production has become an important element in the reform of broadcasting in China over the past decade. The first independent TV production company was officially registered in 1994. While there are now over 4,000 independent companies, the term ‘independent’ does not necessarily imply autonomy. The question the thesis addresses is: what is the status and nature of independence in China? Is it an appropriate term to use to describe the changing environment, or is it a misnomer? The thesis argues that Chinese independents operate alongside the mainstream state-owned system; they are ‘dependent’ on the mainstream. Independent television in China is therefore a relative term. By looking at several companies in Beijing, mainly in entertainment, TV drama and animation, the thesis shows how the sector is injecting fresh ideas into the marketplace and how it plays an important role in improving innovation in many aspects of the television industry. The thesis shows how independent television companies in China are looking to protect their property rights. It demonstrates that, far from being at the cutting edge, independents are reliant on a system that has many inbuilt structural problems. The thesis outlines many of the challenges facing ‘independents’.
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity in analysing the productive performance of crop and animal production systems. Secondly, the location-specific physical factors are not modelled symmetrically as other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions of empirical studies that researchers may pursue in the future.
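As a minimal Python sketch (with hypothetical output numbers), the three first-stage efficiency measures reduce to simple ratios once the observed, "best practice" (frontier) and ecological "potential" output levels are available for a farm:

```python
# Illustrative computation of the three efficiency measures defined in the first stage.
# The output levels are made-up numbers, not results from the article.
def efficiencies(actual, best_practice, potential):
    return {
        "technical":     actual / best_practice,      # actual vs. economic frontier
        "agronomic":     actual / potential,          # actual vs. biophysical potential
        "agro-economic": best_practice / potential,   # frontier vs. biophysical potential
    }

print(efficiencies(actual=3.2, best_practice=4.0, potential=6.4))
# {'technical': 0.8, 'agronomic': 0.5, 'agro-economic': 0.625}
```

Note that technical efficiency multiplied by agro-economic efficiency recovers agronomic efficiency, which is one way the ecological and economic models are linked before the second-stage econometric analysis.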
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even for systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this highly dynamic resource allocation problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the problem are well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system; hence, they fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time series statistics. For the identified cache resource dynamics, our closed loop cache-aware adaptive scheduling framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to the cache-aware scheduling system. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied within our closed loop cache-aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads.
The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, Pole Placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework: the closed loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of the efficiency of this cache-aware adaptive closed loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experimental outcomes are analyzed. According to our experimental outcomes, it is concluded that our closed loop cache-aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count towards the co-runner independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.
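For orientation, the sketch below shows a textbook recursive least squares update with a forgetting factor, in Python, applied to a toy cache-miss series. The thesis uses a QR-factorised RLS variant implemented in MATLAB, so this is an assumption about the general estimation technique rather than its exact implementation.

```python
# Minimal recursive least squares (RLS) estimator with a forgetting factor,
# tracking an autoregressive model of a thread's cache-miss counts online.
import numpy as np

class RLS:
    def __init__(self, n_params, forgetting=0.98, delta=100.0):
        self.theta = np.zeros(n_params)        # estimated model parameters
        self.P = delta * np.eye(n_params)      # inverse-covariance-like matrix
        self.lam = forgetting                  # <1 discounts old samples (time-varying systems)

    def update(self, phi, y):
        """phi: regressor vector (e.g. past cache-miss counts); y: new observation."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = y - phi @ self.theta                            # one-step prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: track an AR(2)-like cache-miss series miss[t] ~ a1*miss[t-1] + a2*miss[t-2]
est = RLS(n_params=2)
miss = [120, 130, 128, 140, 150, 148, 160]          # hypothetical per-interval miss counts
for t in range(2, len(miss)):
    est.update([miss[t - 1], miss[t - 2]], miss[t])
print(est.theta)
```

The forgetting factor is what lets the estimator follow the time-varying cache behaviour that the closed loop controller then acts on.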
Abstract:
In this paper, we describe the main processes and operations in mining industries and present a comprehensive survey of operations research methodologies that have been applied over the last several decades. The literature review is classified into four main categories: mine design, mine production, mine transportation and mine evaluation. Mine design models are further separated according to the two main mining methods: open-pit and underground. Mine production models are subcategorised into two groups: ore mining and coal mining. Mine transportation models are further partitioned in accordance with fleet management, truck haulage and train scheduling. Mine evaluation models are further subdivided into four clusters in terms of mining method selection, quality control, financial risks and environmental protection. The main characteristics of four Australian commercial mining software packages are addressed and compared. This paper bridges gaps in the literature and motivates researchers to develop more applicable, realistic and comprehensive operations research models and solution techniques that are directly linked with the mining industry.
Abstract:
In this paper, No-Wait, No-Buffer, Limited-Buffer, and Infinite-Buffer conditions for the flow-shop problem (FSP) have been investigated. These four different buffer conditions have been combined to generate a new class of scheduling problem, which is significant for modelling many real-world scheduling problems. A new heuristic algorithm is developed to solve this strongly NP-hard problem. Detailed numerical implementations have been analysed and promising results have been achieved.
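For orientation, a minimal Python sketch of makespan evaluation for a permutation flow shop under the simplest (Infinite-Buffer) condition is given below. It is not the paper's heuristic; the No-Wait, No-Buffer and Limited-Buffer variants impose additional coupling between the completion times computed here, and the processing times are invented for illustration.

```python
# Makespan of a permutation flow shop with unlimited buffers between machines.
def makespan(perm, proc):
    """perm: job order; proc[j][m]: processing time of job j on machine m."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines            # completion time of the previous job on each machine
    for j in perm:
        for m in range(n_machines):
            # A job starts on machine m once it has left machine m-1
            # and machine m has finished the previous job.
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

proc = [[3, 2, 4], [2, 5, 1], [4, 1, 3]]   # 3 jobs x 3 machines, hypothetical times
print(makespan([0, 1, 2], proc))           # -> 14.0
```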
Abstract:
The question of under what conditions conceptual representation is compositional remains debated within cognitive science. This paper draws on a well-developed mathematical apparatus for the probabilistic representation of concepts, based on methods developed in quantum theory, to propose a formal test that can determine whether a specific conceptual combination is compositional or not. The test examines a joint probability distribution modeling the combination and asks whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
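As a minimal Python sketch of one literal reading of that test: a joint distribution over two variables factorizes when it equals the outer product of its marginals. The joint table below is invented for illustration and is not drawn from the paper's empirical studies, which involve a richer test than plain independence.

```python
# Check whether a two-variable joint probability distribution factorizes into
# the product of its marginals (to within a tolerance).
import numpy as np

def is_factorizable(joint, tol=1e-3):
    """joint: 2-D array of probabilities summing to 1."""
    joint = np.asarray(joint, dtype=float)
    p_a = joint.sum(axis=1)                 # marginal over the first variable
    p_b = joint.sum(axis=0)                 # marginal over the second variable
    return np.max(np.abs(joint - np.outer(p_a, p_b))) < tol

compositional = np.outer([0.7, 0.3], [0.4, 0.6])           # factorizes by construction
dependent     = np.array([[0.45, 0.05], [0.05, 0.45]])     # strong dependence between variables
print(is_factorizable(compositional), is_factorizable(dependent))  # True False
```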
Abstract:
Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that extensively exists in crash data. Disregarding the possible within-group correlations can lead to models that give unreliable and biased estimates of unknowns. This study innovatively proposes a 5T-level hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) × Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specify the multilevel structure and correctly yield parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using Singapore crash records. This study demonstrates the importance of accounting for within-group correlations and the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
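As a minimal sketch of the kind of Bayesian hierarchical structure being recommended, the Python/PyMC model below fits a two-level crash-within-site severity model with site-level random intercepts on simulated data. The grouping variable, covariate, priors and all numbers are illustrative assumptions, not the Singapore model itself.

```python
# Two-level Bayesian hierarchical logistic model: crashes nested within traffic sites.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_sites, n_per_site = 20, 50
site_idx = np.repeat(np.arange(n_sites), n_per_site)        # site index for each crash
speed = rng.normal(60, 10, n_sites * n_per_site)             # hypothetical crash-level covariate
speed_c = speed - speed.mean()                                # centred for easier sampling
true_site_effect = rng.normal(0, 0.5, n_sites)
p_true = 1 / (1 + np.exp(-(-0.5 + 0.04 * speed_c + true_site_effect[site_idx])))
severe = rng.binomial(1, p_true)                              # simulated binary severity outcome

with pm.Model():
    mu_a = pm.Normal("mu_a", 0.0, 5.0)                        # population-level intercept
    sigma_a = pm.HalfNormal("sigma_a", 1.0)                   # between-site variation
    a_site = pm.Normal("a_site", mu_a, sigma_a, shape=n_sites)  # site-level random intercepts
    beta = pm.Normal("beta", 0.0, 1.0)                        # crash-level covariate effect
    theta = a_site[site_idx] + beta * speed_c
    pm.Bernoulli("severe", p=pm.math.sigmoid(theta), observed=severe)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

The site-level intercepts are what capture the within-group correlation that single-level generalized linear models ignore; extending the hierarchy to regions, vehicles, occupants and time follows the same pattern.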
Abstract:
Articular cartilage is a highly resilient tissue located at the ends of long bones. It has a zonal structure, which has functional significance in load-bearing. Cartilage does not spontaneously heal itself when damaged, and untreated cartilage lesions or age-related wear often lead to osteoarthritis (OA). OA is a degenerative condition that is highly prevalent, age-associated, and significantly affects patient mobility and quality of life. There is no cure for OA, and patients usually resort to replacing the biological joint with an artificial prosthesis. An alternative approach is to regenerate damaged or diseased cartilage through cartilage tissue engineering, where cells, materials and stimuli are combined to form new cartilage. However, despite extensive research, major limitations remain that have prevented the widespread application of tissue-engineered cartilage. Critically, there is a dearth of information on whether autologous chondrocytes obtained from OA patients can be used to successfully generate cartilage tissues with the structural hierarchy typically found in normal articular cartilage. I aim to address these limitations in this thesis by showing that chondrocyte subpopulations isolated from macroscopically normal areas of the cartilage can be used to engineer stratified cartilage tissues, and that compressive loading plays an important role in the zone-dependent biosynthesis of these chondrocytes. I first demonstrate that chondrocyte subpopulations from the superficial (S) and middle/deep (MD) zones of OA cartilage are responsive to compressive stimulation in vitro, and that the effect of compression on construct quality is zone-dependent. I also show that compressive stimulation can influence pericellular matrix production, matrix metalloproteinase secretion and cytokine expression in zonal chondrocytes in an alginate hydrogel model. Subsequently, I focus on recreating the zonal structure by forming layered constructs using the alginate-released chondrocyte (ARC) method, either with or without polymeric scaffolds. The resulting zonal ARC constructs had hyaline morphology and expressed cartilage matrix molecules such as proteoglycans and collagen type II in both scaffold-free and scaffold-based approaches. Overall, my findings demonstrate that chondrocyte subpopulations obtained from OA joints respond sensitively to compressive stimulation and are able to form cartilaginous constructs with stratified organization similar to native cartilage using the scaffold-free and scaffold-based ARC techniques. The ultimate goal in tissue engineering is to help provide improved treatment options for patients suffering from debilitating conditions such as OA. Further investigations in developing functional cartilage replacement tissues using autologous chondrocytes will bring us a step closer to improving the quality of life for millions of OA patients worldwide.
Abstract:
In Australia, railway systems play a vital role in transporting the sugarcane crop from farms to mills. The sugarcane transport system is very complex and uses daily schedules, consisting of a set of locomotive runs, to satisfy the requirements of the mill and the harvesters. The total cost of sugarcane transport operations is very high; over 35% of the total cost of sugarcane production in Australia is incurred in cane transport. Efficient schedules for sugarcane transport can reduce this cost and limit the negative effects that the transport system can have on the raw sugar production system. There are several benefits to formulating the train scheduling problem as a blocking parallel-machine job shop scheduling (BPMJSS) problem, namely: to prevent two trains passing in one section at the same time; to keep the train activities (operations) in sequence during each run (trip) by applying precedence constraints; to pass the trains through each section in the correct order (priorities of passing trains) by applying disjunctive constraints; and to resolve rail conflicts between passing trains by applying blocking constraints and parallel machine scheduling. The sugarcane rail operations are therefore formulated as a BPMJSS problem. Mixed integer programming and constraint programming approaches are used to describe the BPMJSS problem, and the model is solved by integrating constraint programming, mixed integer programming and search techniques. Optimality performance is tested with the Optimization Programming Language (OPL) and CPLEX software on small and large instances based on specific criteria. A real-life problem is used to verify and validate the approach. Constructive heuristics and new metaheuristics, including simulated annealing and tabu search, are proposed to solve this complex and NP-hard scheduling problem and to produce a more efficient scheduling system. Innovative hybrid and hyper-metaheuristic techniques are developed and coded in C# to improve solution quality and CPU time. Hybrid techniques integrate heuristic and metaheuristic techniques consecutively, while hyper techniques are the complete integration of different metaheuristic techniques, heuristic techniques, or both.
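As a minimal Python sketch of the simulated annealing idea used among the metaheuristics: an illustrative skeleton that searches over a generic permutation of runs with a user-supplied cost function (standing in for delay or conflict costs), not the thesis's hybrid C# implementation.

```python
# Simulated annealing over a permutation of runs, minimising a user-supplied cost.
import math
import random

def simulated_annealing(n_runs, cost, t0=100.0, cooling=0.95, iters=2000, seed=1):
    random.seed(seed)
    current = list(range(n_runs))
    random.shuffle(current)
    cur_cost = cost(current)
    best, best_cost, temp = current[:], cur_cost, t0
    for _ in range(iters):
        i, j = random.sample(range(n_runs), 2)
        cand = current[:]
        cand[i], cand[j] = cand[j], cand[i]              # swap two runs in the sequence
        delta = cost(cand) - cur_cost
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current, cur_cost = cand, cost(cand)          # accept improving or occasional worse moves
            if cur_cost < best_cost:
                best, best_cost = current[:], cur_cost
        temp *= cooling                                   # geometric cooling schedule
    return best, best_cost

# Toy cost: position-weighted penalty standing in for rail-conflict / delay costs
weights = [5, 3, 8, 2, 6]
toy_cost = lambda perm: sum((t + 1) * weights[j] for t, j in enumerate(perm))
print(simulated_annealing(5, toy_cost))
```

A hybrid scheme in the thesis's sense would feed the output of a constructive heuristic into a loop like this, while a hyper scheme would switch between this and other metaheuristics during the search.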
Abstract:
Australian higher education institutions (HEIs) have entered a new phase of regulation and accreditation which includes performance-based funding relating to the participation and retention of students from social and cultural groups previously underrepresented in higher education. However, in addressing these priorities, it is critical that HEIs do not further disadvantage students from certain groups by singling them out for attention because of their social or cultural backgrounds, circumstances which are largely beyond the control of students. In response, many HEIs are focusing effort on university-wide approaches to enhancing the student experience, because such approaches enhance the engagement, success and retention of all students and, in doing so, particularly benefit those students who come from underrepresented groups. Measuring and benchmarking the student experiences and engagement that arise from these efforts is well supported by extensive collections of student experience survey data. However, no comparable instrument exists that measures the capability of institutions to influence and/or enhance student experiences, where capability is an indication of how well an organisational process does what it is designed to do (Rosemann & de Bruin, 2005). This paper proposes that the concept of a maturity model (Marshall, 2010; Paulk, 1999) may be useful as a way of assessing the capability of HEIs to provide and implement student engagement, success and retention activities. We will describe the Student Engagement, Success and Retention Maturity Model (SESR-MM) (Clarke, Nelson & Stoodley, 2012; Nelson, Clarke & Stoodley, 2012) we are currently investigating. We will discuss whether our research may address the current gap by facilitating the development of an SESR-MM instrument that aims (i) to enable institutions to assess the capability of their current student engagement and retention programs and strategies to influence and respond to student experiences within the institution; and (ii) to provide institutions with the opportunity to understand various practices across the sector with a view to further improving programs and practices relevant to their context. The first aim of our research is to extend the generational approach which has been useful in considering the evolutionary nature of the first year experience (FYE) (Wilson, 2009). Three generations have been identified and explored: first generation approaches that focus on co-curricular strategies (e.g. orientation and peer programs); second generation approaches that focus on curriculum (e.g. pedagogy, curriculum design, and learning and teaching practice); and third generation approaches, also referred to as transition pedagogy, that focus on the production of an institution-wide, integrated, holistic and intentional blend of curricular and co-curricular activities (Kift, Nelson & Clarke, 2010). The second aim of this research is to move beyond assessments of students’ experiences to focus on assessing institutional processes and their capability to influence student engagement. In essence, we propose to develop and use the maturity model concept to produce an instrument that will indicate the capability of HEIs to manage and improve student engagement, success and retention programs and strategies.
References
Australian Council for Educational Research. (n.d.). Australasian Survey of Student Engagement. Retrieved from http://www.acer.edu.au/research/ausse/background
Clarke, J., Nelson, K., & Stoodley, I. (2012, July). The Maturity Model concept as framework for assessing the capability of higher education institutions to address student engagement, success and retention: New horizon or false dawn? A Nuts & Bolts presentation at the 15th International Conference on the First Year in Higher Education, “New Horizons,” Brisbane, Australia.
Department of Education, Employment and Workplace Relations. (n.d.). The University Experience Survey. Advancing quality in higher education information sheet. Retrieved from http://www.deewr.gov.au/HigherEducation/Policy/Documents/University_Experience_Survey.pdf
Kift, S., Nelson, K., & Clarke, J. (2010). Transition pedagogy: A third generation approach to FYE. A case study of policy and practice for the higher education sector. The International Journal of the First Year in Higher Education, 1(1), 1-20.
Marshall, S. (2010). A quality framework for continuous improvement of e-Learning: The e-Learning Maturity Model. Journal of Distance Education, 24(1), 143-166.
Nelson, K., Clarke, J., & Stoodley, I. (2012). An exploration of the Maturity Model concept as a vehicle for higher education institutions to assess their capability to address student engagement: A work in progress. Submitted for publication.
Paulk, M. (1999). Using the Software CMM with good judgment. ASQ Software Quality Professional, 1(3), 19-29.
Wilson, K. (2009, June–July). The impact of institutional, programmatic and personal interventions on an effective and sustainable first-year student experience. Keynote address presented at the 12th Pacific Rim First Year in Higher Education Conference, “Preparing for Tomorrow Today: The First Year as Foundation,” Townsville, Australia. Retrieved from http://www.fyhe.com.au/past_papers/papers09/ppts/Keithia_Wilson_paper.pdf
Abstract:
This paper offers insight into the development of a PhD in advertising art direction. For over half a century, art directors within the advertising industry have been adapting to the changes occurring in media, culture and the corporate sector, toward enhancing professional performance and competitiveness. These professionals seldom offer explicit justification of the role images play in effective communication. It is uncertain how this situation affects advertising performance, because the industry has nevertheless evolved in parallel, fabricating new opportunities for itself. However, uncertainties in the formalisation of art direction knowledge restrict the possibilities of knowledge transfer in higher education. The theoretical knowledge supporting advertising art direction has been adapted spontaneously from disciplines that rarely focus on specific aspects of the production of advertising content, such as marketing communication, design, visual communication and visual art. Meanwhile, in scholarly research, vast empirical knowledge has been generated about advertising images, but often with limited insight into production expertise. Because art direction is understood as an industry practice and not as an academic discipline, an art direction perspective in scholarly contributions is rare. Scholarly research that is relevant to art direction seldom offers viewpoints that help explain how research outputs may specifically contribute to art direction practice. There is a need to formally understand the knowledge underlying art direction and to use it to explore models for visual analysis and knowledge transfer in higher education. This paper provides insight into the development of a thesis that explored this need. The PhD thesis to which this paper refers is Strategic Aesthetics in Advertising Campaigns: Implications for Art Direction Education.
Abstract:
Preventive Maintenance (PM) is often applied to improve the reliability of production lines. A Split System Approach (SSA) based methodology is presented to assist in making optimal PM decisions for serial production lines. The methodology treats a production line as a complex series system with multiple (imperfect) PM actions over multiple intervals. The conditional and overall reliability of the entire production line over these multiple PM intervals are hierarchically calculated using SSA and provide a foundation for cost analysis. Both risk-related costs and maintenance-related costs are factored into the methodology as either deterministic or random variables. This SSA-based methodology enables Asset Management (AM) decisions to be optimised considering a variety of factors, including failure probability, failure cost, maintenance cost, PM performance and the type of PM strategy. The application of this new methodology and an evaluation of the effects of these factors on PM decisions are demonstrated using an example. The results of this work show that the performance of a PM strategy can be measured by its Total Expected Cost Index (TECI). The optimal PM interval depends on TECI, PM performance and the type of PM strategy; these factors are interrelated. Generally, it was found that a trade-off between reliability and the number of PM actions needs to be made so that the Total Expected Cost (TEC) of asset maintenance can be minimised.
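As a minimal Python sketch of the underlying trade-off: an age-based PM policy under a hypothetical Weibull failure law, with placeholder PM and failure costs, minimised over the PM interval. This illustrates the reliability-versus-PM-frequency balance rather than the SSA/TECI methodology itself.

```python
# Expected cost per unit time of an age-based PM policy, minimised over the interval T.
import numpy as np
from scipy.integrate import quad

beta, eta = 2.5, 1000.0        # hypothetical Weibull shape / scale (hours)
c_pm, c_fail = 500.0, 5000.0   # hypothetical PM cost and failure (risk-related) cost

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T):
    # (PM cost * probability of surviving to T + failure cost * probability of failing before T)
    # divided by the expected cycle length (renewal-reward argument).
    expected_cycle, _ = quad(reliability, 0.0, T)
    return (c_pm * reliability(T) + c_fail * (1.0 - reliability(T))) / expected_cycle

grid = np.linspace(50, 2000, 200)
best_T = grid[np.argmin([cost_rate(T) for T in grid])]
print(f"cost-minimising PM interval is roughly {best_T:.0f} h")
```

Shorter intervals buy reliability at the price of more PM actions; longer intervals do the opposite, which is the same trade-off the TECI measure is used to resolve for the multi-interval, imperfect-PM case.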