29 results for GOAL PROGRAMMING APPROACH

in Deakin Research Online - Australia


Relevance:

90.00%

Publisher:

Abstract:

This paper provides a procedure that addresses all three phases of cellular manufacturing design concurrently: parts/machines grouping, intra-cell layout design, and inter-cell layout design. It provides a platform to investigate the impact of the cell formation method on intra-cell and inter-cell layout designs, and vice versa, by generating multiple efficient layout designs for different cell partitioning strategies. This approach gives the decision maker wider choices with regard to the number of cells and allows various criteria, such as travelling cost, duplication of machines, and space requirement, to be assessed against each alternative. The performance of the model is demonstrated by applying it to an example selected from the literature.
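The multi-criteria trade-off described above can be sketched as a small weighted goal programme over candidate cell partitions; all goals, weights, and criterion values below are invented for illustration, not taken from the paper.

```python
# Hypothetical weighted goal programming over cell-design alternatives.
# Goals, weights and alternative values are invented, not the paper's data.
GOALS = {"travel_cost": 100, "duplicated_machines": 2, "space": 50}
WEIGHTS = {"travel_cost": 1.0, "duplicated_machines": 5.0, "space": 2.0}

# Each alternative: a cell partitioning with its predicted criterion values.
alternatives = {
    "2 cells": {"travel_cost": 140, "duplicated_machines": 1, "space": 48},
    "3 cells": {"travel_cost": 110, "duplicated_machines": 2, "space": 55},
    "4 cells": {"travel_cost": 90,  "duplicated_machines": 4, "space": 62},
}

def weighted_deviation(values):
    """Sum of weighted overshoots above each goal (undershoot costs nothing)."""
    return sum(WEIGHTS[k] * max(0, values[k] - GOALS[k]) for k in GOALS)

scores = {name: weighted_deviation(v) for name, v in alternatives.items()}
best = min(scores, key=scores.get)  # the partition closest to all goals
```

This is the essence of letting the decision maker weigh travelling cost, machine duplication, and space against each alternative cell count.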

Relevance:

30.00%

Publisher:

Abstract:

Two questions emerge from the literature concerning the perceptual-motor processes underlying the visual regulation of step length. The first concerns the effects of velocity on the onset of visual control (VCO), when visual regulation of step length begins during goal-directed locomotion. The second concerns the effects of different obstacles, such as a target or raised surface, on step length regulation. In two separate experiments, participants (Experiments 1 & 2: n=12, 6 female, 6 male) walked, jogged, or sprinted towards an obstacle along a 10 m walkway consisting of two marker-strips with alternating black and white 0.50 m markings. Each experiment consisted of three targeting or obstacle tasks with the requirement to both negotiate the target and continue moving through it (run-through). Five trials were conducted for each task and approach speed, with trials block randomised between the six participants of each gender. One 50 Hz video camera panned and filmed each trial from an elevated position adjacent to the walkway. Video footage was digitized to derive the gait characteristics. Results for the targeting tasks indicate a linear relationship between approach velocity and accuracy of final foot placement (r=0.89). When foot placement was highly constrained by the obstacle, step length shortened during the entire approach. VCO was found to occur at an earlier tau-margin for lower approach velocities in both experiments, indicating that the optical variable ‘tau' is affected by approach velocity. A three-phase kinematic profile was found for all tasks, except for the take-off board condition when sprinting. Further research is needed to determine whether this velocity effect on VCO is due to ‘whole-body' approach velocity or whether it is a function of the differences between gait modes.
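The optical variable 'tau' referred to above is the ratio of distance to closing velocity, i.e. time to contact under constant approach speed. A minimal sketch, using illustrative numbers rather than the study's data:

```python
def tau(distance_m, velocity_ms):
    """Time-to-contact under constant approach velocity: tau = D / v."""
    return distance_m / velocity_ms

# Illustrative numbers only (not the study's measurements): at the same
# 4 m distance, the tau-margin is larger at walking speed than sprinting.
walk_tau = tau(4.0, 1.5)    # slower approach -> larger time-to-contact
sprint_tau = tau(4.0, 8.0)  # faster approach -> smaller time-to-contact
```

The study's finding is that VCO occurs at an earlier (larger) tau-margin when approach velocity is lower.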

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a distinctive approach to the sexually transmissible infections (STI) clinical consultation: 'the guided reflection approach'. The authors coined this term and identified the guided reflection approach through analysis of 22 in-depth interviews with practitioners who provide care for people with STI, and 34 people who had attended a healthcare facility in Australia for screening or treatment of an STI. A grounded theory method was used to collect and analyse this information. The data revealed that when the STI consultation is conducted using the principles characterizing the guided reflection approach, it creates contexts for sexual empowerment that have the potential to effectively assist people to gain autonomy for safe sex. Routinely, most of the practitioners in this study were shown to direct the STI consultation towards risk behaviours and practices and prevention of transmission, with minimal intervention. However, this study shows that if clinical interaction is to make a difference to the patient's autonomy for sexual behaviour, two changes will be required. First, practitioners need to adopt the goal of assisting patients to attain levels of autonomy, and second, practitioners require education to assist them to develop the interactive skills needed to engage patients in dialogue and reflection about sexual behaviour.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes two integer programming models and their GA-based solutions for optimal concept learning. The models are built to obtain the optimal concept description, in the form of propositional logic formulas, from examples based on completeness, consistency and simplicity. The simplicity of the propositional rules is selected as the objective function of the integer programming models, and the completeness and consistency of the concept are used as the constraints. To account for real-world problems in which a certain level of noise is contained in the data set, the constraints in model II are relaxed by adding slack variables. To solve the integer programming models, a genetic algorithm is employed to search the global solution space. We call our approach IP-AE. Its effectiveness is verified by comparing the experimental results with other well-known concept learning algorithms: AQ15 and C4.5.
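The completeness/consistency/simplicity criteria can be illustrated with a toy brute-force search standing in for the paper's integer programming and GA machinery; the examples, attributes, and search strategy below are invented for illustration:

```python
from itertools import combinations

# Toy illustration of the completeness/consistency/simplicity criteria;
# the examples and attribute names are invented, not the paper's data.
positives = [{"a": 1, "b": 1, "c": 0}, {"a": 1, "b": 1, "c": 1}]
negatives = [{"a": 0, "b": 1, "c": 1}, {"a": 1, "b": 0, "c": 0}]

literals = [(attr, val) for attr in "abc" for val in (0, 1)]

def covers(rule, example):
    """A conjunctive rule covers an example iff every literal holds."""
    return all(example[attr] == val for attr, val in rule)

def simplest_rule():
    """Smallest conjunction covering every positive (completeness)
    and no negative (consistency) -- the IP objective as brute force."""
    for size in range(1, len(literals) + 1):
        for rule in combinations(literals, size):
            if all(covers(rule, p) for p in positives) and \
               not any(covers(rule, n) for n in negatives):
                return rule
    return None

rule = simplest_rule()  # the simplest complete and consistent description
```

The slack variables mentioned in the abstract would relax the "no negative covered" constraint to tolerate noisy examples.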

Relevance:

30.00%

Publisher:

Abstract:

The object-oriented finite element method (OOFEM) has attracted the attention of many researchers. Compared with the traditional finite element method, OOFEM software has advantages in maintainability and reuse. Moreover, it is easier to expand the architecture to a distributed one. In this paper, we introduce a distributed architecture for an object-oriented finite element preprocessor. A comparison between the distributed system and the centralised system shows that the former, presented in this paper, greatly improves the performance of mesh generation. Other finite element analysis modules could be expanded according to this architecture.
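A minimal object-oriented sketch of a distributed mesh-generation preprocessor in the spirit described above; the class layout and the 1D mesh are our illustration, not the paper's actual architecture:

```python
# Hypothetical sketch: each Region would be meshed on a different node in
# a distributed preprocessor; here the distribution is simulated in-process.
class Region:
    """A subdomain handed to one worker in the distributed preprocessor."""
    def __init__(self, name, x0, x1, n):
        self.name, self.x0, self.x1, self.n = name, x0, x1, n

    def generate_mesh(self):
        """Each worker meshes its own region independently (1D for brevity)."""
        h = (self.x1 - self.x0) / self.n
        return [(self.x0 + i * h, self.x0 + (i + 1) * h) for i in range(self.n)]

class DistributedPreprocessor:
    def __init__(self, regions):
        self.regions = regions

    def run(self):
        # In a real system each call would execute on a separate node;
        # we mesh the regions sequentially to keep the sketch runnable.
        return {r.name: r.generate_mesh() for r in self.regions}

mesh = DistributedPreprocessor([Region("left", 0.0, 1.0, 4),
                                Region("right", 1.0, 2.0, 4)]).run()
```

The object-oriented encapsulation is what makes swapping the sequential loop for remote workers straightforward, which is the maintainability and extensibility argument the abstract makes.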

Relevance:

30.00%

Publisher:

Abstract:

Due to the repetitive and lengthy nature of sport video, automatic content-based summarization is essential to extract a more compact and interesting representation of it. State-of-the-art approaches have confirmed that high-level semantics in sport video can be detected based on the occurrences of specific audio and visual features (also known as cinematic features). However, most of them still rely heavily on manual investigation to construct the algorithms for highlight detection. Thus, the primary aim of this paper is to demonstrate how the statistics of cinematic features within play-break sequences can be used to construct highlight classification rules less subjectively. To verify the effectiveness of our algorithms, we present some experimental results using six AFL (Australian Football League) matches from different broadcasters. At this stage, we have successfully classified each play-break sequence into: goal, behind, mark, tackle, and non-highlight. These events were chosen since they are commonly used in broadcast AFL highlights. The proposed algorithms have also been tested successfully on soccer video.
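Highlight classification rules of the kind described can be sketched as thresholds on per-sequence cinematic-feature statistics; the feature names and threshold values here are hypothetical, not the paper's learned rules:

```python
# Hypothetical threshold rules over cinematic-feature statistics of one
# play-break sequence; features and cut-offs are invented for illustration.
def classify_play_break(seq):
    """Map a play-break sequence's feature statistics to an event label."""
    if seq["crowd_excitement"] > 0.8 and seq["break_duration_s"] > 20:
        return "goal"          # long break + loud crowd
    if seq["crowd_excitement"] > 0.6:
        return "behind"        # excited crowd, shorter break
    if seq["whistle_count"] >= 1 and seq["close_up_ratio"] > 0.5:
        return "tackle"        # whistle plus close-up shots
    if seq["close_up_ratio"] > 0.5:
        return "mark"          # close-up shots without a whistle
    return "non-highlight"

label = classify_play_break({"crowd_excitement": 0.9, "break_duration_s": 30,
                             "whistle_count": 0, "close_up_ratio": 0.2})
```

The paper's contribution is deriving such cut-offs from feature statistics rather than hand-tuning them.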

Relevance:

30.00%

Publisher:

Abstract:

Should computer programming be taught within schools of architecture?

Incorporating even low-level computer programming within architectural education curricula is a matter of debate, but we have found it useful to do so for two reasons: as an introduction to, or at least a consolidation of, the realm of descriptive geometry, and in providing an environment for experimenting with morphological time-based change.

Mathematics and descriptive geometry formed a significant proportion of architectural education until the end of the 19th century. This proportion has declined in contemporary curricula, possibly at some cost, for despite major advances in automated manufacture, Cartesian measurement is still the principal ‘language’ with which to describe building for construction purposes. When computer programming is used as a platform for instruction in logic and spatial representation, the waning interest in mathematics as a basis for spatial description can be readdressed using a left-field approach. Students gain insights into topology, Cartesian space and morphology through programmatic form finding, as opposed to through direct manipulation.

In this context, it matters to the architect-programmer how the program operates more than what it does. This paper describes an assignment where students are given a figurative conceptual space comprising the three Cartesian axes with a cube at its centre. Six Phileban solids mark the Cartesian axial limits to the space. Any point in this space represents a hybrid of one, two or three transformations from the central cube towards the various Phileban solids. Students are asked to predict the topological and morphological outcomes of the operations. Through programming, they become aware of morphogenesis and hybridisation. Here we articulate the hypothesis above and report on the outcome from a student group, whose work reveals wider learning opportunities for architecture students in computer programming than conventionally assumed.
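The assignment's conceptual space can be sketched as one blend weight per Cartesian axis, moving the central cube towards the solids at the axial limits; the scaling transform below is a hypothetical stand-in for the actual transformations towards the Phileban solids:

```python
# Hypothetical sketch of the assignment's conceptual space: a point is a
# triple of weights, one per Cartesian axis, blending the central cube
# towards the axial solids. The scaling transform is our invention.
def hybrid_scale(weights):
    """weights = (wx, wy, wz) in [0, 1]; 0 keeps the cube's unit extent,
    1 doubles the extent along that axis (an illustrative transformation)."""
    return tuple(1.0 + w for w in weights)

cube = hybrid_scale((0.0, 0.0, 0.0))    # the untransformed central cube
hybrid = hybrid_scale((0.5, 0.0, 1.0))  # a hybrid of two transformations
```

Predicting `hybrid` before running the program, then checking it, mirrors the topological and morphological prediction exercise the students are set.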

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose that there is a direct relationship between risk management and goods (or goal) promotion in the treatment of sexual offenders. We argue that the causal conditions required to promote specific goods are likely, in turn, to eliminate or modify dynamic risk factors (i.e., criminogenic needs). First, the concepts of risk and goals are briefly discussed and their important dimensions clarified. Second, the relationship between criminogenic needs and goals is analyzed in depth. Third, we further clarify our arguments by focusing on four classes of criminogenic needs recently identified in the sexual offending literature: sexual self-regulation, offense supportive cognitions, level of interpersonal functioning, and general self-management problems. Finally, we conclude the paper with some suggestions for future research and treatment.

Relevance:

30.00%

Publisher:

Abstract:

The overarching goal of this dissertation was to evaluate the contextual components of instructional strategies for the acquisition of complex programming concepts. A meta-knowledge processing model is proposed, on the basis of the research findings, thereby facilitating the selection of media treatment for electronic courseware. When implemented, this model extends the work of Smith (1998), as a front-end methodology, for his glass-box interpreter called Bradman, for teaching novice programmers. Technology now provides the means to produce individualized instructional packages with relative ease. Multimedia and Web courseware development accentuate a highly graphical (or visual) approach to instructional formats. Typically, little consideration is given to the effectiveness of screen-based visual stimuli, and curiously, students are expected to be visually literate, despite the complexity of human-computer interaction. Visual literacy is much harder for some people to acquire than for others! (see Chapter Four: Conditions-of-the-Learner) An innovative research programme was devised to investigate the interactive effect of instructional strategies, enhanced with text-plus-textual metaphors or text-plus-graphical metaphors, and cognitive style, on the acquisition of a special category of abstract (process) programming concept. This type of concept was chosen to focus on the role of analogic knowledge involved in computer programming. The results are discussed within the context of the internal/external exchange process, drawing on Ritchey's (1980) concepts of within-item and between-item encoding elaborations. 
The methodology developed for the doctoral project integrates earlier research knowledge in a novel, interdisciplinary, conceptual framework, including: from instructional science in the USA, the concept learning models; from British cognitive psychology and human memory research, the definition of the cognitive style construct; and from Australian educational research, the measurement tools for instructional outcomes. The experimental design consisted of a screening test to determine cognitive style, a pretest to determine prior domain knowledge in abstract programming knowledge elements, the instruction period, and a post-test to measure improved performance. This research design provides a three-level discovery process to articulate: (1) the fusion of strategic knowledge required by the novice learner for dealing with contexts within instructional strategies; (2) acquisition of knowledge using measurable instructional outcomes and learner characteristics; and (3) knowledge of the innate environmental factors which influence the instructional outcomes. This research has successfully identified the interactive effect of instructional strategy, within an individual's cognitive style construct, on their acquisition of complex programming concepts. However, the significance of the three-level discovery process lies in the scope of the methodology to inform the design of a meta-knowledge processing model for instructional science. Firstly, the British cognitive style testing procedure is a low-cost, user-friendly computer application that effectively measures an individual's position on the two cognitive style continua (Riding & Cheema, 1991). Secondly, the QUEST Interactive Test Analysis System (Izard, 1995) allows for a probabilistic determination of an individual's knowledge level, relative to other participants and relative to test-item difficulties.
Test-items can be related to skill levels and, consequently, can be used by instructional scientists to measure knowledge acquisition. Finally, an Effect Size Analysis (Cohen, 1977) allows for a direct comparison between treatment groups, giving a statistical measurement of how large an effect the independent variables have on the dependent outcomes. Combined with QUEST's hierarchical positioning of participants, this tool can assist in identifying preferred learning conditions for the evaluation of treatment groups. By combining these three assessment analysis tools in instructional research, a computerized learning shell customised for individuals' cognitive constructs can be created (McKay & Garner, 1999). While this approach has widespread application, individual researchers/trainers would nonetheless need to validate the interactive effects within their specific learning domain with an extensive pilot study programme (McKay, 1999a; McKay, 1999b). Furthermore, the instructional material need not be limited to a textual/graphical comparison, but could be applied to any two or more instructional treatments of any kind, for instance a structured versus an exploratory strategy. The possibilities and combinations are believed to be endless, provided the focus is maintained on linking the front-end identification of cognitive style with an improved performance outcome. My in-depth analysis provides a better understanding of the interactive effects of the cognitive style construct and instructional format on the acquisition of abstract concepts involving spatial relations and logical reasoning. In providing the basis for a meta-knowledge processing model, this research is expected to be of interest to educators, cognitive psychologists, communications engineers and computer scientists specialising in computer-human interactions.
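The Effect Size Analysis step can be illustrated with Cohen's d, the standardized mean difference between two treatment groups; the group scores below are invented, not the study's data:

```python
import math

# Cohen's d between two treatment groups, as in the Effect Size Analysis
# step described above; the post-test scores are invented for illustration.
def cohens_d(group1, group2):
    """d = (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([14, 15, 16, 17], [10, 11, 12, 13])  # treatment vs control
```

A large positive d indicates the treatment group outperformed the comparison group by several pooled standard deviations.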

Relevance:

30.00%

Publisher:

Abstract:

Any attempt to model an economy requires foundational assumptions about the relations between prices, values and the distribution of wealth. These assumptions exert a profound influence over the results of any model. Unfortunately, there are few areas in economics as vexed as the theory of value. I argue in this paper that the fundamental problem with past theories of value is that it is simply not possible to model the determination of value, the formation of prices and the distribution of income in a real economy with analytic mathematical models. All such attempts leave out crucial processes or make unrealistic assumptions which significantly affect the results. There have been two primary approaches to the theory of value. The first, associated with classical economists such as Ricardo and Marx, comprised substance theories of value, which view value as a substance inherent in an object and conserved in exchange. For Marxists, the value of a commodity derives solely from the value of the labour power used to produce it, and therefore any profit is due to the exploitation of the workers. The labour theory of value has been discredited because of its assumption that labour was the only ‘factor’ that contributed to the creation of value, and because of its fundamentally circular argument. Neoclassical theorists argued that price was identical with value and was determined purely by the interaction of supply and demand. Value, then, was completely subjective. Returns to labour (wages) and capital (profits) were determined solely by their marginal contribution to production, so that each factor received its just reward by definition.
Problems with the neoclassical approach include assumptions concerning representative agents, perfect competition, perfect and costless information and contract enforcement, complete markets for credit and risk, aggregate production functions and infinite, smooth substitution between factors, distribution according to marginal products, firms always on the production possibility frontier, firms’ pricing decisions, the ignoring of money and credit, and perfectly rational agents with infinite computational capacity. Two critical areas stand out. The first is the underappreciated Sonnenschein-Mantel-Debreu results, which showed that the foundational assumptions of the Walrasian general-equilibrium model imply arbitrary excess demand functions and therefore arbitrary equilibrium price sets. The second is that in real economies there is no equilibrium, only continuous change. Equilibrium is never reached because of constant changes in preferences and tastes; technological and organisational innovations; discoveries of new resources and new markets; inaccurate and evolving expectations of businesses, consumers, governments and speculators; changing demand for credit; the entry and exit of firms; the birth, learning, and death of citizens; changes in laws and government policies; imperfect information; generalized increasing returns to scale; random acts of impulse; weather and climate events; changes in disease patterns, and so on. The problem is not the use of mathematical modelling, but the kind of mathematical modelling used. However, agent-based models (ABMs), object-oriented programming and greatly increased computer power are opening up a new frontier. Here a dynamic bargaining ABM is outlined as a basis for an alternative theory of value. A large but finite number of heterogeneous commodities and agents with differing degrees of market power are set in a spatial network.
Returns to buyers and sellers are decided at each step in the value chain, and in each factor market, through the process of bargaining. Market power and its potential abuse against the poor and vulnerable are fundamental to how the bargaining dynamics play out. Ethics therefore lie at the very heart of economic analysis, the determination of prices and the distribution of wealth. The neoclassicals are right, then, that price is the enumeration of value at a particular time and place, but wrong to downplay the critical roles of bargaining, power and ethics in determining those same prices.
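The bargaining step described above can be sketched minimally: at each link of the value chain the surplus is split according to relative market power. The agents, costs, and power values below are invented for illustration, not the paper's model:

```python
# Hypothetical bargaining rule: price splits the surplus between seller
# and buyer in proportion to their market power. Numbers are invented.
def bargain(seller_cost, buyer_value, seller_power, buyer_power):
    """Price = seller's cost plus the surplus share won by bargaining power."""
    surplus = buyer_value - seller_cost
    share = seller_power / (seller_power + buyer_power)
    return seller_cost + surplus * share

# A two-link value chain: producer -> wholesaler -> consumer.
wholesale = bargain(seller_cost=10, buyer_value=30,
                    seller_power=1, buyer_power=3)   # weak producer
retail = bargain(seller_cost=wholesale, buyer_value=40,
                 seller_power=4, buyer_power=1)      # powerful retailer
```

Even in this toy, the agent with greater market power captures most of the surplus at its link, which is the ethical point the abstract presses.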

Relevance:

30.00%

Publisher:

Abstract:

In this paper we address the spatial activity recognition problem with an algorithm based on Smith-Waterman (SW) local alignment. The proposed SW approach utilises dynamic programming with two-dimensional spatial data to quantify sequence similarity. SW is well suited to spatial activity recognition as the approach is robust to noise and can accommodate gaps resulting from tracking system errors. Unlike other approaches, SW is able to locate and quantify activities embedded within extraneous spatial data. Through experimentation with a three-class data set, we show that the proposed SW algorithm is capable of recognising both accurately and inaccurately segmented spatial sequences. To benchmark the technique's classification performance, we compare it to the discrete hidden Markov model (HMM). Results show that SW exhibits higher accuracy than the HMM, and also maintains higher classification accuracy with smaller training set sizes. We also confirm the robustness of the SW approach via evaluation with sequences containing artificially introduced noise.
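A minimal sketch of Smith-Waterman local alignment adapted to 2D spatial sequences, where the match score is derived from point-to-point distance rather than symbol equality; the scoring constants and trajectories are our own illustration, not the paper's parameters:

```python
import math

# Smith-Waterman over (x, y) trajectories: similarity comes from distance
# instead of symbol equality. Gap and radius values are illustrative only.
def sw_score(seq_a, seq_b, gap=-1.0, radius=1.0):
    """Best local-alignment score between two 2D point sequences."""
    def sim(p, q):
        # Positive when points lie within `radius` of each other.
        return 1.0 - math.dist(p, q) / radius

    n, m = len(seq_a), len(seq_b)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            H[i][j] = max(0.0,  # restart: local, not global, alignment
                          H[i - 1][j - 1] + sim(seq_a[i - 1], seq_b[j - 1]),
                          H[i - 1][j] + gap,   # skip a point in seq_a
                          H[i][j - 1] + gap)   # skip a point in seq_b
            best = max(best, H[i][j])
    return best

path = [(0, 0), (1, 0), (2, 0), (3, 0)]
noisy = [(0, 0.1), (1, 0.0), (9, 9), (2, 0.1), (3, 0.0)]  # one tracker error
score = sw_score(path, noisy)  # the outlier is absorbed as a gap
```

The gap terms are what let the alignment bridge tracking-system errors, and the local restart (the max with 0) is what lets an activity be found inside extraneous spatial data.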

Relevance:

30.00%

Publisher:

Abstract:

Traditional teaching styles practiced at universities do not generally suit all students' learning styles. For a variety of reasons, students do not always engage in learning in the courses in which they are enrolled. New methods to create and deliver educational material are available, but these do not always improve learning outcomes. Acknowledging these truths, and developing and delivering educational material that provides diverse ways for students to learn, is a constant challenge. This study examines the use of video tutorials within a university environment in an attempt to provide a teaching model that is valuable to all students, and in particular to those students who are not engaging in learning. The results of a three-year study demonstrate that well-designed, assessment-focused, and readily available video tutorials have the potential to improve student satisfaction and grades by enabling and encouraging students to learn how they want, when they want, and at a pace that suits their needs.

Relevance:

30.00%

Publisher:

Abstract:

The practice of relying solely on the human resources department to select external training providers has cast doubt and mistrust across other departments as to how trainers are sourced. There are no measurable criteria used by human resource personnel, since most decisions are based on intuitive experience and subjective market knowledge. The present problem focuses on the outsourcing of private training programs that are partly government funded, an arrangement that has been facing accountability challenges. Due to the unavailability of a scientific decision-making approach in this context, a 12-step algorithm is proposed and tested in a Japanese multinational company. The model allows the decision makers to revise their criteria expectations and, in turn, observe the change in the training providers' quota distribution. Finally, this multi-objective sensitivity analysis provides a forward-looking approach to training needs planning and aids decision makers in their sourcing strategy.

Relevance:

30.00%

Publisher:

Abstract:

Background The frontotemporal-orbitozygomatic (FTOZ) approach, also known as "the workhorse of skull base surgery," has captured the interest of many researchers throughout the years. Most of the published studies have focused on the surgical technique and the exposure gained. However, few studies have described reconstructive techniques or functional and cosmetic outcomes. The goal of this study was to describe the surgical reconstruction after the FTOZ approach and analyze the functional and cosmetic outcomes. Methods Seventy-five consecutive patients who had undergone FTOZ craniotomy for different reasons were selected. The same surgical (one-piece FTOZ) and reconstructive techniques were applied in all patients. The functional outcome was measured by complications related to the surgical approach: retro-orbital pain, exophthalmos, enophthalmos, ocular movement restriction, cranial nerve injuries, pseudomeningocele (PMC) and secondary surgeries required to attain a reconstructive closure. The cosmetic outcome was evaluated by analyzing the satisfaction of the patients and their families. Questionnaires were administered later in the postoperative period. A statistical analysis of the data obtained from the charts and questionnaires was performed. Results Of the 75 patients studied, 59 had no complications whatsoever. Ocular movement restriction was found in two patients (2.4 %). Cranial nerve injury was documented in seven patients (8.5 %). One patient (1.2 %) underwent surgical repair of a cerebrospinal fluid (CSF) leak from the initial surgery. Two patients (2.4 %) developed delayed postoperative pseudomeningocele. One patient (1.2 %) developed intraparenchymal hemorrhage (IPH). Full responses to the questionnaires were collected from 28 patients, giving an overall response rate of 34 %. Overall, 22 patients (78.5 %) were satisfied with the cosmetic outcome of surgery.
Conclusion The reconstruction after the FTOZ approach is as important as the performance of the surgical technique. Attention to anatomical details and the stepwise reconstruction are prerequisites to the successful preservation of function and cosmesis. In our series, the orbitozygomatic osteotomy did not increase surgical complications or alter cosmetic outcomes.

Relevance:

30.00%

Publisher:

Abstract:

Decision making usually occurs under uncertainty, and such problems are often formulated as optimization problems for which decision makers need solutions. Typically, solving optimization problems in uncertain environments is difficult. This paper proposes a new hybrid intelligent algorithm to solve a kind of stochastic optimization, the dependent chance programming (DCP) model. In order to speed up the solution process, we use support vector machine regression (SVM regression) to approximate the chance functions, that is, the probabilities that sequences of uncertain events occur, based on training data generated by stochastic simulation. The proposed algorithm consists of three steps: (1) generate data to estimate the objective function; (2) use SVM regression to reveal the trend hidden in the data; (3) apply a genetic algorithm (GA) based on the SVM regression to obtain an estimate of the chance function. A numerical example demonstrates the algorithm's performance in terms of running time and precision.
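The three-step pipeline can be sketched as follows; to keep the sketch self-contained, a nearest-neighbour regressor stands in for SVM regression, and the chance function is a toy example of ours, not the paper's model:

```python
import random

# Sketch of the simulate -> regress -> GA pipeline. A 1-nearest-neighbour
# regressor stands in for SVM regression; the chance function
# P(x + noise <= 1) is a toy example, not the paper's DCP model.
random.seed(0)

def simulate_chance(x, trials=200):
    """Step 1: stochastic simulation estimating P(x + noise <= 1)."""
    return sum(x + random.gauss(0, 0.3) <= 1 for _ in range(trials)) / trials

train = [(x / 10, simulate_chance(x / 10)) for x in range(11)]  # x in [0, 1]

def surrogate(x):
    """Step 2: cheap regression over the simulated data (1-NN stand-in)."""
    return min(train, key=lambda t: abs(t[0] - x))[1]

def ga_maximise(fitness, generations=30, pop_size=20):
    """Step 3: a bare-bones GA searching x in [0, 1] for maximal fitness."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = [min(1.0, max(0.0, random.choice(parents) +
                                 random.gauss(0, 0.05)))  # mutate a parent
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best_x = ga_maximise(surrogate)  # decision maximising the estimated chance
```

The speed-up claimed in the abstract comes from the GA querying the cheap regression surrogate instead of re-running the stochastic simulation at every candidate solution.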