242 results for Filmic approach methods
Abstract:
Mobile robots are widely used in many industrial fields, and path planning is one of the most important topics in mobile robot research. Path planning for a mobile robot means finding a collision-free route through the robot's environment with obstacles, from a specified start location to a desired goal destination, while satisfying certain optimization criteria. Most existing path planning methods, such as the visibility graph, cell decomposition, and potential field methods, are designed for static environments, in which there are only stationary obstacles. However, in practical systems such as marine science research, mining robots, and RoboCup games, robots usually face dynamic environments, in which both moving and stationary obstacles exist. Because of the complexity of dynamic environments, research on path planning among dynamic obstacles is limited: only a small number of papers have been published in this area, compared with hundreds of reports on path planning in stationary environments in the open literature. Recently, a genetic algorithm based approach was introduced to plan the optimal path for a mobile robot in a dynamic environment with moving obstacles. However, as the number of obstacles grows and the moving speeds and directions of the robot and obstacles change, the size of the problem to be solved increases sharply, and the performance of the genetic algorithm based approach deteriorates significantly. This motivates the present research. This research develops and implements a simulated annealing algorithm based approach to find the optimal path for a mobile robot in a dynamic environment with moving obstacles. Simulated annealing is an optimization algorithm similar in principle to the genetic algorithm.
However, our investigation and simulations indicate that the simulated annealing based approach is simpler and easier to implement. Its performance is also shown to be superior to that of the genetic algorithm based approach, in both online and offline processing times and in obtaining the optimal solution for path planning of the robot in the dynamic environment. The first step of many path planning methods is to search for an initial feasible path for the robot. A commonly used method for finding the initial path is to randomly pick some vertices of the obstacles in the search space. This is time consuming in both static and dynamic path planning, and it has a significant impact on the efficiency of dynamic path planning. This research proposes a heuristic method to search for a feasible initial path efficiently. The heuristic method is then incorporated into the proposed simulated annealing based approach for dynamic robot path planning. Simulation experiments show that, with the heuristic method incorporated, the developed simulated annealing based approach requires much shorter processing time to reach optimal solutions in the dynamic path planning problem. Furthermore, the quality of the solution, as characterized by the length of the planned path, is also improved with the incorporated heuristic method, for both online and offline path planning.
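The simulated annealing search described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the circular-obstacle penalty, straight-line seeding, per-waypoint perturbation, and geometric cooling schedule are all illustrative assumptions.

```python
import math
import random

def path_cost(path, obstacles):
    """Length of the waypoint polyline plus a penalty for waypoints
    inside circular obstacles (cx, cy, r) -- a common soft-constraint
    choice for collision avoidance."""
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    penalty = 0.0
    for x, y in path:
        for cx, cy, r in obstacles:
            d = math.hypot(x - cx, y - cy)
            if d < r:
                penalty += (r - d) * 100.0
    return length + penalty

def anneal_path(start, goal, obstacles, n_way=8, t0=10.0,
                cooling=0.995, iters=5000, seed=1):
    rng = random.Random(seed)
    # Initial path: straight-line interpolation between start and goal.
    path = [(start[0] + (goal[0] - start[0]) * i / (n_way + 1),
             start[1] + (goal[1] - start[1]) * i / (n_way + 1))
            for i in range(n_way + 2)]
    cost = path_cost(path, obstacles)
    best, best_cost = path, cost
    t = t0
    for _ in range(iters):
        i = rng.randrange(1, n_way + 1)        # never move the endpoints
        cand = list(path)
        cand[i] = (cand[i][0] + rng.uniform(-1, 1),
                   cand[i][1] + rng.uniform(-1, 1))
        c = path_cost(cand, obstacles)
        # Metropolis rule: always take improvements, sometimes accept
        # worse moves, with probability decaying as temperature falls.
        if c < cost or rng.random() < math.exp((cost - c) / t):
            path, cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling
    return best, best_cost
```

The straight-line seed is exactly where a smarter heuristic initial path, as proposed in the thesis, would plug in.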
Abstract:
This paper investigates the robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither model transformations nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
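The paper's LMI-based criteria have no simple closed form, but the flavour of a delay-dependent stability bound can be shown on the simplest possible case. For the scalar delay system x'(t) = a·x(t) + b·x(t − τ), classical frequency-domain analysis gives the exact delay margin; this is a textbook stand-in for, not an instance of, the paper's T-S fuzzy method.

```python
import math

def delay_margin(a, b):
    """Exact delay margin for x'(t) = a*x(t) + b*x(t - tau), b < -|a|.

    The zero-delay system is stable (a + b < 0); stability is lost at
    the smallest tau where a characteristic root crosses the imaginary
    axis: omega = sqrt(b^2 - a^2), cos(omega*tau) = -a/b.
    """
    if not b < -abs(a):
        raise ValueError("requires b < -|a| for a finite delay margin")
    w = math.sqrt(b * b - a * a)
    return math.acos(-a / b) / w
```

For instance, `delay_margin(0.0, -1.0)` recovers the classical result that x'(t) = −x(t − τ) is stable exactly for τ < π/2.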
Abstract:
Migraine is a painful disorder for which the etiology remains obscure. Diagnosis is largely based on International Headache Society criteria. However, no feature occurs in all patients who meet these criteria, and no single symptom is required for diagnosis. Consequently, this definition may not accurately reflect the phenotypic heterogeneity or genetic basis of the disorder. Such phenotypic uncertainty is typical for complex genetic disorders and has encouraged interest in multivariate statistical methods for classifying disease phenotypes. We applied three popular statistical phenotyping methods—latent class analysis, grade of membership, and “fuzzy” clustering (Fanny)—to migraine symptom data, and compared heritability and genome-wide linkage results obtained using each approach. Our results demonstrate that different methodologies produce different clustering structures and non-negligible differences in subsequent analyses. We therefore urge caution in the use of any single approach and suggest that multiple phenotyping methods be used.
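Of the three methods compared, the "fuzzy" one is the easiest to sketch. Below is a minimal fuzzy c-means implementation, a simpler relative of Fanny (which optimizes a different criterion), showing how each subject receives a graded membership in every symptom cluster rather than a hard label. The toy data and naive initialization are assumptions for illustration only.

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fuzzy_cmeans(points, k, m=2.0, iters=50):
    """Fuzzy c-means with fuzzifier m: every point gets a membership
    in every cluster, and centers are membership-weighted means."""
    # Naive deterministic init for the sketch: pick spread-out points.
    centers = [points[j * len(points) // k] for j in range(k)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik proportional to d2_ik^(-1/(m-1)).
        u = []
        for p in points:
            d = [max(dist2(p, c), 1e-12) for c in centers]
            w = [dd ** (-1.0 / (m - 1.0)) for dd in d]
            s = sum(w)
            u.append([wi / s for wi in w])
        # Center update: weighted mean with weights u^m.
        for i in range(k):
            wts = [u[j][i] ** m for j in range(len(points))]
            tot = sum(wts)
            centers[i] = tuple(
                sum(w * p[dim] for w, p in zip(wts, points)) / tot
                for dim in range(len(points[0])))
    return centers, u
```

The graded memberships are what make comparisons with hard-assignment methods such as latent class analysis non-trivial: the same data can yield visibly different clustering structures.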
Abstract:
This study, in its exploration of the attached play scripts and their method of development, evaluates the forms, strategies, and methods of an organised model of formalised playwriting. Through examination of, reflection on, and reaction to a perceived crisis in playwriting in the Australian theatre sector, the notion of Industrial Playwriting is arrived at: a practice whereby plays are designed and constructed, and where the process of writing becomes central to the efficient creation of new work and the improvement of the writer's skill and knowledge base. Using a practice-led methodology and action research, the study examines a system of play construction appropriate to, and addressing the challenges of, the contemporary Australian theatre sector. Specifically, using the action research methodology known as design-based research, a conceptual framework was constructed to form the basis of the notion of Industrial Playwriting. From this, two plays were constructed using a case study method, and the process was recorded and used to create a practical, step-by-step system of Industrial Playwriting. In the creative practice of manufacturing a single-authored play, and then a group-devised play, Industrial Playwriting was tested and found to offer a valid alternative approach to playwriting in the training of new and even emerging playwrights. Finally, it offered insight into how Industrial Playwriting could be used to greatly facilitate theatre companies' ongoing need for access to new writers and new Australian works, and how it might form the basis of a cost-effective writer development model. This study of the methods of formalised writing as a means to confront some of the challenges of the Australian theatre sector, the practice of playwriting and the history associated with it, makes an original and important contribution to contemporary playwriting practice.
Abstract:
One of the most important tasks of an industrial designer is to evoke specific affective responses through the products they create. This paper describes an investigation of visceral hedonic rhetoric through the study of interactive products. The research lays its foundation by discussing the scope, significance and limitations of currently available research in the areas of visceral design, consumer hedonics and product rhetoric. Understanding why consumers respond to certain visceral hedonic rhetoric stimuli, and what those stimuli are, will provide further insight into the field of emotional design. The study examines visceral hedonic responses given by consumers to three interactive products: mobile telephones, USB memory sticks and MP3 players. The methods used in the study are discussed in further detail in this paper.
Abstract:
Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine the prior and posterior distributions obtained with the three methods. The results indicate some similarities and dissimilarities between the expert-informed priors formulated by the two experts under the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, access to experts and the funding available. This trial reveals that expert knowledge can be important when modelling rare-event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated.
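One common building block of such elicitation schemes can be sketched directly: converting an expert's stated quantiles for a probability (say, habitat suitability at a reference site) into a normal prior on the logit scale, where Bayesian logistic regression operates. This is a generic textbook device under assumed quantile levels, not one of the paper's three specific methods.

```python
import math
from statistics import NormalDist

def logit(p):
    return math.log(p / (1 - p))

def normal_prior_from_quantiles(p50, p90):
    """Turn an expert's median (p50) and 90th-percentile (p90) beliefs
    about a probability into a Normal(mu, sigma) prior on the logit
    scale, by matching the two stated quantiles."""
    mu = logit(p50)
    z90 = NormalDist().inv_cdf(0.90)        # approx. 1.2816
    sigma = (logit(p90) - mu) / z90
    return mu, sigma
```

For example, an expert who says "the median suitability is 0.2, and I'm 90% sure it is below 0.5" implies a logit-scale prior centred at logit(0.2) ≈ −1.39 with sd ≈ 1.08.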
Abstract:
Background: There is a sound rationale for the population-based approach to falls injury prevention, but there is currently insufficient evidence to advise governments and communities on how they can use population-based strategies to achieve desired reductions in the burden of falls-related injury.---------- Aim: To quantify the effectiveness of a streamlined (and thus potentially sustainable and cost-effective), population-based, multi-factorial falls injury prevention program for people over 60 years of age.---------- Methods: Population-based falls-prevention interventions were conducted at two geographically defined and separate Australian sites: Wide Bay, Queensland, and Northern Rivers, NSW. Changes in the prevalence of key risk factors and in rates of injury outcomes within each community were compared before and after program implementation, and changes in rates of injury outcomes in each community were also compared with the rates in their respective States.---------- Results: The interventions did not substantially decrease the rate of falls-related injury among people aged 60 years or older in either community, although there was some evidence of reductions in the occurrence of multiple falls reported by women. In addition, there was some indication of improvements in fall-related risk factors, but the magnitudes were generally modest.---------- Conclusion: The evidence suggests that low-intensity population-based falls prevention programs may not be as effective as those that are implemented intensively.
Abstract:
Background: Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an ‘Activity Code’ to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods for interrogating text fields to identify work-related injuries presenting at emergency departments, to inform approaches to surveillance of work-related injury.---------- Methods: Three approaches were used to interrogate an injury description text field to classify cases as work-related: keyword search, index search, and content analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach.---------- Results: The basic keyword search detected 58% of cases (specificity 0.99), the index search detected 62% of cases (specificity 0.87), and the content analytic text mining approach (using adjusted probabilities) detected 77% of cases (specificity 0.95).---------- Conclusions: The findings of this study provide strong support for continued development of text searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance.
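The simplest of the three approaches, the keyword search, together with its evaluation against triage Activity codes, can be sketched as follows. The keywords and records below are invented toy examples, not the study's data or keyword list.

```python
def keyword_flag(text, keywords):
    """Flag a free-text injury description as work-related if any
    keyword appears (simple case-insensitive substring match)."""
    t = text.lower()
    return any(k in t for k in keywords)

def sensitivity_specificity(records, keywords):
    """Compare keyword flags against a gold-standard label, e.g. the
    Activity code assigned at triage."""
    tp = fn = tn = fp = 0
    for text, is_work_related in records:
        flagged = keyword_flag(text, keywords)
        if is_work_related:
            tp += flagged
            fn += not flagged
        else:
            fp += flagged
            tn += not flagged
    return tp / (tp + fn), tn / (tn + fp)
```

The trade-off reported in the abstract falls out of this structure: widening the keyword list raises sensitivity but admits more false positives, lowering specificity.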
Abstract:
We study the suggestion that Markov switching (MS) models should be used to determine cyclical turning points. A Kalman filter approximation is used to derive the dating rules implicit in such models. We compare these with the dating rules in an algorithm that provides a good approximation to the chronology determined by the NBER. We find very little that is attractive in the MS approach when compared with this algorithm. The most important difference relates to robustness: the MS approach depends on the validity of its statistical model, whereas our approach is valid in a wider range of circumstances.
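The core of dating algorithms in this tradition (Bry–Boschan and its quarterly descendants) is a simple local-extremum rule. A minimal sketch, omitting the censoring rules (minimum phase and cycle lengths, enforced alternation of peaks and troughs) that the full algorithm adds:

```python
def turning_points(y, k=2):
    """Candidate peaks and troughs of a series: t is a peak if y[t] is
    the unique maximum of the window y[t-k] .. y[t+k] (trough: unique
    minimum). k=2 mimics the common two-period comparison window."""
    peaks, troughs = [], []
    for t in range(k, len(y) - k):
        window = y[t - k:t + k + 1]
        if y[t] == max(window) and window.count(y[t]) == 1:
            peaks.append(t)
        if y[t] == min(window) and window.count(y[t]) == 1:
            troughs.append(t)
    return peaks, troughs
```

Because the rule depends only on local comparisons of the observed series, it needs no parametric model to be valid, which is the robustness advantage over the MS approach noted above.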
Abstract:
Jonzi D, one of the leading Hip Hop voices in the UK, creates contemporary theatrical works that merge dance, street art, original scored music and contemporary rap poetry, creating theatrical events that expand a thriving sense of a Hip Hop nation with citizens in the UK, throughout southern Africa and the rest of the world. In recent years Hip Hop has evolved into a performance genre in its own right that not only borrows from other forms but now vitally contributes back to the body of contemporary practice in the performing arts. As part of this work, Jonzi's company Jonzi D Productions is committed to creating and touring original Hip Hop theatre that promotes the continuing development and awareness of a nation with its own language, culture and currency that exists without borders. Through the deployment of a universal voice from the local streets of Johannesburg and the East End of London, Jonzi D creates a form of highly energised performance that elevates Hip Hop as a great democratiser between the highly developed global and the under-resourced local. It is the staging of this democratised and technologised future (and present), and the associated deprogramming and translation of the artist's particular filmic vision to the stage, that pose the greatest challenges for the scenographer working with Jonzi and his company, and that this discussion explores. This paper interrogates not only how a scenographic strategy can support the existence of this work, but also how the scenographer, as an outsider, can enter and influence this nation.
Abstract:
In teaching introductory economics there has been a tendency to put a lot of emphasis on imparting abstract models and technical skills to students (Stilwell, 2005; Voss, Blais, Greens, & Ahwesh, 1986). This model-building approach has the merit of preparing the grounding for students to pursue further studies in economics. However, in a business degree with only a small proportion of students majoring in economics, such an approach tends to alienate the majority of students transitioning from high school into university. Surveys in Europe and Australia found that students complained about the lack of relevance of economics courses to the real world and the over-reliance on abstract mathematical modelling (Kirman, 2001; Lewis and Norris, 1997; Siegfried & Round, 2000). BSB112 Economics 1 is one of the eight faculty core units in the Faculty of Business at QUT, with over 1000 students in each semester. In Semester 1, 2008, a new approach to teaching this unit was designed, aiming to achieve three inter-related objectives: (1) to provide business students with a first insight into economic thinking and language, (2) to integrate economic analysis with current Australian social, environmental and political issues, and (3) to cater for students with a wide range of academic needs. Strategies used to achieve these objectives included writing a new text which departs from traditional economics textbooks in important ways, integrating students' cultures into teaching and learning activities, and devising a new assessment format to encourage the development of research skills and applications rather than the reproduction of factual knowledge. This paper documents the strategies used in this teaching innovation, presents quantitative and qualitative evidence to evaluate the new approach, and suggests ways of further improvement.
Abstract:
This paper presents a unified and systematic assessment of ten position control strategies for a hydraulic servo system with a single-ended cylinder driven by a proportional directional control valve. We aim to identify those methods that achieve better tracking, have low sensitivity to system uncertainties, and offer a good balance between development effort and end results. A formal approach to this problem relies on several practical metrics, which are introduced herein. Their choice is important, as the comparison results between controllers can vary significantly depending on the selected criterion. Apart from the quantitative assessment, we also raise aspects which are difficult to quantify, but which must be kept in mind when considering the position control problem for this class of hydraulic servo systems.
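Two of the most common quantitative tracking metrics, the integrals of absolute and squared error (IAE and ISE), can be sketched on a toy closed loop. The first-order plant and proportional controller below are crude illustrative surrogates, not the paper's hydraulic servo model or any of its ten strategies.

```python
def track_step(kp, ref=1.0, tau=0.5, dt=0.01, t_end=5.0):
    """Simulate proportional position control of a first-order plant
    (a deliberately crude surrogate for a valve-driven axis) and
    return the final position plus two standard tracking metrics."""
    x = 0.0
    iae = ise = 0.0                     # integral absolute / squared error
    for _ in range(int(t_end / dt)):
        e = ref - x
        u = kp * e                      # proportional control law
        x += dt * (u - x) / tau         # forward-Euler plant update
        iae += abs(e) * dt
        ise += e * e * dt
    return x, iae, ise
```

Even on this toy loop the abstract's point shows up: ranking controllers by IAE, ISE, or steady-state error can favour different gain settings, so the choice of metric matters.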
Abstract:
Large deformation analysis is one of the major challenges in the numerical modelling and simulation of metal forming. Because no mesh is used, meshfree methods show good potential for large deformation analysis. In this paper, a local meshfree formulation, based on local weak forms and the updated Lagrangian (UL) approach, is developed for large deformation analysis. To fully exploit the advantages of meshfree methods, a simple and effective adaptive technique is proposed; this procedure is much easier than re-meshing in FEM. Numerical examples of large deformation analysis are presented to demonstrate the effectiveness of the newly developed nonlinear meshfree approach. The developed meshfree technique is found to provide superior performance to the conventional FEM in dealing with large deformation problems in metal forming.
Abstract:
Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters.
The recommended approaches for symptom cluster identification using non-multivariate-normal data were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
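A scree plot ranks the eigenvalues of the symptom correlation matrix, so the first step of any such factor-retention rule is an eigendecomposition. A minimal dependency-free sketch via cyclic Jacobi rotations (chosen only to keep the example self-contained), applied to an assumed equicorrelation structure rather than real symptom data:

```python
import math

def jacobi_eigenvalues(a, sweeps=50, tol=1e-12):
    """Eigenvalues of a symmetric matrix by cyclic Jacobi rotations:
    repeatedly zero the largest remaining off-diagonal entries until
    the matrix is (numerically) diagonal."""
    n = len(a)
    a = [row[:] for row in a]           # work on a copy
    for _ in range(sweeps):
        off = sum(a[i][j] ** 2 for i in range(n)
                  for j in range(n) if i != j)
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-15:
                    continue
                theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):      # rotate rows p and q
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):      # rotate columns p and q
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
    return sorted((a[i][i] for i in range(n)), reverse=True)
```

The sorted eigenvalues are exactly what a scree plot displays; simple retention rules (an elbow in the plot, or eigenvalues above 1) then suggest the number of factors, with the Minimum Average Partial procedure as a more refined alternative.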
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
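Codebook training for full-search VQ follows the generalized Lloyd (LBG-style) iteration of nearest-codeword assignment and centroid update. A minimal sketch on toy 2-D vectors, with a naive spread-out initialization standing in for LBG's codebook-splitting step:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode(v, codebook):
    """Full-search encode: index of the nearest codeword."""
    return min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))

def train_codebook(vectors, k, iters=20):
    """Generalized-Lloyd training of a k-entry VQ codebook."""
    # Naive deterministic init for the sketch (LBG would split instead).
    codebook = [vectors[j * len(vectors) // k] for j in range(k)]
    for _ in range(iters):
        # Assignment step: partition training vectors by nearest codeword.
        cells = [[] for _ in range(k)]
        for v in vectors:
            cells[encode(v, codebook)].append(v)
        # Update step: move each codeword to the centroid of its cell.
        for i, cell in enumerate(cells):
            if cell:
                codebook[i] = tuple(sum(comp) / len(cell)
                                    for comp in zip(*cell))
    return codebook
```

Product-code VQ, the structure studied in the thesis, instead splits each vector into subvectors and trains a separate small codebook per subvector, which is what keeps the encoder's full search tractable; the transmitted indices are then the raw material for the lossless index-modelling stage described above.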