409 results for Dunkl-Bessel Transform
Abstract:
There has been a renaissance in Australian genre cinema in recent years. Indeed, not since the 1980s have Australian genre movies, across action, adventure, horror, and science fiction among others, experienced such prominence within production, policy discourse, and industry debate. Genre movies, typically associated with commercial filmmaking and entertainment, have been identified as a strategy to improve the box-office performance of Australian feature films and to attract larger audiences. Much of this conversation has revolved around whether genre can deliver on these high expectations and transform the unpredictable local film industry into a popular and profitable commercial production sector. However, this debate has for the most part been disconnected from analysis of Australia’s genre movie heritage: the position these films have occupied within Australian cinema, their reception with domestic audiences, and how this history correlates with contemporary trends. As this chapter argues, genre production is not a silver bullet that will single-handedly improve the Australian feature film industry’s commercial performance. Genre movies have occupied, and continue to occupy, a difficult position within Australian cinema and face numerous challenges in terms of reception with national audiences, limited production scale and enterprise structures, and ongoing tensions between culture and commerce.
Abstract:
Successful organizational transformation typically requires transformed leadership; that is, fundamental changes in the implicit leadership schemas that underpin observed organizational leadership practice. The purpose of this study is to elaborate leadership schema change theory by investigating a case study in which the CEO of a public infrastructure organization sought to transform traditional organizational leadership to facilitate wider organizational transformation. Data were generated through focus groups and semi-structured interviews at four points over a three-year period. Our findings suggest that (a) change leader initiatives do not necessarily activate the cognitive processing required to achieve leadership schema change, (b) collective schema change, defined in terms of the system of beliefs and values underlying the new leading-managing schema, did not occur, but (c) sub-schema change did occur. The research contributes to the existing literature on implicit leadership schema change in three main ways. First, we provide a schema change framework to guide current and future research on schema change. Second, we highlight the role that both change leader initiatives and individual and social processing play in schema change. Finally, we stress the role of teleological processes in leadership schema change.
Abstract:
The complex design process of an airport terminal needs to support a wide range of changes in operational facilities for both routine and unusual or emergency events. A process model describes how activities within a process are connected and states the logical information flow between the various activities. The traditional design process overlooks the need for information to flow from the process model into the actual building design, which should be considered an integral part of building design. The current research introduces a generic method for obtaining design-related information from the process model and incorporating it into the design process. Appropriate integration of the process model prior to the design process uncovers relationships that exist between spaces and their relevant functions, which could be missed in the traditional design approach. The current paper examines the available Business Process Model (BPM) and generates a modified Business Process Model (mBPM) of the check-in facilities at Brisbane International Airport. The information derived from the mBPM is then transformed into possible physical layouts using graph theory.
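The final step described above, turning process-model information into a candidate physical layout via graph theory, can be illustrated with a small sketch. The activity names, adjacency rules and use of networkx below are illustrative assumptions, not the actual Brisbane International Airport model.

```python
# Hypothetical sketch: deriving a space-adjacency graph from a simplified
# process model of check-in facilities (activity names are illustrative,
# not taken from the Brisbane case study).
import networkx as nx

# Each tuple (a, b) means activity b directly follows activity a in the
# process model, so their host spaces should be adjacent or well connected.
process_flow = [
    ("kerbside_dropoff", "queue_area"),
    ("queue_area", "checkin_counter"),
    ("queue_area", "self_service_kiosk"),
    ("self_service_kiosk", "bag_drop"),
    ("checkin_counter", "bag_drop"),
    ("bag_drop", "security_screening"),
]

# Build an undirected adjacency graph of the spaces hosting these activities.
layout_graph = nx.Graph()
layout_graph.add_edges_from(process_flow)

# Spaces with high degree centrality are candidates for central placement.
centrality = nx.degree_centrality(layout_graph)
for space, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{space}: {score:.2f}")
```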
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure, save money spent on maintenance or replacement, and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration-based ones are available for SHM of structures such as bridges, the use of the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no external energy input is needed, as the energy released by the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges still exist in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked to three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of the AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. But complications arise because AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as the short-time Fourier transform to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in the practical use of the AE technique is the presence of AE sources other than those related to cracks, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from the crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of the damage sources are highly desirable in practical applications.
Though different damage quantification methods have been proposed in the AE technique, not all have achieved universal acceptance or proved suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification in ductile materials such as steel. This was found to give encouraging results for the analysis of laboratory data, supporting the possibility of its use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
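As a rough illustration of the amplitude-distribution approach mentioned above, the following sketch computes a conventional b-value (maximum-likelihood form) and an improved b-value (Ib-value) over sliding windows of AE hit amplitudes. The window length, alpha coefficients and synthetic data are assumptions for illustration, not the settings used in the thesis.

```python
# Hedged sketch of amplitude-distribution damage indicators for AE data:
# a Gutenberg-Richter style b-value and the improved b-value (Ib-value)
# based on the mean and standard deviation of hit amplitudes.
import numpy as np

def b_value(amplitudes_db):
    """Maximum-likelihood b-value; amplitudes in dB, magnitude = A/20."""
    mags = np.asarray(amplitudes_db) / 20.0
    return np.log10(np.e) / (mags.mean() - mags.min())

def ib_value(amplitudes_db, alpha1=1.0, alpha2=1.0):
    """Improved b-value computed from cumulative amplitude frequencies."""
    a = np.asarray(amplitudes_db, dtype=float)
    mu, sigma = a.mean(), a.std()
    # N(x): number of hits with amplitude >= x (cumulative frequency).
    n_low = np.count_nonzero(a >= mu - alpha1 * sigma)
    n_high = np.count_nonzero(a >= mu + alpha2 * sigma)
    return (np.log10(n_low) - np.log10(n_high)) / ((alpha1 + alpha2) * sigma)

# Example: sliding-window trend over a stream of AE hit amplitudes (dB).
rng = np.random.default_rng(0)
hits = rng.normal(45, 8, size=500)          # synthetic amplitudes, not real data
window = 100
for start in range(0, len(hits) - window + 1, window):
    w = hits[start:start + window]
    print(f"hits {start:3d}-{start + window:3d}: "
          f"b = {b_value(w):.2f}, Ib = {ib_value(w):.3f}")
```

A drop in the b-value or Ib-value over successive windows is the kind of trend typically read as increasing damage severity.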
Abstract:
The commercialization of Chinese media has taken place over the past two decades; it has become a significant force since 2001, when China joined the World Trade Organisation. With demand for original content increasing and China contemplating a cultural trade deficit in media content, there is much discussion of agglomeration and clustering. Beijing, as the national media centre of China, is witnessing a process of media agglomeration while bearing the problem of cultural export during media commercialization. Michael Curtin's idea of media capital, which absorbs media resources and personnel and exports media products transnationally, provides a dynamic perspective for understanding media agglomeration and dispersion under different political, social and cultural circumstances. Hence the question of whether Beijing is going to transform into a transnational media capital is worth studying, in order to observe and comprehend China's media industry in transition. Drawing on Michael Curtin's three media capital trajectories, the paper interprets tensions and challenges generated in the process of media industry agglomeration and growth in Beijing. Emphasis is placed on the third trajectory, socio-cultural variation.
Abstract:
The increasing popularity of video consumption on mobile devices requires an effective video coding strategy. To cope with diverse communication networks, video services often need to maintain sustainable quality when the available bandwidth is limited. One strategy for visually optimised video adaptation is to implement region-of-interest (ROI) based scalability, whereby important regions are encoded at a higher quality while sufficient quality is maintained for the rest of the frame. The result is improved perceived quality at the same bit rate as normal encoding, which is particularly noticeable at lower bit rates. However, because of the difficulty of predicting the ROI accurately, there has been limited research and development of ROI-based video coding for general videos. In this paper, the phase spectrum of quaternion Fourier transform (PQFT) method is adopted to determine the ROI. To improve the results of ROI detection, the saliency map from the PQFT is augmented with maps created from high-level knowledge of factors that are known to attract human attention. Hence, maps that locate faces and emphasise the centre of the screen are used in combination with the saliency map to determine the ROI. The contribution of this paper lies in an automatic ROI detection technique for coding low bit rate videos, which includes an ROI prioritisation technique to assign different encoding qualities to multiple ROIs, and in the evaluation of the proposed automatic ROI detection, which is shown to perform close to human-identified ROI based on eye-fixation data.
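A simplified sketch of the kind of fusion described above is given below: a phase-only Fourier saliency map (a grayscale simplification of the quaternion PQFT used in the paper) combined with a centre-bias map and thresholded to a binary ROI mask. The fusion weights, threshold and the omission of the face map are assumptions made to keep the example short; a face map from a standard face detector could be fused in the same weighted way.

```python
# Simplified sketch of phase-spectrum saliency fused with a centre-bias map.
# The paper uses the quaternion form (PQFT) over colour and motion channels;
# this grayscale phase-only version and the fusion weights are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_spectrum_saliency(gray):
    """Phase-only Fourier reconstruction, smoothed and normalised to [0, 1]."""
    f = np.fft.fft2(gray.astype(float))
    phase_only = np.exp(1j * np.angle(f))          # discard magnitude, keep phase
    recon = np.abs(np.fft.ifft2(phase_only)) ** 2
    sal = gaussian_filter(recon, sigma=3)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-9)

def centre_bias(shape, sigma_frac=0.25):
    """Gaussian map that emphasises the centre of the frame."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return np.exp(-(((x - cx) / (sigma_frac * w)) ** 2 +
                    ((y - cy) / (sigma_frac * h)) ** 2))

def roi_map(gray, w_sal=0.6, w_centre=0.4, threshold=0.5):
    """Fuse the two maps and threshold to a binary ROI mask."""
    fused = w_sal * phase_spectrum_saliency(gray) + w_centre * centre_bias(gray.shape)
    return fused > threshold * fused.max()

frame = np.random.rand(144, 176)                   # stand-in QCIF-sized frame
print("ROI covers", roi_map(frame).mean() * 100, "% of the frame")
```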
Abstract:
The effect of resource management on the building design process directly influences the development cycle time and the success of construction projects. This paper presents the information constraint net (ICN) to represent the complex information constraint relations among design activities involved in the building design process. An algorithm is developed to transform the information constraints throughout the ICN into a Petri net model. A resource management model is developed using the ICN to simulate and optimize resource allocation in the design process. An example from detailed structural design is provided to validate the proposed model through simulation analysis on the CPN Tools platform. The results demonstrate that the proposed approach can deliver the resource management and optimization needed to shorten the development cycle and allocate resources optimally.
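The ICN-to-Petri-net idea can be illustrated with a toy token game: places hold design work and a shared pool of designers, and an information constraint is modelled by requiring an upstream result before a downstream activity can fire. The activity names, capacities and structure below are illustrative assumptions; the paper's model and the CPN Tools simulation are considerably richer.

```python
# Toy Petri net sketch: design activities competing for a shared pool of
# designers. Place/transition names and token counts are illustrative only.
marking = {
    "designers": 2,            # resource place: available designers (tokens)
    "arch_ready": 1,           # architectural scheme ready to start
    "struct_ready": 1,         # structural scheme waiting on constraints
    "arch_done": 0,
    "struct_done": 0,
}

# Each transition consumes tokens from its input places and produces tokens
# on its output places.
transitions = {
    "do_architectural": {"in": {"designers": 1, "arch_ready": 1},
                         "out": {"designers": 1, "arch_done": 1}},
    # Structural design is information-constrained: it also needs arch_done,
    # which is returned after firing (a read-arc style dependency).
    "do_structural": {"in": {"designers": 1, "struct_ready": 1, "arch_done": 1},
                      "out": {"designers": 1, "arch_done": 1, "struct_done": 1}},
}

def enabled(t):
    return all(marking[p] >= n for p, n in transitions[t]["in"].items())

def fire(t):
    for p, n in transitions[t]["in"].items():
        marking[p] -= n
    for p, n in transitions[t]["out"].items():
        marking[p] += n

# Fire transitions until none are enabled (a crude reachability run).
progress = True
while progress:
    progress = False
    for t in transitions:
        if enabled(t):
            fire(t)
            progress = True
            print("fired", t, "->", marking)
```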
Abstract:
Despite the common use of the term reflection in higher education assessment tasks, learners are not often taught how to communicate their disciplinary knowledge through reflection. This paper argues that students can and should be taught how to reflect in deep and transformative ways. It highlights the reflexive pedagogical balancing act of attending to different levels of reflection as a way to stimulate focused, thoughtful and reasoned reflections that show evidence of new ways of thinking and doing. The paper uses data from a current project to illustrate the effects of focusing on particular levels of reflection in the pedagogical strategies used, and argues that while the goal of academic or professional reflection is generally to move students to the highest level of reflection to transform their learning/practice, unless higher education teachers attend to every level of reflection, there are specific, observable gaps in the reflections that students produce.
Abstract:
Disengagement of students in science and the scientific literacy of young adults are interrelated international concerns. One way to address these concerns is to engage students imaginatively in activities designed to improve their scientific literacy. Our ongoing program of research has focused on the effects of a sequence of activities that require students to transform scientific information on important issues for their communities from government websites into narrative text suitable for a lay reader. These hybridized stories we call BioStories. Students upload their stories for peer review to a dedicated website. Peer reviews are intended to help students refine their stories. Reviewing BioStories also gives students access to a wider range of scientific topics and writing styles. We have conducted separate studies with students from Grade 6, Grade 9 and Grade 12, involving case study and quasi-experimental designs. The results from the 6th grade study support the argument that writing the sequence of stories helped the students become more familiar with the scientific issue, develop a deeper understanding of related biological concepts, and improve their interest in science. Unlike the Grade 6 study, it was not possible to include a control group for the study conducted across eight 9th grade classes. Nevertheless, these results suggest that hybridized writing developed more positive attitudes toward science and science learning, particularly in terms of the students’ interest and enjoyment. In the most recent case study with Grade 12 students, we found that pride, strength, determination, interest and alertness were among the positive emotions most strongly elicited by the writing project. Furthermore, the students expressed enhanced feelings of self-efficacy in successfully writing hybridized scientific narratives in science. In this chapter, we describe the pedagogy of hybridized writing in science, overview the evidence to support this approach, and identify future developments.
Abstract:
In this article, we report transgene-derived resistance in maize to the severe pathogen maize streak virus (MSV). The mutated MSV replication-associated protein gene that was used to transform maize showed stable expression to the fourth generation. Transgenic T2 and T3 plants displayed a significant delay in symptom development, a decrease in symptom severity and higher survival rates than non-transgenic plants after MSV challenge, as did a transgenic hybrid made by crossing T2 Hi-II with the widely grown, commercial, highly MSV-susceptible, white maize genotype WM3. To the best of our knowledge, this is the first maize to be developed with transgenic MSV resistance and the first all-African-produced genetically modified crop plant. © 2007 The Authors.
Abstract:
Businesses will learn how design integration can increase growth and productivity, and gain a sustainable competitive advantage. This hands-on two-day workshop will demystify design thinking and introduce you to both the theory and practice of the latest, world-class design integration methods. Design integration will transform your business and boost resilience in the face of current global, social and economic challenges.
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing a significant proportion of invalid matches. The accuracy of matching in the vicinity of edges is also improved.
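For readers unfamiliar with it, the rank transform replaces each pixel by the number of neighbours darker than it, and matching is then performed on the transformed images. The following minimal sketch, with assumed window sizes and disparity range, illustrates that idea and is not the algorithm evaluated in the paper.

```python
# Minimal sketch of the rank transform and an SAD match over the transformed
# images. Window size and disparity range are illustrative assumptions.
import numpy as np

def rank_transform(img, radius=2):
    """Replace each pixel by the count of neighbours less than the centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = np.count_nonzero(win < img[y, x])
    return out

def match_pixel(left_rt, right_rt, y, x, max_disp=16, radius=3):
    """Return the disparity minimising SAD between rank-transformed windows."""
    ref = left_rt[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(int)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - radius) + 1):
        cand = right_rt[y - radius:y + radius + 1,
                        x - d - radius:x - d + radius + 1].astype(int)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(1)
left = rng.integers(0, 256, (60, 80)).astype(np.uint8)
right = np.roll(left, -4, axis=1)               # synthetic 4-pixel shift
lrt, rrt = rank_transform(left), rank_transform(right)
print("estimated disparity:", match_pixel(lrt, rrt, 30, 40))   # expect 4
```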
Abstract:
A fundamental problem faced by stereo vision algorithms is that of determining correspondences between two images which comprise a stereo pair. This paper presents work towards the development of a new matching algorithm, based on the rank transform. This algorithm makes use of both area-based and edge-based information, and is therefore referred to as a hybrid algorithm. In addition, this algorithm uses a number of matching constraints, including the novel rank constraint. Results obtained using a number of test pairs show that the matching algorithm is capable of removing most invalid matches. The accuracy of matching in the vicinity of edges is also improved.
Abstract:
The mining environment, being complex, irregular and time-varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric matching techniques, in particular the rank and census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, and able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
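The census transform mentioned above encodes each window as a bit string of neighbour-versus-centre comparisons, so the matching cost reduces to a Hamming distance; this bitwise form is what makes it attractive for hardware. The sketch below uses an assumed 5x5 window and synthetic data, and is not the implementation evaluated here.

```python
# Sketch of the census transform and its Hamming-distance matching cost.
# The 5x5 window and bit-packing order are assumptions for illustration.
import numpy as np

def census_transform(img, radius=2):
    """Encode each window as a bit string: 1 where neighbour < centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            bits = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out

def hamming_cost(a, b):
    """Number of differing bits between two census codes."""
    return bin(int(a) ^ int(b)).count("1")

rng = np.random.default_rng(2)
left = rng.integers(0, 256, (40, 60)).astype(np.uint8)
right = (0.7 * np.roll(left, -3, axis=1)).astype(np.uint8)   # shift + gain change
lc, rc = census_transform(left), census_transform(right)
costs = [sum(hamming_cost(lc[20, x], rc[20, x - d]) for x in range(25, 35))
         for d in range(8)]
print("best disparity:", int(np.argmin(costs)))   # typically 3 for this shift
```

Because the census code depends only on the ordering of intensities within the window, the gain change applied to the synthetic right image barely affects the cost, which is the robustness property reported above.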
Abstract:
The mining environment presents a challenging prospect for stereo vision. Our objective is to produce a stereo vision sensor suited to close-range scenes consisting mostly of rocks. This sensor should produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this application. This paper compares a number of stereo matching algorithms in terms of robustness and suitability to fast implementation. These include traditional area-based algorithms, and algorithms based on non-parametric transforms, notably the rank and census transforms. Our experimental results show that the rank and census transforms are robust with respect to radiometric distortion and introduce less computational complexity than conventional area-based matching techniques.
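The reported sensitivity of plain area-based costs to radiometric distortion, and the robustness of normalised or zero-meaned variants, can be illustrated with a small synthetic comparison. The patch sizes, gain and offset values below are assumptions chosen only to make the qualitative behaviour visible; they do not reproduce the paper's experiments.

```python
# Hedged sketch of three area-based matching costs under a radiometric
# (gain and offset) change between the two windows being compared.
import numpy as np

def sad(a, b):
    """Sum of absolute differences (a cost: lower is better)."""
    return np.abs(a - b).sum()

def zsad(a, b):
    """Zero-meaned SAD (a cost: lower is better)."""
    return np.abs((a - a.mean()) - (b - b.mean())).sum()

def ncc(a, b):
    """Normalised cross-correlation (a similarity: higher is better)."""
    az, bz = a - a.mean(), b - b.mean()
    return (az * bz).sum() / (np.sqrt((az ** 2).sum() * (bz ** 2).sum()) + 1e-9)

rng = np.random.default_rng(3)
ref = rng.random((7, 7))
same = ref * 1.5 + 0.4               # same patch under a gain and offset change
other = rng.random((7, 7))           # unrelated patch

# Plain SAD can prefer the unrelated patch under such distortion, while the
# zero-meaned and normalised measures still favour the true match.
for name, cost in [("SAD", sad), ("ZSAD", zsad)]:
    print(f"{name}: match={cost(ref, same):.2f}  non-match={cost(ref, other):.2f}")
print(f"NCC: match={ncc(ref, same):.2f}  non-match={ncc(ref, other):.2f}")
```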