679 results for Produce


Relevance:

10.00%

Publisher:

Abstract:

Research on efficient pairing implementation has focussed on reducing the loop length and on using high-degree twists. Existence of twists of degree larger than 2 is a very restrictive criterion but luckily constructions for pairing-friendly elliptic curves with such twists exist. In fact, Freeman, Scott and Teske showed in their overview paper that often the best known methods of constructing pairing-friendly elliptic curves over fields of large prime characteristic produce curves that admit twists of degree 3, 4 or 6. A few papers have presented explicit formulas for the doubling and the addition step in Miller’s algorithm, but the optimizations were all done for the Tate pairing with degree-2 twists, so the main usage of the high-degree twists remained incompatible with more efficient formulas. In this paper we present efficient formulas for curves with twists of degree 2, 3, 4 or 6. These formulas are significantly faster than their predecessors. We show how these faster formulas can be applied to Tate and ate pairing variants, thereby speeding up all practical suggestions for efficient pairing implementations over fields of large characteristic.
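The degree-2 twist relationship underlying this line of work can be checked numerically: a curve over F_p and its quadratic twist together have exactly 2p + 2 points. The prime, coefficients and non-residue below are toy choices for brute-force counting, not curves from the paper.

```python
# Toy illustration (not a pairing-friendly curve): a curve E over F_p and its
# quadratic (degree-2) twist E' satisfy #E(F_p) + #E'(F_p) = 2p + 2.
p = 19          # small prime, chosen only so brute-force counting is feasible
a, b = 1, 1     # E : y^2 = x^3 + x + 1 (nonsingular mod 19)
d = 2           # a quadratic non-residue mod 19
at, bt = (a * d * d) % p, (b * d ** 3) % p   # E': y^2 = x^3 + a*d^2*x + b*d^3

def count_points(a, b, p):
    """Brute-force count of affine points plus the point at infinity."""
    n = 1
    for x in range(p):
        rhs = (x ** 3 + a * x + b) % p
        n += sum(1 for y in range(p) if (y * y) % p == rhs)
    return n

n_e, n_t = count_points(a, b, p), count_points(at, bt, p)
```

Since #E = p + 1 - t and the quadratic twist has p + 1 + t points, the trace term cancels in the sum.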


This paper introduces a novel technique to directly optimise the Figure of Merit (FOM) for phonetic spoken term detection. The FOM is a popular measure of STD accuracy, making it an ideal candidate for use as an objective function. A simple linear model is introduced to transform the phone log-posterior probabilities output by a phone classifier to produce enhanced log-posterior features that are more suitable for the STD task. Direct optimisation of the FOM is then performed by training the parameters of this model using a non-linear gradient descent algorithm. Substantial FOM improvements of 11% relative are achieved on held-out evaluation data, demonstrating the generalisability of the approach.
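The general shape of the approach, a linear model over phone log-posteriors with weights trained by gradient ascent, can be sketched as below. This is a toy proxy objective on synthetic data, not the paper's actual FOM or features.

```python
import numpy as np

# Toy sketch: a linear model transforms phone log-posteriors, and its weights
# are trained by gradient ascent on a smooth separation objective standing in
# for the FOM. All data here are synthetic; the real system uses the outputs
# of a trained phone classifier and term-occurrence labels.
rng = np.random.default_rng(1)
n_frames, n_phones = 200, 10
logpost = np.log(rng.dirichlet(np.ones(n_phones), size=n_frames))
is_hit = rng.integers(0, 2, size=n_frames).astype(bool)  # toy term labels

def proxy_fom(w):
    """Mean enhanced score of hit frames minus that of non-hit frames."""
    s = logpost @ w
    return s[is_hit].mean() - s[~is_hit].mean()

w = np.zeros(n_phones)
grad = logpost[is_hit].mean(axis=0) - logpost[~is_hit].mean(axis=0)
for _ in range(50):          # gradient ascent on the linear proxy objective
    w += 0.1 * grad
```

Because the proxy is linear in the weights, each ascent step increases it; the paper's non-linear optimisation of the true FOM follows the same train-by-gradient pattern.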


The high moisture content of mill mud (typically 75–80% for Australian factories) results in high transportation costs for the redistribution of mud onto cane farms. The high transportation cost relative to the nutrient value of the mill mud results in many milling companies subsidising the cost of this recycling to ensure a wide distribution across the cane supply area. An average mill would generate about 100 000 t of mud (at 75% moisture) in a crushing season. The development of mud processing facilities that will produce a low moisture mud that can be effectively incorporated into cane land with existing or modified spreading equipment will improve the cost efficiency of mud redistribution to farms; provide an economical fertiliser alternative to more farms in the supply area; and reduce the potential for adverse environmental impacts from farms. A research investigation assessing solid bowl decanter centrifuges to produce low moisture mud with low residual pol was undertaken and the results compared to the performance of existing rotary vacuum filters in factory trials. The decanters were operated on filter mud feed in parallel with the rotary vacuum filters to allow comparisons of performance. Samples of feed, mud product and filtrate were analysed to provide performance indicators. The decanter centrifuge could produce mud cakes with very low moisture and residual pol levels. Spreading trials in cane fields indicated that the dry cake could be spread easily by standard mud trucks and by trucks designed specifically to spread fertiliser.
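The haulage saving from dewatering follows from a simple mass balance on the figures quoted above (100 000 t per season at 75% moisture). The 50% target moisture below is a hypothetical decanter product for illustration, not a figure from the trials.

```python
# Mass balance for mud haulage using the figures quoted above:
# 100 000 t of mud per season at 75% moisture carries 25 000 t of dry solids.
season_mud_t = 100_000
solids_t = season_mud_t * (1 - 0.75)   # 25 000 t dry solids

def mud_mass(moisture):
    """Total mass that carts the same solids at a given moisture fraction."""
    return solids_t / (1 - moisture)

mass_at_75 = mud_mass(0.75)            # the status quo
mass_at_50 = mud_mass(0.50)            # hypothetical low-moisture decanter cake
haulage_saved_t = mass_at_75 - mass_at_50
```

Halving the moisture fraction from 75% to 50% halves the tonnage to be transported for the same nutrient delivery.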


There is a growing body of literature within social and cultural geography that explores notions of place, space, culture, race and identity. When health services in rural communities are explored using these notions, it can lead to multiple ways of understanding the cultural meanings inscribed within health services and how they can be embedded with an array of politics. For example, health services can often reflect the symbolic place that each individual holds within that rural community. Through the use of a rural health service case study, this paper will demonstrate how the physical sites and appearances of health services can act as social texts that convey messages of belonging and welcome, or exclusion and domination. They can also produce and reproduce power and control relations. In this way, they can influence the ways that Aboriginal people engage in health service environments – either as places where Aboriginal people feel welcome, comfortable, secure and culturally safe and happy to use the health service, or as places where they utilise the service provided with a great deal of effort, angst and energy. It is important to understand how these complex notions play out in rural communities if the health and wellbeing of Aboriginal people is going to be addressed.


International assessments of student science achievement, and growing evidence of students' waning interest in school science, have ensured that the development of scientific literacy continues to remain an important educational priority. Furthermore, researchers have called for teaching and learning strategies to engage students in the learning of science, particularly in the middle years of schooling. This study extends previous national and international research that has established a link between writing and learning science. Specifically, it investigates the learning experiences of eight intact Year 9 science classes as they engage in the writing of short stories that merge scientific and narrative genres (i.e., hybridised scientific narratives) about the socioscientific issue of biosecurity. This study employed a triangulation mixed methods research design, generating both quantitative and qualitative data, in order to investigate three research questions that examined the extent to which the students' participation in the study enhanced their scientific literacy; the extent to which the students demonstrated conceptual understanding of related scientific concepts through their written artefacts and in interviews about the artefacts; and the extent to which the students' participation in the project influenced their attitudes toward science and science learning. Three aspects of scientific literacy were investigated in this study: conceptual science understandings (a derived sense of scientific literacy), the students' transformation of scientific information in written stories about biosecurity (simple and expanded fundamental senses of scientific literacy), and attitudes toward science and science learning. 
The stories written by students in a selected case study class (N=26) were analysed quantitatively using a series of specifically-designed matrices that produce numerical scores that reflect students' developing fundamental and derived senses of scientific literacy. All students (N=152) also completed a Likert-style instrument (i.e., BioQuiz), pretest and posttest, that examined their interest in learning science, science self-efficacy, their perceived personal and general value of science, their familiarity with biosecurity issues, and their attitudes toward biosecurity. Socioscientific issues (SSI) education served as a theoretical framework for this study. It sought to investigate an alternative discourse with which students can engage in the context of SSI education, and the role of positive attitudes in engaging students in the negotiation of socioscientific issues. Results of the study have revealed that writing BioStories enhanced selected aspects of the participants' attitudes toward science and science learning, and their awareness and conceptual understanding of issues relating to biosecurity. Furthermore, the students' written artefacts alone did not provide an accurate representation of the level of their conceptual science understandings. An examination of these artefacts in combination with interviews about the students' written work provided a more comprehensive assessment of their developing scientific literacy. These findings support extensive calls for the utilisation of diversified writing-to-learn strategies in the science classroom, and therefore make a significant contribution to the writing-to-learn science literature, particularly in relation to the use of hybridised scientific genres. 
At the same time, this study presents the argument that the writing of hybridised scientific narratives such as BioStories can be used to complement the types of written discourse with which students engage in the negotiation of socioscientific issues, namely, argumentation, as the development of positive attitudes toward science and science learning can encourage students' participation in the discourse of science. The implications of this study for curricular design and implementation, and for further research, are also discussed.


Art is most often at the margins of community life, seen as a distraction or entertainment only; an individual’s whim. It is generally seen as without a useful role to play in that community. This is a perception of grown-ups; children seem readily to accept an engagement with art making. Our research has shown that when an individual is drawn into a crafted art project where they have an actual involvement with the direction and production of the art work, then they become deeply engaged on multiple levels. This is true of all age groups. Artists skilled in community collaboration are able to produce art of value that transcends the usual judgements of worth. It gives people a licence to unfetter their imagination and then cooperatively be drawn back to a reachable visual solution. If you engage with children in a community, you engage the extended family at some point. The primary methodology was to produce a series of educationally valid projects at the Cherbourg State School that had a resonance into that community, then revisit and refine them where necessary and develop a new series that extended all of the positive aspects of them. This was done over a period of five years. The art made during this time is excellent. The children know it, as do their families, staff at the school, members of the local community and the others who have viewed it in exhibitions in far places like Brisbane and Melbourne. This art and the way it has been made has been acknowledged as useful by the children, teachers and the community, in educational and social terms. The school is a better place to be. This has been acknowledged by the children, teachers and the community. The art making of the last five years has become an integral part of the way the school now operates and the influence of that has begun to seep into other parts of the community. Art needs to be taken from the margins and put to work at the centre.


The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy at the cost of markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection results in reduction in MR to 9.8 to 10.16, with time-segmented summary data (dataset F) MR being 9.8 and raw time-series summary data (dataset A) being 9.92. However, for all time-series-only datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside normal range (Dataset RF_E) and derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G) perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are most comprehensible and clinically relevant. The predictive accuracy increase achieved by addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature reduced, anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal and time series variables based on physiological variable values’ being outside the accepted normal range is associated with some improvement in model performance.
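The two headline evaluation measures used throughout, the Kappa statistic and the misclassification rate, both fall out of a confusion matrix. The matrix below is invented to illustrate an unbalanced two-class problem like the one described.

```python
import numpy as np

def kappa_and_mr(cm):
    """Cohen's kappa and misclassification rate from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                 # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e), 1 - p_o

# invented unbalanced two-class example (majority negative class): note the
# respectable accuracy (MR 0.125) against a much more modest kappa, which is
# why kappa is preferred when the class distribution is skewed.
kappa, mr = kappa_and_mr([[90, 10],
                          [ 5, 15]])
```

Kappa discounts the agreement a classifier would achieve by chance from the marginal distributions, so it is far less flattered by class imbalance than raw accuracy.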


Police work tasks are diverse and require the ability to take command, demonstrate leadership, make serious decisions and be self directed (Beck, 1999; Brunetto & Farr-Wharton, 2002; Howard, Donofrio & Boles, 2002). This work is usually performed in pairs or sometimes by an officer working alone. Operational police work is seldom performed under the watchful eyes of a supervisor and a great amount of reliance is placed on the high levels of motivation and professionalism of individual officers. Research has shown that highly motivated workers produce better outcomes (Whisenand & Rush, 1998; Herzberg, 2003). It is therefore important that Queensland police officers are highly motivated to provide a quality service to the Queensland community. This research aims to identify factors which motivate Queensland police to perform quality work. Researchers acknowledge that there is a lack of research and knowledge in regard to the factors which motivate police (Beck, 1999; Bragg, 1998; Howard, Donofrio & Boles, 2002; McHugh & Verner, 1998). The motivational factors were identified in regard to the demographic variables of age, sex, rank, tenure and education. The model for this research is Herzberg’s two-factor theory of workplace motivation (1959). Herzberg found that there are two broad types of workplace motivational factors: those driven by a need to prevent loss or harm and those driven by a need to gain personal satisfaction or achievement. His study identified 16 basic sub-factors that operate in the workplace. The research utilised a questionnaire instrument based on the sub-factors identified by Herzberg (1959). The questionnaire format consists of an initial section which sought demographic information about the participant and is followed by 51 Likert scale questions. The instrument is an expanded version of an instrument previously used in doctoral studies to identify sources of police motivation (Holden, 1980; Chiou, 2004).
The questionnaire was forwarded to approximately 960 police in the Brisbane, Metropolitan North Region. The data were analysed using Factor Analysis, MANOVAs, ANOVAs and multiple regression analysis to identify the key sources of police motivation and to determine the relationships between demographic variables (age, rank, educational level, tenure and generation cohort) and the motivational factors. A total of 484 officers responded to the questionnaire from the sample population of 960. Factor analysis revealed five broad Prime Motivational Factors that motivate police in their work. The Prime Motivational Factors are: Feeling Valued, Achievement, Workplace Relationships, the Work Itself and Pay and Conditions. The factor Feeling Valued highlighted the importance of positive supportive leaders in motivating officers. Many officers commented that supervisors who only provided negative feedback diminished their sense of feeling valued and were a key source of de-motivation. Officers also frequently commented that they were motivated by operational police work itself whilst demonstrating a strong sense of identity with their team and colleagues. The study showed a general need for acceptance by peers and an idealistic motivation to assist members of the community in need and protect victims of crime. Generational cohorts were not found to exert a significant influence on police motivation. The demographic variable with the single greatest influence on police motivation was tenure. Motivation levels were found to drop dramatically during the first two years of an officer’s service and generally not improve significantly until near retirement age. The findings of this research provide the foundation of a number of recommendations in regard to police retirement, training and work allocation that are aimed to improve police motivation levels.
The model of five Prime Motivational Factors developed in this study is recommended for use as a planning tool by police leaders to improve motivational and job-satisfaction components of police Service policies. The findings of this study also provide a better understanding of the current sources of police motivation. They are expected to have valuable application for Queensland police human resource management when considering policies and procedures in the areas of motivation, stress reduction and attracting suitable staff to specific areas of responsibility.
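A common way to decide how many broad factors a questionnaire supports, as the factor analysis above does, is to keep the eigenvalues of the item correlation matrix that exceed 1 (the Kaiser criterion). The correlation matrix below is contrived, with four items loading perfectly on two separate latent factors; the study's actual analysis ran on 51 items from 484 respondents.

```python
import numpy as np

# Contrived correlation matrix: items 1-2 duplicate one latent factor and
# items 3-4 duplicate another, so exactly two factors should be retained.
R = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])

eigenvalues = np.linalg.eigvalsh(R)          # ascending order
n_factors = int((eigenvalues > 1.0).sum())   # Kaiser criterion: keep eig > 1
```

Each 2x2 block of perfectly correlated items contributes eigenvalues 2 and 0, so the criterion retains two factors here.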


While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, the speech signal enhanced by a microphone array has been shown to be an effective alternative in a noisy environment. The use of microphone arrays in contrast to close-talking microphones alleviates the feeling of discomfort and distraction to the user. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users’ environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in the means of data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployment of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data comes from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied for ad-hoc microphone arrays.
A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known and beamforming must be achieved blindly. There are two general approaches that can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. An alternative approach is instead to first determine the unknown microphone positions through array calibration methods and then to use the traditional geometrical formulation for the steering vector. Following these two major approaches investigated in this thesis, a novel clustered approach which includes clustering the microphones and selecting the clusters based on their proximity to the speaker is proposed. Novel experiments are conducted to demonstrate that the proposed method to automatically select clusters of microphones (i.e., a subarray), closely located both to each other and to the desired speech source, may in fact provide a more robust speech enhancement and recognition than the full array could.
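The first of the two blind approaches, estimating channel alignment directly without geometry, can be sketched as delay-and-sum beamforming where inter-microphone delays come from cross-correlation against a reference channel. The source signal, delays and noise level below are all synthetic stand-ins.

```python
import numpy as np

# Blind delay-and-sum sketch: delays between ad-hoc microphones are estimated
# by cross-correlation against a reference channel, so no microphone or
# source positions are needed. Everything here is synthetic.
rng = np.random.default_rng(0)
clean = rng.standard_normal(2048)          # broadband stand-in for speech
true_delays = [0, 3, 7, 12]                # samples; unknown to the algorithm
channels = [np.roll(clean, d) + 0.3 * rng.standard_normal(2048)
            for d in true_delays]

ref = channels[0]
lags, aligned = [], []
for x in channels:
    # lag of the cross-correlation peak = relative delay of this channel
    lag = int(np.argmax(np.correlate(x, ref, mode="full"))) - (len(ref) - 1)
    lags.append(lag)
    aligned.append(np.roll(x, -lag))
beam = np.mean(aligned, axis=0)            # delay-and-sum output

def snr_db(x):
    """SNR against the known clean source (available here only because
    the data are synthetic)."""
    return 10 * np.log10(np.sum(clean**2) / np.sum((x - clean)**2))
```

Averaging the aligned channels leaves the source coherent while the independent noise terms partially cancel, so the beamformed output has a higher SNR than any single channel.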


This paper argues that management education needs to consider a trend in learning design which advances more creative learning through an alliance with art-based pedagogical processes. A shift is required from skills training to facilitating transformational learning through experiences that expand human potential, facilitated by artistic processes. In this paper the authors discuss the necessity for creativity and innovation in the workplace and the need to develop better leaders and managers. The inclusion of arts-based processes enhances artful behaviour, aesthetics and creativity within management and organisational behaviour, generating important implications for business innovation. This creative learning focus stems from an analysis of an arts-based intervention for management development. Entitled Management Jazz, the program was conducted over three years at a large Australian University. The paper reviews some of the salient literature in the field. It considers four stages of the learning process: capacity, artful event, increased capability, and application/action to produce product. One illustrative example of an arts-based learning process is provided from the Management Jazz program. Research findings indicate that artful learning opportunities enhance capacity for awareness of creativity in one’s self and in others. This capacity correlates positively with a perception that engaging in artful learning enhances the capability of managers in changing collaborative relationships and habitat constraint. The authors conclude that it is through engagement and creative alliance with the arts that management education can explore and discover artful approaches to building creativity and innovation. The illustration presented in this paper will be delivered as a brief workshop at the Fourth Art of Management Conference.
The process of bricolage and articles at hand will be used to explore creative constraints and prototypes while generating group collaboration. The mini-workshop will conclude with discussion of the arts-based process and capability enhancement outcomes.


This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data, land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user-interface, to exploit the potential of digital spatial data being generated from the utilisation of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model then is electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of differences resulting when subsequent initial and calibrated models are compared, replaces the traditional checking operations of spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. 
This creation of the database is fundamental also to the efficient integration of accurate spatial data about land being generated by modern technology such as global positioning systems, and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should meet also the more general and encompassing requirements of, and prove to be of tangible, longer term benefit to the developing, electronic land information industry.
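The statistical reporting of differences between an initial model and its field-calibrated counterpart, which replaces the registry office's traditional checking, can be as simple as an RMS of coordinate residuals over the boundary marks. The easting/northing pairs below are invented for illustration.

```python
import numpy as np

# Invented easting/northing coordinates (metres) for three boundary marks,
# comparing the initial digital model against the field-calibrated model.
initial    = np.array([[100.00, 200.00], [150.00, 250.00], [180.00, 300.00]])
calibrated = np.array([[100.03, 200.00], [149.96, 250.00], [180.00, 300.05]])

residuals = calibrated - initial                       # per-mark shifts
rms_m = float(np.sqrt((residuals**2).sum(axis=1).mean()))  # RMS position error
```

A threshold on such an RMS figure (against survey accuracy standards) can then gate whether the lodged model is accepted or flagged for review.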


Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion - significant since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to the existing matching constraints which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
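The rank transform's robustness to radiometric distortion comes from the fact that it depends only on the ordering of intensities within a window, not their values. A minimal sketch, assuming a square window of radius `r` and ignoring image borders:

```python
import numpy as np

def rank_transform(img, r=1):
    """Rank transform: for each pixel, count the neighbours in the
    (2r+1) x (2r+1) window whose intensity is strictly less than the
    centre pixel's. Border pixels are left at zero for simplicity."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = int((window < img[y, x]).sum())
    return out

# Any strictly increasing remap of intensities (a gain/offset change here,
# simulating radiometric distortion between cameras) preserves the ordering
# inside every window, so the transform is unchanged.
rng = np.random.default_rng(2)
img = rng.random((12, 12))
distorted = 1.8 * img + 0.4
```

Matching rank-transformed images with a simple sum-of-absolute-differences cost therefore tolerates gain and offset differences between the two cameras.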

Resumo:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II.

In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The HT can only be realized approximately, using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme in place of the HT to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL, based on fixed point analysis, and the linear tanlock approach, based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered.
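The core of the time-delay idea can be illustrated with a toy arctan phase detector: for a sinusoid, a delay of a quarter period yields the quadrature component directly, so the phase estimate needs no FIR Hilbert approximation. This is only a sketch of the principle, not the TDTL loop equations:

```python
import numpy as np

def tanlock_phase(x_now, x_delayed):
    """Arctan phase detector with a time delay standing in for the Hilbert
    transformer.  For x(t) = A*sin(phi(t)), a delay tau with omega*tau = pi/2
    gives x_delayed = -A*cos(phi(t)), so (x_now, -x_delayed) is a quadrature
    pair and atan2 recovers phi.  (Illustrative sketch; the delay-induced
    phase shift is signal-dependent, which is the point of the TDTL.)"""
    return np.arctan2(x_now, -x_delayed)

# Example: amplitude 2, phase 0.7 rad, quarter-period delay.
A, phi = 2.0, 0.7
x_now = A * np.sin(phi)
x_del = A * np.sin(phi - np.pi / 2)   # = -A*cos(phi)
print(round(float(tanlock_phase(x_now, x_del)), 4))  # → 0.7
```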
The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behavior of the HT and of the time delay in the presence of additive Gaussian noise, and on this basis analyzed the behavior of first- and second-order TDTLs in additive Gaussian noise.

Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, the main concern of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain, in which the components of a multicomponent signal can be identified as multiple energy peaks. Many real-life and synthetic signals are of multicomponent nature, yet there is little in the literature concerning IF estimation of such signals; this is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions more suitable for this purpose has been proposed.
The kernels of this class are time-only (one-dimensional), rather than the time-lag (two-dimensional) kernels of conventional TFDs; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
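The simplest non-parametric IF estimator of this kind takes, at each time instant, the frequency of the peak of a time-frequency representation. The sketch below uses a plain windowed FFT magnitude as a crude stand-in for the quadratic TFDs proposed in the thesis (window length and hop are arbitrary illustrative choices):

```python
import numpy as np

def if_estimate_stft(x, fs, win_len=128, hop=16):
    """Non-parametric IF estimate: for each time slice, return the
    frequency of the peak of the windowed FFT magnitude."""
    win = np.hanning(win_len)
    freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
    est = []
    for start in range(0, len(x) - win_len, hop):
        spec = np.abs(np.fft.rfft(x[start:start + win_len] * win))
        est.append(freqs[np.argmax(spec)])
    return np.array(est)

# Linear-chirp check: IF sweeps 50 -> 150 Hz over 1 s at fs = 1 kHz.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * (50 * t + 50 * t ** 2))   # IF = 50 + 100*t Hz
f_hat = if_estimate_stft(x, fs)
```

The peak-picking step is where the choice of TFD matters: a sharper distribution (better energy concentration around the IF, fewer artifacts) makes the argmax both more accurate and more robust for multicomponent signals.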

Resumo:

In an environment where it has become increasingly difficult to attract consumer attention, marketers have begun to explore alternative forms of marketing communication. One such form that has emerged is product placement, which has more recently appeared in electronic games. Given changes in media consumption and the growth of the games industry, it is not surprising that games are being exploited as a medium for promotional content. Other market developments are also facilitating and encouraging their use, in terms of both the insertion of brand messages into video games and the creation of brand-centred environments, labelled ‘advergames’.

However, while there is much speculation concerning the beneficial outcomes for marketers, there remains a lack of academic work in this area and little empirical evidence of the actual effects of this form of promotion on game players. Only a handful of studies in the literature have explored the influence of game placements on consumers. The majority have studied their effect on brand awareness, largely demonstrating that players can recall placed brands. Further, most research conducted to date has focused on computer and online games, although consoles represent the dominant platform for play (Taub, 2004). Finally, advergames have largely been neglected, particularly those in a console format. Widening the gap in the literature is the fact that insufficient academic attention has been given to product placement as a marketing communication strategy overall, and to games in general. The unique nature of the strategy also makes it difficult to apply existing literature to this context.

To address a significant need for information in both the academic and business domains, the current research investigates the effects of brand and product placements in video games and advergames on consumer attitude to the brand and corporate image. It was conducted in two stages. Stage one represents a pilot study.
It explored the effects of the use of simulated and peripheral placements in video games on players’ and observers’ attitudinal responses, and whether these are influenced by involvement with a product category or skill level in the game. The ability of gamers to recall placed brands was also examined. A laboratory experiment was employed with a small sample of sixty adult subjects drawn from an Australian east-coast university, some of whom were exposed to a console video game on a television set. The major finding of study one is that placements in a video game have no effect on gamers’ attitudes, but they are recalled.

For stage two of the research, a field experiment was conducted with a large, random sample of 350 student respondents to investigate the effects on players of brand and product placements in handheld video games and advergames. The constructs of brand attitude and corporate image were again tested, along with several potential confounds. Consistent with the pilot, the results demonstrate that product placement in electronic games has no effect on players’ brand attitudes or corporate image, even when allowing for their involvement with the product category, skill level in the game, or skill level in relation to the medium. Age and gender also have no impact. However, the more interactive a player perceives the game to be, the more favourable their attitude to the placed brand and the corporate image of the brand manufacturer. In other words, when controlling for perceived interactivity, players experienced more favourable attitudes, but the effect was so weak that it probably lacks practical significance. It is suggested that this result can be explained by the existence of excitation transfer, rather than by any processing of placed brands. The current research provides strong, empirical evidence that brand and product placements in games do not produce strong attitudinal responses.
It appears that the nature of the game medium, the game playing experience and product placement impose constraints on gamer motivation, opportunity and ability to process these messages, thereby precluding their impact on attitude to the brand and corporate image. Since this is the first study to investigate the ability of video game and advergame placements to facilitate these deeper consumer responses, further research across different contexts is warranted. Nevertheless, the findings have important theoretical and managerial implications.

This investigation makes a number of valuable contributions. First, it is relevant to current marketing practice and presents findings that can help guide promotional strategy decisions. It also presents a comprehensive review of the games industry and associated activities in the marketplace, relevant for marketing practitioners. Theoretically, it contributes new knowledge concerning product placement, including how it should be defined, its classification within the existing communications framework, its dimensions and effects. This is extended to include brand-centred entertainment. The thesis also presents the most comprehensive analysis available in the literature of how placements appear in games. In the consumer behaviour discipline, the research builds on theory concerning attitude formation, through application of MacInnis and Jaworski’s (1989) Integrative Attitude Formation Model. With regard to the games literature, the thesis provides a structured framework for the comparison of games with different media types; it advances understanding of the game medium, its characteristics and the game playing experience; and it provides insight into console and handheld games specifically, as well as interactive environments generally. This study is the first to test the effects of interactivity in a game environment, and presents a modified scale that can be used as part of future research.
Methodologically, it addresses the limitations of prior research through execution of a field experiment and observation with a large sample, making this the largest study of product placement in games available in the literature. Finally, the current thesis offers comprehensive recommendations that will provide structure and direction for future study in this important field.

Resumo:

This research used the Queensland Police Service, Australia, as a major case study. Information on the principles, techniques and processes used, and the reasons for the recording, storing and release of audit information for evidentiary purposes, is reported. It is shown that law enforcement agencies have a two-fold interest in, and legal obligation pertaining to, audit trails: the first relates to situations where audit trails are actually used by criminals in the commission of crime, and the second to audit trails generated by the information systems used by the police themselves in support of the recording and investigation of crime.

Eleven court cases involving Queensland Police Service audit trails used in evidence in Queensland courts were selected for further analysis. Of the cases studied, none of the evidence presented was rejected or seriously challenged from a technical perspective. These results were further analysed and related to normal requirements for trusted maintenance of audit trail information in sensitive environments, with discussion of the ability and/or willingness of courts to fully challenge, assess or value audit evidence presented. Managerial and technical frameworks are proposed for, firstly, what may be considered an environment in which a computer system is operating “properly” and, secondly, what education, training, qualifications, expertise and the like may be considered appropriate for the persons responsible within that environment.

Analysis was undertaken to determine whether the audit and control of information in a high-security environment, such as law enforcement, could be judged to have improved, or not, in the transition from manual to electronic processes.
Information collection, control of processing and audit in the manual processes used by the Queensland Police Service, Australia, in the period 1940 to 1980 were assessed against the current electronic systems, essentially introduced to policing in the decades of the 1980s and 1990s. Results show that electronic systems do provide faster communications, with centrally controlled and updated information readily available to large numbers of users connected across significant geographical distances. However, it is clearly evident that the price paid for this is a lack of ability and/or reluctance to provide improved audit and control processes.

To compare the information systems audit and control arrangements of the Queensland Police Service with those of other government departments and agencies, an Australia-wide survey was conducted. Its results were contrasted with those of a survey conducted four years earlier by the Australian Commonwealth Privacy Commission, which had shown that security in relation to the recording of activity against access to information held on Australian government computer systems was poor and a cause for concern. Within that four-year period, however, there is evidence to suggest that government organisations have become increasingly inclined to generate audit trails.

An attack on the overall security of audit trails in computer operating systems was initiated to further investigate the findings of the government systems survey, which showed that information systems audit trails in Microsoft Corporation's “Windows” operating system environments are relied on quite heavily. An audit of the security of audit trails generated, stored and managed in the Microsoft “Windows 2000” operating system environment was undertaken and compared and contrasted with similar audit trail schemes in the “UNIX” and “Linux” operating systems.
Strength of passwords and exploitation of any security problems in access control were targeted using software tools freely available in the public domain. Results showed that such security for the “Windows 2000” system is seriously flawed and that the integrity of audit trails stored within these environments cannot be relied upon.

A framework and set of guidelines for use by expert witnesses in the information technology (IT) profession is then proposed. This is achieved by examining the current rules and guidelines relating to the provision of expert evidence in a court environment, by analysing the rationale for the separation of distinct disciplines and corresponding bodies of knowledge used by the medical profession and forensic science, and then by analysing the bodies of knowledge within the discipline of IT itself. It is demonstrated that the accepted processes and procedures relevant to expert witnessing in a court environment are transferable to the IT sector. However, unlike some discipline areas, this analysis clearly identified two distinct aspects of the matter which appear particularly relevant to IT: expertise gained through the application of IT to information needs in a particular public or private enterprise; and expertise gained through accepted and verifiable education, training and experience in fundamental IT products and systems.
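One standard way to make an audit trail tamper-evident, not drawn from the thesis itself, is to chain a keyed MAC through the records so that altering or deleting any earlier entry invalidates every later tag. A minimal sketch (the key and record format are hypothetical):

```python
import hashlib
import hmac

def chain_log(entries, key=b"example-key"):
    """Tamper-evident audit trail: each record's HMAC covers the entry
    text and the previous HMAC, so changing or removing any earlier
    record breaks every subsequent link.  (Illustrative only.)"""
    mac = b"\x00" * 32  # fixed genesis value
    chained = []
    for entry in entries:
        mac = hmac.new(key, mac + entry.encode(), hashlib.sha256).digest()
        chained.append((entry, mac.hex()))
    return chained

def verify_log(chained, key=b"example-key"):
    """Recompute the chain and compare against the stored tags."""
    mac = b"\x00" * 32
    for entry, tag in chained:
        mac = hmac.new(key, mac + entry.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(mac.hex(), tag):
            return False
    return True
```

A scheme of this kind addresses after-the-fact tampering but not a compromised logging host, which can forge the chain from the point of compromise onward; the key must therefore be held outside the audited system.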