753 results for new product performance


Relevance: 30.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing the average tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we demonstrate a new property, the polynomial-order-reducing property of adaptive lattice filters, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that this technique yields a better probability of detection for the reduced-order phase signals than the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates a Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions.
The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes, because it relies on the minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence in parameter estimation for autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes on the misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
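
As a companion to the abstract above, the following is a minimal sketch of one common form of the stochastic gradient (gradient adaptive) lattice recursion for complex signals. The sign conventions of the update rule, the step size mu and the filter order are illustrative assumptions, not details taken from the thesis.

import numpy as np

def gradient_adaptive_lattice(x, order, mu=0.01):
    # Adapts reflection coefficients k[m] to reduce the sum of
    # forward and backward prediction-error powers at each stage.
    k = np.zeros(order, dtype=complex)           # reflection coefficients
    b_prev = np.zeros(order + 1, dtype=complex)  # backward errors, delayed one sample
    history = np.zeros((len(x), order), dtype=complex)
    for n, sample in enumerate(x):
        f = np.zeros(order + 1, dtype=complex)
        b = np.zeros(order + 1, dtype=complex)
        f[0] = b[0] = sample
        for m in range(1, order + 1):
            # order-update recursions of the lattice structure
            f[m] = f[m - 1] + np.conj(k[m - 1]) * b_prev[m - 1]
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
            # stochastic gradient step on |f_m|^2 + |b_m|^2
            k[m - 1] -= mu * (np.conj(f[m]) * b_prev[m - 1]
                              + b[m] * np.conj(f[m - 1]))
        b_prev = b
        history[n] = k
    return history

Feeding in a chirp-like test signal, the trajectory of history[:, 0] illustrates the kind of reflection-coefficient tracking behaviour the thesis analyses.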

Relevance: 30.00%

Abstract:

In an environment where it has become increasingly difficult to attract consumer attention, marketers have begun to explore alternative forms of marketing communication. One such form that has emerged is product placement, which has more recently appeared in electronic games. Given changes in media consumption and the growth of the games industry, it is not surprising that games are being exploited as a medium for promotional content. Other market developments are also facilitating and encouraging their use, in terms of both the insertion of brand messages into video games and the creation of brand-centred environments, labelled ‘advergames’. However, while there is much speculation concerning the beneficial outcomes for marketers, there remains a lack of academic work in this area and little empirical evidence of the actual effects of this form of promotion on game players. Only a handful of studies are evident in the literature that have explored the influence of game placements on consumers. The majority have studied their effect on brand awareness, largely demonstrating that players can recall placed brands. Further, most research conducted to date has focused on computer and online games, but consoles represent the dominant platform for play (Taub, 2004). Finally, advergames have largely been neglected, particularly those in a console format. Widening the gap in the literature is the fact that insufficient academic attention has been given to product placement as a marketing communication strategy overall, and to games in general. The unique nature of the strategy also makes it difficult to apply existing literature to this context. To address a significant need for information in both the academic and business domains, the current research investigates the effects of brand and product placements in video games and advergames on consumer attitude to the brand and corporate image. It was conducted in two stages. Stage one represents a pilot study. It explored the effects of use-simulated and peripheral placements in video games on players’ and observers’ attitudinal responses, and whether these are influenced by involvement with a product category or skill level in the game. The ability of gamers to recall placed brands was also examined. A laboratory experiment was employed with a small sample of sixty adult subjects drawn from an Australian east-coast university, some of whom were exposed to a console video game on a television set. The major finding of study one is that placements in a video game have no effect on gamers’ attitudes, but they are recalled. For stage two of the research, a field experiment was conducted with a large, random sample of 350 student respondents to investigate the effects on players of brand and product placements in handheld video games and advergames. The constructs of brand attitude and corporate image were again tested, along with several potential confounds. Consistent with the pilot, the results demonstrate that product placement in electronic games has no effect on players’ brand attitudes or corporate image, even when allowing for their involvement with the product category, skill level in the game, or skill level in relation to the medium. Age and gender also have no impact. However, the more interactive a player perceives the game to be, the higher their attitude to the placed brand and corporate image of the brand manufacturer.
In other words, when controlling for perceived interactivity, players experienced more favourable attitudes, but the effect was so weak it probably lacks practical significance. It is suggested that this result can be explained by the existence of excitation transfer, rather than any processing of placed brands. The current research provides strong empirical evidence that brand and product placements in games do not produce strong attitudinal responses. It appears that the nature of the game medium, the game-playing experience and product placement impose constraints on gamer motivation, opportunity and ability to process these messages, thereby precluding their impact on attitude to the brand and corporate image. Since this is the first study to investigate the ability of video game and advergame placements to facilitate these deeper consumer responses, further research across different contexts is warranted. Nevertheless, the findings have important theoretical and managerial implications. This investigation makes a number of valuable contributions. First, it is relevant to current marketing practice and presents findings that can help guide promotional strategy decisions. It also presents a comprehensive review of the games industry and associated activities in the marketplace, relevant for marketing practitioners. Theoretically, it contributes new knowledge concerning product placement, including how it should be defined, its classification within the existing communications framework, its dimensions and effects. This is extended to include brand-centred entertainment. The thesis also presents the most comprehensive analysis available in the literature of how placements appear in games. In the consumer behaviour discipline, the research builds on theory concerning attitude formation, through application of MacInnis and Jaworski’s (1989) Integrative Attitude Formation Model. With regard to the games literature, the thesis provides a structured framework for the comparison of games with different media types; it advances understanding of the game medium, its characteristics and the game-playing experience; and it provides insight into console and handheld games specifically, as well as interactive environments generally. This study is the first to test the effects of interactivity in a game environment, and presents a modified scale that can be used as part of future research. Methodologically, it addresses the limitations of prior research through execution of a field experiment and observation with a large sample, making this the largest study of product placement in games available in the literature. Finally, the current thesis offers comprehensive recommendations that will provide structure and direction for future study in this important field.

Relevance: 30.00%

Abstract:

The law and popular opinion expect boards of directors to actively monitor their organisations. Further, public opinion is that boards should have a positive impact on organisational performance. However, the processes of board monitoring and judgment are poorly understood, and board influence on organisational performance needs to be better understood. This thesis responds to the repeated calls to open the ‘black box’ linking board practices and organisational performance by investigating the processual behaviours of boards. The work of four boards of micro and small nonprofit organisations was studied for periods of at least one year, using a processual research approach, drawing on observations of board meetings, interviews with directors, and the documents of the boards. The research shows that director turnover, the difficulty of recruiting and engaging directors, and the administration of reporting had strong impacts upon board monitoring, judging and/or influence. In addition, board monitoring of organisational performance was adversely affected by directors’ limited awareness of their legal responsibilities and directors’ limited financial literacy. Directors on average found all sources of information about their organisation’s work useful. Board judgments about the financial aspects of organisational performance were regulated by the routines of financial reporting. However, there were no comparable routines facilitating judgments about non-financial performance, and such judgments tended to be limited to specific aspects of performance and were ad hoc, largely in response to new information or the repackaging of existing information in a new form. The thesis argues that Weick’s theory of sensemaking offers insight into the way boards went about the task of understanding organisational performance. Board influence on organisational performance was demonstrated in the areas of compliance; instrumental influence through service and through discussion and decision-making; and symbolic, legitimating and protective means. The degree of instrumental influence achieved by boards depended on director competency, access to networks of influence, understandings of board roles, and the agency demonstrated by directors. The thesis concludes that there is a crowding-out effect whereby CEO competence and capability limit board influence. The thesis also suggests that there is a second ‘agency problem’, a problem of director volition. The research potentially has profound implications for the work of nonprofit boards. Rather than purporting to establish a general theory of board governance, the thesis embraces calls to build situation-specific mini-theories about board behaviour.

Relevance: 30.00%

Abstract:

Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. The solder joint inspection problem is more challenging than many other visual inspections because of the variability in the appearance of solder joints. Although much research has been conducted and various techniques have been developed to classify defects in solder joints, these methods require complex illumination systems for image acquisition and complicated classification algorithms. An important stage of the analysis is to select the right method for the classification. Better inspection technologies are needed to fill the gap between available inspection capabilities and industry needs. This dissertation aims to provide a solution that can overcome some of the limitations of current inspection techniques. This research proposes a two-step automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The illumination normalisation approach can effectively and efficiently eliminate the effect of uneven illumination while keeping the properties of the processed image. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality with respect to the amount of solder paste have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. Further testing demonstrates the advantage of the Log Gabor filter over both the Discrete Wavelet Transform and the Discrete Cosine Transform. Classifier score fusion is analysed for improving the recognition rate. Experimental results demonstrate that the proposed system improves performance and robustness in terms of classification rates. The proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera. In fact, the choice of suitable features overcomes the problems introduced by the use of a simple illumination system. The new system proposed in this research can be incorporated in the development of an automated, non-contact, non-destructive and low-cost solder joint quality inspection system.
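
For illustration of the kind of feature extraction the back-end relies on, here is a minimal sketch of a radial Log Gabor filter applied in the frequency domain, with the magnitude response taken as a feature map. The centre frequency f0 and bandwidth ratio sigma_ratio are illustrative choices, not parameters reported in the dissertation.

import numpy as np

def log_gabor_radial(shape, f0=0.1, sigma_ratio=0.65):
    # Radial log-Gabor transfer function:
    # G(f) = exp(-(log(f/f0))^2 / (2 * (log(sigma_ratio))^2))
    fy, fx = np.meshgrid(np.fft.fftfreq(shape[0]),
                         np.fft.fftfreq(shape[1]), indexing='ij')
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0  # placeholder to avoid log(0) at DC
    g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0       # log-Gabor filters have no DC component
    return g

def log_gabor_magnitude(image, f0=0.1):
    # Filter in the frequency domain; the magnitude response is a
    # common feature for classifying solder-joint regions.
    g = log_gabor_radial(image.shape, f0)
    return np.abs(np.fft.ifft2(np.fft.fft2(image) * g))

Because the log-Gabor response has zero DC component and a Gaussian profile on a logarithmic frequency axis, it tolerates the moderate misalignment the dissertation highlights.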

Relevance: 30.00%

Abstract:

Contemporary mainstream theatre audiences observe etiquette strictures that regulate behaviour. As Baz Kershaw argues, “the idea of the passive audience for performance has been associated usually with mainstream theatre.” This paper explores a mainstream event where the extant contract of audience silence was replaced with a raw, emotional audience response that continued into the post-performance discussion. William Gibson’s The Miracle Worker was performed by Crossbow Productions at the Brisbane Powerhouse to an audience made up of mainstream theatre patrons and people living with hearing and visual impairment. Various elements, such as shadow signing and tactile tours, worked metatheatrically and self-referentially to heighten audience awareness. During the performances, the verbal and non-verbal responses of the audience were so pervasive that the audience became not only co-creators of the performance text but performers of a rich audience text that had a dramatic impact on the theatrical experience for audience and actors alike. During the post-performance discussion, the audience performers spilled onto the stage, interacting with the actors and extending the pleasure of the experience. This paper discusses how, in privileging the audience as co-creators and performers, the chasm between stage and audience was bridged. The audiences’ performance changed and enriched each performance, creating new meanings for it.

Relevance: 30.00%

Abstract:

Coal seam gas (CSG) exploration and development requires the abstraction of significant amounts of water, because gas desorption in coal seams takes place only after aquifer pressure has been reduced by prolonged pumping of aquifer water. CSG waters have a specific geochemical signature which is a product of their formation process. These waters have high bicarbonate, high sodium, low calcium, low magnesium, and very low sulphate concentrations. Additionally, chloride concentrations may be high depending on the coal depositional environment. This particular signature is not only useful for exploration purposes, but it also highlights potential environmental issues that can arise as a consequence of CSG water disposal. Since 2002, L&M Coal Seam Gas Ltd and CRL Energy Ltd have been involved in exploration and development of CSG in New Zealand. Anticipating the disposal of CSG waters as a key issue in CSG development, they have been assessing CSG water quality along with exploration work. Coal seam gas water samples from an exploration well in Maramarua closely follow the geochemical signature associated with CSG waters. This has helped to identify CSG potential, while at the same time assessing the chemical characteristics and water generation processes in the aquifer. Neutral pH and high alkalinity suggest that these waters could be easily managed once the sodium and chloride concentrations are reduced to acceptable levels.
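
As a rough illustration of how the signature could support exploration screening, the check below flags a water analysis that matches the described pattern. Every threshold (in mg/L) is an invented illustrative value; the paper reports no numeric cut-offs.

def matches_csg_signature(sample_mg_l):
    # sample_mg_l: dict of concentrations in mg/L, e.g.
    # {'HCO3': 900, 'Na': 400, 'Ca': 10, 'Mg': 5, 'SO4': 1}
    return (sample_mg_l['HCO3'] > 500    # high bicarbonate
            and sample_mg_l['Na'] > 300  # high sodium
            and sample_mg_l['Ca'] < 20   # low calcium
            and sample_mg_l['Mg'] < 20   # low magnesium
            and sample_mg_l['SO4'] < 5)  # very low sulphate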

Relevance: 30.00%

Abstract:

Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
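
To make one of the compared methods concrete, the sketch below ranks sites by the empirical Bayes (EB) estimate of expected accident frequency, using the textbook negative binomial weighting. The safety performance function predictions mu and the overdispersion parameter phi are assumed inputs, not values derived from the Arizona data.

import numpy as np

def eb_ranking(observed, mu, phi):
    # The EB estimate blends each site's observed count with the
    # safety performance function prediction mu; the weight w grows
    # as the prediction becomes more reliable (negative binomial
    # model with overdispersion parameter phi).
    observed = np.asarray(observed, dtype=float)
    mu = np.asarray(mu, dtype=float)
    w = 1.0 / (1.0 + mu / phi)
    eb = w * mu + (1.0 - w) * observed
    return np.argsort(-eb)  # site indices, highest EB estimate first

Comparing eb_ranking output across analysis periods is the kind of ordinal, ranking-consistency information the proposed evaluation tests exploit.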

Relevance: 30.00%

Abstract:

As the use of renewable energy sources (RESs) increases worldwide, there is rising interest in their impacts on power system operation and control. An overview of the key issues and new challenges in frequency regulation arising from the integration of renewable energy units into power systems is presented. Following a brief survey of the existing challenges and recent developments, the impact of the power fluctuation produced by variable renewable sources (such as wind and solar units) on system frequency performance is also presented. An updated load frequency control (LFC) model is introduced, and the power system frequency response in the presence of RESs and the associated issues are analysed. The need to revise frequency performance standards is emphasised. Finally, non-linear time-domain simulations on the standard 39-bus and 24-bus test systems show that the simulated results agree with those predicted analytically.
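
A minimal single-area illustration of the effect the paper analyses: forward-Euler integration of the swing equation under a step power imbalance, with no governor or secondary (LFC) action. The inertia constant H, damping D and disturbance size are illustrative, and this is far simpler than the updated LFC model the paper introduces.

import numpy as np

def frequency_deviation(dP, H=5.0, D=1.0, dt=0.01, t_end=20.0):
    # 2H * d(df)/dt = -dP - D*df  (all quantities per unit)
    # dP > 0 models a sudden generation deficit, e.g. a drop in
    # wind or solar output; df is the frequency deviation.
    steps = int(t_end / dt)
    df = np.zeros(steps)
    for k in range(1, steps):
        df[k] = df[k - 1] + dt * (-dP - D * df[k - 1]) / (2.0 * H)
    return df

# With only load damping acting, df decays toward the quasi-steady
# offset -dP/D, e.g. frequency_deviation(0.1) approaches -0.1 p.u.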

Relevance: 30.00%

Abstract:

Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCSs) for real-time applications. Distributed real-time control requires satisfaction of (soft) real-time performance from the underlying networks for delivery of real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications. They neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCSs), i.e., periodic round-trip traffic. Therefore, the adoption of 802.11 networks in real-time WNCSs causes challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round-trip time and packet loss ratio, under the periodic round-trip traffic pattern, a unique feature of typical NCSs. Considering periodic round-trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
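
The paper's Markov chain model is specific to periodic round-trip traffic; for flavour, the sketch below solves the classic Bianchi saturation model of DCF, a different but closely related Markov-chain analysis. W (minimum contention window) and m (maximum backoff stage) are the usual illustrative defaults, not parameters from this paper.

def dcf_fixed_point(n, W=32, m=5, iters=500):
    # tau: probability a station transmits in a generic slot
    # p:   conditional collision probability seen by that station
    # Solved by damped fixed-point iteration of Bianchi's equations
    # for n saturated stations.
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1)
                      + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * new_tau  # damping aids convergence
    return tau, p

From tau and p, slot-level throughput and collision statistics follow; the paper's own model replaces the saturation assumption with the periodic round-trip traffic pattern of NCSs.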

Relevance: 30.00%

Abstract:

In general, the performance of construction projects, including their sustainability performance, does not meet optimal expectations. One aspect of this is the performance of the participants, who are independent and make a significant impact on overall project outcomes. Of these participants, the client is traditionally the owner of the project, the architect or engineer is engaged as the lead designer, and a contractor is selected to construct the facilities. Generally, the performance of the participants is gauged by considering three main factors, namely time, cost and quality. As the level of satisfaction is a subjective issue, it is rarely used in the performance evaluation of construction work. Recently, various attempts have been made to measure satisfaction in order to determine the performance of construction project outcomes - for instance, client satisfaction, customer satisfaction, contractor satisfaction, occupant satisfaction and home buyer satisfaction. These not only identify the performance of the construction project but are also used to improve and maintain relationships. In addition, these assessments are necessary for the continuous improvement and enhanced cooperation of participants. The measurement of satisfaction levels primarily involves expectations and perceptions. An expectation can be regarded as a comparative standard of different needs, motives and beliefs, while a perception is a subjective interpretation that is influenced by moods, experiences and values. This suggests that the disparity between perceptions and expectations may possibly be used to represent different levels of satisfaction. However, this concept is rather new and in need of further investigation. This chapter examines the methods commonly practised in measuring satisfaction levels today and the advantages of promoting these methods. The results provide a preliminary review of the advantages of satisfaction measurement in the construction industry, and recommendations are made concerning the most appropriate methods to use in identifying the performance of project outcomes.
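
Since the chapter frames satisfaction as the disparity between perceptions and expectations, a simple gap score in the style of SERVQUAL (named here as one well-known instance, not necessarily the chapter's recommended method) makes the idea concrete. The attribute ratings and weights are illustrative.

import numpy as np

def satisfaction_gaps(perceptions, expectations, weights=None):
    # Per-attribute gap = perception - expectation (e.g. on 1-7
    # scales); a positive gap means expectations were exceeded.
    gaps = np.asarray(perceptions, float) - np.asarray(expectations, float)
    return gaps, float(np.average(gaps, weights=weights))

# Example with invented ratings for time, cost and quality:
# gaps, overall = satisfaction_gaps([5, 4, 6], [6, 5, 5])
# gaps -> [-1, -1, 1]; overall -> -0.33 (mild dissatisfaction)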

Relevance: 30.00%

Abstract:

Organisations face increasing competition from new firms in emerging markets, and their previously superior products may no longer provide competitive advantage in markets based on different cost and value differentials. A shift in design practices is required, from product solutions to health services that are accessible and affordable for all. This paper explores a design-led approach to innovation to assist medical device companies to develop new services and experiences and to reshape their notions of the nature, development and deployment of health care services. This approach uses design tools and methodologies that are grounded in authentic understandings of stakeholder experiences to assist an organisation in creating a vision of likely future health care scenarios. Through this process, organisations can explore the complexities in the delivery of future health care services in new and emerging markets, allowing them to tailor product and service solutions that focus on being accessible and affordable for all. An industry-based case study on the design of health services is carried out in emerging economies. The contribution of this work to advancing research into design innovation, together with future research directions, is also presented.

Relevance: 30.00%

Abstract:

Because of the greenhouse gas emissions implications of the market-dominating electric hot water systems, governments in Australia have implemented policies and programs to encourage the uptake of solar water heaters (SWHs) in the residential market as part of climate change adaptation and mitigation strategies. The cost-benefit analysis that usually accompanies government policy and program design could be simplistically reduced to the ratio of the expected greenhouse gas reductions of a SWH to its cost. The national Register of Solar Water Heaters specifies how many renewable energy certificates (RECs) are allocated to complying SWHs according to their expected performance, and hence greenhouse gas reductions, in different climates. Neither REC allocations nor rebates are tied to the actual performance of systems. This paper examines the performance of instantaneous gas-boosted solar water heaters installed in new residences in a housing estate in south-east Queensland in the period 2007–2010. The evidence indicates systemic failures in installation practices, resulting in zero solar performance or dramatic underperformance (an estimated average 43% solar contribution). The paper details the faults identified and how these faults were eventually diagnosed and corrected. The impacts of these system failures on end-use consumers are discussed before concluding with a brief overview of areas where further research is required in order to more fully understand whole-of-supply-chain implications.
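
To make the reported solar contribution figure concrete, the sketch below estimates a solar fraction from metered booster gas energy and the delivered hot-water load. The booster efficiency and any input figures are illustrative assumptions, not measurements from the study.

def solar_contribution(load_mj, booster_gas_mj, booster_efficiency=0.8):
    # Fraction of the hot-water load not met by the gas booster.
    # load_mj: heat delivered to hot water over the period (MJ)
    # booster_gas_mj: metered gas energy used by the booster (MJ)
    boosted_heat_mj = booster_gas_mj * booster_efficiency
    return max(0.0, 1.0 - boosted_heat_mj / load_mj)

# A correctly installed system should show a high fraction; a value
# near zero indicates the booster is carrying essentially all load.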