131 results for: termination of no win no fee retainer
Abstract:
Businesses in various consumer service industries have begun to unbundle their service offerings by introducing numerous fees for products and services that were previously provided as “free.” Anecdotal evidence in the media indicates that these fees cause widespread public displeasure, frustration, and outrage. This paper develops a framework of fee acceptability, negative emotions, and dysfunctional customer behavior, which is tested using data from the airline industry. Findings identify the strongest effects on betrayal in the case of baggage fees, followed by charges for comfort. Also, betrayal has a direct effect on complaining, whereas anger mediates the relationship between betrayal and negative word of mouth.
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. 
Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
Abstract:
This thesis aimed to compare the effects of constraints-led and traditional coaching approaches on young cricket spin bowlers, with a specific research focus on increasing spin rates (i.e., revolutions per minute). Participants were 22 spin bowlers from either an Australian state youth squad or an academy in England. Results indicate that adopting a constraints-led approach can benefit younger, inexperienced bowlers, whilst a traditional approach may assist more skilled, older bowlers. The findings are discussed with regards to how they may inform the learning design of training programs by cricket coaches.
Abstract:
This paper reports on the design, implementation and outcomes of a mentoring program involving 18 employees in the IT Division of WorkCover Queensland. The paper provides some background information to the development of the program and the design and implementation phases including recruitment and matching of participants, orientation and training, and the mentoring process including transition and/or termination. The paper also outlines the quantitative and qualitative evaluation processes that occurred and the outcomes of that evaluation. Results indicated a wealth of positive individual, mentoring, and organisational outcomes. The organisation and semi-structured processes provided in the program are considered as major contributing factors to the successful outcomes of the program. These outcomes are likely to have long-term benefits for the individuals involved, the IT Division, and the broader organisation.
Abstract:
This paper presents aspects of a longitudinal study in the design and practice of Internet meetings between farmers, their advisors, and researchers in rural Australia. It reports on the use of Microsoft NetMeeting (NM) by a group of agricultural researchers from Australia's CSIRO (Commonwealth Scientific and Industrial Research Organisation) for regular meetings, over nine years, with farmers and their commercial advisers. It describes lessons drawn from this experience about the conditions under which telecollaborative tools, such as NM and video conferencing, are likely to be both useful and used.
Abstract:
Designers and artists have integrated recent advances in interactive, tangible and ubiquitous computing technologies to create new forms of interactive environments in the domains of work, recreation, culture and leisure. Many designs of technology systems begin with the workplace in mind, and with function, ease of use, and efficiency high on the list of priorities. [1] These priorities do not fit well with works designed for an interactive art environment, where the aims are many, and where the focus on utility and functionality is to support a playful, ambiguous or even experimental experience for the participants. To evaluate such works requires an integration of art-criticism techniques with more recent Human Computer Interaction (HCI) methods, and an understanding of the different nature of engagement in these environments. This paper begins a process of mapping a set of priorities for amplifying engagement in interactive art installations. I first define the concept of ludic engagement and its usefulness as a lens for both design and evaluation in these settings. I then detail two fieldwork evaluations I conducted within two exhibitions of interactive artworks, and discuss their outcomes and the future directions of this research.
Abstract:
This article takes a critical discourse approach to one aspect of the Australian WorkChoices industrial relations legislation: the government’s major advertisement published in national newspapers in late 2005 and released simultaneously as a 16-page booklet. This strategic move was the initial stage of one of the largest ‘information’ campaigns ever mounted by an Australian government, costing more than $AUD137 million. This article analyses the semiotic (visual and graphic) elements of the advertisement to uncover what these elements contribute to the message, particularly through their construction of both an image of the legislation and a portrayal of the Australian worker. We argue for the need to fuse approaches from critical discourse studies and social semiotics to deepen understanding of industrial relations phenomena such as the ‘hard sell’ to win the hearts and minds of citizens regarding unpopular new legislation.
Abstract:
Hope is a word that has re-emerged in light of Obama's stunning win in the United States election. In this time of economic gloom and the reality of bleak recession and unprecedented job losses the United States has embraced the hopeful message of Barack Obama. For many years 'hope' has been a word that has been lost, forgotten, and banished to the margins of romantic longing and wishful thinking. Hope is also a word that has been much discussed in relation to the iconic The Great Gatsby, but usually in a negative fashion to demonstrate the unattainability of the American dream. Marcella Taylor called Gatsby "the unfinished American Epic" which focused on the "passing of the last utopian frontier" and suggested the significance of this passing on American society as a whole. In the last months, however, hope has made a return and one gets the feeling that Fitzgerald's words "but that's no matter—to-morrow we will run faster, stretch out our arms farther . . . And one fine morning" are once again being heard.
Abstract:
Botnets are large networks of compromised machines under the control of a bot master. These botnets constantly evolve their defences to allow the continuation of their malicious activities. The constant development of new botnet mitigation strategies and their subsequent defensive countermeasures has led to a technological arms race, one which the bot masters have significant incentives to win. This dissertation analyzes the current and future states of the botnet arms race by introducing a taxonomy of botnet defences and a simulation framework for evaluating botnet techniques. The taxonomy covers current botnet techniques and highlights possible future techniques for further analysis under the simulation framework. This framework allows the evaluation of the effect that techniques such as reputation systems and proof-of-work schemes have on the resources required to disable a peer-to-peer botnet. Given the increase in the resources required, our results suggest that the prospects of eliminating the botnet threat are limited.
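The abstract does not specify how the evaluated proof-of-work schemes operate; a minimal hashcash-style sketch illustrates the general idea of raising the computational cost of each interaction with a peer-to-peer overlay. All names and parameters here are illustrative, not taken from the dissertation:

```python
import hashlib

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash with the challenge falls below a target.

    A peer must spend ~2**difficulty_bits hash evaluations on average before
    its message is accepted, which throttles flooding of the overlay.
    """
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash, so honest peers check work cheaply."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve(b"peer-hello", 12)
assert verify(b"peer-hello", nonce, 12)
```

The asymmetry is the point: the solver pays an adjustable cost while the verifier pays one hash, which is why such schemes shift the resource balance described in the abstract.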
Abstract:
Using information and communication technology devices in public urban places can help to create a personalised space. Looking at a mobile phone screen or listening to music on an MP3 player is a common practice for avoiding direct contact with others, e.g. whilst using public transport. However, such devices can also be utilised to explore how to build new meaningful connections with the urban space and the collocated people within. We present findings of work-in-progress on Capital Music, a mobile application enabling urban dwellers to listen to music as usual, but also allowing them to announce song titles and discover songs currently being listened to by other people in the vicinity. We study the ways that this tool can change or even enhance people’s experience of public urban spaces. Our first user study also found changes in how users chose songs. Anonymous social interactions based on users’ music selection are implemented in the first iteration of the prototype that we studied.
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud where there may be some free resources available from private clouds but some fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points of time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
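The paper's own encoding and operators are not given in the abstract; the following is a minimal sketch of the random-key idea under simplified assumptions (identical resources, a single makespan objective, toy operators). In a random-key GA, each chromosome is a vector of floats in [0, 1); sorting the keys yields a task priority order, which a greedy list scheduler then maps onto resources:

```python
import random

def decode(keys):
    """Sort key positions to obtain a task dispatch order (the random-key trick)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def makespan(order, durations, n_resources):
    """Greedy list scheduling: each task goes to the earliest-free resource."""
    free = [0.0] * n_resources
    for t in order:
        r = free.index(min(free))
        free[r] += durations[t]
    return max(free)

def evolve(durations, n_resources, pop_size=30, generations=50, seed=1):
    rng = random.Random(seed)
    n = len(durations)
    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]

    def fitness(chrom):
        return makespan(decode(chrom), durations, n_resources)

    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # uniform crossover on the keys, plus a light mutation
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:
                child[rng.randrange(n)] = rng.random()
            children.append(child)
        pop = elite + children

    best = min(pop, key=fitness)
    return decode(best), fitness(best)

order, cost = evolve([4, 2, 7, 3, 5, 1], n_resources=2)
```

Because any real-valued key vector decodes to a valid permutation, standard crossover and mutation never produce infeasible schedules, which is the main appeal of the random-key encoding for ordering problems like this one.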
Abstract:
The present paper focuses on some interesting classes of process-control games, where winning essentially means successfully controlling the process. A master for one of these games is an agent who plays a winning strategy. In this paper we investigate situations in which even a complete model (given by a program) of a particular game does not provide enough information to synthesize—even incrementally—a winning strategy. However, if in addition to getting a program, a machine may also watch masters play winning strategies, then the machine is able to incrementally learn a winning strategy for the given game. Studied are successful learning from arbitrary masters and from pedagogically useful selected masters. It is shown that selected masters are strictly more helpful for learning than are arbitrary masters. Both for learning from arbitrary masters and for learning from selected masters, though, there are cases where one can learn programs for winning strategies from masters but not if one is required to learn a program for the master's strategy itself. Both for learning from arbitrary masters and for learning from selected masters, one can learn strictly more by watching m+1 masters than one can learn by watching only m. Last, a simulation result is presented where the presence of a selected master reduces the complexity from infinitely many semantic mind changes to finitely many syntactic ones.
Abstract:
There are several noninvasive techniques for assessing the kinetics of tear film, but no comparative studies have been conducted to evaluate their efficacies. Our aim is to test and compare techniques based on high-speed videokeratoscopy (HSV), dynamic wavefront sensing (DWS), and lateral shearing interferometry (LSI). Algorithms are developed to estimate the tear film build-up time (T_BLD) and the average tear film surface quality in the stable phase of the interblink interval (TFSQ_Av). Moderate but significant correlations are found between T_BLD measured with LSI and DWS based on vertical coma (Pearson's r² = 0.34, p < 0.01) and higher-order rms (r² = 0.31, p < 0.01), as well as between TFSQ_Av measured with LSI and HSV (r² = 0.35, p < 0.01), and between LSI and DWS based on the rms fit error (r² = 0.40, p < 0.01). No significant correlation is found between HSV and DWS. All three techniques estimate tear film build-up time to be below 2.5 sec, and they achieve a remarkably close median value of 0.7 sec. HSV appears to be the most precise method for measuring tear film surface quality. LSI appears to be the most sensitive method for analyzing tear film build-up.
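The correlations reported above are squared Pearson coefficients. As a reminder of the statistic (the data and algorithms of the study itself are not reproduced here), a minimal computation of r² from two paired series:

```python
from math import sqrt

def pearson_r2(x, y):
    """Squared Pearson correlation: (cov(x, y) / (sd(x) * sd(y)))**2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return (cov / sqrt(vx * vy)) ** 2

r2 = pearson_r2([1, 2, 3, 4], [2, 4, 6, 8])  # perfectly linear data → 1.0
```

An r² of 0.34 thus means that about a third of the variance in one measure is explained by a linear fit on the other, which is why the abstract calls these correlations moderate.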