860 results for pay-as-you-go
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, a video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus, IaaS clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. The thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grain parallel processing approach in which video is segmented at the group of pictures (GOP) level.
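The computation and storage trade-off described above can be illustrated with a minimal sketch, assuming hypothetical cost figures and a predicted request rate (the thesis' actual cost model is not reproduced here): a transcoded video is kept in the repository only while storing it over a planning horizon is cheaper than re-transcoding it for the requests expected in that horizon.

```python
# Minimal sketch of a computation vs. storage trade-off check.
# All cost figures and the request-rate estimate are hypothetical.

def keep_transcoded_video(size_gb: float,
                          storage_cost_per_gb_hour: float,
                          transcode_cost: float,
                          expected_requests_per_hour: float,
                          horizon_hours: float) -> bool:
    """Return True if storing the transcoded video over the horizon is
    cheaper than re-transcoding it for every expected future request."""
    storage_cost = size_gb * storage_cost_per_gb_hour * horizon_hours
    recompute_cost = transcode_cost * expected_requests_per_hour * horizon_hours
    return storage_cost < recompute_cost

# A frequently requested video is kept; a rarely requested one is evicted.
print(keep_transcoded_video(2.0, 0.001, 0.05, 3.0, 24))    # True
print(keep_transcoded_video(2.0, 0.001, 0.05, 0.01, 24))   # False
```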
Abstract:
The purpose of this master's thesis was to investigate the effects that benefits obtained from reading a newspaper and using its website have on behavioral outcomes such as word-of-mouth behavior and willingness to pay. Several other antecedents of willingness to pay were used as control variables; however, their interrelations were not hypothesized. The empirical part focused on a case company, a Finnish regional newspaper. The empirical research was conducted using a quantitative method, and the data were collected via an online survey placed on the newspaper's website during 2010; 1001 responses were collected. The results showed that benefits obtained both from the traditional printed newspaper and from its online edition have positive effects on word-of-mouth about the newspaper and its website. However, it was revealed that benefits obtained from reading the newspaper do not affect willingness to pay for it. Additionally, only the interpersonal and convenience benefits obtained from using the newspaper's website influence willingness to pay for it. Finally, willingness to pay for the bundle of the printed newspaper and its website access is positively affected only by the information/learning benefits obtained from reading the newspaper and by the interpersonal benefits obtained from using the newspaper's website.
Abstract:
Real option valuation, in particular the fuzzy pay-off method, has proven to be useful in defining risk and visualizing imprecision of investments in various industry applications. This study examines whether the evaluation of risk and profitability for public real estate investments can be improved by using real option methodology. Firstly, the context of real option valuation in the real estate industry is examined. Further, an empirical case study is performed on 30 real estate investments of a Finnish government enterprise in order to determine whether the presently used investment analysis system can be complemented by the pay-off method. Despite challenges in the application of the pay-off method to the case company’s large investment base, real option valuation is found to create additional value and facilitate more robust risk analysis in public real estate applications.
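As a rough, hedged illustration of the pay-off method mentioned above, the sketch below values an investment from three NPV scenarios (pessimistic, best guess, optimistic) treated as a triangular fuzzy number: the real option value is taken as the share of the pay-off distribution lying above zero multiplied by a possibilistic mean of its positive side. The numbers and the numerical simplifications are assumptions for illustration, not the case company's figures or model.

```python
import numpy as np

def fuzzy_payoff_value(pessimistic: float, best_guess: float,
                       optimistic: float, n: int = 10_000) -> float:
    """Numerical sketch of the fuzzy pay-off method for a triangular fuzzy NPV.

    Real option value = (positive area / total area) of the pay-off
    distribution times the possibilistic mean of its positive side.
    """
    alphas = np.linspace(0.0, 1.0, n)
    lo = pessimistic + alphas * (best_guess - pessimistic)  # left alpha-cut bound
    hi = optimistic - alphas * (optimistic - best_guess)    # right alpha-cut bound

    # Share of the pay-off distribution that lies above zero (by area).
    total_area = np.trapz(hi - lo, alphas)
    pos_area = np.trapz(np.clip(hi, 0, None) - np.clip(lo, 0, None), alphas)

    # Possibilistic mean of the positive side (zero-truncated alpha-cuts).
    mean_pos = np.trapz(alphas * (np.clip(lo, 0, None) + np.clip(hi, 0, None)),
                        alphas)

    return (pos_area / total_area) * mean_pos

# Example: NPV scenarios of -2, +3 and +10 (e.g. MEUR) give a positive value.
print(round(fuzzy_payoff_value(-2.0, 3.0, 10.0), 2))
```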
Abstract:
Concepts, models, or theories that end up shaping practices, whether those practices fall in the domains of science, technology, social movements, or business, always emerge through a change in language use. First, communities begin to talk differently, incorporating new vocabularies (Rorty, 1989) into their narratives. Whether the community’s new narratives respond to perceived anomalies or failures of the existing ones (Kuhn, 1962) or actually reveal inadequacies by addressing previously unrecognized practices (Fleck, 1979; Rorty, 1989) is less important here than the very fact that they introduce differences. Then, if the new language proves to be useful, for example because it helps the community solve a problem or create a possibility that existing narratives do not, the new narrative will begin circulating more broadly throughout the community. If other communities learn of the usefulness of these new narratives, and find them sufficiently persuasive, they may be compelled to test, modify, and eventually adopt them. Of primary importance is the idea that a new concept or narrative perceived as useful is more likely to be adopted. We can expect that business concepts emerge through a similar pattern. Concepts such as “competitive advantage,” “disruption,” and the “resource-based view,” now broadly known and accepted, were each at some point first introduced by a community. This community experimented with the concepts it introduced and found them useful. The concept “competitive advantage,” for example, helped researchers better explain why some firms outperformed others and helped practitioners more clearly understand what choices to make to improve the profit and growth prospects of their firms. The benefits of using these terms compelled other communities to consider, apply, and eventually adopt them as well. Had these terms not been viewed as useful, they would not likely have been adopted. This thesis attempts to observe and anticipate new business concepts that may be emerging. It does so by seeking to observe a community of business practitioners that is using different language and appears to be more successful than a similar community of practitioners that has not yet begun using this different language as extensively. It argues that if the community adopting new types of narratives is perceived as being more successful, its success will attract the attention of other communities, which may then seek to adopt the same narratives. Specifically, this thesis compares the narratives used by a set of firms that are considered to be performing well (called Winners) with those of a set of less-successful peers (called Losers). It does so with the aim of addressing two questions: - How do the strategic narratives that circulate within “winning” companies and their leaders differ from those circulating within “losing” companies and their leaders? - Given the answer to the first question: what new business strategy concepts are likely to emerge in the business community at large? I expected to observe “winning” companies shifting their language, abandoning an older set of narratives for newer ones. However, the analysis indicates a more interesting dynamic: “winning” companies adopt the same core narratives as their “losing” peers with equal frequency, yet they go beyond them. Both “winners” and “losers” seem to pursue economies of scale, customer captivity, best practices, and preferential access to resources with similar vigor.
But “winners” seem to go further, applying three additional narratives in their pursuit of competitive advantage. They speak of coordinating what is uncoordinated, of what this thesis calls “exchanging the role of guest for that of host,” and of “forcing a two-front battle” more frequently than their “loser” peers. Since these “winning” companies are likely perceived as being more successful, the unique narratives they use are more likely to be emulated and adopted. Understanding in what ways winners speak differently therefore gives us a glimpse into the possible future evolution of business concepts.
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to one based on software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm and on-demand video transcoding that is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
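To make the provisioning and admission-control ideas concrete, the following is a minimal sketch of a generic target-utilization sizing rule and a saturation-based admission check, of the kind such systems build on. The thresholds, parameter names, and load model are illustrative assumptions, not the specific algorithms contributed in the thesis.

```python
import math

def target_vm_count(predicted_arrival_rate: float,
                    service_rate_per_vm: float,
                    target_utilization: float = 0.7,
                    min_vms: int = 1) -> int:
    """Provision enough VMs so that the predicted load keeps each VM at or
    below the target utilization (illustrative proactive sizing rule)."""
    needed = predicted_arrival_rate / (service_rate_per_vm * target_utilization)
    return max(min_vms, math.ceil(needed))

def admit(current_load: float, total_capacity: float,
          threshold: float = 0.9) -> bool:
    """Simple admission check: reject new sessions or streams once the
    provisioned capacity is nearly saturated."""
    return current_load < threshold * total_capacity

# Example: 120 requests/s predicted, each VM serving 20 requests/s.
vms = target_vm_count(120, 20)        # 9 VMs at the 70% target utilization
print(vms, admit(current_load=150, total_capacity=vms * 20))
```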
Abstract:
This study examines the aftermath of mass violence in local communities. Two rampage school shootings that occurred in Finland are analyzed and compared to examine the ways in which communities experience, make sense of, and recover from sudden acts of mass violence. The studied cases took place at Jokela High School, in southern Finland, and at a polytechnic university in Kauhajoki, in western Finland, in 2007 and 2008 respectively. Including the perpetrators, 20 people lost their lives in these shootings. These incidents are part of the global school shooting phenomenon, with increasing numbers of incidents occurring in the last two decades, mostly in North America and Europe. The dynamic of solidarity and conflict is one of the main themes of this study. It builds upon previous research on mass violence and disasters, which suggests that solidarity increases after a crisis and that this increase is often followed by conflict in the affected communities. This dissertation also draws from theoretical discussions on remembering, narrating, and commemorating traumatic incidents, as well as the idea of a cultural trauma process in which the origins and consequences of traumas are negotiated alongside collective identities. Memorialization practices and narratives about what happened are vital parts of the social memory of crises and disasters, and their inclusive and exclusive characteristics are discussed in this study. The data include two types of qualitative interviews: focused interviews with 11 crisis workers, and focused narrative interviews with 21 residents of Jokela and 22 residents of Kauhajoki. A quantitative mail survey of the Jokela population (N=330) provided the data used in one of the research articles. The results indicate that both communities experienced a process of simultaneous solidarity and conflict after the shootings. In Jokela, the community was constructed as a victim, and public expressions of solidarity and memorialization were promoted as part of the recovery process. In Kauhajoki, the community was portrayed as an incidental site of mass violence, and public expressions of solidarity by distant witnesses were labeled as unnecessary and often criticized. However, after the shooting, the community was somewhat united in its desire to avoid victimization and a prolonged liminal period. This can be understood as a more modest and invisible process of “silent solidarity”. The processes of enforced solidarity were partly made possible by exclusion. In some accounts, the family of the perpetrator in Jokela was excluded from the community. In Kauhajoki, the whole incident was externalized. In both communities, this exclusion included associating the shooting events, certain places, and certain individuals with the concept of evil, which helped people understand and explain the inconceivable incidents. Differences concerning appropriate emotional orientations, memorialization practices, and the pace of recovery created conflict in both communities. In Jokela, attitudes towards the perpetrator and his family were also a source of friction. Traditional gender roles regarding the expression of emotions remained fairly stable after the school shootings, but in an exceptional situation, conflicting interpretations arose concerning how men and women should express emotion.
The results from the Jokela community also suggest that while increased solidarity was seen as an important part of the recovery process, some negative effects, such as collective guilt, group divisions, and stigmatization, also emerged. Based on the results, two simultaneous strategies that took place after the mass violence were identified: one was a process of fast-paced normalization, and the other was that of memorialization. Both strategies are ways to restore the feeling of security shattered by violent incidents. The Jokela community emphasized remembering, while the Kauhajoki community turned more to the normalization strategy. Both strategies have positive and negative consequences. It is important to note that the tendency to memorialize is not the only way of expressing solidarity, as fast normalization includes its own kind of solidarity and helps prevent the negative consequences of intense solidarity.
Abstract:
The fish-farming ponds of the state of Goiás are numerous and support intense recreational activity. However, studies on cyanobacteria in these environments are scarce, which is a cause for concern, since intense phytoplankton proliferation is commonly observed in fishing ponds, mainly due to human activities. The danger lies in the formation of blooms of potentially toxic species, especially cyanobacteria. This work aims to inventory the planktonic cyanobacteria species occurring in a fishing pond (lago Jaó, a shallow artificial lake) within the municipal area of Goiânia (GO) (16º39'13" S, 49º13'26" W). Sampling was carried out during the dry (2003 to 2008) and rainy (2009) seasons, when the occurrence of blooms was visually evident. Climatological, morphometric, and limnological variables were measured. The dry season was representative across the sampled years, with a maximum monthly precipitation of 50 mm in 2005. A total of 31 cyanobacteria taxa were recorded, belonging to the genera Dolichospermum (5 spp.), Aphanocapsa (4 spp.), Microcystis (3 spp.), Pseudanabaena (3 spp.), Radiocystis (2 spp.), Oscillatoria (2 spp.), Bacularia, Coelosphaerium, Cylindrospermopsis, Geitlerinema, Glaucospira, Limnothrix, Pannus, Phormidium, Planktolyngbya, Planktothrix, Sphaerocavum, and Synechocystis, the latter genera with one species each. From 2003 to 2005 blooms of Dolichospermum species predominated, and in 2006 species of Microcystis, Radiocystis, and Aphanocapsa predominated. Of the species inventoried in this study, 21 are first records for the state of Goiás and 13 are reported in the literature as potentially toxic.
Abstract:
In the publication: Historia delle nuove Indie occidentali, con tutti i discoprimenti & cose notabili
Abstract:
The early facilitatory effect of a spatially peripheral visual prime stimulus described in the literature for simple reaction time tasks has usually been smaller than that described for complex (go/no-go, choice) reaction time tasks. In the present study we investigated the reason for this difference. In a first and a second experiment we tested the participants in both a simple task and a go/no-go task, half of them beginning with one of these tasks and half with the other. We observed that the prime stimulus had an early effect, inhibitory for the simple task and facilitatory for the go/no-go task, when the task was performed first. No early effect appeared when the task was performed second. In a third and a fourth experiment the participants were tested in the simple task and in the go/no-go task, respectively, for four sessions (the prime stimulus was presented in the second, third and fourth sessions). The early effects of the prime stimulus did not change across the sessions, suggesting that a habituation process was not the cause of the disappearance of these effects in the first two experiments. Our findings are compatible with the idea that different attentional strategies are adopted in simple and complex reaction time tasks. In the former tasks the gain of automatic attention mechanisms may be adjusted to a low level and in the latter tasks, to a high level. The attentional influence of the prime stimulus may be antagonized by another influence, possibly a masking one.
Abstract:
The map belongs to the A. E. Nordenskiöld collection
Abstract:
The shift towards a knowledge-based economy has inevitably prompted the evolution of patent exploitation. Nowadays, a patent is more than just a prevention tool for a company to block its competitors from developing rival technologies; it lies at the very heart of the company's strategy for value creation and is therefore strategically exploited for economic profit and competitive advantage. Along with the evolution of patent exploitation, the demand for reliable and systematic patent valuation has also reached an unprecedented level. However, most of the quantitative approaches in use to assess patents, which arguably fall into four categories, are based solely on conventional discounted cash flow analysis, whose usability and reliability in the context of patent valuation are greatly limited by five practical issues: market illiquidity, poor data availability, discriminatory cash-flow estimations, and the incapability to account for changing risk and for managerial flexibility. This dissertation attempts to overcome these impeding barriers by rationalizing the use of two techniques, namely fuzzy set theory (aiming at the first three issues) and real option analysis (aiming at the last two). It commences with an investigation into the nature of the uncertainties inherent in patent cash flow estimation and claims that two levels of uncertainty must be properly accounted for. Further investigation reveals that both levels of uncertainty fall under the categorization of subjective uncertainty, which differs from objective uncertainty originating from inherent randomness in that uncertainties labelled as subjective are highly related to the behavioural aspects of decision making and are usually witnessed whenever human judgement, evaluation or reasoning is crucial to the system under consideration and there is a lack of complete knowledge of its variables. Having clarified their nature, the application of fuzzy set theory to modelling patent-related uncertain quantities is effortlessly justified. The application of real option analysis to patent valuation is prompted by the fact that both the patent application process and the subsequent patent exploitation (or commercialization) are subject to a wide range of decisions at multiple successive stages. In other words, both patent applicants and patentees are faced with a large variety of courses of action as to how their patent applications and granted patents can be managed. Since they have the right to run their projects actively, this flexibility has value and thus must be properly accounted for. Accordingly, an explicit identification of the types of managerial flexibility inherent in patent-related decision-making problems and in patent valuation, and a discussion of how they could be interpreted in terms of real options, are provided in this dissertation. Additionally, the use of the proposed techniques in practical applications is demonstrated by three models based on fuzzy real option analysis. In particular, the pay-off method and the extended fuzzy Black-Scholes model are employed to investigate the profitability of a patent application project for a new process for the preparation of a gypsum-fibre composite and to justify the subsequent patent commercialization decision, respectively; a fuzzy binomial model is designed to reveal the economic potential of a patent licensing opportunity.
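As a hedged illustration of combining fuzzy numbers with option-style reasoning, the sketch below values a one-period licensing decision in which the up- and down-state project values are triangular fuzzy numbers. The component-wise fuzzy arithmetic, the fee, the probability and the discount rate are hypothetical simplifications, not the dissertation's fuzzy binomial model.

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (pessimistic, most likely, optimistic)."""
    a: float
    b: float
    c: float

    def scale(self, k: float) -> "TFN":
        # Multiplication by a non-negative crisp constant.
        return TFN(self.a * k, self.b * k, self.c * k)

    def add(self, other: "TFN") -> "TFN":
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def minus_crisp(self, x: float) -> "TFN":
        return TFN(self.a - x, self.b - x, self.c - x)

    def max_with_zero(self) -> "TFN":
        # Crude truncation of the negative part (a simplification).
        return TFN(max(self.a, 0.0), max(self.b, 0.0), max(self.c, 0.0))

def fuzzy_binomial_license_value(v_up: TFN, v_down: TFN, licence_fee: float,
                                 r: float, p_up: float) -> TFN:
    """One-period fuzzy binomial sketch of a licensing opportunity."""
    pay_up = v_up.minus_crisp(licence_fee).max_with_zero()
    pay_down = v_down.minus_crisp(licence_fee).max_with_zero()
    expected = pay_up.scale(p_up).add(pay_down.scale(1.0 - p_up))
    return expected.scale(1.0 / (1.0 + r))  # discount back one period

# Example with hypothetical licensing revenues (e.g. in kEUR).
print(fuzzy_binomial_license_value(TFN(400, 600, 900), TFN(50, 120, 200),
                                   licence_fee=100, r=0.05, p_up=0.5))
```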
Abstract:
Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality, reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects of succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace. This leaves companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly. Usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges. We show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process. Requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach, integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
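To give a flavor of how tests can be generated from a behavioral model, the sketch below walks a toy state-machine model and emits one test sequence per transition (a simple transition-coverage criterion). It is an illustrative stand-in, assuming a hand-written dictionary model, not the UML-based, tool-supported approach developed in the thesis.

```python
from collections import deque

# Toy behavioral model of a system under test: states and labelled transitions.
MODEL = {
    "Idle":    [("insert_coin", "Ready")],
    "Ready":   [("press_start", "Running"), ("refund", "Idle")],
    "Running": [("finish", "Idle")],
}

def generate_tests(model, start="Idle"):
    """Breadth-first traversal of the model, emitting one test sequence per
    reachable transition (transition coverage)."""
    tests, seen = [], set()
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        for action, target in model.get(state, []):
            edge = (state, action, target)
            if edge in seen:
                continue
            seen.add(edge)
            tests.append(path + [action])
            queue.append((target, path + [action]))
    return tests

for test in generate_tests(MODEL):
    print(" -> ".join(test))
```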