794 results for Scalable Video Codec
Abstract:
In recent years, the worldwide distribution of smartphones has grown rapidly. Mobile technologies are evolving fast, which opens new possibilities for mobile learning applications. Along with new delivery methods, this development enables new concepts for learning. This study focuses on the effectiveness and experience of a mobile learning video that promotes the key features of a specific device. Drawing on relevant learning theories, mobile technologies and empirical findings, the thesis identifies the key elements of a mobile learning video that are essential for effective learning. The study also explores how previous experience with mobile services and knowledge of a mobile handset relate to the final learning results, and discusses the optimal delivery mechanisms for mobile video. The target group for the study consists of twenty employees of a Sanoma company. The main findings show that the individual experience of learning and the actual learning results may differ, and that the design of certain video elements, such as sound and the presentation of technical features, can affect both the experience and the effectiveness of a mobile learning video. Finally, the study suggests using a video delivery method based on cloud technologies and HTML5 in parallel with standalone applications.
Abstract:
Objective: To evaluate the perioperative outcomes, safety and feasibility of video-assisted resection for primary and secondary liver lesions. Methods: From a prospective database, we analyzed the perioperative results (up to 90 days) of 25 consecutive patients undergoing video-assisted resections between June 2007 and June 2013. Results: The mean age was 53.4 years (23-73) and 16 patients (64%) were female. Of the total, 84% had malignant disease. We performed 33 resections (1 to 4 nodules per patient): non-anatomical resections (n = 26), segmentectomy (n = 1), 2/3 bisegmentectomy (n = 1), 6/7 bisegmentectomy (n = 1), left hepatectomy (n = 2) and right hepatectomy (n = 2). The procedures involved the postero-superior segments in 66.7% of cases, requiring multiple or larger resections. The mean operating time was 226 minutes (80-420) and the mean anesthesia time 360 minutes (200-630). The mean size of the resected nodules was 3.2 cm (0.8-10), and the surgical margins were free in all analyzed specimens. Eight percent of patients needed blood transfusion, and no case was converted to open surgery. The length of stay was 6.5 days (3-16). Postoperative complications occurred in 20% of patients, with no perioperative mortality. Conclusion: Video-assisted liver resection is feasible and safe and should be part of the liver surgeon's armamentarium for the resection of primary and secondary liver lesions.
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights, a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from the O(N) of previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm that determines this intervisibility in a time complexity matching the space complexity of the produced visibility information, in contrast to previous methods, which scale with the height field size. As a result, the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point, but have previously been limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line-sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be queried efficiently. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray-traced screen-space reference are obtained at real-time render times.
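The incremental traversal can be illustrated with a classic 1-D analogue: along a sweep line, the horizon point of each receiver lies on the upper convex hull of the samples already visited, and maintaining that hull with a stack costs amortized O(1) per receiver. The sketch below is a generic Python illustration of that idea, not the algorithm from the thesis; all names are illustrative. A full horizon map would repeat such sweeps over many azimuthal directions of the 2-D height field.

```python
import math

def sweep_horizon(heights, spacing=1.0):
    """Horizon angle at every sample of a 1-D height profile, looking back
    along the sweep direction. A stack holds the upper convex hull of the
    samples visited so far; every sample is pushed and popped at most once,
    so the cost per receiver is amortized O(1)."""
    hull = []      # indices of samples currently on the upper convex hull
    horizon = []   # horizon elevation angle per sample, in radians
    for i, h in enumerate(heights):
        # Pop hull vertices that drop below the hull once sample i arrives
        # (standard monotone-chain test: keep only clockwise turns).
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            cross = (b - a) * (h - heights[a]) - (i - a) * (heights[b] - heights[a])
            if cross >= 0:
                hull.pop()
            else:
                break
        if hull:
            j = hull[-1]  # nearest remaining hull vertex is the horizon point
            horizon.append(math.atan2(heights[j] - h, (i - j) * spacing))
        else:
            horizon.append(-math.pi / 2)  # nothing behind: fully open horizon
        hull.append(i)
    return horizon

# Example: the peak at index 1 is the horizon for both later samples.
print([round(a, 2) for a in sweep_horizon([0.0, 10.0, 0.0, 0.0])])
```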
Abstract:
The usage of digital content, such as video clips and images, has increased dramatically during the last decade, and local image features have been applied increasingly in various image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state of the art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method points to a promising new research direction. In conclusion, this thesis presents local features as a powerful tool in many applications, and future work should concentrate on improving the quality of the local features.
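As a concrete illustration of finding 3), a common local-feature baseline for shot boundary detection matches descriptors between consecutive frames and flags a boundary when the matches collapse. The sketch below uses OpenCV's ORB; the input file name and the match threshold are assumptions, and this is a generic baseline rather than the method evaluated in the thesis.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture("clip.mp4")   # hypothetical input video
prev_des = None
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, des = orb.detectAndCompute(gray, None)
    if prev_des is not None and des is not None:
        matches = matcher.match(prev_des, des)
        # Few surviving matches means the visual content changed abruptly.
        if len(matches) < 20:        # threshold is an assumption; tune per data
            print(f"possible shot boundary near frame {frame_no}")
    prev_des = des
    frame_no += 1
cap.release()
```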
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding, which is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications, comprising two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach that uses Ant Colony System to minimize the under-utilization of the virtualized application servers.
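To make the reactive side concrete, the following is a minimal, hypothetical sketch of threshold-based VM provisioning combined with session-based admission control in the spirit the abstract describes. ARVUE and the admission controller are not public libraries, so every class name, capacity and threshold below is an illustrative assumption.

```python
import math

class ReactiveProvisioner:
    """Toy reactive provisioner: scale up immediately under load, scale
    down one VM at a time, and admit new sessions only while provisioned
    capacity remains (so accepted sessions are not degraded)."""

    def __init__(self, vm_capacity=100, target_util=0.6, min_vms=1):
        self.vm_capacity = vm_capacity    # sessions one VM can serve well
        self.target_util = target_util    # aim at ~60% load to absorb bursts
        self.min_vms = min_vms
        self.vms = min_vms
        self.active_sessions = 0

    def desired_vms(self):
        # Enough VMs to keep utilization at the target level.
        return max(self.min_vms,
                   math.ceil(self.active_sessions
                             / (self.vm_capacity * self.target_util)))

    def scale(self):
        want = self.desired_vms()
        if want > self.vms:
            self.vms = want     # scale up aggressively
        elif want < self.vms:
            self.vms -= 1       # scale down conservatively

    def admit(self):
        # Session-based admission control: reject at the door instead of
        # overloading VMs that already serve accepted sessions.
        if self.active_sessions < self.vms * self.vm_capacity:
            self.active_sessions += 1
            return True
        return False

# Toy run: a burst of 250 arrivals against a single VM.
p = ReactiveProvisioner()
admitted = sum(p.admit() for _ in range(250))
p.scale()
print(admitted, p.vms)   # 100 admitted at 1 VM; scaling then provisions 2
```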
Abstract:
The thesis studies the role of video-based content marketing as part of modern marketing communications.
Abstract:
This phenomenological study describes how patients admitted for video-EEG monitoring (VEEG) experience their seizures. The study design applies Giorgi's method of phenomenological psychology, adapted to nursing science research. The purpose of the study was to describe the seizure experiences of patients admitted for VEEG monitoring because of neurological seizure symptoms, and to identify and describe the factors associated with those experiences. The aim was to increase healthcare staff's understanding of the counselling needs of people with neurological seizure symptoms. The material was collected from eight patients through open interviews and analysed with Giorgi's method. A clinical neurophysiologist's statement was combined with the data, and experience narratives were constructed. Using phenomenological reduction, the central experiences related to the seizures and the illness were identified. The relationships between the concepts and their significance for adaptation were analysed with the help of the Uncertainty in Illness model. A literature search was carried out on the basis of the central experiences, and its results were reflected against the findings of this study. Three distinct experience narratives emerged from the data: an account of the concrete events, the experience of losing control, and the experience of living with the illness. The central experiential themes identified were the experience of managing the health problem, the experience of losing control, the experience of negative attitudes from others, and concern for loved ones. Previous research was found on experiences of managing a health problem, of losing control, and of the attitudes of others.
Abstract:
This Master's thesis discusses the problem of automatic recognition of fish from video sequences. This is a pressing issue for many organizations engaged in fish farming in Finland and Russia, because automating the monitoring and counting of individual fish is a turning point for the industry. The difficulties and specific features of the problem were identified in order to find a solution and to propose recommendations for the components of an automated fish recognition system. Methods such as background subtraction, Kalman filtering and the Viola-Jones method were implemented in this work for the detection, tracking and parameter estimation of fish. Both the results of the experiments and the choice of appropriate methods strongly depend on the quality and type of video used as input data. Practical experiments demonstrated that not all methods produce good results on real data, whereas on synthetic data they operate satisfactorily.
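For the detection stage, a typical OpenCV front end combines background subtraction with contour extraction, roughly as sketched below. The input file and every threshold are illustrative assumptions and, as the thesis notes, would need tuning to the quality of the footage.

```python
import cv2

cap = cv2.VideoCapture("fish.mp4")   # hypothetical input footage
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (MOG2 marks them 127) and clean up noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:  # ignore small specks; tune per footage
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:          # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```

The resulting bounding boxes could then seed a per-fish Kalman filter for tracking, as the thesis's pipeline suggests.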
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining this with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended to run on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
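The nested cross-validation point is easy to get wrong in practice: feature selection must be refit inside every training fold, otherwise information leaks in from the test data and inflates accuracy. Below is a minimal scikit-learn sketch on synthetic stand-in data; the dimensions and parameter grid are illustrative assumptions, not the setup used in the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a genotype matrix: many features, few informative.
X, y = make_classification(n_samples=200, n_features=2000,
                           n_informative=20, random_state=0)

# Feature selection lives INSIDE the pipeline so it is refit on every
# training fold; selecting features on the full data first is exactly the
# leak that produces the overly optimistic accuracies the text warns about.
pipe = Pipeline([
    ("select", SelectKBest(f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])
inner = GridSearchCV(pipe, {"select__k": [10, 50, 200]}, cv=3)  # inner loop
outer_scores = cross_val_score(inner, X, y, cv=5)               # outer loop
print(outer_scores.mean())
```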
Abstract:
This thesis concerns the exhaustion of the copyright distribution right in intangible transfers of video games. It analyses whether, under current European Union law, digital exhaustion exists, especially in relation to games. The thesis analyses the consumers' position in the market for copyright-protected goods, using the video games market as an example of the wider effect of recent technological developments on consumers. The research conducted for the thesis is mostly legal dogmatic, although comparative analysis, law and economics, and law and technology methods are also utilised. The thesis evaluates the most recent case law of the European Court of Justice to analyse the current state of digital exhaustion, and examines the effects of its existence from the consumers' point of view. The thesis introduces the current state of technology in the field of video games from a legal perspective. Furthermore, it analyses the effects on consumers of a future scenario in which no digital exhaustion exists; under recent European case law, such a scenario currently seems realistic. The conclusion of my research is, most importantly, that the consumer position in the market for digital goods has deteriorated, and that the probable exclusion of exhaustion for digital goods is another piece of evidence of this development. Above all, the state of affairs in which there is no certainty about whether digital exhaustion exists creates injustice from the consumers' point of view. Accordingly, action by the EU legislature or the Court of Justice of the European Union is required to clarify the issue.