31 results for streaming SIMD extensions


Relevance:

20.00%

Publisher:

Abstract:

Nowadays, with the continuous and rapid evolution of information technologies and computing devices, large volumes of data are continuously collected and stored in different domains and through diverse real-world applications. Extracting useful knowledge from such an enormous amount of data usually cannot be done manually and requires the use of adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. In general, classification consists of two main steps: first, learning a classification model or classifier from a training data set, and second, classifying new data instances using the learned classifier. Classification is supervised when all labels are present in the training data (i.e., fully labeled data), semi-supervised when only some labels are known (i.e., partially labeled data), and unsupervised when all labels are absent from the training data (i.e., unlabeled data). Moreover, apart from this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables, one or more, respectively; it can also be categorized as stationary or time-changing depending on the characteristics of the data and the underlying rate of change. Throughout this thesis, we address the classification problem from three different perspectives, namely, stationary multi-dimensional supervised classification, time-changing uni-dimensional semi-supervised classification, and time-changing multi-dimensional supervised classification. To carry out this task, we have basically used Bayesian classifiers as models. The first contribution, addressing the stationary multi-dimensional supervised classification problem, consists of two new methods for learning multi-dimensional Bayesian classifiers from stationary data. The methods are proposed from two different points of view. The first method, named CB-MBC, is based on a greedy forward wrapper variable selection strategy, while the second, named MB-MBC, is a filter variable selection strategy with a constraint-based approach relying on Markov blankets. Both methods have been applied to two important real-world problems, namely, the prediction of reverse transcriptase and protease inhibitors for the human immunodeficiency virus type 1 (HIV-1) infection problem, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC with state-of-the-art multi-dimensional classification methods, as well as with methods commonly used to solve the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations.
In both applications, the results have been promising with respect to classification accuracy, as well as regarding the analysis of the graphical structures, which identify known and novel interactions among the variables. The second contribution, addressing the time-changing uni-dimensional semi-supervised classification problem, consists of a new method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their very rapid generation process and in their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which renders the current classification model obsolete and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: in the predictor variables, in the class posterior, or in both. Then, if any drift is detected, a new classification model is learned using the EM algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, since it can be applied to several classification models. Using two different models, the naive Bayes classifier and logistic regression, CPL-DS has been tested with synthetic data streams and has also been applied to the real-world problem of malware detection, in which newly received files must be continuously classified as malware or goodware. The experimental results show that our method is effective for detecting different types of drift from partially labeled data streams and also achieves good classification accuracy. Finally, the third contribution, on the time-changing multi-dimensional supervised classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood as a metric and the Page-Hinkley test. Then, if a concept drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian classifier. The experimental study carried out using synthetic multi-dimensional data streams shows the merits of the proposed adaptive methods.
ABSTRACT
Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires the use of adequate machine learning and data mining techniques. Classification is one of the most important techniques that has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data, and second, classify the new incoming unseen data instances using the learned classifier.
Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing in the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized into uni-dimensional or multi-dimensional depending on the number of class variables, one or more, respectively; or it can also be categorized into stationary or streaming depending on the characteristics of the data and the underlying rate of change. Throughout this thesis, we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically used Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data. They are proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second one, named MB-MBC, is a filter constraint-based approach relying on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for solving the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as regarding the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect. That is, the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out of date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and a bootstrapping method to quantify and detect three possible kinds of drift: feature, conditional or dual. Then, if any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general, as it can be applied to several classification models. Using two different models, namely, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware.
Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams and achieves good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. Then, if a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
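The Page-Hinkley monitoring described for LA-MB-MBC and GA-MB-MBC can be illustrated with a minimal sketch. Everything below (thresholds, the batch-level usage, the simulated stream) is our illustrative assumption, not the thesis implementation; it only shows how a sustained drop in average log-likelihood triggers a drift alarm.

```python
# Minimal sketch (our illustration, not the thesis code) of Page-Hinkley drift
# monitoring: the monitored statistic is the average log-likelihood of incoming
# data under the current classifier; a sustained drop beyond `lam` signals drift.
# `delta` is a tolerance that absorbs small fluctuations.

class PageHinkley:
    def __init__(self, delta=0.005, lam=50.0):
        self.delta = delta
        self.lam = lam
        self.mean = 0.0       # running mean of the monitored statistic
        self.n = 0
        self.cum = 0.0        # cumulative (downward) deviation
        self.min_cum = 0.0

    def update(self, avg_log_likelihood):
        """Feed the average log-likelihood of the latest data batch.
        Returns True if a drift (sustained decrease) is detected."""
        self.n += 1
        self.mean += (avg_log_likelihood - self.mean) / self.n
        self.cum += self.mean - avg_log_likelihood - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.lam

if __name__ == "__main__":
    import random
    random.seed(1)
    ph = PageHinkley(delta=0.01, lam=5.0)
    # Stable stream, then a drop in log-likelihood simulating concept drift.
    stream = [random.gauss(-2.0, 0.1) for _ in range(200)] + \
             [random.gauss(-4.0, 0.1) for _ in range(200)]
    for t, ll in enumerate(stream):
        if ph.update(ll):
            print("drift detected at batch", t)
            break
```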

Relevance:

20.00%

Publisher:

Abstract:

Electronic and mechanical media such as film, television, photography and offset printing are examples of how fast and how important technological development has become in society. Emerging technologies and continuous development keep providing newer and better possibilities for advanced services. Multi-view video has now been developed with different tools and applications, with the main goal of being innovative and of bringing its technical offerings to all users in a friendly way, in terms of management and accessibility (only an Internet connection is needed). The aim of any new technology is to innovate in order to gain users and become popular, so an implementation is important in this case. Considering the reach of multi-view video and the importance of becoming more global these days, an application that supports this aim has been implemented, offering language selection within the same viewing scenario. Finally, it is worth pointing out that, thanks to the continuous technological progress of multi-view video, a more intercultural market will become reachable, contributing to shared societal growth within the world's global development.

Relevance:

20.00%

Publisher:

Abstract:

Today P2P faces two important challenges: the design of mechanisms to encourage users' collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision, to encourage multimedia providers to employ the P2P topology in commercial live streaming systems. We believe that these two challenges are tightly related and that there is much to be done in this respect. This paper analyzes the effect of user behavior in a multi-tree P2P overlay and describes a business model based on a monetary discount as an incentive in a P2P-Cloud multimedia streaming system. We believe a discount model can boost users' cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model bounds the provider's revenue and cost when the P2P system is leveraged on a cloud infrastructure. Our case study shows that a streaming system provider can establish or adapt its business model by applying the described bounds to achieve a good discount-revenue trade-off and promote the system to its users.
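The abstract states that the model bounds the provider's revenue and cost but does not give its equations. The following back-of-the-envelope sketch is purely hypothetical: the parameter names, the linear cloud-cost model and the discount-to-cooperation mapping are our assumptions, not the paper's, and serve only to illustrate how a discount-revenue trade-off can be explored.

```python
# Hypothetical illustration of a discount-revenue trade-off in a P2P-Cloud
# streaming service. All parameters and the linear cost model are assumptions
# made for this sketch; they are not taken from the paper.

def provider_profit(n_users, price, discount, p2p_share, cloud_cost_per_user):
    """Profit when a fraction `p2p_share` of traffic is served by cooperating
    peers, who in exchange receive `discount` (0..1) on the subscription price."""
    cooperating = n_users * p2p_share            # assume cooperating users ~ offloaded share
    revenue = (n_users - cooperating) * price + cooperating * price * (1 - discount)
    cloud_cost = n_users * cloud_cost_per_user * (1 - p2p_share)  # cloud serves the rest
    return revenue - cloud_cost

if __name__ == "__main__":
    for discount in (0.0, 0.1, 0.2, 0.3):
        # Toy assumption: higher discounts attract more peer cooperation.
        p2p_share = min(1.0, 2.0 * discount)
        print(discount, round(provider_profit(10_000, 10.0, discount, p2p_share, 4.0), 2))
```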

Relevance:

20.00%

Publisher:

Abstract:

Today P2P faces two important challenges: the design of mechanisms to encourage users' collaboration in multimedia live streaming services, and the design of reliable algorithms with QoS provision, to encourage multimedia providers to employ the P2P topology in commercial streaming services. We believe that these two challenges are tightly related and that there is much to be done in this respect. This paper proposes a novel monetary incentive for P2P multimedia streaming. The incentive model classifies users into groups according to the perceived video quality. We apply the model to a streaming system's billing model in order to evaluate its feasibility and visualize its quantitative effect on users' motivation and the provider's profit. We conclude that a monetary incentive can boost users' cooperation and loyalty and enhance the overall system integrity and performance. Moreover, the model defines the constraints on the provider's cost and profit when the system is leveraged on the cloud. Considering those constraints, a multimedia content provider can adapt the billing model of its streaming service and achieve a desirable discount-profit trade-off. This will also contribute to better promotion of the service among users on the Internet.

Relevance:

20.00%

Publisher:

Abstract:

Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They put a special focus on different features of the evaluated systems, including the coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness, based on comparing the results obtained by different processors for a set of queries. However, neither of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models used in the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, allowing us to determine the multiple answers that systems should provide. Finally, we present CSRBench, an extension of SRBench that addresses query result correctness verification using an automatic method.
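As a rough illustration of why operational semantics matter for correctness, the following sketch (ours, not CSRBench code) evaluates the same counting query over the same toy stream under two different reporting policies, in the spirit of the CQL/SECRET characterization, and obtains two different yet individually defensible answer sequences.

```python
# Minimal sketch of why two stream processors can both be "correct": the same
# query over the same stream yields different answers under different
# window/report policies (a SECRET-style distinction).

# Stream of (timestamp, value) pairs; the "query" is a COUNT over a sliding window.
stream = [(1, "a"), (2, "b"), (3, "c"), (8, "d")]

def count_window(stream, width, slide, report_at):
    """Count items per window [t, t+width) for windows starting every `slide` seconds."""
    results = []
    t_end = max(ts for ts, _ in stream)
    t = 0
    while t <= t_end:
        contents = [v for ts, v in stream if t <= ts < t + width]
        # SECRET's "report" dimension: some engines report at every window close,
        # others only report windows whose content is non-empty.
        if report_at == "window_close" or (report_at == "non_empty" and contents):
            results.append((t, len(contents)))
        t += slide
    return results

print(count_window(stream, width=2, slide=2, report_at="window_close"))
# -> [(0, 1), (2, 2), (4, 0), (6, 0), (8, 1)]
print(count_window(stream, width=2, slide=2, report_at="non_empty"))
# -> [(0, 1), (2, 2), (8, 1)]
```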

Relevance:

20.00%

Publisher:

Abstract:

We present an adaptive unequal error protection (UEP) strategy built on the 1-D interleaved parity Application Layer Forward Error Correction (AL-FEC) code for protecting the transmission of stereoscopic 3D video content encoded with Multiview Video Coding (MVC) over IP-based networks. Our scheme targets the minimization of the quality degradation produced by packet losses during video transmission in time-sensitive application scenarios. To that end, based on a novel packet-level distortion model, it selects in real time the most suitable packets within each Group of Pictures (GOP) to be protected and the most suitable FEC parameters, i.e., the size of the FEC generator matrix. In order to make these decisions, it considers the relevance of each packet, the behavior of the channel, and the bitrate available for protection purposes. Simulation results validate both the distortion model introduced to estimate the importance of packets and the optimization of the FEC parameter values.
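The abstract does not include the code construction details. As a minimal sketch of the underlying 1-D interleaved (column) parity idea, assuming an RFC 6015 / Pro-MPEG-style column XOR layout and toy packet contents (none of which is taken from the paper), protection and recovery look roughly like this:

```python
# Minimal sketch of 1-D interleaved parity AL-FEC: source packets are arranged in
# an L x D matrix and one parity packet is generated per column by XOR-ing the D
# packets in it. The matrix dimensions play the role of the "FEC generator matrix
# size" that the proposed scheme tunes per GOP; packet contents here are toy bytes.

from itertools import zip_longest

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip_longest(a, b, fillvalue=0))

def column_parity(packets, L, D):
    """Return L parity packets protecting L*D source packets (column-wise XOR)."""
    assert len(packets) == L * D
    parities = []
    for col in range(L):
        p = b""
        for row in range(D):
            p = xor_bytes(p, packets[row * L + col])
        parities.append(p)
    return parities

# A single lost packet in a column can be rebuilt by XOR-ing the column's parity
# with the surviving packets of that column.
if __name__ == "__main__":
    src = [bytes([i]) * 4 for i in range(12)]   # 12 toy source packets
    par = column_parity(src, L=4, D=3)          # 4 columns x 3 rows
    lost = src[5]                               # pretend packet 5 is lost
    col = 5 % 4
    recovered = par[col]
    for row in range(3):
        idx = row * 4 + col
        if idx != 5:
            recovered = xor_bytes(recovered, src[idx])
    assert recovered == lost
```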

Relevance:

20.00%

Publisher:

Abstract:

Although the delivery of 3D video services to households is nowadays a reality thanks to frame-compatible formats, many efforts are being made to obtain efficient methods to transmit 3D content while offering a high quality of experience to end users. In this paper, a stereoscopic video streaming scenario is considered and the perceptual impact of various strategies applicable to adaptive streaming situations is compared. Specifically, the mechanisms are based on switching between copies of the content with different coding qualities, on discarding frames of the sequence, on switching from 3D to 2D, and on using asymmetric coding of the stereo views. In addition, when video freezes happen, both options of preserving the end-to-end latency or maintaining the continuity of the video are considered. These aspects were evaluated by carrying out a subjective assessment test that also considered visual discomfort issues, using a methodology designed to reproduce domestic viewing conditions as closely as possible.

Relevance:

20.00%

Publisher:

Abstract:

UML is widely accepted as the standard for representing the various software artifacts generated by a development process. For this reason, there have been attempts to use this language to represent the software architecture of systems as well. Unfortunately, these attempts have ended up with the same representations (boxes and lines) already criticized by the software architecture community. In this work we propose an extension to the UML metamodel that is able to represent the syntax and semantics of the C3 architectural style. This style is derived from C2; the modifications that define C3 are described in section 4. This proposal is innovative with regard to UML extensions for software architectures, since previous proposals were based on lightweight extensions to the UML metamodel, whereas we propose a heavyweight extension of the metamodel. On the other hand, this proposal is less ambitious than previous ones, since we do not aim to represent arbitrary architectural styles in UML, but only one: C3.

Relevance:

20.00%

Publisher:

Abstract:

With the recent increased popularity and high usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard led to multiple proprietary approaches developed by major Internet companies. In the emerging MPEG-DASH standard, the packaging of the video content and the HTTP syntax have been standardized, but all details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm that optimizes the viewing experience of the end user, multimedia service providers need to know the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out through a subjective study of the end-user response to various possible client behaviors for changing the video quality, taking different QoE influence factors into account. The experimental conclusions provide good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients when designing adaptation algorithms.
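To make concrete what "adaptation behavior left to the client" means, here is a minimal sketch of a common throughput-based client heuristic; the bitrate ladder, safety margin and buffer threshold are illustrative assumptions, not the schemes evaluated in the study.

```python
# Since MPEG-DASH leaves the adaptation logic to the client, a common simple
# heuristic is: pick the highest representation whose bitrate fits within a
# safety margin of the measured throughput, unless the buffer is running low.

# Hypothetical bitrate ladder (kbit/s) as advertised in an MPD.
LADDER = [400, 800, 1500, 3000, 6000]

def choose_representation(throughput_kbps, buffer_s,
                          safety=0.8, low_buffer_s=5.0):
    """Return the bitrate (kbit/s) of the representation to request next."""
    if buffer_s < low_buffer_s:
        return LADDER[0]                       # play it safe when close to stalling
    budget = throughput_kbps * safety          # keep headroom for throughput drops
    feasible = [b for b in LADDER if b <= budget]
    return feasible[-1] if feasible else LADDER[0]

if __name__ == "__main__":
    print(choose_representation(throughput_kbps=2600, buffer_s=12))  # -> 1500
    print(choose_representation(throughput_kbps=2600, buffer_s=3))   # -> 400
```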

Relevance:

20.00%

Publisher:

Abstract:

In contrast to traditional push-based protocols, adaptive streaming techniques like Dynamic Adaptive Streaming over HTTP (DASH) shift the focus to the client, which dynamically requests portions of the content at different qualities to cope with limited and variable bandwidth while aiming to maximize the quality perceived by the user. Since the DASH adaptation logic at the client is not covered by the standard, we propose a solution based on Stochastic Dynamic Programming (SDP) techniques to find the optimal request policies that guarantee the users' Quality of Experience (QoE). Our algorithm is evaluated in a simulated streaming session and is compared with other adaptation approaches. The results show that our proposal outperforms them in terms of QoE, requesting higher qualities on average.
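As a rough sketch of how an SDP formulation of this problem can look (this is our simplified toy model, not the paper's formulation), a finite MDP over buffer and channel states can be solved by value iteration to obtain a quality-request policy. All numbers below (bitrates, channel chain, rewards) are illustrative assumptions.

```python
# Toy stochastic-dynamic-programming sketch for DASH quality selection: a finite
# MDP over (buffer, bandwidth-state) solved by value iteration; the resulting
# policy tells the client which quality to request next.

BITRATE = [1.0, 2.5, 5.0]          # Mbit/s per representation
UTILITY = [1.0, 2.0, 3.0]          # reward for playing one segment at each quality
BANDWIDTH = [1.5, 3.0, 6.0]        # Mbit/s, states of a 3-state Markov channel
P_BW = [[0.7, 0.3, 0.0],           # channel transition probabilities
        [0.2, 0.6, 0.2],
        [0.0, 0.3, 0.7]]
SEG = 2.0                          # segment duration (s)
B_MAX = 10                         # buffer capacity (segments)
STALL_PENALTY = 10.0               # reward penalty per second of stalling
GAMMA = 0.95

def step(b, w, q):
    """Deterministic part of the transition: new buffer level and immediate reward."""
    download_s = BITRATE[q] * SEG / BANDWIDTH[w]
    drained = download_s / SEG                      # segments played while downloading
    stall_s = max(0.0, drained - b) * SEG
    b_next = min(B_MAX, max(0.0, b - drained) + 1)  # one new segment arrives
    reward = UTILITY[q] - STALL_PENALTY * stall_s
    return round(b_next), reward

def q_value(state, q, V):
    b, w = state
    b_next, r = step(b, w, q)
    return r + GAMMA * sum(P_BW[w][w2] * V[(b_next, w2)] for w2 in range(len(BANDWIDTH)))

def value_iteration(n_iter=200):
    V = {(b, w): 0.0 for b in range(B_MAX + 1) for w in range(len(BANDWIDTH))}
    for _ in range(n_iter):
        V = {s: max(q_value(s, q, V) for q in range(len(BITRATE))) for s in V}
    return {s: max(range(len(BITRATE)), key=lambda q: q_value(s, q, V)) for s in V}

if __name__ == "__main__":
    policy = value_iteration()
    print("low buffer, low bandwidth  ->", policy[(2, 0)])
    print("high buffer, high bandwidth ->", policy[(8, 2)])
```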

Relevance:

20.00%

Publisher:

Abstract:

HTTP adaptive streaming technology has become widespread in multimedia services because of its ability to adapt to the characteristics of various viewing devices and to dynamic network conditions. There are various studies targeting the optimization of the adaptation strategy. However, in order to provide an optimal viewing experience to the end user, it is crucial to understand the Quality of Experience (QoE) of different adaptation schemes. This paper overviews the state of the art concerning the subjective evaluation of adaptive streaming QoE and highlights the challenges and open research questions related to QoE assessment.

Relevance:

20.00%

Publisher:

Abstract:

In order to cater for users' quality of experience (QoE) requirements, HTTP adaptive streaming (HAS) based video services have recently become popular. User QoE feedback can be instrumental in improving the capabilities of such services. Perceptual quality experiments that involve humans are considered to be the most valid method for assessing QoE. Besides lab-based subjective experiments, crowdsourcing-based subjective assessment of video quality is gaining popularity as an alternative method. This paper presents insights into a study that investigates perceptual preferences for various adaptive video streaming scenarios through crowdsourcing-based subjective quality assessment.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, HTTP adaptive streaming (HAS) has become a reliable distribution technology offering significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. By trading off video quality, HAS is able to adapt to the available bandwidth and display requirements so that it can deliver the video content to a variety of devices over the Internet. However, there is still not enough knowledge of how adaptation techniques affect the end user's visual experience. Therefore, this paper presents a comparative analysis of different bitrate adaptation strategies in adaptive streaming of monoscopic and stereoscopic video. This has been done through a subjective experiment testing the end-user response to video quality variations, also considering visual comfort. The experimental outcomes provide good insight into the factors that can influence the QoE of different adaptation strategies.

Relevance:

20.00%

Publisher:

Abstract:

The growing popularity of adaptive streaming-based video delivery has raised interest in users' perception when experiencing quality adaptation. The impact of video content characteristics on perceptual quality has already become evident. The aim of this study is to investigate the influence of this factor on the quality of experience in adaptive streaming scenarios. Our results show that the perceptual quality of adaptation strategies applied to videos with a high amount of spatial activity and a low amount of temporal activity is significantly lower than for the other content types.
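The abstract does not specify how spatial and temporal activity were measured; one common choice (assumed here, not confirmed by the paper) is the Spatial Information (SI) and Temporal Information (TI) indicators of ITU-T Rec. P.910, which can be computed roughly as follows.

```python
# Sketch of the P.910-style Spatial Information (SI) and Temporal Information (TI)
# indicators, computed on luminance frames. This is our illustration of how such
# content-activity measures are typically obtained, not code from the study.

import numpy as np
from scipy import ndimage

def si_ti(frames):
    """frames: iterable of 2-D numpy arrays (luminance). Returns (SI, TI)."""
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        frame = frame.astype(np.float64)
        # SI: std-dev of the Sobel-filtered frame, maximized over time.
        sobel = np.hypot(ndimage.sobel(frame, axis=0), ndimage.sobel(frame, axis=1))
        si_values.append(sobel.std())
        # TI: std-dev of the pixel-wise difference between consecutive frames.
        if prev is not None:
            ti_values.append((frame - prev).std())
        prev = frame
    return max(si_values), max(ti_values) if ti_values else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = [rng.integers(0, 256, size=(120, 160)) for _ in range(10)]
    print(si_ti(demo))
```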

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, optimizing the Quality of Experience (QoE) of HTTP Adaptive Streaming (HAS) of video is receiving increasing attention. This growing interest stems mainly from the shortcomings of current HAS solutions, which, since they are not QoE-driven, do not include end users' quality perception as an integral part of the adaptation logic. Therefore, obtaining reliable ground truth on HAS QoE poses important challenges, since the subjective video quality assessment methodologies proposed in current standards are not suited to dealing with the temporal quality variations that are inherent to HAS. This thesis investigates the influence of dynamic adaptation on the quality of video streaming by means of subjective evaluation methods. After an exhaustive study of the state of the art in subjective evaluation of QoE in HAS, the associated challenges and open research lines have been highlighted. As a result, two main research lines have been selected: the analysis of the QoE impact of the parameters of the adaptation techniques, and the investigation of subjective test methodologies suitable for HAS QoE evaluation. A set of laboratory experiments has been carried out to investigate the questions raised, using different subjective testing methodologies. The statistical analysis shows that not all of the assumptions and claims of the analysed references are robust, in particular regarding the QoE impact of the frequency of quality variations, of smooth versus abrupt adaptations, and of quality oscillations. On the other hand, our results confirm the influence of other parameters, such as the length of the video segments and the amplitude of the quality oscillations. The results also show that taking the objective characteristics of the content into account can be beneficial for improving QoE in HAS. Moreover, all the results have been validated through extensive experimental analyses that have included studies both in other laboratories and in crowdsourcing. Finally, concerning the methodological aspects of subjective QoE testing, a comparison has been made between the experimental results obtained with a standardized method based on short stimuli (ACR) and a semi-continuous method (developed for the evaluation of long video sequences). Despite some differences, the result of the statistical analyses does not show any significant effect of the test methodology. Likewise, although the influence of the presence of audio on the evaluation of video degradations is perceived, no statistically significant effects of that presence have been found. Building on the absence of influence of the test method and of the presence of audio, an additional analysis has been carried out on the impact of performing multiple statistical comparisons on statistical significance levels, which increases the probability of Type-I errors (false positives). Our results show that, in order to obtain a solid effect in the statistical analysis of subjective results, it is necessary to increase the number of test subjects well beyond the sample sizes proposed by current standards and recommendations.
ABSTRACT
Optimizing the Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) is receiving increasing attention nowadays. The growth of interest is mainly caused by the fact that current HAS solutions are not QoE-driven, i.e. end-user quality perception is not an integral part of the adaptation logic. However, obtaining the necessary reliable ground truths on HAS QoE faces substantial challenges, since the subjective video quality assessment methodologies proposed by current standards are not well suited for dealing with the time-varying quality properties that are characteristic of HAS. This thesis investigates the influence of dynamic quality adaptation on the QoE of streaming video by means of subjective evaluation approaches. Based on a comprehensive survey of related work on subjective HAS QoE assessment, the related challenges and open research questions are highlighted and discussed. As a result, two main research directions are selected for further investigation: analysis of the QoE impact of different technical adaptation parameters, and investigation of testing methodologies suitable for HAS QoE evaluation. In order to investigate the related research issues and questions, a set of laboratory experiments has been conducted using different subjective testing methodologies. Our statistical analysis demonstrates that not all assumptions and claims reported in the literature are robust, particularly as regards the QoE impact of switching frequency, smooth vs. abrupt switching, and quality oscillation. On the other hand, our results confirm the influence of some other parameters, such as chunk length and switching amplitude, on perceived quality. We also show that taking the objective characteristics of the content into account can be beneficial for improving the adaptation viewing experience. In addition, all of the aforementioned findings are validated by means of an extensive cross-experimental analysis that involves external laboratory and crowdsourcing studies. Finally, to address the methodological aspects of subjective QoE testing, a comparison between the experimental results obtained from a standardized short-stimuli-based ACR method and a semi-continuous method (developed for the assessment of long video sequences) has been performed. Despite some observed differences, the statistical analysis does not show any significant effect of the testing methodology. Similarly, although the influence of audio presence on the evaluation of video-related degradations is perceived, no statistically significant effect of audio presence could be found. Motivated by these findings (no effect of testing method or audio presence), a subsequent analysis has been performed investigating the impact of performing multiple statistical comparisons on statistical significance levels, which increases the likelihood of Type-I errors (false positives). Our results show that, in order to obtain a strong effect from the statistical analysis of the subjective results, it is necessary to increase the number of test subjects well beyond the sample sizes proposed by current quality assessment standards and recommendations.
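To make the multiple-comparisons point concrete, the following short numerical illustration (ours, not the thesis's analysis) shows how the family-wise error rate grows with the number of tests and how a standard Holm-Bonferroni correction controls it.

```python
# Numerical illustration of why multiple statistical comparisons inflate the
# Type-I error rate, together with the standard Holm-Bonferroni correction that
# controls the family-wise error rate (FWER).

def family_wise_error_rate(alpha, m):
    """Probability of at least one false positive among m independent tests."""
    return 1 - (1 - alpha) ** m

def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: which hypotheses are rejected after correction."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    reject = [False] * len(p_values)
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (len(p_values) - rank):
            reject[i] = True
        else:
            break                     # once one test fails, all larger p-values fail
    return reject

if __name__ == "__main__":
    # With 20 pairwise comparisons at alpha = 0.05, the chance of at least one
    # spurious "significant" result is already about 64%.
    print(round(family_wise_error_rate(0.05, 20), 2))   # -> 0.64
    print(holm_bonferroni([0.001, 0.004, 0.03, 0.2]))   # -> [True, True, False, False]
```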