913 results for user-defined function (UDF)
Abstract:
ACM Computing Classification System (1998): D.2.11, D.1.3, D.3.1, J.3, C.2.4.
Abstract:
Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models as abstractions at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models first-class artifacts, extending engineers' capability to use concepts from the problem domain of discourse to specify apropos solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach.
This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK via swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (or microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
Abstract:
Graph reduction machines are a traditional technique for implementing functional programming languages. They run programs by transforming graphs through the successive application of reduction rules. Web service composition enables the creation of new web services from existing ones. BPEL is a workflow-based language for creating web service compositions, and the industrial and academic standard for this kind of language. Because it is designed to compose web services, using BPEL in a scenario where multiple technologies must be combined is problematic: when operations other than web services need to be performed to implement the business logic of a company, part of the work is done on an ad hoc basis. Allowing heterogeneous operations to be part of the same workflow may help implement business processes in a principled way. This work uses a simple variation of the BPEL language to create compositions containing not only web service operations but also big data tasks or user-defined operations. We define an extensible graph reduction machine that allows the evaluation of BPEL programs, implement this machine as a proof of concept, and present some experimental results.
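The evaluation-by-reduction idea can be sketched in a few lines; the nested-tuple graph encoding and the arithmetic reduction rules are illustrative assumptions, not the extensible BPEL machine from the abstract.

```python
# Minimal sketch of graph reduction: an expression graph is rewritten
# by applying reduction rules until no rule matches (normal form).
# Node encoding (op, left, right) and the rule set are illustrative.

def reduce_graph(node):
    """Recursively reduce a nested-tuple expression graph to a value."""
    if isinstance(node, (int, float)):
        return node  # already in normal form
    op, left, right = node
    # Reduce subgraphs first (innermost-first strategy).
    l, r = reduce_graph(left), reduce_graph(right)
    rules = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    return rules[op](l, r)

# ("add", ("mul", 2, 3), 4) reduces to 10
```

An extensible machine in this spirit would let new node types (e.g. a web service call or a big data task) register their own reduction rule instead of hard-coding the rule table.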
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., an application's dependency on a particular cloud platform, which is prejudicial in the case of degradation or failure of platform services, or even price increases on service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of any service. In a multi-cloud scenario, it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) continually monitoring the dynamic information (such as response time, availability, and price) related to cloud services, in addition to handling the wide variety of services; and (iii) adapting the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration would meet them more efficiently. Thus, this work proposes a strategy composed of two phases.
The first phase consists of application modeling, exploiting the capacity for representing commonalities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified as properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process, based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work, we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
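The MAPE-K loop described in the second phase can be sketched as follows; the provider names, the QoS metric (response time) and the threshold are hypothetical stand-ins for the thesis's monitored properties.

```python
# Sketch of a MAPE-K style adaptation loop for multi-cloud configurations.
# Provider names, QoS fields, and the threshold are hypothetical.

def monitor(services):
    """Collect dynamic QoS info (response time here) per candidate provider."""
    return {name: qos["response_ms"] for name, qos in services.items()}

def analyze(measurements, threshold_ms):
    """Flag providers that violate the response-time requirement."""
    return [n for n, ms in measurements.items() if ms > threshold_ms]

def plan(measurements, violating):
    """Select the best (lowest response time) non-violating provider."""
    candidates = {n: ms for n, ms in measurements.items() if n not in violating}
    return min(candidates, key=candidates.get)

def execute(current, chosen):
    """Rebind only if the plan differs from the current binding."""
    return chosen if chosen != current else current

def mape_loop(current, services, threshold_ms=200):
    m = monitor(services)
    violating = analyze(m, threshold_ms)
    # Adapt only when the currently bound provider violates the requirement.
    return execute(current, plan(m, violating)) if current in violating else current

# Example: "cloudA" violates the 200 ms requirement, so the loop rebinds to "cloudB".
services = {"cloudA": {"response_ms": 450}, "cloudB": {"response_ms": 120}}
```

The Knowledge part of MAPE-K would correspond to the extended feature model; here it is reduced to the `services` dictionary for brevity.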
Abstract:
Field-programmable gate arrays are ideal hosts to custom accelerators for signal, image, and data processing, but demand manual register transfer level design if high performance and low cost are desired. High-level synthesis (HLS) reduces this design burden but requires manual design of complex on-chip and off-chip memory architectures, a major limitation in applications such as video processing. This paper presents an approach to resolve this shortcoming. A constructive process is described that can derive such accelerators, including on- and off-chip memory storage, from a C description such that a user-defined throughput constraint is met. By employing a novel statement-oriented approach, dataflow intermediate models are derived and used to support simple approaches for on-/off-chip buffer partitioning, derivation of custom on-chip memory hierarchies, and architecture transformation to ensure user-defined throughput constraints are met with minimum cost. When applied to accelerators for full search motion estimation, matrix multiplication, Sobel edge detection, and fast Fourier transform, it is shown how real-time performance up to an order of magnitude in advance of existing commercial HLS tools is enabled whilst including all requisite memory infrastructure. Further, optimizations are presented that reduce the on-chip buffer capacity and physical resource cost by up to 96% and 75%, respectively, whilst maintaining real-time performance.
Abstract:
This document deals with the shape optimization of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalizing the lift coefficient. A set of control points defining the geometry is passed and parameterized as a B-Spline curve. These points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process involves 2D and 3D meshing algorithms that were coupled into our own code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Some standard NACA profiles were used to test the algorithm, by first obtaining their control points. The Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. To obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added. After a simulation is run, drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented to define the magnitude and direction of the displacement of each control point. The control points and other parameters defined as design variables are iteratively modified to reach an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that obeys aerodynamic behavior principles.
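The gradient-based update of the control points can be sketched as plain steepest descent; the toy objective, its gradient, the step size and the iteration count are illustrative stand-ins for the CFD-derived drag/lift data.

```python
# Sketch of the steepest-descent update on control points described above.
# The objective and gradient are toy stand-ins for the CFD-derived
# drag/lift sensitivities; step size and iteration count are hypothetical.

def steepest_descent(points, grad, step=0.1, iters=50):
    """Move each control point against the gradient of the objective."""
    for _ in range(iters):
        points = [p - step * g for p, g in zip(points, grad(points))]
    return points

# Toy objective: sum of squared coordinates, whose gradient is 2*p per point.
grad = lambda pts: [2 * p for p in pts]
```

In the actual workflow, each gradient evaluation would require re-meshing and re-running the CFD simulation, which is why the displacement per iteration is kept small.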
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with predicates. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMS can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model called Substitution Importance Query (SIQ) identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. 
The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important; (ii) how to score answers, irrespective of whether they are vertices or substitutions; and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used to answer SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
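The pruning idea behind the models above can be sketched with a toy top-k loop: keep the current k best answers in a min-heap and skip any candidate whose score upper bound cannot beat the current k-th best. The `score` and `upper_bound` functions are hypothetical placeholders for the thesis's scoring models.

```python
import heapq

# Sketch of top-k answer selection with a simple upper-bound pruning step.
# Scoring and bounding functions are illustrative placeholders.

def top_k(candidates, score, upper_bound, k):
    """Return the k highest-scoring answers, pruning candidates whose
    upper bound cannot beat the current k-th best score."""
    heap = []  # (score, answer) pairs; smallest score at heap[0]
    for ans in candidates:
        if len(heap) == k and upper_bound(ans) <= heap[0][0]:
            continue  # pruned: cannot enter the top-k
        s = score(ans)
        if len(heap) < k:
            heapq.heappush(heap, (s, ans))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, ans))
    return sorted(heap, reverse=True)
```

The value of pruning comes from upper bounds that are much cheaper to compute than exact scores, e.g. bounds derived from an index rather than from a full subgraph match.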
Abstract:
The ocean bottom pressure records from eight stations of the Cascadia array are used to investigate the properties of short surface gravity waves with frequencies ranging from 0.2 to 5 Hz. It is found that the pressure spectrum at all sites is a well-defined function of the wind speed U10 and frequency f, with only a minor shift of a few dB from one site to another that can be attributed to variations in bottom properties. This observation can be combined with the theoretical prediction that the ocean bottom pressure spectrum is proportional to the surface gravity wave spectrum E(f) squared, times the overlap integral I(f), which is given by the directional wave spectrum at each frequency. This combination, using E(f) estimated from modeled spectra or parametric spectra, yields an overlap integral I(f) that is a function of the local wave age. This function is maximum for f/fPM = 8 and decreases by 10 dB for f/fPM = 2 and f/fPM = 30. This shape of I(f) can be interpreted as a maximum width of the directional wave spectrum at f/fPM = 8, possibly equivalent to an isotropic directional spectrum, and a narrower directional distribution toward both the dominant low frequencies and the higher capillary-gravity wave frequencies.
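The proportionality stated in words above can be written compactly, with F_p the bottom pressure spectrum and E and I as defined in the abstract:

```latex
% Bottom pressure spectrum relation described in the abstract:
% squared wave spectrum times the directional overlap integral.
F_p(f) \propto E(f)^2 \, I(f)
```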
Abstract:
A subfilter-scale (SFS) stress model is developed for large-eddy simulations (LES) and is tested on various benchmark problems in both wall-resolved and wall-modelled LES. The basic ingredients of the proposed model are the model length-scale and the model parameter. The model length-scale is defined as a fraction of the integral scale of the flow, decoupled from the grid. The portion of the resolved scales (the LES resolution) appears as a user-defined model parameter, an advantage in that the user decides the LES resolution. The model parameter is determined based on a measure of LES resolution, the SFS activity. The user chooses a value for the SFS activity (based on the affordable computational budget and the expected accuracy), and the model parameter is calculated dynamically. Depending on how the SFS activity is enforced, two SFS models are proposed. In one approach the user assigns the global (volume-averaged) contribution of the SFS to the transport (global model), while in the second model (local model) the SFS activity is set locally (locally averaged). The models are tested on isotropic turbulence, channel flow, a backward-facing step and a separating boundary layer. In wall-resolved LES, both the global and local models perform quite accurately. Due to their near-wall behaviour, they result in accurate prediction of the flow on coarse grids. The backward-facing step also highlights the advantage of decoupling the model length-scale from the mesh. Despite the sharply refined grid near the step, the proposed SFS models yield a smooth yet physically consistent filter-width distribution, which minimizes errors when grid discontinuity is present. Finally, the model's application is extended to wall-modelled LES and is tested on channel flow and a separating boundary layer. Given the coarse resolution used in wall-modelled LES, most of the eddies near the wall become subfilter-scale, and the SFS activity must be locally increased.
The results are in very good agreement with the data for the channel. Errors in the prediction of separation and reattachment are observed in the separated flow; these are somewhat improved with modifications to the wall-layer model.
Abstract:
Background: Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The status quo of the current polled breeding pool, genetically closely related artificial insemination sires with lower breeding values for performance traits, raises questions regarding the effects of intensified selection based on this founder pool. Methods: We developed a stochastic simulation framework that combines the stochastic simulation software QMSim and a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative trait genotypes established realistic initial breeding situations regarding allele frequencies, true breeding values for the quantitative trait, and genetic relatedness. Intensified selection for polled cattle was achieved using an approach that weights estimated breeding values in the animal best linear unbiased prediction model for the quantitative trait, depending on genotypes or phenotypes for the polled trait, with a user-defined weighting factor. Results: Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role in the fast dissemination of polled alleles compared to female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that does not take polledness into account, intensive selection for polled substantially reduced genetic gain for the quantitative trait after 25 generations.
Reducing selection intensity for polled males while maintaining strong selection intensity among females simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. Conclusions: A fast transition to a completely polled population through intensified selection for polled was in contradiction to the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and selection on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
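The genotype-weighted selection described above can be sketched as follows; the genotype coding (0/1/2 polled alleles) and the multiplicative weighting form are illustrative assumptions, not QUALsim's actual formula.

```python
# Sketch of genotype-weighted selection: estimated breeding values (EBVs)
# are up-weighted for polled genotypes with a user-defined factor w.
# Coding and weighting form are hypothetical, for illustration only.

def weighted_ebv(ebv, n_polled_alleles, w):
    """Scale the EBV by (1 + w) per polled allele (0, 1 or 2 copies)."""
    return ebv * (1 + w * n_polled_alleles)

def select_sires(animals, w, n):
    """Rank animals by weighted EBV and keep the top n."""
    ranked = sorted(animals,
                    key=lambda a: weighted_ebv(a["ebv"], a["polled"], w),
                    reverse=True)
    return [a["id"] for a in ranked[:n]]

animals = [
    {"id": "A", "ebv": 100, "polled": 0},  # horned, highest EBV
    {"id": "B", "ebv": 80, "polled": 2},   # homozygous polled
    {"id": "C", "ebv": 90, "polled": 1},   # heterozygous polled
]
```

With w = 0 the ranking is purely by EBV; increasing w trades genetic gain for a faster rise in polled allele frequency, which is exactly the tension the study quantifies.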
Abstract:
This article describes the implementation of a geographic information tool, developed on an open-source platform, for the management and planning of water resources in Catalonia. This Geographic Information System (GIS) is designed to deliver fast and intuitive evaluation and decision-making criteria in response to extreme events, such as drought. Its strong customization, user friendliness, multiuser capability, high performance and scalability, together with its license-free condition, allow for an extremely fast return on a limited investment. The embedded automation of user-defined systematic processes, geoprocesses and multi-criteria analyses provides significant time and resource savings and increases productivity. Key words: Geographic Information System (GIS), Open Source, water supply management, automation.
Abstract:
High-resolution esophageal manometry (HRM) is a recent development used in the evaluation of esophageal function. Our aim was to assess the inter-observer agreement for diagnosis of esophageal motility disorders using this technology. Practitioners registered on the HRM Working Group website were invited to review and classify (i) 147 individual water swallows and (ii) 40 diagnostic studies comprising 10 swallows, using a drop-down menu that followed the Chicago Classification system. Data were presented in a standardized format with pressure contours, without a summary of HRM metrics. The sequence of swallows was fixed for each user but randomized between users to avoid sequence bias. Participants were blinded to other entries. (i) Individual swallows were assessed by 18 practitioners (13 institutions). Consensus agreement (≤2/18 dissenters) was present for most cases of normal peristalsis and achalasia, but not for cases of peristaltic dysmotility. (ii) Diagnostic studies were assessed by 36 practitioners (28 institutions). Overall inter-observer agreement was 'moderate' (kappa 0.51), being 'substantial' (kappa > 0.7) for achalasia type I/II and no lower than 'fair-moderate' (kappa > 0.34) for any diagnosis. Overall agreement was somewhat higher among those that had performed >400 studies (n = 9; kappa 0.55) and 'substantial' among experts involved in the development of the Chicago Classification system (n = 4; kappa 0.66). This prospective, randomized, and blinded study reports an acceptable level of inter-observer agreement for HRM diagnoses across the full spectrum of esophageal motility disorders for a large group of clinicians working in a range of medical institutions. Suboptimal agreement for diagnosis of peristaltic motility disorders highlights the contribution of objective HRM metrics.
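The agreement statistic quoted above (e.g. kappa 0.51, 'moderate') is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal two-rater version, with toy diagnosis labels, looks like this:

```python
from collections import Counter

# Minimal two-rater Cohen's kappa. The toy ratings below are illustrative,
# not data from the study.

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    # Observed agreement: fraction of identical labels.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal label frequencies.
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

The study's multi-rater setting would typically use Fleiss' kappa, a generalization of the same idea; the two-rater form above shows the chance-correction mechanism.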
Abstract:
Assessing users' benefit from a transport policy implementation has been studied by many researchers using theoretical or empirical measures. However, few of them measure users' benefit in a way that differs from consumer surplus. Therefore, this paper assesses a new measure of user benefits, obtained by weighting consumer surplus so as to include an equity assessment, for different transport policies simulated in a dynamic middle-term LUTI model adapted to the case study of Madrid. Three different transport policies, including road pricing, a parking charge and a public transport improvement, have been simulated through the Metropolitan Activity Relocation Simulator (MARS), the calibrated LUTI model for Madrid. A social welfare function (WF) is defined using a cost-benefit analysis function that mainly includes the costs and benefits of users and operators of the transport system. In particular, the part of the welfare function concerning users (i.e., consumer surplus) is modified by a compensating weight (CW), which represents the inverse of the household income level. Based on the modified social welfare function, the effects on the measure of user benefits are estimated and compared with the old WF's results as well. The analysis shows that road pricing has a negative effect on user benefits, especially for low-income users. Indeed, the road pricing and parking charge implementations turn out to be regressive policies, especially in the long term. The public transport improvement scenario brings more positive effects on low-income users' benefits. The integrated policy scenario (road pricing plus increased public transport services) is the one that yields the most user benefits. The results of this research could be key to understanding the relationship between transport system policies and the distribution of user benefits in a metropolitan context.
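A minimal sketch of the compensating-weight idea: each group's consumer surplus is scaled by the inverse of its income level, so low-income users' benefits weigh more in the welfare function. The normalization by a reference income is an illustrative assumption, not the paper's exact formula.

```python
# Sketch of a compensating weight (CW) applied to consumer surplus.
# CW = reference_income / income, i.e. the inverse of income level;
# the reference income used for normalization is hypothetical.

def weighted_consumer_surplus(groups, reference_income):
    """Sum each group's surplus scaled by reference_income / income."""
    return sum(g["surplus"] * reference_income / g["income"] for g in groups)

groups = [
    {"income": 20000, "surplus": 50.0},   # low-income group, weight 2.0
    {"income": 80000, "surplus": 50.0},   # high-income group, weight 0.5
]
# Unweighted surplus is 100.0; the weighted sum counts the low-income
# group's identical surplus four times as heavily as the high-income one's.
```

This is how an equal nominal surplus can yield different welfare contributions, which is the mechanism behind the regressivity finding for road pricing.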
Abstract:
Estudios recientes promueven la integración de estímulos multisensoriales en activos multimedia con el fin de mejorar la experiencia de usuario mediante la estimulación de nuevos sentidos, más allá de la tradicional experiencia audiovisual. Del mismo modo, varios trabajos proponen la introducción de componentes de interacción capaces de complementar con nuevas características, funcionalidades y/o información la experiencia multimedia. Efectos sensoriales basados en el uso de nuevas técnicas de audio, olores, viento, vibraciones y control de la iluminación, han demostrado tener un impacto favorable en la sensación de Presencia, en el disfrute de la experiencia multimedia y en la calidad, relevancia y realismo de la misma percibidos por el usuario. Asimismo, los servicios basados en dos pantallas y la manipulación directa de (elementos en) la escena de video tienen el potencial de mejorar la comprensión, la concentración y la implicación proactiva del usuario en la experiencia multimedia. El deporte se encuentra entre los géneros con mayor potencial para integrar y explotar éstas soluciones tecnológicas. Trabajos previos han demostrado asimismo la viabilidad técnica de integrar éstas tecnologías con los estándares actualmente adoptados a lo largo de toda la cadena de transmisión de televisión. De este modo, los sistemas multimedia enriquecidos con efectos sensoriales, los servicios interactivos multiplataforma y un mayor control del usuario sobre la escena de vídeo emergen como nuevas formas de llevar la multimedia immersiva e interactiva al mercado de consumo de forma no disruptiva. Sin embargo, existen numerosas interrogantes relativas a los efectos sensoriales y/o soluciones interactivas más adecuadas para complementar un contenido audiovisual determinado o a la mejor manera de de integrar y combinar dichos componentes para mejorar la experiencia de usuario de un segmento de audiencia objetivo. 
Además, la evidencia científica sobre el impacto de factores humanos en la experiencia de usuario con estas nuevas formas de immersión e interacción en el contexto multimedia es aún insuficiente y en ocasiones, contradictoria. Así, el papel de éstos factores en el potencial de adopción de éstas tecnologías ha sido amplia-mente ignorado. La presente tesis analiza el impacto del audio binaural, efectos sensoriales (de iluminación y olfativos), interacción con objetos 3D integrados en la escena de vídeo e interacción con contenido adicional utilizando una segunda pantalla en la experiencia de usuario con contenidos de deporte. La posible influencia de dichos componentes en las variables dependientes se explora tanto a nivel global (efecto promedio) como en función de las características de los usuarios (efectos heterogéneos). Para ello, se ha llevado a cabo un experimento con usuarios orientado a explorar la influencia de éstos componentes immersivos e interactivos en dos grandes dimensiones de la experiencia multimedia: calidad y Presencia. La calidad de la experiencia multimedia se analiza en términos de las posibles variaciones asociadas a la calidad global y a la calidad del contenido, la imagen, el audio, los efectos sensoriales, la interacción con objetos 3D y la interacción con la segunda pantalla. El posible impacto en la Presencia considera dos de las dimensiones definidas por el cuestionario ITC-SOPI: Presencia Espacial (Spatial Presence) e Implicación (Engagement). 
Finally, individuals are characterized according to the following affective, cognitive and behavioral attributes: preferences and habits in relation to the content, degree of familiarity with the technologies integrated into the system, tendency to become emotionally involved, tendency to concentrate on an activity while blocking out external stimuli, and the Big Five personality traits: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience. At the overall level, our study reveals that participants prefer binaural audio over stereo and that the sensory effects produce a significant increase in the level of Spatial Presence perceived by users. In addition, the experimental manipulations revealed a wide variety of heterogeneous effects. Interestingly, these effects are not evenly distributed across the quality and Presence measures. Our data reveal a generalized impact of binaural audio on most of the quality and Presence measures analyzed. In contrast, the influence of the sensory effects and of the interaction with the second screen concentrates on the Presence measures and on the quality measures, respectively. The magnitude of the heterogeneous effects identified is modulated by the following personal characteristics: preferences in relation to the content, frequency with which the user watches similar content, familiarity with the technologies integrated into the demonstrator, gender, tendency to become emotionally involved, tendency to concentrate on an activity while blocking out external stimuli, and levels of agreeableness, conscientiousness and openness to experience.
The personal characteristics considered in our experiment explained most of the variation in the dependent variables, thus confirming the important (and frequently ignored) role of individual differences in the multimedia experience. Among the user characteristics with the most generalized impact are preferences in relation to the content, the degree of familiarity with the technologies integrated into the system and the tendency to become emotionally involved. In particular, the first two factors seem to generate a conflict of attention towards the content versus the technical features/elements of the system, respectively. Likewise, the multimedia experience of football fans seems to be modulated by emotional processes, whereas for non-fans cognitive processes predominate, in particular those directly related to the perception of quality.

Abstract

Recent studies encourage the integration of multi-sensorial stimuli into multimedia assets to enhance the user experience by stimulating senses beyond sight and hearing. Similarly, the introduction of multi-modal interaction components that complement the multimedia experience with new features, functionalities and/or information is promoted. Sensory effects such as odor, wind, vibration and light effects, as well as enhanced audio quality, have been found to favour media enjoyment and to have a positive influence on the sense of Presence and on the perceived quality, relevance and reality of a multimedia experience. Two-screen services and direct manipulation of (elements in) the video scene have the potential to enhance user comprehension, engagement and proactive involvement in the media experience. Sports is among the genres that could benefit the most from these solutions.
Previous works have demonstrated the technical feasibility of implementing and deploying end-to-end solutions integrating these technologies into legacy systems. Thus, sensorially-enhanced media, two-screen services and increased user control over the displayed scene emerge as means to deliver a new form of immersive and interactive media experiences to the mass market in a non-disruptive manner. However, many questions remain concerning issues such as the specific interactive solutions or sensory effects that can best complement a given audiovisual content, or the best way to integrate and combine them to enhance the user experience of a target audience segment. Furthermore, scientific evidence on the impact of human factors on the user experience with these new forms of immersive and interactive media is still insufficient and sometimes contradictory. Thus, the role of these factors in the potential adoption of these technologies has been widely ignored. This thesis analyzes the impact of binaural audio, sensory (light and olfactory) effects, interaction with 3D objects integrated into the video scene and interaction with additional content using a second screen on the sports media experience. The potential influence of these components on the dependent variables is explored both at the overall level (average effect) and as a function of users' characteristics (heterogeneous effects). To these aims, we conducted an experimental study exploring the influence of these immersive and interactive elements on the quality and Presence dimensions of the media experience. Along the quality dimension, we look for possible variations in the quality scores assigned to the overall media experience and to the media components: content, image, audio, sensory effects, interaction with 3D objects and interaction using the tablet device.
The potential impact on Presence is analyzed by looking at two of the four dimensions defined by the ITC-SOPI questionnaire, namely Spatial Presence and Engagement. The users' characteristics considered encompass the following personal affective, cognitive and behavioral attributes: preferences and habits in relation to the content, knowledge of the involved technologies, tendency to get emotionally involved, tendency to get absorbed in an activity and block out external distractors, and the Big Five personality traits: extraversion, agreeableness, conscientiousness, neuroticism and openness to experience. At the overall level, we found that participants preferred binaural audio over standard stereo audio and that sensory effects significantly increase the level of Spatial Presence. Several heterogeneous effects were also revealed as a result of our experimental manipulations. Interestingly, these effects were not equally distributed across the quality and Presence measures analyzed. Whereas binaural audio was found to have an influence on the majority of the quality and Presence measures considered, the effects of sensory effects and of interaction with additional content through the tablet device concentrate mainly on the Presence dimensions and on the quality measures, respectively. The magnitude of these effects was modulated by individuals' characteristics, such as: preferences in relation to the content, frequency of viewing similar content, knowledge of the involved technologies, gender, tendency to get emotionally involved, tendency to get absorbed, and levels of agreeableness, conscientiousness and openness to experience. The personal characteristics collected in our experiment explained most of the variation in the dependent variables, confirming the frequently neglected role of individual differences in the media experience.
Preferences in relation to the content, knowledge of the involved technologies and tendency to get emotionally involved were among the user variables with the most generalized influence. In particular, the former two features seem to create a conflict in the allocation of attentional resources towards the media content versus the technical features of the system, respectively. Additionally, football fans' experience seems to be modulated by emotional processes, whereas for non-fans cognitive processes (in particular those related to quality judgment) prevail.
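The distinction drawn above between average effects and heterogeneous (trait-moderated) effects can be sketched with a linear model that includes an interaction term between the experimental condition and a user characteristic. The following is a minimal illustrative sketch with synthetic data, not the thesis's actual analysis; the effect sizes, the "fan" trait coding and all numbers are hypothetical.

```python
import numpy as np

# Illustrative sketch: estimating an average effect of a manipulation
# (binaural vs. stereo audio) and a heterogeneous effect moderated by a
# user trait (hypothetical: football fan), via OLS with an interaction
# term. All data below are synthetic.
rng = np.random.default_rng(0)
n = 2000
binaural = rng.integers(0, 2, n)   # 1 = binaural audio, 0 = stereo
fan = rng.integers(0, 2, n)        # 1 = football fan, 0 = non-fan

# Synthetic quality ratings: average effect 0.5, extra +0.8 for fans.
quality = 3.0 + 0.5 * binaural + 0.8 * binaural * fan + rng.normal(0, 0.3, n)

# Design matrix: intercept, main effects, and the condition-by-trait
# interaction capturing the heterogeneous effect.
X = np.column_stack([np.ones(n), binaural, fan, binaural * fan])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

intercept, avg_effect, trait_effect, hetero_effect = coef
print(f"average effect of binaural audio: {avg_effect:.2f}")
print(f"heterogeneous effect (fans only): {hetero_effect:.2f}")
```

A non-zero interaction coefficient indicates that the manipulation's impact differs across user segments, which is the pattern the experiment probes for each personal characteristic.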