30 results for Set-piece
at Universidad Politécnica de Madrid
Abstract:
The performance of football teams varies constantly due to the dynamic nature of the sport, while typical performance and its spread can be represented by profiles combining different performance-related variables based on data from multiple matches. The current study uses a profiling technique to evaluate and compare the match performance of football teams in the UEFA Champions League, incorporating three situational variables (strength of team and opponent, match outcome and match location). Match statistics of 72 teams and 496 games across four seasons (2008-09 to 2012-13) of this competition were analysed. Sixteen performance-related events were included: shots, shots on target, shots from open play, shots from set pieces, shots from counter attacks, passes, pass accuracy (%), crosses, through balls, corners, dribbles, possession, aerial success (%), fouls, tackles, and yellow cards. Teams were classified into three levels of strength by a k-cluster analysis. Profiles of overall performance, and profiles incorporating the three situational variables, were set up for teams of all three levels of strength by presenting the mean, standard deviation, median, and lower and upper quartiles of the counts of each event to represent typical performances and their spread. Means were compared using one-way ANOVA and independent-sample t tests (for home and away match location differences), and were plotted on the same radar charts after unifying all event counts as standardised scores. The established profiles present the typical performance of football teams of different levels playing in different situations in a straightforward way, providing detailed references for coaches and analysts to evaluate the performance of upcoming opponents as well as of their own teams.
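As an illustration of the standardisation step mentioned above, the sketch below (with made-up numbers, not data from the study) converts per-match event means into z-scores so that events measured on very different scales can share one radar chart:

```python
# Illustrative sketch of the profiling step: raw event counts are converted
# to standardised (z) scores so that events on very different scales
# (e.g. passes vs. yellow cards) fit on one radar chart.
# All values and the event subset are hypothetical.
import numpy as np

events = ["shots", "passes", "possession"]          # subset of the 16 events
league = np.array([[14.2, 480.0, 52.1],             # per-match means, several teams
                   [10.5, 395.0, 47.3],
                   [17.8, 560.0, 58.9]])

mu = league.mean(axis=0)        # competition-wide mean per event
sigma = league.std(axis=0)      # competition-wide spread per event

team = np.array([15.0, 510.0, 55.0])   # one team's typical counts
z = (team - mu) / sigma                # standardised profile for the radar chart
print(dict(zip(events, z.round(2))))
```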
Abstract:
A new Relativistic Screened Hydrogenic Model has been developed to calculate the atomic data needed to compute the optical and thermodynamic properties of high energy density plasmas. The model is based on a new set of universal screening constants, including nlj-splitting, obtained by fitting to a large database of ionization potentials and excitation energies. This database was built with energies compiled from the National Institute of Standards and Technology (NIST) database of experimental atomic energy levels, and energies calculated with the Flexible Atomic Code (FAC). The screening constants have been computed up to the 5p3/2 subshell using a Genetic Algorithm technique with an objective function designed to minimize both the relative error and the maximum error. To select the best set of screening constants, some additional physical criteria have been applied, based on the reproduction of the filling order of the shells and on obtaining the best ground state configuration. A statistical error analysis has been performed to test the model, which indicated that approximately 88% of the data lie within a ±10% error interval. We validate the model by comparing its results with ionization energies, transition energies, and wave functions computed using sophisticated self-consistent codes, and with experimental data.
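For context, here is a minimal sketch of the kind of objective function such a Genetic Algorithm might minimise, using the leading-order screened-hydrogenic level energy in atomic units; the energy expression (relativistic corrections omitted) and the equal weighting of the two error terms are assumptions for illustration, not the model's actual formulation:

```python
# Hedged sketch: score a candidate set of screening constants against a
# database of reference energies, penalising both the mean relative error
# and the worst-case error, as the abstract describes.
import numpy as np

def energy(Z, n, sigma):
    """Leading-order screened hydrogenic energy, -(Z - sigma)^2 / (2 n^2), in hartree."""
    return -((Z - sigma) ** 2) / (2.0 * n ** 2)

def objective(sigmas, refs):
    """refs: list of (Z, n, subshell_index, reference_energy) tuples."""
    rel = np.array([abs(energy(Z, n, sigmas[k]) - e_ref) / abs(e_ref)
                    for Z, n, k, e_ref in refs])
    return rel.mean() + rel.max()   # trade off average vs. maximum error
```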
Abstract:
In an effort to research and document relevant sloshing-type phenomena, a series of experiments has been conducted. The aim of this paper is to describe the setup and data processing of these experiments. A sloshing tank is subjected to angular motion; as a result, pressure records are obtained at several locations, together with the motion data, torque and a collection of image and video information. The experimental rig and the data acquisition systems are described, and useful information for experimental sloshing research practitioners is provided. This information relates to the liquids used in the experiments, the dyeing techniques, tank building processes, synchronization of the acquisition systems, etc. A new procedure for reconstructing experimental data that takes experimental uncertainties into account is presented. This procedure is based on a least-squares spline approximation of the data. Based on a deterministic approach to the first sloshing wave impact event in a sloshing experiment, an uncertainty analysis procedure for the associated first pressure peak value is described.
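A minimal sketch of the least-squares spline idea behind the reconstruction procedure, using SciPy's LSQUnivariateSpline on synthetic noisy pressure data; the knot placement and noise level are illustrative choices, not the paper's settings:

```python
# Noisy pressure samples are approximated by a least-squares cubic spline
# rather than interpolated exactly, so measurement uncertainty is not
# chased point by point.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

t = np.linspace(0.0, 2.0, 400)                       # time samples [s]
p = np.sin(6 * t) + 0.05 * np.random.randn(t.size)   # synthetic noisy pressure

knots = np.linspace(0.1, 1.9, 15)                    # interior knots
spline = LSQUnivariateSpline(t, p, knots, k=3)       # cubic least-squares fit
p_smooth = spline(t)                                 # reconstructed signal
```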
Abstract:
Advanced liver surgery requires precise pre-operative planning, where liver segmentation and remnant liver volume are key elements to avoid post-operative liver failure. In that context, level-set algorithms have achieved better results than other approaches, especially with altered liver parenchyma or in cases with previous surgery. In order to improve functional liver parenchyma volume measurements, in this work we propose two strategies to enhance previous level-set algorithms: an optimal multi-resolution strategy with fine-detail correction and adaptive curvature, as well as an additional semiautomatic step imposing local curvature constraints. Results show more accurate segmentations, especially in elongated structures, detecting internal lesions and avoiding leakage into neighbouring structures.
Abstract:
We propose a level-set based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on local and global information, and has been implemented using an efficient narrow-band technique. For each boundary pixel we calculate its dynamics according to its gray level, its neighborhood and geometric properties established by the training shapes. We also propose a criterion for shape alignment based on affine transformation, using an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
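To make the narrow-band mechanics concrete, here is a rough sketch of one curvature-driven evolution step restricted to pixels near the contour; the speed term, band width and time step are placeholders rather than the paper's actual scheme:

```python
# One narrow-band level-set update: only pixels close to the zero level set
# (|phi| < band) are evolved, with a curvature term regularising the contour.
import numpy as np

def curvature(phi):
    """Mean curvature of the level sets of phi, div(grad phi / |grad phi|)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    nx, ny = gx / norm, gy / norm
    return np.gradient(nx, axis=1) + np.gradient(ny, axis=0)

def narrow_band_step(phi, speed, band=3.0, dt=0.2):
    band_mask = np.abs(phi) < band          # update only near the contour
    gy, gx = np.gradient(phi)
    grad = np.sqrt(gx**2 + gy**2)
    update = dt * (speed + curvature(phi)) * grad
    phi_new = phi.copy()
    phi_new[band_mask] += update[band_mask]
    return phi_new
```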
Abstract:
This project covers the design and acoustic conditioning of a cinema according to the standards set by the SMPTE. First, the cinema hall is designed, taking into consideration the seat distribution and the screen dimensions, which establish the shape and size of the room and the correct location for the projector. The acoustic conditioning of the cinema is then addressed, with the choice of appropriate materials to achieve an optimum reverberation time. The next step is the selection of the most suitable electro-acoustic equipment and its positioning throughout the room, followed by a study of all the relevant parameters to ensure perfect listening conditions in the cinema. The specific video equipment is then chosen, bearing in mind the 3D projection system used, and installed in the theatre. A wiring diagram is given for each part, for both audio and video. All equipment and adjustable parameters of the room, both audio and video, follow the recommendations established by the SMPTE for correct viewing and listening, as does the design of the cinema itself. To carry out the steps described above, the EASE 4.3 simulation program is used to adjust the most significant parameters and verify that the room complies with the listening conditions determined by the standard.
A detailed budget is included for all the equipment and materials used, as well as the labour costs. Plans of the room showing all the measurements established during the project are attached; these were produced with the Google SketchUp design program. Finally, data sheets are provided for each piece of equipment installed in the room, detailing its specifications and operating mode.
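As a side note on the acoustic conditioning step, here is a hedged sketch of Sabine's classic reverberation-time estimate, with made-up room figures; the actual project tunes this with the EASE 4.3 simulation rather than a hand formula:

```python
# Sabine's estimate: RT60 = 0.161 * V / A, where V is the room volume [m^3]
# and A the total absorption [m^2 sabins]. Room data below are invented.
def rt60_sabine(volume_m3, surfaces):
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# (area in m^2, absorption coefficient) for walls, ceiling, seats, screen
room = [(420.0, 0.30), (180.0, 0.45), (250.0, 0.60), (60.0, 0.20)]
print(f"RT60 ≈ {rt60_sabine(2400.0, room):.2f} s")
```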
Abstract:
The Set-Sharing domain has been widely used to infer at compile-time interesting properties of logic programs such as occurs-check reduction, automatic parallelization, and finite-tree analysis. However, performing abstract unification in this domain requires a closure operation that increases the number of sharing groups exponentially. Much attention has been given to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present a novel approach to Set-Sharing: we define a new representation that leverages the complement (or negative) sharing relationships of the original sharing set, without loss of accuracy. Intuitively, given an abstract state sh_V over the finite set of variables of interest V, its negative representation is P(V) \ sh_V, the complement of sh_V in the powerset of V. Using this encoding during analysis dramatically reduces the number of elements that need to be represented in the abstract states and during abstract unification as the cardinality of the original set grows toward 2^|V|. To further compress the number of elements, we express the set-sharing relationships through a set of ternary strings that compacts the representation by eliminating redundancies among the sharing sets. Our experiments show that our approach can compress the number of relationships, significantly reducing the memory usage and running time of all abstract operations, including abstract unification.
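A small sketch of the negative encoding just described, with hypothetical variable names; a real analysis would use the compact ternary-string encoding rather than materialising the powerset:

```python
# The negative representation is the complement of the sharing set within
# the powerset of the variables of interest: the fuller sh gets, the
# smaller its complement becomes.
from itertools import combinations

def powerset(vs):
    """All non-empty subsets of vs, as frozensets."""
    return {frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)}

V = {"X", "Y", "Z"}
sh = {frozenset({"X"}), frozenset({"X", "Y"})}   # a small sharing set
neg_sh = powerset(V) - sh                        # its negative representation

print(len(sh), len(neg_sh))   # 2 vs. 5 here; the ratio flips as sh grows
```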
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the new abstract functions required by standard top-down analyses, both for sharing alone and also for the case of including freeness in addition to sharing. We use cliques both as an alternative representation and as a widening, defining several widening operators. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. We also derive useful conclusions regarding the interactions between thresholds, precision, efficiency and the cost of widening. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity using classical sharing representations.
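A hedged sketch of the clique-set widening idea: sharing information is kept as a pair (cliques, groups), and widening collapses the explicit groups into one worst-case clique once a size threshold is exceeded. The threshold and policy below are illustrative, not the operators defined in the paper:

```python
# Widening for a (cliques, groups) pair: past the threshold, replace the
# explicit sharing groups by a single clique covering their variables.
# Any subset of a clique is then implicitly considered a possible sharing.
def widen(cliques, groups, threshold=64):
    if len(groups) <= threshold:
        return cliques, groups
    covered = frozenset().union(*groups)     # all variables that may share
    return cliques | {covered}, set()

cl, gr = widen(set(), {frozenset({"X", "Y"}), frozenset({"Y", "Z"})}, threshold=1)
print(cl, gr)   # ({frozenset({'X', 'Y', 'Z'})}, set())
```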
Abstract:
Set-Sharing analysis, the classic Jacobs and Langen domain, has been widely used to infer several interesting properties of programs at compile-time, such as occurs-check reduction, automatic parallelization, finite-tree analysis, etc. However, performing abstract unification over this domain implies the use of a closure operation which makes the number of sharing groups grow exponentially. Much attention has been given in the literature to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present two novel alternative representations for the traditional set-sharing domain, tSH and tNSH, which efficiently compress the sharing groups into fewer elements, enabling more efficient abstract operations, including abstract unification, without any loss of accuracy. Our experimental evaluation shows that both representations can dramatically reduce the number of sharing groups, suggesting they are more practical solutions towards scalable set-sharing.
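To illustrate the compression principle, here is a toy version of a ternary-string merge: two bit strings (one bit per variable, 1 meaning the variable is in the sharing group) that differ in exactly one position collapse into a single pattern with a '*' wildcard there. The actual tSH/tNSH operations are considerably more involved:

```python
# Merge two equal-length bit strings into one ternary pattern when they
# differ in exactly one position; return None otherwise.
def merge_once(a, b):
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "*" + a[i + 1:]
    return None

# groups over variables (X, Y, Z): "110" encodes {X, Y}
print(merge_once("110", "111"))   # "11*" covers both groups in one pattern
```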
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the abstract functions required by standard top-down analyses, both for sharing alone and also for the case of including freeness in addition to sharing. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity using classical sharing representations.
Abstract:
Abstract is not available.
Abstract:
Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables definitely do not share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways, including supporting multivariance, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances do not share and which ones are definitively null, and which uses the classes to refine the static information when inheritance is present. The use of a set sharing abstraction allows a more precise representation of the existing sharings and is crucial in achieving precision during interprocedural analysis. Carrying the domains in a combined way facilitates the interaction among them in the presence of multivariance in the analysis. We show through examples and experimentally that both the set sharing part of the domain as well as the combined domain provide more accurate information than previous work based on pair sharing domains, at reasonable cost.
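A heavily simplified sketch of what a combined Set Sharing + Nullity + Classes abstract state might look like; the field names and the query are hypothetical, for illustration only:

```python
# One record tracking which variables may share, which are definitely null,
# and which classes each variable may hold at a program point.
from dataclasses import dataclass, field

@dataclass
class AbsState:
    sharing: set = field(default_factory=set)    # set of frozensets of vars
    nullity: set = field(default_factory=set)    # vars known to be null
    classes: dict = field(default_factory=dict)  # var -> possible classes

    def definitely_not_share(self, x, y):
        """x and y cannot share if either is null or no group holds both."""
        if x in self.nullity or y in self.nullity:
            return True
        return not any({x, y} <= g for g in self.sharing)

s = AbsState(sharing={frozenset({"a", "b"})}, nullity={"c"})
print(s.definitely_not_share("a", "c"), s.definitely_not_share("a", "b"))
```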
Abstract:
Finding useful sharing information between instances in object-oriented programs has recently been the focus of much research. The applications of such static analysis are multiple: by knowing which variables share in memory we can apply conventional compiler optimizations, find coarse-grained parallelism opportunities, or, more importantly, verify certain correctness aspects of programs even in the absence of annotations. In this paper we introduce a framework for deriving precise sharing information based on abstract interpretation for a Java-like language. Our analysis achieves precision in various ways. The analysis is multivariant, which allows separating different contexts. We propose a combined Set Sharing + Nullity + Classes domain which captures which instances share and which ones do not or are definitively null, and which uses the classes to refine the static information when inheritance is present. Carrying the domains in a combined way facilitates the interaction among the domains in the presence of multivariance in the analysis. We show that both the set sharing part of the domain as well as the combined domain provide more accurate information than previous work based on pair sharing domains, at reasonable cost.
Abstract:
This doctoral thesis focuses on the modeling of multimedia systems to create personalized recommendation services based on the analysis of users' audiovisual consumption. Research focuses on the characterization of both users' audiovisual consumption and content, specifically images and video. This double characterization converges into a hybrid recommendation algorithm, adapted to different application scenarios covering different specificities and constraints. Hybrid recommendation systems use both content and user information as input data, applying the knowledge gained from the analysis of these data as the initial step to feed the algorithms that generate personalized recommendations. Regarding user information, this doctoral thesis focuses on the analysis of audiovisual consumption to infer implicitly acquired preferences. The inference process is based on a new probabilistic model proposed in the text, which takes into account qualitative and quantitative consumption factors on the one hand, and external factors such as the zapping factor or the company factor on the other. As for content information, this research focuses on the modeling of descriptors and aesthetic characteristics which influence the user and are thus useful for the recommendation system. Similarly, the automatic extraction of these descriptors from the audiovisual piece without excessive computational cost has been considered a priority, in order to ensure applicability to different real scenarios. Finally, a new content-based recommendation algorithm has been created from the previously acquired information, i.e. user preferences and content descriptors. This algorithm has been hybridized with a collaborative filtering algorithm from the current state of the art, so as to compare the efficiency of the hybrid recommender with the individual recommendation techniques (different hybridization techniques from the state of the art have been studied for suitability). The content-based recommendation focuses on the influence of aesthetic characteristics on users. The heterogeneity of the possible users of these kinds of systems calls for the use of different criteria and attributes to create effective recommendations. Therefore, the proposed algorithm is adaptable to different perceptions, producing a dynamic representation of preferences to obtain personalized recommendations for each user of the system. The hypotheses of this doctoral thesis have been validated by conducting a set of tests with real users, or by querying databases of user preferences that are available to the scientific community. The thesis is structured around the different research and validation methodologies of the techniques involved: the three central chapters each review the state of the art and validate the developed algorithms and models via self-designed tests. It should be noted that some of these tests are incremental and confirm the validation of previously discussed techniques.
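A heavily hedged sketch of the flavour of implicit-preference inference described above, where the watched fraction of an item is discounted by context factors such as zapping or company; the weighting scheme below is an assumption for illustration, not the thesis's probabilistic model:

```python
# Watched fraction as evidence of interest, discounted by context factors
# that reduce confidence in the inference (all weights are illustrative).
def implicit_preference(watched_s, duration_s, zapping=0.0, company=0.0):
    ratio = min(watched_s / duration_s, 1.0)        # quantitative consumption
    confidence = (1.0 - zapping) * (1.0 - company)  # context lowers certainty
    return ratio * confidence

print(implicit_preference(1500, 3600, zapping=0.2, company=0.1))
```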
Abstract:
Erosion potential and the effects of tillage can be evaluated from quantitative descriptions of soil surface roughness. The present study therefore aimed to fill the need for a reliable, low-cost and convenient method to measure that parameter. Based on the interpretation of micro-topographic shadows, this new procedure is primarily designed for use in the field after tillage. The principle underlying shadow analysis is the direct relationship between soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. The results obtained with this method were compared to the statistical indexes used to interpret field readings recorded by a pin meter. The tests were conducted on 4 m² sandy loam and sandy clay loam plots divided into 1 m² subplots tilled with three different tools: chisel, tiller and roller. The highly significant correlation between the statistical indexes and the shadow analysis results, obtained in the laboratory as well as in the field for all soil-tool combinations, proved that both variability (CV) and dispersion (SD) are captured by the new method. This procedure simplifies the interpretation of soil surface roughness and shortens the time involved in field operations by a factor of 12 to 20.
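A minimal sketch of the shadow-analysis principle: under fixed illumination, rougher surfaces cast more shadow, so the shadowed fraction of a thresholded grey-level image can serve as a roughness proxy to correlate with the pin-meter indexes (SD, CV). The threshold and the synthetic image are illustrative:

```python
# Fraction of shadowed pixels in a grey-level image (values in [0, 1]),
# used here as a simple stand-in for a shadow-based roughness measure.
import numpy as np

def shadow_fraction(gray_image, threshold=0.35):
    return (gray_image < threshold).mean()

rng = np.random.default_rng(0)
img = rng.random((256, 256))          # stand-in for a field photograph
print(f"shadowed fraction: {shadow_fraction(img):.2f}")
```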