998 results for media players
Resumo:
Movie distribution on the Internet has become more common in recent years, along with fast broadband connections. The problem so far has been that most movie distribution on the Internet has been illegal. This is about to change, because the major film distributors are finally starting to rent and sell more and more movies on the Internet, owing to their growing confidence in new copy protection methods. Online distribution is still of only minor financial importance to the movie industry, but it is growing rapidly, as is investment in new business models and distribution methods in the USA and Europe. This thesis examines the basic concepts of online movie distribution, such as distribution techniques and copy protection, the main companies that rent and sell movies on the Internet and their business models, the effects of movie piracy, and non-commercial distribution channels. The intention was to provide the reader with an overview of the different aspects of movie distribution on the Internet and its future. The conclusion was that movie distribution on the Internet will play a bigger financial role in the future, although it is still too early to say just how significant it will be. We will probably see several parallel distribution techniques, such as peer-to-peer networks and streaming servers, distributing and broadcasting movies to different end-user platforms such as television, PCs and portable media players. Internet distribution will not revolutionize movie distribution in the next couple of years, but it will enable new, efficient and inexpensive ways to distribute movies globally, which will in turn increase revenue opportunities, especially for small independent movie producers and distributors.
Resumo:
My thesis examines the history and current state of Internet cinema. I review the most common playback applications, QuickTime Player, Windows Media Player, RealPlayer and Flash Player, as well as the Internet film sites YouTube, IFILM, AtomFilm and Pixoff.net. I also discuss Flash films, how the Internet has created an entirely new form of cinema, and the Dogma 2001: The New Rules for Internet Cinema rules published on the Internet. I also briefly consider how Internet cinema has influenced traditional film. I address the release on the Internet of traditional films made to be watched on the big screen, and examine two films released online, Star Wreck and The Silent City, going through why they were published on the Internet. Star Wreck is an exceptionally ambitious amateur project from Tampere, while The Silent City is a short film by the Irish 3D animator Ruairi Robinson, made as a work sample for Hollywood. I also cover director David Lynch's website, because it is one of a kind on the Internet and an excellent environment for releasing films. I then describe the origins and production process of my own horror film Uhripuu. The idea for Uhripuu was born in the United Kingdom at the beginning of 2004, and the screenplay reached its final form in spring 2006. I discuss why and how it is being released on the Internet, and describe the content of the film's website and its division into sections: Tarina (Story), Rooleissa (Cast), Tekijät (Credits), Uhripuista (On Sacrificial Trees), Traileri (Trailer), Näin tehtiin (Making Of), Kuvagalleria (Image Gallery), E-kortti (E-card), and Näe/Katso elokuva (See/Watch the Film). Because my training is specifically in online communication rather than audiovisual communication, the production of Uhripuu has been a film school for me, and while making it I have learned a great deal about filmmaking. The film was shot at HD resolutions so that the image would not suffer too much in post-production. As for the shared future of film and the Internet, it appears that these two media are merging with each other more and more.
Resumo:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge. This is primarily due to two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are increasingly programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (toolkits, APIs and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more important, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard, so there is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
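As a purely illustrative aside (not taken from the dissertation), the short Python sketch below shows the kind of decision such allocation-and-scheduling middleware has to take: mapping precedence-constrained tasks onto a fixed set of processing elements with a greedy list scheduler. The task graph, durations and processor count are hypothetical, and the dissertation pursues exact, optimal mapping methods rather than a heuristic of this sort.

# Minimal illustrative sketch (not the dissertation's middleware): greedy list
# scheduling of precedence-constrained tasks onto the processing elements of an
# MPSoC. Task durations, dependencies and processor count are hypothetical.
def list_schedule(durations, deps, num_procs):
    """Repeatedly pick a ready task and place it on the processor where it can
    start earliest, honouring precedence constraints.

    durations: {task: execution_time}
    deps:      {task: set of predecessor tasks}
    Returns    {task: (processor, start_time, end_time)}
    """
    proc_free = [0.0] * num_procs            # when each processing element is next idle
    finish, schedule = {}, {}
    remaining = set(durations)
    while remaining:
        # tasks whose predecessors have all finished
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, ()))]
        # take the ready task whose data dependencies resolve earliest
        task = min(ready, key=lambda t: max((finish[p] for p in deps.get(t, ())), default=0.0))
        earliest = max((finish[p] for p in deps.get(task, ())), default=0.0)
        proc = min(range(num_procs), key=lambda p: max(proc_free[p], earliest))
        start = max(proc_free[proc], earliest)
        finish[task] = start + durations[task]
        proc_free[proc] = finish[task]
        schedule[task] = (proc, start, finish[task])
        remaining.remove(task)
    return schedule

# Hypothetical 5-task graph mapped onto 2 processors.
durations = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2}
deps = {"C": {"A"}, "D": {"A", "B"}, "E": {"C", "D"}}
print(list_schedule(durations, deps, num_procs=2))

An exact approach would instead explore (or prune) the full space of mappings, which is what allows the optimality gap discussed next to be closed.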
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications onto multiprocessor architectures. This is a well-known problem in the literature: such optimization problems are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches in order to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, the optimization problems become more tractable, sometimes even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real hardware-software platform. The main issue with heuristic, or more generally incomplete, search is that it introduces an optimality gap of unknown size: it provides very limited or no information about the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap by formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, by developing novel mapping algorithms that deterministically find optimal solutions, and by implementing the software infrastructure that developers need to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor. Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smartphones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits and is typically proportional to the panel area. As a result, its contribution is also likely to be considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications that support low-power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim to decrease the backlight level while compensating for the resulting luminance reduction, and the perceived quality degradation, using pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform the pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement a hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
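As an illustrative aside (a simplification, not the dissertation's hardware-assisted method), the sketch below models the basic idea of backlight dimming with concurrent pixel compensation: perceived luminance is approximated as backlight level times pixel value, the backlight is dimmed as far as a small clipping budget allows, and pixel values are scaled up to compensate. Function names, the linear luminance model and the clipping budget are all hypothetical.

# Illustrative software model only; the dissertation offloads the pixel
# compensation to the hardware image-processing unit of the application processor.
import numpy as np

def compensate(frame, dim_factor):
    """Scale pixel values up to offset a backlight dimmed to `dim_factor` (0..1].

    Perceived luminance is approximated as backlight * pixel_value, so dividing
    pixel values by the dimming factor preserves the perceived brightness of most
    pixels; the brightest pixels saturate (clip).
    """
    frame = frame.astype(np.float32) / 255.0
    compensated = np.clip(frame / dim_factor, 0.0, 1.0)
    return (compensated * 255.0).astype(np.uint8)

def pick_dim_factor(frame, clip_budget=0.01):
    """Choose the strongest dimming such that at most `clip_budget` of the pixels
    would saturate after compensation (a simple QoS-style constraint); never dim
    below 20% of full backlight power in this toy model."""
    luma = frame.astype(np.float32) / 255.0
    return float(np.clip(np.quantile(luma, 1.0 - clip_budget), 0.2, 1.0))

# Hypothetical 8-bit greyscale frame.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
dim = pick_dim_factor(frame)
out = compensate(frame, dim)
print(f"backlight scaled to {dim:.2f} of full power")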
The proposed approach improves on CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications.
Thesis Overview. The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full-system power estimation. Chapter 4 targets the allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support; we tackle the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while Chapter 6 takes applications with conditional task graphs into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
Resumo:
The Personal Health Assistant Project (PHA) is a pilot system implementation sponsored by the Kozani Region Governors' Association (KRGA) and installed in one of the two major public hospitals of the city of Kozani. PHA is intended to demonstrate how a secure, networked, multipurpose electronic health and food-benefits digital signage system can transform common TV sets inside patient homes or hospital rooms into health care media players. The system aims to facilitate information sharing and improve administrative efficiency among private doctors, public health care providers, informal caregivers and private nutrition-program companies, while placing individual patients firmly in control of the information at hand. This case evaluation of the PHA demonstration is intended to provide critical information to other decision makers considering implementing PHA or related digital signage technology at institutions and public hospitals around the globe.
Resumo:
This research looks at how the shift in the status of Egyptian bloggers from underground dissident voices to mainstream political and media players affected the plurality they add to the public space for discourse in Egypt's authoritarian setting. The role of the internet, and more recently social media and bloggers, in democratic transition has been studied by various media scholars since the introduction of the worldwide web and especially after the Egyptian and Tunisian uprisings of 2011. But no work has been done to study how bringing those once-underground bloggers into the public and media spotlight affected the nature of the blogosphere and the bloggers themselves. Star bloggers were not only covered by the media after January 25th, 2011, they also started joining the media as column writers, a move that had various effects on them and the blogosphere but was never examined in media studies. The plurality the blogosphere adds to the Egyptian public space for discourse in light of those changes, as well as in light of the financial and practical sustainability of blogging, was hence never examined in a context similar to Egypt's. Guided by modified theories of the public sphere and theories of hegemony and manufacturing consent, I look at whether bloggers have been co-opted into the historical bloc in the process of renewing the social order and how this affects them and the online sphere. Also, guided by theories of power and media elites, I look at bloggers' backgrounds to assess whether they come from power elites and are transforming into media elites, thus limiting the plurality of the online sphere. Finally, guided by theoretical work on institutionalizing and commercializing the internet, I look at how those shifts into the mainstream affect the independence and freedom of blogs and microblogs. The research uses a comparative design to assess how those changes affect prominent versus less prominent bloggers and to compare their backgrounds. The study uses quantitative content analysis and framing analysis of chosen media outlets, and interviews with bloggers, marketers and media professionals. The findings trace an increase in media coverage of bloggers after January 25th, 2011, especially in the prominent bloggers category, and an overall positive framing of bloggers after the uprising. This led to the mainstreaming of bloggers into the media as well as into public work, which had various implications for the freedom they had over their content and voice, both online and offline. It also points to a dramatic decrease in bloggers' activity on their blogs in favour of mainstream and social media, driven by star bloggers becoming more career-oriented and by their failure to make blogs financially sustainable. The findings also indicate that more prominent bloggers tend to come from more elite backgrounds than others and enjoy luxuries that allow them the time, technology and security to post online. This research concludes that the shifts in bloggers' status after January 25th have limited the plurality they add to the discourse in Egypt.
Resumo:
This paper describes how MPEG-4 object-based video (OBV) can be used to allow selected objects to be inserted into the play-out stream to a specific user, based on a profile derived for that user. The application scenario described here is personalized product placement; the paper considers the value of this application in the current and evolving commercial media distribution market, given the huge emphasis media distributors are currently placing on targeted advertising. This level of application of video content requires a sophisticated content description and metadata system (e.g., MPEG-7). The scenario considers the requirement for global libraries to provide the objects to be inserted into the streams. The paper then considers the commercial trading of objects between the libraries, video service providers, advertising agencies and other parties involved in the service. Consequently, a brokerage of video objects is proposed, based on negotiation and trading using intelligent agents representing the various parties. The proposed Media Brokerage Platform is a multi-agent system structured in two layers. In the top layer there is a collection of coarse-grained agents representing the real-world players (the providers and deliverers of media contents and the market regulator profiler), and in the bottom layer there is a set of finer-grained agents constituting the marketplace (the delegate agents and the market agent). For knowledge representation (domain, strategic and negotiation protocols) we propose a Semantic Web approach based on ontologies. The media component contents should be represented in MPEG-7, and the metadata describing the objects to be traded should follow a specific ontology. The top-layer content providers and deliverers are modelled by intelligent autonomous agents that express their will to transact, buy or sell, media components by registering at a service registry. The market regulator profiler creates, according to the selected profile, a market agent, which, in turn, checks the service registry for potential trading partners for a given component and invites them to the marketplace. The subsequent negotiation and actual transaction are performed by delegate agents in accordance with their profiles and the predefined rules of the market.
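As an illustrative aside, the sketch below mimics the two-layer structure described above with hypothetical Python classes: providers and deliverers register delegate agents at a service registry, and a market agent created for one video object matches sellers and buyers in a single negotiation round. The real platform relies on MPEG-7 content descriptions, ontologies and richer negotiation protocols, none of which are modelled here.

# Hypothetical sketch of the two-layer brokerage idea; class names and the
# one-round matching rule are illustrative, not the paper's actual protocol.
from dataclasses import dataclass

@dataclass
class Offer:
    agent: str
    object_id: str
    price: float

class ServiceRegistry:
    """Top layer: providers and deliverers register their will to buy or sell."""
    def __init__(self):
        self.sellers, self.buyers = [], []
    def register_seller(self, agent): self.sellers.append(agent)
    def register_buyer(self, agent): self.buyers.append(agent)

class DelegateAgent:
    """Bottom layer: negotiates one video object on behalf of a registered party."""
    def __init__(self, name, object_id, limit_price):
        self.name, self.object_id, self.limit_price = name, object_id, limit_price
    def offer(self):
        return Offer(self.name, self.object_id, self.limit_price)

class MarketAgent:
    """Created by the market regulator profiler for one object: collects offers
    from matching delegates and closes a deal at the midpoint price if possible."""
    def __init__(self, registry, object_id):
        self.registry, self.object_id = registry, object_id
    def run(self):
        asks = [a.offer() for a in self.registry.sellers if a.object_id == self.object_id]
        bids = [b.offer() for b in self.registry.buyers if b.object_id == self.object_id]
        if not asks or not bids:
            return None
        best_ask = min(asks, key=lambda o: o.price)
        best_bid = max(bids, key=lambda o: o.price)
        if best_bid.price >= best_ask.price:
            return (best_ask.agent, best_bid.agent, (best_ask.price + best_bid.price) / 2)
        return None

registry = ServiceRegistry()
registry.register_seller(DelegateAgent("GlobalLibrary", "logo_clip_42", 80.0))
registry.register_buyer(DelegateAgent("AdAgency", "logo_clip_42", 100.0))
print(MarketAgent(registry, "logo_clip_42").run())   # ('GlobalLibrary', 'AdAgency', 90.0)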
Resumo:
Simpósio de Informática (INForum 2015), Covilhã, Portugal. Notes: Best paper award nominee.
Resumo:
The ever increasing popularity of social media makes it a promising source for the personalization of gameplay experiences. Furthermore, involving social network friends in a game can greatly enrich the satisfaction of the player and also attract potential new players to a game. This master's thesis describes a social overlay designed for desktop games, called GameNshare. It allows players to easily capture game-related screenshots, videos and stories and share them with multiple social networks. Additionally, it provides asynchronous multiplayer game mechanics that directly integrate social network friends into the game. GameNshare was designed to interact with users in a non-intrusive way, allowing them to be in complete control of what is shared. It prevents the unsolicited sharing of messages, a key problem in social media integration tools, through built-in message monitoring and anti-spam measures. GameNshare was specifically designed for players aged 18 to 25 who are regular users of Twitter and Facebook. It was tested by a group of 10 individuals from the target age range, who were surveyed to capture their insights on the use of the social overlay. The implemented GameNshare features were well received by the testers, who were also helpful in highlighting features for future development. GameNshare's ultimate goal is to make players look and ask for social integration and to allow them to take full advantage of their social communities to improve their gaming experiences.
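As an illustrative aside (hypothetical code, not GameNshare's actual implementation), the sketch below shows the kind of built-in monitoring and anti-spam gate the abstract describes: a post is published only if the user explicitly confirmed it, it is not a recent duplicate, and a per-network rate limit is respected.

# Hypothetical share gate: consent check, duplicate filtering and rate limiting
# before anything is posted to a social network.
import time
import hashlib

class ShareGate:
    def __init__(self, min_interval_s=60.0, history_size=50):
        self.min_interval_s = min_interval_s
        self.history_size = history_size
        self.recent_hashes = []
        self.last_post = 0.0

    def _fingerprint(self, message):
        return hashlib.sha256(message.strip().lower().encode("utf-8")).hexdigest()

    def allow(self, message, user_confirmed):
        """Return True only if the user confirmed the post, it is not a recent
        duplicate, and the per-network rate limit has not been exceeded."""
        if not user_confirmed:
            return False                       # never share without explicit consent
        digest = self._fingerprint(message)
        if digest in self.recent_hashes:
            return False                       # duplicate / spam-like repost
        now = time.time()
        if now - self.last_post < self.min_interval_s:
            return False                       # rate limit per social network
        self.recent_hashes = (self.recent_hashes + [digest])[-self.history_size:]
        self.last_post = now
        return True

gate = ShareGate()
print(gate.allow("Beat the boss on level 3!", user_confirmed=True))   # True
print(gate.allow("Beat the boss on level 3!", user_confirmed=True))   # False (duplicate)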
Resumo:
In this work project we discuss the advantages and disadvantages of social media as a marketing tool. Four international cases were analyzed to provide anecdotal evidence of how social and viral marketing have been used by four firms in very different industries. We reviewed empirical evidence on the topic to discuss the main components of viral marketing. We concluded that positive (electronic) word of mouth, short response times and seeding through high network-value customers are the main drivers of the success of a viral marketing campaign. We also conducted a study of the Portuguese telecommunications industry, in particular the mobile segment. We found that the three main players operating in this market have been using social media successfully as a marketing tool in a strategic approach to the 14-25 year-old segment.
Resumo:
The study examined coaches' usage of text-based computer-mediated communication (CMC) media (e.g., text-messaging, email) in the coach-player relationship. Data were collected by surveying Ontario-based male baseball coaches (n = 86) who coached players between 15 and 18 years old. Predictions were made regarding how demographic factors such as age and coaching experience affected coaches' CMC use and opinions. Results indicated that over 76% of respondents never used any CMC media other than email and team websites in their interactions with players. Results also revealed that coaches' usage rates contrasted with their opinion of the usefulness of the media, and their perception of players' use of the media. Coaches characterized most CMC media as limited, unnecessary, and sometimes inappropriate. Additional research should explore players' CMC usage rates and possible guidelines for use of the new media in authority relationships. Academia needs to keep pace with the developments in this area.
Resumo:
This manuscript focuses on development assistance players’ efforts to cooperate, coordinate and collaborate on projects of mutual interest. I target the case of the cross-sectoral and international Media Issues Group designed to reform and develop the media sector in Bosnia and Herzegovina. I identify and categorize variables that influenced interorganizational relationships to summarize lessons learned and potentially inform similar interventions. This work suggests that cooperation, coordination and collaboration are constrained by contextual, strategic and procedural variables. Through participant narrative based on observation and interviews, this work clarifies the nuances within these three sets of variables for potential extrapolation to other settings. Perhaps more importantly, it provides lessons learned that can inform future international community interventions in market development activities.
Resumo:
Match performance analysis plays an important role in modern professional football. Although research in football match analysis is well developed, some issues remain in this field, mainly the lack of operational definitions of variables, reliability issues, the limited applicability of findings, the lack of contextual/situational variables, and too strong a focus on descriptive and comparative analysis. In order to address these issues, six independent but related studies were conducted in the current thesis.
The first study evaluated the inter-operator reliability of football match statistics from the OPTA Sportsdata company, which is the data source of the thesis. Two groups of experienced operators were required to analyse a Spanish league match independently. Results showed that team events and goalkeeper actions coded by independent operators reached a very good agreement (kappa values between 0.86 and 0.94). The inter-operator reliability of the match actions and events of individual outfield players was also found to be high (intra-class correlation coefficients ranged from 0.88 to 1.00; the standardised typical error varied from 0.00 to 0.37). These results suggest that the football match statistics collected by well-trained operators from the OPTA Sportsdata company are reliable.
The second, third and fourth studies aim to enhance the applicability of football match performance analysis and to explore in depth the influence of situational variables. By using a profiling technique, the technical and tactical performances of football players and teams can be interpreted, evaluated and compared more easily and more directly, while the influences and effects of situational variables (match location, strength of team and opposition, and match outcome) on performance can be properly incorporated. Performance profiles of goalkeepers (n = 46 goalkeepers, 744 full-match observations) and outfield players (n = 409 players, 5288 full-match observations) from the Spanish First Division (La Liga, season 2012-13), and of teams (n = 496 matches, 992 observations) from the UEFA Champions League (seasons 2009-10 to 2012-13), were set up by presenting the mean, standard deviation, median, and lower and upper quartiles of the count values of each performance-related match action and event, representing typical performances and their spread. The mean values of goalkeepers from different levels of team in La Liga, and of teams of different strength in the UEFA Champions League, when playing under different situational conditions were compared using one-way ANOVA and independent-sample t tests (for match location, home and away differences), and were plotted into the same radar charts after all event counts were unified as standardised scores. Differences between the performances of outfield players from Top3 and Bottom3 teams were compared by magnitude-based inferences.
The fifth and sixth studies aim to move from descriptive and comparative football match analysis towards a more predictive one. Generalised linear modelling and generalised linear mixed modelling were undertaken to quantify the relationships of performance-related match events, actions and variables with the match outcome in different types of games (close games and all games) in the group stage of the 2014 FIFA World Cup in Brazil (n = 48 games, 38 close games) and in La Liga 2012-13 (n = 320 close games). Relationships were evaluated with magnitude-based inferences and were expressed as extra matches won or lost per 10 matches for an increase of two standard deviations of a variable. Results showed that, for all 48 games in the group stage of the 2014 FIFA World Cup, nine variables had clearly positive effects on the probability of winning (shot, shot on target, shot from counter attack, shot from inside the area, ball possession, short pass, average pass streak, aerial advantage, and tackle), four had clearly negative effects (shot blocked, cross, dribble and red card), and the other 12 variables had either trivial or unclear effects. For the 38 close games, the effects of aerial advantage and yellow card turned trivial and clearly negative, respectively. In La Liga, there was a moderate positive within-team effect of shots on target (3.4 extra wins per 10 matches; 99% confidence limits ±1.0) and a small positive within-team effect of total shots (1.7 extra wins; ±1.0). The effects of most other match events were related to ball possession, which had a small negative within-team effect (1.2 extra losses; ±1.0) but a small positive between-team effect (1.7 extra wins; ±1.4). Game location showed a small positive within-team effect (1.9 extra wins; ±0.9).
Results from the established performance profiles and modelling can provide detailed and straightforward information for training, pre-match preparation, in-match tactical approaches and post-match evaluation, as well as for player identification and development.
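As an illustrative aside, the sketch below shows, on synthetic data, how a fitted generalised linear model can be turned into the kind of "extra matches won per 10 matches for a two-standard-deviation increase" figure quoted above. The thesis itself uses generalised (mixed) linear modelling with magnitude-based inferences on real OPTA data; only the core conversion step is illustrated here, and all numbers are hypothetical.

# Illustrative sketch with synthetic data: logistic GLM of match outcome on
# shots on target, with the effect expressed as extra wins per 10 matches for
# a two-standard-deviation increase in the predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 320
shots_on_target = rng.poisson(4.5, size=n).astype(float)
# synthetic win/loss labels loosely tied to shots on target
win = (rng.random(n) < 1 / (1 + np.exp(-0.35 * (shots_on_target - 4.5)))).astype(float)

X = sm.add_constant(shots_on_target)
model = sm.GLM(win, X, family=sm.families.Binomial()).fit()

sd = shots_on_target.std()
mean = shots_on_target.mean()
p_low = model.predict([[1.0, mean - sd]])[0]   # one SD below the mean
p_high = model.predict([[1.0, mean + sd]])[0]  # one SD above the mean (a 2 SD increase)
extra_wins_per_10 = 10 * (p_high - p_low)
print(f"~{extra_wins_per_10:.1f} extra wins per 10 matches for a 2 SD increase")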
Resumo:
In an increasingly interlinked and interdependent world, Europe and Asia are key players. Free trade agreements (FTAs), such as the ones the EU concluded with South Korea and Singapore, are indicative of strong mutual economic interests. It is therefore timely to take a closer look at the mutual perceptions of Asians and Europeans – not only at the governmental and policymaking levels, but also in terms of public opinion and the media. Drawing on data from an extensive research project led by the National Centre for Research on Europe (NCRE), New Zealand, the empirical study in this paper assesses the mutual perceptions of the EU/Europe and Asia, and their respective actors, focusing on two countries – Germany and Singapore. It seeks to do so through an analysis of the data collected from print and broadcast media, interviews with media practitioners, and the findings from public opinion surveys.
Resumo:
Who analyses children’s screen content and media use in Arab countries, and with what results? Children, defined internationally as under-18s, account for some 40 per cent of Arab populations and the proportion of under-fives is correspondingly large. Yet studies of children’s media and child audiences in the region are as scarce as truly popular locally produced media content aimed at children. At the very time when conflict and uncertainty in key Arab countries have made local development and diversification of children’s media more remote, it has become more urgent to gain a better understanding of how the next generation’s identities and world-views are formed. This interdisciplinary book is the first in English to probe both the state of Arab screen media for children and the practices of Arabic-speaking children in producing, as well as consuming, screen content. It responds to the gap in research by bringing together a holistic investigation of institutions and leading players, children’s media experiences and some iconic media texts. With children’s media increasingly linked to merchandising, which favours US-based global players and globalizing forces, this volume provides a timely insight into tensions between differing concepts of childhood and desirable media messages.