976 results for match analysis
Abstract:
The penalty kick in football is a seemingly simple play; however, it has increased in complexity since 1997, when a rule change allowed goalkeepers to move laterally along the goal line before the ball is kicked. Prior to 1997, goalkeepers were required to remain still until the ball was struck. The objective of this study was to determine the importance of the penalty kick in the modern game of football. A retrospective study of the 2002, 2006 and 2010 World Cup and the 2000, 2004 and 2008 European Championship tournaments was carried out, assessing the importance of the penalty kick in match play and shootouts and the effect of game time on the shooter's success rate. This study demonstrated that the conversion rate of penalties was 73% in shootouts and 68% in match play. Significantly more penalties were awarded late in the game: twice as many in the second half as in the first, and close to four times as many in the fourth quarter as in the first. Teams awarded penalty kicks during match play won 52%, drew 30% and lost 18% of the time; the chance of winning increased to 61% if the penalty was scored, but decreased to 29% if it was missed. Teams reaching the World Cup or European Championship final had roughly a 50% chance of being involved in a penalty shootout during the tournament. Penalty kicks and their outcomes significantly impact match results in post-1997 football.
Abstract:
Regulatory focus theory (RFT) proposes two different social-cognitive motivational systems for goal pursuit: a promotion system, which is organized around strategic approach behaviors and "making good things happen," and a prevention system, which is organized around strategic avoidance and "keeping bad things from happening." The promotion and prevention systems have been extensively studied in behavioral paradigms, and RFT posits that prolonged perceived failure to make progress in pursuing promotion or prevention goals can lead to ineffective goal pursuit and chronic distress (Higgins, 1997).
Research has begun to focus on uncovering the neural correlates of the promotion and prevention systems in an attempt to differentiate them at the neurobiological level. Preliminary research suggests that the promotion and prevention systems have both distinct and overlapping neural correlates (Eddington, Dolcos, Cabeza, Krishnan, & Strauman, 2007; Strauman et al., 2013). However, little research has examined how individual differences in regulatory focus develop and manifest. The development of individual differences in regulatory focus is particularly salient during adolescence, a crucial topic to explore given the dramatic neurodevelopmental and psychosocial changes that take place during this time, especially with regard to self-regulatory abilities. A number of questions remain unexplored, including the potential for goal-related neural activation to be modulated by (a) perceived proximity to goal attainment, (b) individual differences in regulatory orientation, specifically general beliefs about one's success or failure in attaining the two kinds of goals, (c) age, with a particular focus on adolescence, and (d) homozygosity for the Met allele of the catechol-O-methyltransferase (COMT) Val158Met polymorphism, a naturally occurring genotype which has been shown to impact prefrontal cortex activation patterns associated with goal pursuit behaviors.
This study explored the neural correlates of the promotion and prevention systems through the use of a priming paradigm involving rapid, brief, masked presentation of individually selected promotion and prevention goals to each participant while being scanned. The goals used as priming stimuli varied with regard to whether participants reported that they were close to or far away from achieving them (i.e. a "match" versus a "mismatch" representing perceived success or failure in personal goal pursuit). The study also assessed participants' overall beliefs regarding their relative success or failure in attaining promotion and prevention goals, and all participants were genotyped for the COMT Val158Met polymorphism.
A number of significant findings emerged. Both promotion and prevention priming were associated with activation in regions associated with self-referential cognition, including the left medial prefrontal cortex, cuneus, and lingual gyrus. Promotion and prevention priming were also associated with distinct patterns of neural activation; specifically, left middle temporal gyrus activation was found to be significantly greater during prevention priming. Activation in response to promotion and prevention goals was found to be modulated by self-reports of both perceived proximity to goal achievement and goal orientation. Age also had a significant effect on activation, such that activation in response to goal priming became more robust in the prefrontal cortex and in default mode network regions as a function of increasing age. Finally, COMT genotype also modulated the neural response to goal priming both alone and through interactions with regulatory focus and age. Overall, these findings provide further clarification of the neural underpinnings of the promotion and prevention systems as well as provide information about the role of development and individual differences at the personality and genetic level on activity in these neural systems.
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for that theoretical development by analyzing the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from ad hoc computer-assisted telephone interviews (CATI) conducted with a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at the cognitive level: all the analyzed formats generate higher levels of both unaided and aided recall than the spot.
Abstract:
We analyze four extreme AGN transients to explore the possibility that they are caused by rare, high-amplitude microlensing events. These previously unknown type-I AGN are located in the redshift range 0.6-1.1 and show changes of > 1.5 magnitudes in the g-band on a timescale of ~years. Multi-epoch optical spectroscopy, from the William Herschel Telescope, shows clear differential variability in the broad line fluxes with respect to the continuum changes and also evolution in the line profiles. In two cases a simple point-source, point-lens microlensing model provides an excellent match to the long-term variability seen in these objects. For both models the parameter constraints are consistent with the microlensing being due to an intervening stellar mass object, but as yet there is no confirmation of the presence of an intervening galaxy. The models predict a peak amplification of 10.3/13.5 and an Einstein timescale of 7.5/10.8 years respectively. In one case the data also allow the size of the CIII] emitting region to be constrained, with some simplifying assumptions, to ~1.0-6.5 light-days, and place a lower limit on the size of the MgII emitting region of > 9 light-days (half-light radii). This CIII] radius is perhaps surprisingly small. In the remaining two objects there is spectroscopic evidence for an intervening absorber, but the extra structure seen in the lightcurves requires a more complex lensing scenario to be adequately explained.
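The point-source, point-lens model referred to above has a standard closed form. As a sketch (the function names and example values below are illustrative, not taken from the paper), the magnification along a straight-line source trajectory can be written as:

```python
import math

def magnification(u):
    """Point-source, point-lens magnification: A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)),
    where u is the lens-source separation in Einstein radii."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def u_of_t(t, t0, u0, tE):
    """Separation for a straight-line trajectory: u0 is the impact parameter,
    t0 the time of closest approach, tE the Einstein (radius-crossing) timescale."""
    return math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
```

For small impact parameters the peak magnification is approximately 1/u0, so a peak amplification of ~10 corresponds to u0 ~ 0.1 Einstein radii.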
Abstract:
Modern automobiles are no longer just mechanical tools. The electronics and computing services they ship with make them nothing less than computers: massive kinetic devices with sophisticated computing power. Most modern vehicles are built with added connectivity in mind, which may leave them vulnerable to outside attack. Researchers have shown that it is possible to infiltrate a vehicle's internal system remotely and control physical components such as the steering and brakes. It is quite possible to experience such an attack on a moving vehicle and be unable to use the controls. Because these massive connected computers are woven into everyday life, such attacks can be life-threatening. The first part of this research studied the attack surfaces in the automotive cybersecurity domain, illustrating the attack methods and the damage they can cause. An online survey was deployed as the data collection tool to learn about consumers' usage of such vulnerable automotive services. The second part of the research examined consumers' privacy in the automotive world. It was found that almost one hundred percent of modern vehicles have the capability to send vehicle diagnostic data as well as user-generated data to their manufacturers, and that almost thirty-five percent of automotive companies are already collecting such data. Internet privacy has been studied in many related domains, but no privacy scale had been matched to automotive consumers; this gap created the motivation for this thesis. A study was performed to match a well-established consumer privacy scale, IUIPC (Internet Users' Information Privacy Concerns), to the automotive consumers' privacy situation. Hypotheses were developed based on the IUIPC model for Internet consumers' privacy and were tested against the findings from the data collection.
Based on the key findings of the research, all the hypotheses were accepted, and hence automotive consumers' privacy was found to follow the IUIPC model under certain conditions. It was also found that a majority of automotive consumers use services and devices that are vulnerable and prone to cyber-attacks, and that there is a market for automotive cybersecurity services, with consumers willing to pay a fee for them.
Abstract:
Valveless pulsejets are extremely simple aircraft engines; essentially cleverly designed tubes with no moving parts. These engines utilize pressure waves, instead of machinery, for thrust generation, and have demonstrated thrust-to-weight ratios over 8 and thrust specific fuel consumption levels below 1 lbm/lbf-hr – performance levels that can rival many gas turbines. Despite their simplicity and competitive performance, they have not seen widespread application due to extremely high noise and vibration levels, which have persisted as an unresolved challenge primarily due to a lack of fundamental insight into the operation of these engines. This thesis develops two theories for pulsejet operation (both based on electro-acoustic analogies) that predict measurements better than any previous theory reported in the literature, and then uses them to devise and experimentally validate effective noise reduction strategies. The first theory analyzes valveless pulsejets as acoustic ducts with axially varying area and temperature. An electro-acoustic analogy is used to calculate longitudinal mode frequencies and shapes for prescribed area and temperature distributions inside an engine. Predicted operating frequencies match experimental values to within 6% with the use of appropriate end corrections. Mode shapes are predicted and used to develop strategies for suppressing higher modes that are responsible for much of the perceived noise. These strategies are verified experimentally and via comparison to existing models/data for valveless pulsejets in the literature. The second theory analyzes valveless pulsejets as acoustic systems/circuits in which each engine component is represented by an acoustic impedance. These are assembled to form an equivalent circuit for the engine that is solved to find the frequency response. The theory is used to predict the behavior of two interacting pulsejet engines. It is validated via comparison to experiment and data in the literature. 
The technique is then used to develop and experimentally verify a method for operating two engines in anti-phase without interfering with thrust production. Finally, Helmholtz resonators are used to suppress higher order modes that inhibit noise suppression via anti-phasing. Experiments show that the acoustic output of two resonator-equipped pulsejets operating in anti-phase is 9 dBA less than the acoustic output of a single pulsejet.
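As a rough illustration of the resonator sizing involved (a textbook Helmholtz formula, not the thesis's electro-acoustic circuit model; all parameter values below are hypothetical):

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length, end_correction=0.0):
    """Classic Helmholtz resonator natural frequency:
    f = (c / (2*pi)) * sqrt(A / (V * L_eff)), with L_eff = neck length + end correction.
    c: speed of sound [m/s], neck_area A [m^2], cavity_volume V [m^3], lengths [m]."""
    L_eff = neck_length + end_correction
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * L_eff))

# hypothetical resonator: 1 cm^2 neck, 1 L cavity, 5 cm neck, air at ~20 C
f = helmholtz_frequency(343.0, 1e-4, 1e-3, 0.05)
```

Tuning the resonator so that f sits near an unwanted higher mode is the usual design approach; matching it to a specific engine requires the end corrections the thesis discusses.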
Abstract:
Microsecond-long Molecular Dynamics (MD) trajectories of biomolecular processes are now possible due to advances in computer technology. Soon, trajectories long enough to probe dynamics over many milliseconds will become available. Since these timescales match the physiological timescales over which many small proteins fold, all-atom MD simulations of protein folding are now becoming popular. To distill features of such large folding trajectories, we must develop methods that can both compress trajectory data to enable visualization, and that lend themselves to further analysis, such as the finding of collective coordinates and reduction of the dynamics. Conventionally, clustering has been the most popular MD trajectory analysis technique, followed by principal component analysis (PCA). Simple clustering used in MD trajectory analysis suffers from several serious drawbacks: (i) it is not data driven, (ii) it is unstable to noise and to changes in cutoff parameters, and (iii) since it does not take into account interrelationships amongst data points, the separation of data into clusters can often be artificial. Usually, partitions generated by clustering techniques are validated visually, but such validation is not possible for MD trajectories of protein folding, as the underlying structural transitions are not well understood. Rigorous cluster validation techniques may be adapted, but it is more crucial to reduce the dimensions in which MD trajectories reside, while still preserving their salient features. PCA has often been used for dimension reduction and, while computationally inexpensive, being a linear method it does not achieve good data compression. In this thesis, I propose a different method, a nonmetric multidimensional scaling (nMDS) technique, which achieves superior data compression by virtue of being nonlinear, and which also provides clear insight into the structural processes underlying MD trajectories.
I illustrate the capabilities of nMDS by analyzing three complete villin headpiece folding and six norleucine mutant (NLE) folding trajectories simulated by Freddolino and Schulten [1]. Using these trajectories, I make comparisons between nMDS, PCA and clustering to demonstrate the superiority of nMDS. The three villin headpiece trajectories showed great structural heterogeneity. Apart from a few trivial features, like the early formation of secondary structure, no commonalities between trajectories were found. No units of residues or atoms were found moving in concert across the trajectories. A flipping transition, corresponding to the flipping of helix 1 relative to the plane formed by helices 2 and 3, was observed towards the end of the folding process in all trajectories, when nearly all native contacts had been formed. However, the transition occurred through a different series of steps in each trajectory, indicating that it may not be a common transition in villin folding. All trajectories showed competition between local structure formation/hydrophobic collapse and global structure formation. Our analysis of the NLE trajectories confirms the notion that a tight hydrophobic core inhibits correct 3-D rearrangement. Only one of the six NLE trajectories folded, and it showed no flipping transition. All the other trajectories got trapped in hydrophobically collapsed states. The NLE residues were found to be buried deeply in the core, compared to the corresponding lysines in the villin headpiece, thereby making the core tighter and harder to undo for 3-D rearrangement. Our results suggest that NLE may not be the fast folder that experiments suggest. The tightness of the hydrophobic core may be a very important factor in the folding of larger proteins. It is likely that chaperones like GroEL act to undo the tight hydrophobic core of proteins after most secondary structure elements have been formed, so that global rearrangement is easier.
I conclude by presenting facts about chaperone-protein complexes and propose further directions for the study of protein folding.
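The nMDS idea above can be sketched with off-the-shelf tools. This is not the thesis's implementation; it uses scikit-learn's nonmetric MDS on synthetic stand-in data (random "frames" around two artificial conformations, not the villin trajectories):

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# stand-in for an MD trajectory: 200 "frames" of 30 coordinates each,
# drawn around two hypothetical conformations (synthetic, for illustration only)
frames = np.vstack([rng.normal(0.0, 0.5, size=(100, 30)),
                    rng.normal(2.0, 0.5, size=(100, 30))])

# nonmetric MDS preserves only the rank order of inter-frame distances,
# which is the source of the nonlinear compression discussed above
nmds = MDS(n_components=2, metric=False, random_state=0)
coords = nmds.fit_transform(frames)
print(coords.shape)  # (200, 2)
```

Plotting `coords` colored by frame index is the usual way to visualize the compressed trajectory; the embedding is defined only up to rotation and reflection.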
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks and the increasing feasibility of attacks on these services represent a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is our use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable, and, in some cases, more efficient than generic constant-rate defenses. 
We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide 3-5x improvement in bandwidth for bulk transfers and 2.5-9.5x speedup for Web browsing over tunneling without biasing.
Abstract:
Raman spectroscopy of formamide-intercalated kaolinites treated using controlled-rate thermal analysis technology (CRTA), allowing the separation of adsorbed formamide from intercalated formamide in formamide-intercalated kaolinites, is reported. The Raman spectra of the CRTA-treated formamide-intercalated kaolinites are significantly different from those of the intercalated kaolinites, which display a combination of both intercalated and adsorbed formamide. An intense band is observed at 3629 cm-1, attributed to the inner surface hydroxyls hydrogen bonded to the formamide. Broad bands are observed at 3600 and 3639 cm-1, assigned to the inner surface hydroxyls, which are hydrogen bonded to the adsorbed water molecules. The hydroxyl-stretching band of the inner hydroxyl is observed at 3621 cm-1 in the Raman spectra of the CRTA-treated formamide-intercalated kaolinites. The results of thermal analysis show that the amount of intercalated formamide between the kaolinite layers is independent of the presence of water. Significant differences are observed in the CO stretching region between the adsorbed and intercalated formamide.
Abstract:
Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian Random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a new research field and the theoretical analysis of the numerical methods associated with them is not fully developed. In this paper an explicit conservative difference approximation (ECDA) for TFDE is proposed. We give a detailed analysis for this ECDA and generate discrete models of random walk suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
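A minimal sketch of an explicit scheme of this general type follows, using the common L1 discretization of the Caputo derivative together with a second-order central difference in space. This is an assumption about the family of methods, not the paper's exact ECDA, and all names are illustrative:

```python
import numpy as np
from math import gamma

def tfde_explicit(u0, alpha, K, h, tau, nsteps):
    """Explicit scheme for the TFDE D_t^alpha u = K u_xx (Caputo, 0 < alpha < 1)
    with zero Dirichlet boundaries, using the L1 time discretization (a sketch)."""
    r = gamma(2 - alpha) * tau ** alpha * K / h ** 2  # explicit "diffusion number"
    hist = [u0.copy()]  # full history: the fractional derivative is non-local in time
    for n in range(nsteps):
        u = hist[-1]
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        # memory term with L1 weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
        mem = np.zeros_like(u)
        for k in range(1, n + 1):
            bk = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
            mem += bk * (hist[n + 1 - k] - hist[n - k])
        unew = u - mem + r * lap
        unew[0] = unew[-1] = 0.0  # Dirichlet boundaries
        hist.append(unew)
    return hist[-1]
```

Note the cost: unlike the classical heat equation, every step sums over the whole history, which is the price of the non-Markovian memory the abstract mentions; stability also constrains r, as in the classical explicit case.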
Abstract:
The time for conducting Preventive Maintenance (PM) on an asset is often determined using a predefined alarm limit based on trends of a hazard function. In this paper, the authors propose using both hazard and reliability functions to improve the accuracy of the prediction particularly when the failure characteristic of the asset whole life is modelled using different failure distributions for the different stages of the life of the asset. The proposed method is validated using simulations and case studies.
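The combination of hazard and reliability functions can be sketched for a single Weibull life stage (the paper's multi-stage distributions and alarm limits are not specified here, so the functions and thresholds below are hypothetical):

```python
import math

def weibull_hazard(t, beta, eta):
    """Weibull hazard: h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_reliability(t, beta, eta):
    """Weibull reliability (survival): R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def pm_due(t, beta, eta, hazard_limit, reliability_floor):
    """Trigger PM when either the hazard alarm trips or reliability drops too low,
    i.e. using both functions rather than the hazard trend alone."""
    return (weibull_hazard(t, beta, eta) >= hazard_limit or
            weibull_reliability(t, beta, eta) <= reliability_floor)
```

A multi-stage (bathtub) model would apply this with different (beta, eta) pairs over the different stages of the asset's life, which is where combining the two functions pays off.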