54 results for Observational Methodology
Abstract:
This paper introduces a new approach to the analysis of the offensive game in football. The main aim of the study was to create an instrument for collecting information for the analysis of offensive actions and interactions in the game. The observation instrument used to accomplish this objective combines field formats (FF) and category systems (SC). Observational methodology is a particular strategy of the scientific method whose objective is to analyse perceptible behaviour occurring in habitual contexts, allowing it to be formally recorded and quantified by means of an ad hoc instrument. The resulting systematic record of behaviour, once transformed into quantitative data with the required levels of reliability and validity, permits analysis of the relations between these behaviours. The codings undertaken to date in various football matches show that the instrument serves the purposes for which it was developed, enabling further research into offensive game methods in football.
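As a rough illustration of how such a combined instrument yields quantifiable records, here is a minimal Python sketch; the criteria (zone, action) and their categories are hypothetical stand-ins, not the instrument's actual codes:

```python
from dataclasses import dataclass

# Hypothetical criteria and categories; the real instrument defines its own.
ZONES = {"defensive", "middle", "offensive"}
ACTIONS = {"pass", "dribble", "shot", "loss"}

@dataclass
class CodedEvent:
    """One observed offensive behaviour, coded per criterion (field format)
    with mutually exclusive categories (category system)."""
    match_id: str
    second: int       # time of occurrence within the match
    zone: str         # pitch-zone criterion
    action: str       # technical-action criterion

    def __post_init__(self):
        assert self.zone in ZONES and self.action in ACTIONS

# A recorded sequence becomes quantitative data amenable to analysis.
sequence = [
    CodedEvent("match01", 63, "middle", "pass"),
    CodedEvent("match01", 66, "offensive", "shot"),
]
```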
Abstract:
Logistic regression is among the analysis techniques that are valid for observational methodology. However, its presence at the heart of this methodology, and more specifically in physical activity and sport studies, is scarce. With a view to highlighting the possibilities this technique offers within the scope of observational methodology applied to physical activity and sport, an application of the logistic regression model is presented. The model is applied in the context of an observational design which aims to determine, from an analysis of the use of the playing area, which football discipline (7-a-side, 9-a-side or 11-a-side football) is best adapted to the child's possibilities. A multiple logistic regression model can provide an effective prognosis of the probability of a move being successful (reaching the opposing goal area) depending on the sector in which the move commenced and the football discipline being played.
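A hedged sketch of the kind of multiple logistic regression the abstract describes, fitted with statsmodels on synthetic data; the sector and discipline codings are invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: moves coded by starting sector, football
# discipline (F7, F9, F11) and whether they reached the opposing goal area.
df = pd.DataFrame({
    "sector":     ["def", "mid", "off"] * 20,
    "discipline": ["F7"] * 20 + ["F9"] * 20 + ["F11"] * 20,
    "success":    [0, 1, 1, 1, 0, 0] * 10,
})

# Multiple logistic regression: P(success) as a function of the sector in
# which the move commenced and the discipline being played.
model = smf.logit("success ~ C(sector) + C(discipline)", data=df).fit(disp=0)
print(model.summary())

# Prognosis for a move starting in the offensive sector in 7-a-side football.
new = pd.DataFrame({"sector": ["off"], "discipline": ["F7"]})
print(model.predict(new))
```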
Abstract:
The theoretical context of this study is observational methodology as applied to group games and sports, specifically handball. The study analyses the performance of the pivot player at the 2007 World Cup in Germany, the 2008 European Championship in Norway and the 2008 Olympic Games in China in a qualitative dimension. Our purpose was to obtain as much information as possible about the whole activity of the pivot player by identifying sequential patterns of behaviour or conduct of the player/game, using sequential analysis. The observation instrument used to meet this purpose combines field formats (FF) and category systems (SC). Codings were undertaken in several handball games. Using this instrument we have shown that it supports the purposes for which it was developed, allowing further research into the offensive process in handball. In addition, it makes possible the analysis of aspects of the game through a sequential and contextual perspective, which we consider more accurate and a better fit to the "reality" of a game such as handball.
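To make the sequential-analysis step concrete, here is a minimal sketch of lag-1 transition analysis with Allison-Liker adjusted residuals on an invented conduct sequence; the real category codes are defined by the instrument:

```python
import numpy as np

# Hypothetical pivot-player conducts; the real instrument has its own codes.
codes = ["block", "receive", "shot", "block", "receive", "pass",
         "block", "receive", "shot"]

states = sorted(set(codes))
idx = {s: i for i, s in enumerate(states)}
counts = np.zeros((len(states), len(states)))
for a, b in zip(codes, codes[1:]):       # lag-1 transitions
    counts[idx[a], idx[b]] += 1

# Allison-Liker adjusted residuals: z > 1.96 suggests the target conduct
# follows the given conduct more often than chance would predict.
n = counts.sum()
row = counts.sum(1, keepdims=True)
col = counts.sum(0, keepdims=True)
expected = row @ col / n
var = expected * (1 - row / n) * (1 - col / n)
z = (counts - expected) / np.sqrt(var)
print(states)
print(z.round(2))
```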
Abstract:
The main objective of this study was to establish which number of ball contacts is most effective when executing a shot within the finishing zone. Observational methodology was used, investigating an elite context: the matches played by the four best-placed national teams at Euro 2012. The research took into account different shot variables, such as whether a goal was scored, the number of ball contacts before the shot, the body orientation of the shooting player, and the last action before the ball entered the finishing zone to be shot. The results provide information on how Portugal, Germany, Italy and Spain finished their offensive actions and allow the reader to form a general idea of how a shot in the finishing zone should be executed.
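One plausible way to test whether goal conversion depends on the number of contacts is a chi-square test of independence; the counts below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Synthetic contingency table (rows: 1 contact, 2 contacts, 3+ contacts
# before the shot; columns: goal, no goal). Illustrative counts only.
table = np.array([
    [12, 48],   # first-touch finishes
    [ 7, 41],   # two contacts
    [ 3, 39],   # three or more contacts
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
# A small p-value would indicate that goal conversion is not independent
# of the number of contacts taken before the shot.
```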
Abstract:
In the field of observational methodology the observer is obviously a central figure, and close attention should be paid to the process through which he or she acquires, applies, and maintains the required skills. Basic training in how to apply the operational definitions of categories and the coding rules, coupled with the opportunity to use the observation instrument in real-life situations, can have a positive effect on the degree of agreement achieved when evaluating intra- and inter-observer reliability. Several authors, including Arias, Argudo, and Alonso (2009) and Medina and Delgado (1999), have put forward proposals for the process of basic and applied training in this context. Reid and DeMaster (1982) focus on the observer's performance and how to maintain the acquired skills, arguing that periodic checks are needed after initial training because an observer may, over time, become less reliable owing to the inherent complexity of category systems. The purpose of this subsequent training is to maintain acceptable levels of observer reliability. Various strategies can be used to this end, including providing feedback on those categories associated with a good reliability index, or offering re-training in how to apply those that yield lower indices. The aim of this study is to develop a performance-based index capable of assessing an observer's ability to produce reliable observations in conjunction with other observers.
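A minimal sketch of one common reliability check in this context, Cohen's kappa between two observers; the codes are illustrative, and the paper's own index is performance-based rather than this plain kappa:

```python
from sklearn.metrics import cohen_kappa_score

# Two observers coding the same event stream with the category system.
# Category labels are illustrative stand-ins.
obs_a = ["A", "B", "B", "C", "A", "B", "C", "C", "A", "B"]
obs_b = ["A", "B", "C", "C", "A", "B", "C", "B", "A", "B"]

kappa = cohen_kappa_score(obs_a, obs_b)
print(f"Cohen's kappa = {kappa:.2f}")
# Periodic checks: recompute kappa after initial training; a drop below an
# acceptable threshold (e.g. 0.80) would trigger re-training on the
# categories with the lowest agreement.
```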
Abstract:
Does shareholder value orientation lead to shareholder value creation? This article proposes methods to quantify both shareholder value orientation and shareholder value creation. Applying these models makes it possible to quantify both dimensions and to examine statistically to what extent shareholder value orientation explains shareholder value creation. The scoring model developed in this paper quantifies the orientation of managers towards the objective of maximizing shareholder wealth. The method evaluates information published by the companies and scores their value orientation on a scale from 0 to 10 points. Analytically, the variable value orientation is operationalized as the general attitude of managers towards the objective of value creation, investment policy and behaviour, flexibility, and eight further value drivers. The value creation model works with market data such as stock prices and dividend payments. Both methods were applied to a sample of 38 blue-chip companies: 32 firms belonged to the share index IBEX 35 on July 1st, 1999; one company representing the “new economy” was listed on the Spanish New Market as of July 1st, 2001; and 5 European multinational groups formed part of the EuroStoxx 50 index, also as of July 1st, 2001. The research period comprised the financial years 1998, 1999, and 2000. A regression analysis showed that between 15.9% and 23.4% of shareholder value creation can be explained by shareholder value orientation.
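A hedged sketch of the regression step on synthetic data; the paper uses real scores and market measures, whereas here everything is simulated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for the sample: a 0-10 value orientation score per
# firm and a market-based value creation measure (38 firms, as in the paper).
orientation = rng.uniform(0, 10, 38)
creation = 0.4 * orientation + rng.normal(0, 2.5, 38)

model = sm.OLS(creation, sm.add_constant(orientation)).fit()
print(f"R^2 = {model.rsquared:.3f}")   # the paper reports 15.9%-23.4%
```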
Abstract:
Fault tolerance has become a major issue for computer and software engineers because the occurrence of faults increases the cost of using a parallel computer. RADIC is a fault tolerance architecture for message passing systems that is transparent, decentralized, flexible and scalable. This master's thesis presents the methodology used to implement the RADIC architecture over Open MPI, a well-known and widely used message passing library. The implementation preserves the characteristics of the RADIC architecture. To validate the implementation we executed a synthetic ping program, and to evaluate its performance we used the NAS Parallel Benchmarks. The results show that the performance of the RADIC architecture depends on the communication pattern of the parallel application being run. Furthermore, our implementation demonstrates that the RADIC architecture can be implemented on top of an existing message passing library.
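A minimal ping-pong sketch in the spirit of the synthetic ping program used for validation, written with mpi4py for brevity; the thesis itself works at the Open MPI library level, so this only illustrates the communication pattern being exercised:

```python
# Run with: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

ROUNDS = 5
for i in range(ROUNDS):
    if rank == 0:
        comm.send(i, dest=1, tag=i)          # ping
        echo = comm.recv(source=1, tag=i)    # pong
        print(f"round {i}: got echo {echo}")
    elif rank == 1:
        msg = comm.recv(source=0, tag=i)
        comm.send(msg, dest=0, tag=i)
```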
Development of an optimized methodology for tensile testing of carbon steels in hydrogen environment
Abstract:
The study was performed at OCAS, the Steel Research Centre of ArcelorMittal for the Industry market. The main aim of this research was to obtain an optimized tensile testing methodology with in-situ H-charging to reveal hydrogen embrittlement in various high strength steels. The second aim was the mechanical characterization of the hydrogen effect on high strength carbon steels with varying microstructure, i.e. ferrite-martensite and ferrite-bainite grades. The optimal parameters for H-charging which influence the tensile test results (sample geometry, type of electrolyte, charging method, effect of steel type, etc.) were defined and applied to Slow Strain Rate testing, Incremental Step Loading and Constant Load Testing. To better understand the initiation and propagation of cracks during tensile testing with in-situ H-charging, and to correlate them with crystallographic orientation, some materials were analyzed in the SEM in combination with the EBSD technique. Introducing a notch on the tensile samples yields significantly improved reproducibility of the results. Comparing the various steel grades reveals that Dual Phase (ferrite-martensite) steels are more sensitive to hydrogen-induced cracking than the ferritic-bainitic (FB) ones. This higher hydrogen sensitivity was reflected in reduced failure times, increased creep rates and enhanced crack initiation (SEM) for the Dual Phase steels in comparison with the FB steels.
Abstract:
This paper examines the proper use of dimensions and curve fitting practices, elaborating on Georgescu-Roegen's economic methodology in relation to the three main concerns of his epistemological orientation. Section 2 introduces two critical issues concerning dimensions and curve fitting practices in economics in view of Georgescu-Roegen's economic methodology. Section 3 deals with the logarithmic function (ln z) and shows that z must be a dimensionless pure number, otherwise it is nonsensical; several unfortunate examples of this analytical error are presented, including a macroeconomic data analysis conducted by a representative figure in the field. Section 4 deals with the standard Cobb-Douglas function: it is shown that no operational meaning can be obtained for capital or labor within the Cobb-Douglas function. Section 4 also deals with economists' "curve fitting fetishism". Section 5 concludes the paper with several epistemological issues relating to dimensions and curve fitting practices in economics.
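The dimensional argument for ln z can be made explicit with the series expansion; a short worked restatement:

```latex
% The Maclaurin series of the logarithm makes the problem visible:
% each power of z would carry a different physical dimension, and
% dimensionally heterogeneous terms cannot be summed.
\ln(1+z) = z - \frac{z^{2}}{2} + \frac{z^{3}}{3} - \cdots
% Hence z must be a pure number, e.g. a ratio of two quantities
% measured in the same unit, as in a growth rate:
\ln\!\left(\frac{Y_t}{Y_{t-1}}\right)
```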
Abstract:
One feature of the modern nutrition transition is the growing consumption of animal proteins. The most common approach in the quantitative analysis of this change has been the study of averages of food consumption, but this kind of analysis is incomplete without knowledge of the number of consumers. Data about consumers are not usually published in historical statistics. This article introduces a methodological approach for reconstructing consumer populations, based on some assumptions about the diffusion process of foodstuffs and on modeling consumption patterns with a log-normal distribution. The estimation process is illustrated with the specific case of milk consumption in Spain between 1925 and 1981. The results fit quite well with other available data and indirect sources, showing that this dietary change was a slow and late process. The reconstruction of consumer populations could shed new light on the study of nutritional transitions.
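A hedged sketch of the estimation idea under the stated assumptions: if individual consumption follows a log-normal law and anyone above a minimum threshold counts as a consumer, the consumer share follows from the distribution's tail. All parameter values below are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import lognorm

mean_per_capita = 30.0   # litres/year, whole-population average (illustrative)
sigma = 1.0              # dispersion of log-consumption (assumed)
threshold = 5.0          # litres/year below which one is a non-consumer

# For X ~ LogNormal(mu, sigma), E[X] = exp(mu + sigma^2 / 2).
mu = np.log(mean_per_capita) - sigma**2 / 2

dist = lognorm(s=sigma, scale=np.exp(mu))
consumer_share = dist.sf(threshold)     # P(X > threshold)
print(f"Estimated share of consumers: {consumer_share:.1%}")
```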
Abstract:
This paper proposes a behaviour-based scheme for the high-level control of autonomous underwater vehicles (AUVs). Two main characteristics of the control scheme can be highlighted. Behaviour coordination is carried out through a hybrid methodology, which takes advantage of the robustness and modularity of competitive approaches as well as the optimized trajectories of cooperative ones.
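A generic sketch of hybrid behaviour coordination, competitive suppression combined with cooperative blending; this illustrates the idea only and is not the paper's exact scheme:

```python
from dataclasses import dataclass

@dataclass
class BehaviourOutput:
    activation: float   # 0..1, how strongly the behaviour claims control
    command: float      # desired actuator setpoint (e.g. yaw rate)

def coordinate(outputs):
    """Hybrid coordination sketch: a fully activated behaviour suppresses
    lower-priority ones (competitive); partially activated behaviours are
    blended by activation weight (cooperative). Illustrative only."""
    blended, weight = 0.0, 0.0
    for out in outputs:                 # ordered from highest priority down
        if out.activation >= 1.0:
            return out.command          # competitive: full suppression
        blended += out.activation * out.command
        weight += out.activation
    return blended / weight if weight else 0.0

# Obstacle avoidance (high priority) partially active; go-to-goal active.
print(coordinate([BehaviourOutput(0.6, -0.8), BehaviourOutput(0.9, 0.4)]))
```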
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The motion estimated from the robot's encoders is refined by detecting, to subpixel accuracy, the black dots in the image sequence and computing the 2D projective transform which relates two consecutive images. The pattern is then replaced by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
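A minimal sketch of the registration step, estimating the 2D projective transform (homography) relating two consecutive images from detected dot centres, here with OpenCV and synthetic coordinates:

```python
import numpy as np
import cv2

# Dot centres detected to subpixel accuracy in two consecutive frames.
# Coordinates below are synthetic stand-ins for the detected pattern dots.
pts_prev = np.float32([[10, 10], [200, 12], [15, 180], [210, 185],
                       [100, 90], [60, 150]])
shift = np.float32([5.5, -3.2])          # simulated camera motion
pts_next = pts_prev + shift

H, inliers = cv2.findHomography(pts_prev, pts_next, cv2.RANSAC, 1.0)
print(np.round(H, 3))   # close to a pure translation for this toy motion
```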
Abstract:
A study of tin deposits from Priamurye (Russia) is performed to analyze the differences between them based on their origin and also on commercial criteria. A particular analysis based on their vertical zonality is also given for samples from the Solnechnoe deposit. All the statistical analyses are made on the subcomposition formed by seven trace elements in cassiterite (In, Sc, Be, W, Nb, Ti and V), using Aitchison's methodology for the analysis of compositional data.
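A minimal sketch of the compositional starting point, the centred log-ratio (clr) transform of Aitchison's methodology, applied to an invented cassiterite subcomposition:

```python
import numpy as np

def clr(parts):
    """Centred log-ratio transform of a composition (Aitchison geometry)."""
    parts = np.asarray(parts, dtype=float)
    log_parts = np.log(parts)
    return log_parts - log_parts.mean(axis=-1, keepdims=True)

# Illustrative subcomposition of the seven trace elements in cassiterite
# (In, Sc, Be, W, Nb, Ti, V), in ppm; values are invented for the example.
sample = [120.0, 15.0, 3.0, 450.0, 80.0, 900.0, 25.0]
print(np.round(clr(sample), 3))
# Standard multivariate statistics (clustering, discriminant analysis)
# can then be applied to the clr coordinates rather than the raw parts.
```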
Abstract:
It has been shown that the accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, with a dense breast drastically reducing detection sensitivity. In addition, breast tissue density is widely accepted as an important risk indicator for the development of breast cancer. Here, we describe the development of an automatic breast tissue classification methodology, which can be summarized in a number of distinct steps: 1) segmentation of the breast area into fatty versus dense mammographic tissue; 2) extraction of morphological and texture features from the segmented breast areas; and 3) a Bayesian combination of a number of classifiers. The evaluation, based on a large number of cases from two different mammographic data sets, shows a strong correlation ( and 0.67 for the two data sets) between automatic and expert-based Breast Imaging Reporting and Data System (BI-RADS) mammographic density assessment.
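A hedged sketch of step 3, combining classifier posteriors by the product rule under a conditional-independence assumption; the paper's exact combination may differ, and all numbers are illustrative:

```python
import numpy as np

def bayesian_combination(posteriors, priors):
    """Combine per-classifier posterior estimates for the same case by the
    product rule (conditionally independent classifiers assumed), then
    renormalize. Sketch only; not necessarily the paper's exact rule."""
    posteriors = np.asarray(posteriors)   # shape: (n_classifiers, n_classes)
    combined = priors * np.prod(posteriors / priors, axis=0)
    return combined / combined.sum()

# Three classifiers scoring four density classes for one mammogram.
posteriors = [[0.10, 0.20, 0.50, 0.20],
              [0.05, 0.25, 0.60, 0.10],
              [0.15, 0.15, 0.45, 0.25]]
priors = np.array([0.25, 0.25, 0.25, 0.25])
print(bayesian_combination(posteriors, priors).round(3))
```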
Abstract:
The space and time discretization inherent to all FDTD schemes introduces non-physical dispersion errors, i.e. deviations of the speed of sound from the theoretical value predicted by the governing Euler differential equations. A general methodology for computing this dispersion error via straightforward numerical simulations of the FDTD schemes is presented. The method is shown to provide remarkable accuracies of the order of 1/1000 in a wide variety of two-dimensional finite difference schemes.
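A 1D illustration of the measurement idea (the paper treats 2D schemes): simulate a plane wave with a plain leapfrog FDTD scheme and measure the numerical speed of sound against the theoretical value. All parameters are illustrative:

```python
import numpy as np

c, dx = 343.0, 0.01           # speed of sound (m/s), grid spacing (m)
S = 0.5                       # Courant number S = c*dt/dx
dt = S * dx / c
N, steps = 256, 4096
x = np.arange(N) * dx
k = 2 * np.pi * 4 / (N * dx)  # wavenumber: 4 wavelengths in the domain

# Start the scheme from the exact continuous right-travelling solution.
u_prev = np.cos(k * x)                      # t = 0
u = np.cos(k * x - c * k * dt)              # t = dt

series = np.empty(steps)
for n in range(steps):
    # Standard second-order leapfrog update, periodic boundaries.
    u_next = 2 * u - u_prev + S**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    u_prev, u = u, u_next
    series[n] = u[0]

# A sinusoid sampled at dt obeys s[n+1] + s[n-1] = 2*cos(omega*dt)*s[n];
# a least-squares ratio recovers the numerical frequency very accurately.
num = np.sum(series[1:-1] * (series[2:] + series[:-2]))
den = 2 * np.sum(series[1:-1] ** 2)
omega_num = np.arccos(num / den) / dt

c_num = omega_num / k
print(f"numerical speed: {c_num:.3f} m/s, "
      f"relative dispersion error {abs(c_num - c) / c:.2e}")
```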