898 results for Discrete Regression and Qualitative Choice Models
Abstract:
The critical moment (CM) in basketball is a game-related phenomenon with particular characteristics determined by the idiosyncrasies of a team; it can affect the protagonists and therefore the course of the game. This thesis studies the incidence of the CM in the Spanish A.C.B. basketball league, and to examine it in depth two investigations were carried out, one quantitative and one qualitative, whose methodology is as follows. The quantitative investigation is based on the performance analysis approach: four A.C.B. seasons (2007/08 to 2010/11) were studied and, as reflected in the literature consulted, the critical moments of play were taken to be the last five minutes of games in which the point difference was six points, together with all overtime periods played, so that 197 critical moments were studied. The study was contextualised using the situational variables “game location” (home or away), “team quality” (higher or lower ranked) and “competition” (regular season (LR) and playoff phases). To interpret the results, the following analyses were performed: (1) discriminant analysis, (2) multiple linear regression, and (3) a multivariate general linear model.
The qualitative investigation is based on the semi-structured interview technique. Twelve coaches working in the A.C.B. League during the 2011/12 season were interviewed, with the aim of understanding the coach's point of view on the CM concept so as to provide a more practical perspective, grounded in their knowledge and experience, on how to act when a CM arises in basketball. The results of both investigations agree on the importance of the CM for the final outcome of the game. The concept itself is highly complex, so both the scientific observation of the game and the coach's subjective perception of the phenomenon are considered essential, and for the latter the psychological aspects of the protagonists (players and coaches) are decisive.
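As a rough illustration of the quantitative design described above, the sketch below filters game segments down to a critical-moment definition of this kind and fits a discriminant analysis on game-related statistics. The file name, column names and statistics list are hypothetical placeholders, not the thesis data, and the six-point filter is one assumed operationalisation of the definition.

```python
# Minimal sketch (not the thesis code): selecting "critical moments" and running a
# discriminant analysis on game-related statistics from a hypothetical segments.csv.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

segments = pd.read_csv("segments.csv")  # hypothetical input: one row per game segment

# Critical moments: last five minutes within six points (assumed cut-off), plus all overtimes.
critical = segments[
    ((segments["phase"] == "last5min") & (segments["point_margin"].abs() <= 6))
    | (segments["phase"] == "overtime")
]

features = ["fg2_pct", "fg3_pct", "ft_pct", "rebounds", "assists", "turnovers"]  # assumed stats
X, y = critical[features], critical["won_game"]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("Discriminant function coefficients:", lda.coef_)
```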
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) through retrofitting the diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT) based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
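As a rough illustration of the regression step described above, the sketch below regresses item difficulty on Q-matrix attribute columns and reports the share of variance explained. The Q-matrix and difficulty values are randomly generated placeholders, not TIMSS data, and the attribute count is an assumption.

```python
# Minimal sketch (illustrative only): how much item-difficulty variance do Q-matrix
# attribute columns explain under a multiple linear regression?
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_items, n_attributes = 49, 7                           # 49 items as in the booklets; attribute count assumed
Q = rng.integers(0, 2, size=(n_items, n_attributes))    # hypothetical item-by-attribute Q-matrix
difficulty = rng.normal(size=n_items)                   # placeholder item difficulties

model = LinearRegression().fit(Q, difficulty)
r2 = model.score(Q, difficulty)                         # proportion of difficulty variance explained
print(f"R^2 = {r2:.2f}")                                # the study reports 64%-81% for its specified Q-matrices
```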
Abstract:
Cover title.
Abstract:
"Report no.: FHWA/IL/RC-008."--Documentation page.
Abstract:
This paper complements the preceding one by Clarke et al, which looked at the long-term impact of retail restructuring on consumer choice at the local level. Whereas the previous paper was based on quantitative evidence from survey research, this paper draws on the qualitative phases of the same three-year study, and in it we aim to understand how the changing forms of retail provision are experienced at the neighbourhood and household level. The empirical material is drawn from focus groups, accompanied shopping trips, diaries, interviews, and kitchen visits with eight households in two contrasting neighbourhoods in the Portsmouth area. The data demonstrate that consumer choice involves judgments of taste, quality, and value as well as more ‘objective’ questions of convenience, price, and accessibility. These judgments are related to households’ differential levels of cultural capital and involve ethical and moral considerations as well as more mundane considerations of practical utility. Our evidence suggests that many of the terms that are conventionally advanced as explanations of consumer choice (such as ‘convenience’, ‘value’, and ‘habit’) have very different meanings according to different household circumstances. To understand these meanings requires us to relate consumers’ at-store behaviour to the domestic context in which their consumption choices are embedded. Bringing theories of practice to bear on the nature of consumer choice, our research demonstrates that consumer choice between stores can be understood in terms of accessibility and convenience, whereas choice within stores involves notions of value, price, and quality. We also demonstrate that choice between and within stores is strongly mediated by consumers’ household contexts, reflecting the extent to which shopping practices are embedded within consumers’ domestic routines and complex everyday lives. The paper concludes with a summary of the overall findings of the project, and with a discussion of the practical and theoretical implications of the study.
Abstract:
Over the last two decades fundamental changes have taken place in the global supply and local structure of provision of British food retailing. Consumer lifestyles have also changed markedly. Despite some important studies of local interactions between new retail developments and consumers, we argue in this paper that there is a critical need to gauge the cumulative effects of these changes on consumer behaviour over longer periods. In this, the first of two papers, we present the main findings of a study of the effects of long-term retail change on consumers at the local level. We provide in this paper an overview of the changing geography of retail provision and patterns of consumption at the local level. We contextualise the Portsmouth study area as a locality that typifies national changes in retail provision and consumer lifestyles; outline the main findings of two large-scale surveys of food shopping behaviour carried out in 1980 and 2002; and reveal the impacts of retail restructuring on consumer behaviour. We focus in particular on choice between stores at the local level and end by problematising our understanding of how consumers experience choice, emphasising the need for qualitative research. This issue is then dealt with in our complementary second paper, which explores choice within stores and how this relates to the broader spatial context.
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models. For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure manageable fault-tolerant open distributed agile Total Quality Managed ISO 9000+ conformant Just in Time manufacturing systems.
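The local linear approximation described above can be illustrated with a toy example: interpolate between two known joint solutions and apply one Jacobian correction toward the target pose. The sketch below uses a planar 3-link arm as an assumed stand-in for the 7-dof manipulators, and a plain Jacobian pseudoinverse rather than the augmented Jacobian of the thesis.

```python
# Minimal numerical sketch (assumptions, not the thesis implementation): approximating an
# inverse-kinematics solution by linear interpolation between two known joint solutions,
# followed by one Jacobian-pseudoinverse correction step toward the target position.
import numpy as np

def forward_kinematics(q):
    # Hypothetical planar 3-link arm with unit link lengths.
    a = np.cumsum(q)
    return np.array([np.cos(a).sum(), np.sin(a).sum()])

def jacobian(q, eps=1e-6):
    # Finite-difference Jacobian of the forward map.
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (forward_kinematics(q + dq) - forward_kinematics(q - dq)) / (2 * eps)
    return J

# Known joint solutions at two nearby Cartesian points (assumed available).
q0, q1 = np.array([0.1, 0.4, 0.2]), np.array([0.2, 0.5, 0.1])
x_target = 0.5 * (forward_kinematics(q0) + forward_kinematics(q1)) + np.array([0.01, -0.02])

q_lin = 0.5 * (q0 + q1)                                   # linear interpolation of joint solutions
err = x_target - forward_kinematics(q_lin)                # Cartesian residual of the interpolant
q_corr = q_lin + np.linalg.pinv(jacobian(q_lin)) @ err    # Jacobian (pseudoinverse) correction
print("residual before/after:", np.linalg.norm(err),
      np.linalg.norm(x_target - forward_kinematics(q_corr)))
```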
Abstract:
High velocity oxyfuel (HVOF) thermal spraying is one of the most significant developments in the thermal spray industry since the development of the original plasma spray technique. The first investigation deals with the combustion and discrete particle models within the general-purpose commercial CFD code FLUENT, used to solve the combustion of kerosene and to couple the motion of fuel droplets with the gas flow dynamics in a Lagrangian fashion. The effects of liquid fuel droplets on the thermodynamics of the combusting gas flow are examined thoroughly, showing that the combustion process of kerosene is independent of the initial fuel droplet sizes. The second analysis deals with the full water-cooling numerical model, which can assist in thermal performance optimisation or in determining the best method for heat removal without the cost of building physical prototypes. The numerical results indicate that the water flow rate and direction have a noticeable influence on the cooling efficiency but no noticeable effect on the gas flow dynamics within the thermal spraying gun. The third investigation deals with the development and implementation of discrete phase particle models. The results indicate that most powder particles are not melted upon hitting the substrate to be coated. The oxidation model confirms that HVOF guns can produce metallic coatings with low oxidation within the typical stand-off distance of about 30 cm. Physical properties such as porosity, microstructure, surface roughness and adhesion strength of coatings produced by droplet deposition in a thermal spray process are determined to a large extent by the dynamics of deformation and solidification of the particles impinging on the substrate. Therefore, one of the objectives of this study is to present a complete numerical model of droplet impact and solidification. The modelling results show that solidification of droplets is significantly affected by the thermal contact resistance/substrate surface roughness.
Abstract:
The “food deserts” debate can be enriched by setting the particular circumstances of food deserts – areas of very limited consumer choice – within a wider context of changing retail provision in other areas. This paper’s combined focus on retail competition and consumer choice shifts the emphasis from changing patterns of retail provision towards a more qualitative understanding of how “choice” is actually experienced by consumers at the local level “on the ground”. This argument has critical implications for current policy debates where the emphasis on monopolies and mergers at the national level needs to be brought together with the planning and regulation of retail provision at the local, neighbourhood level.
Abstract:
Background: Stereotypically perceived to be an ‘all male’ occupation, engineering has for many years failed to attract high numbers of young women [1,2]. The reasons for this are varied, but tend to focus on misconceptions of the profession as being more suitable for men. In seeking to investigate this issue a participatory research approach was adopted [3] in which two 17-year-old female high school students interviewed twenty high school girls. Questions focused on the girls’ perceptions of engineering as a study and career choice. The findings were recorded and analysed using qualitative techniques. The study identified three distinctive ‘influences’ as being pivotal to girls’ perceptions of engineering: pedagogical, social, and familial. Pedagogical Influences: Pedagogical influences tended to focus on science and maths. In discussing science, the majority of the girls identified biology and chemistry as more ‘realistic’, whilst physics was perceived to be more suitable for boys. The personality of the teacher, and how a particular subject is taught, proved to be important influences shaping opinions. Social Influences: Societal influences were reflected in the girls’ career choices, with the majority considering medical or social science related careers. Although all of the girls believed engineering to be ‘male dominated’, none believed that a woman should not be an engineer. Familial Influences: Parental influence was identified as key to career and study choice; only two of the girls had discussed engineering with their parents, of whom only one was being actively encouraged to pursue a career in engineering. Discussion: The study found that one of the most significant barriers to engineering is a lack of awareness. Engineering did not register in the girls’ lives; it was not taught in school, and only one had met a female engineer. Building on the study findings, the discussion considers how engineering could be made more attractive to young women. Whilst misconceptions about what an engineer is need to be addressed, other more fundamental pedagogical barriers, such as the need to make physics more attractive to girls and the need to develop the curriculum so as to meet the learning needs of 21st Century students, are also discussed. By drawing attention to the issues around gender and the barriers to engineering, this paper contributes to current debates in this area – in doing so it provides food for thought about policy and practice in engineering and engineering education.
Abstract:
Federal transportation legislation in effect since 1991 was examined to determine outcomes in two areas: (1) The effect of organizational and fiscal structures on the implementation of multimodal transportation infrastructure, and (2) The effect of multimodal transportation infrastructure on sustainability. Triangulation of methods was employed through qualitative analysis (including key informant interviews, focus groups and case studies), as well as quantitative analysis (including one-sample t-tests, regression analysis and factor analysis). Four hypotheses were directly tested: (1) Regions with consolidated government structures will build more multimodal transportation miles: The results of the qualitative analysis do not lend support while the results of the quantitative findings support this hypothesis, possibly due to differences in the definitions of agencies/jurisdictions between the two methods. (2) Regions in which more locally dedicated or flexed funding is applied to the transportation system will build a greater number of multimodal transportation miles: Both quantitative and qualitative research clearly support this hypothesis. (3) Cooperation and coordination, or, conversely, competition will determine the number of multimodal transportation miles: Participants tended to agree that cooperation, coordination and leadership are imperative to achieving transportation goals and objectives, including targeted multimodal miles, but also stressed the importance of political and financial elements in determining what ultimately will be funded and implemented. (4) The modal outcomes of transportation systems will affect the overall health of a region in terms of sustainability/quality of life indicators: Both the qualitative and the quantitative analyses provide evidence that they do. This study finds that federal legislation has had an effect on the modal outcomes of transportation infrastructure and that there are links between these modal outcomes and the sustainability of a region. It is recommended that agencies further consider consolidation and strengthen cooperation efforts and that fiscal regulations are modified to reflect the problems cited in qualitative analysis. Limitations of this legislation especially include the inability to measure sustainability; several measures are recommended.
Abstract:
We conducted a series of experiments whereby dissolved organic matter (DOM) was leached from various wetland and estuarine plants, namely sawgrass (Cladium jamaicense), spikerush (Eleocharis cellulosa), red mangrove (Rhizophora mangle), cattail (Typha domingensis), periphyton (dry and wet mat), and a seagrass (turtle grass; Thalassia testudinum). All are abundant in the Florida Coastal Everglades (FCE) except for cattail, but this species has a potential to proliferate in this environment. Senescent plant samples were immersed in ultrapure water with and without addition of 0.1% NaN3 (w/ and w/o NaN3, respectively) for 36 days. We replaced the water every 3 days. The amounts of dissolved organic carbon (DOC), sugars, and phenols in the leachates were analyzed. The contribution of plant leachates to the ultrafiltered high molecular weight fraction of DOM (>1 kDa; UDOM) in natural waters in the FCE was also investigated. UDOM in plant leachates was obtained by tangential flow ultrafiltration and its carbon and phenolic compound compositions were analyzed using solid state 13C cross-polarization magic angle spinning nuclear magnetic resonance (13C CPMAS NMR) spectroscopy and thermochemolysis in the presence of tetramethylammonium hydroxide (TMAH thermochemolysis), respectively. The maximum yield of DOC leached from plants over the 36-day incubations ranged from 13.0 to 55.2 g C kg−1 dry weight. This amount was lower in w/o NaN3 treatments (more DOC was consumed by microbes than produced) except for periphyton. During the first 2 weeks of the 5-week incubation period, 60–85% of the total amount of DOC was leached, and exponential decay models fit the leaching rates except for periphyton w/o NaN3. Leached DOC (w/ NaN3) contained different concentrations of sugars and phenols depending on the plant types (1.09–7.22 and 0.38–12.4 g C kg−1 dry weight, respectively), and those biomolecules comprised 8–34% and 4–28% of the total DOC, respectively. This result shows that polyphenols that readily leach from senescent plants can be an important source of chromophoric DOM (CDOM) in wetland environments. The O-alkyl C was found to be the major C form (55±9%) of UDOM in plant leachates as determined by 13C CPMAS NMR. The relative abundance of alkyl C and carbonyl C was consistently lower in plant-leached UDOM than that in natural water UDOM in the FCE, which suggests that these constituents increase in relative abundance during diagenetic processing. TMAH thermochemolysis analysis revealed that the phenolic composition was different among the UDOM leached from different plants, and was expected to serve as a source indicator of UDOM in natural water. Polyphenols are, however, very reactive and photosensitive in aquatic environments, and thus may quickly lose their plant-specific molecular characteristics. Our study suggests that variations in vegetative cover across a wetland landscape will affect the quantity and quality of DOM leached into the water, and such differences in DOM characteristics may affect other biogeochemical processes.
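As an illustration of the exponential decay modelling mentioned above, the sketch below fits a decaying leaching-rate curve to made-up DOC values sampled at the study's 3-day water-replacement intervals; the numbers and functional form with a residual term are placeholders, not the reported data.

```python
# Minimal sketch (illustrative only): fitting an exponential decay model to DOC leaching
# rates over 3-day sampling intervals of a 36-day incubation.
import numpy as np
from scipy.optimize import curve_fit

days = np.arange(3, 37, 3)  # water replaced every 3 days over 36 days
doc_rate = np.array([12.0, 8.1, 5.6, 3.9, 2.8, 2.0, 1.5, 1.2, 1.0, 0.9, 0.8, 0.8])  # hypothetical g C kg^-1 per interval

def exp_decay(t, a, k, c):
    # Leaching rate declining exponentially toward a small residual release c.
    return a * np.exp(-k * t) + c

params, _ = curve_fit(exp_decay, days, doc_rate, p0=(10.0, 0.1, 0.5))
a, k, c = params
print(f"initial rate ~{a:.1f}, decay constant ~{k:.2f} per day, residual ~{c:.1f}")
```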
Abstract:
Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concerns about the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by applying 3 molecular descriptors, namely Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), Number of Chlorine (NCl) and Number of Carbon (NC), in Multiple Linear Regression (MLR) analysis. The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The Log P values of DBPs predicted by the QSAR models were found to be significant, with R2 values from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and to determine the most influential parameters in connection with Log P prediction. The QSAR models developed in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkanes, halogenated alkenes, halogenated aromatics, halogenated aldehydes, halogenated ketones, and halogenated carboxylic acids, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable for predicting similar DBP compounds within the same applicability domain. The selection and integration of the various methodologies developed in this research may also benefit future research in similar fields.
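A minimal sketch of the MLR-plus-LOO workflow described above is given below, assuming a hypothetical dbp_descriptors.csv with columns ELUMO, NCl, NC and logP; it illustrates the general technique, not the dissertation's code or data.

```python
# Minimal sketch (assumed column names): an MLR model of Log P from ELUMO, NCl and NC,
# validated with leave-one-out (LOO) cross-validation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

dbps = pd.read_csv("dbp_descriptors.csv")        # hypothetical input: one row per DBP compound
X = dbps[["ELUMO", "NCl", "NC"]].to_numpy()      # the three descriptors named in the abstract
y = dbps["logP"].to_numpy()

mlr = LinearRegression().fit(X, y)
print("goodness-of-fit R^2:", mlr.score(X, y))

# LOO-predicted values, then Q^2 (predictive R^2) computed over all held-out predictions.
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print("LOO Q^2:", q2)
```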
Abstract:
Quantitative Structure-Activity Relationship (QSAR) analysis has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are usually correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; (3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and number of carbon atoms (NC) as well as the energy of the highest occupied molecular orbital (EHOMO) are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs to be studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out Cross-Validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models is contributed mostly by NCl for all DBP classes.
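As an illustration of the Taylor and Monte Carlo uncertainty quantification mentioned above, the sketch below propagates an assumed descriptor uncertainty through a hypothetical fitted linear QSAR model both ways and compares the resulting standard deviations; the coefficients, descriptor values and uncertainties are invented for the example.

```python
# Minimal sketch (illustrative assumptions only): first-order Taylor propagation of descriptor
# uncertainty through a linear QSAR model y = b0 + b1*ELUMO + b2*NCl + b3*NC, compared with
# a Monte Carlo estimate.
import numpy as np

b = np.array([0.5, -1.2, 0.8, 0.3])   # hypothetical fitted coefficients (b0, b1, b2, b3)
x = np.array([-0.9, 2.0, 3.0])        # descriptor values for one compound (ELUMO, NCl, NC)
sx = np.array([0.05, 0.0, 0.0])       # assumed descriptor standard deviations

# Taylor (first-order): for a linear model the sensitivities are just the coefficients.
var_taylor = np.sum((b[1:] * sx) ** 2)

# Monte Carlo: sample descriptors, push them through the model, take the output spread.
rng = np.random.default_rng(1)
samples = x + rng.normal(scale=sx, size=(100_000, 3))
y_mc = b[0] + samples @ b[1:]
print("Taylor sd:", np.sqrt(var_taylor), "Monte Carlo sd:", y_mc.std())
```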