982 results for Motor unit


Relevance: 20.00%

Abstract:

Background: In order to design appropriate environments for performance and learning of movement skills, physical educators need a sound theoretical model of the learner and of processes of learning. In physical education, this type of modelling informs the organization of learning environments and effective and efficient use of practice time. An emerging theoretical framework in motor learning, relevant to physical education, advocates a constraints-led perspective for acquisition of movement skills and game play knowledge. This framework shows how physical educators could use task, performer and environmental constraints to channel acquisition of movement skills and decision making behaviours in learners. From this viewpoint, learners generate specific movement solutions to satisfy the unique combination of constraints imposed on them, a process which can be harnessed during physical education lessons. Purpose: In this paper the aim is to provide an overview of the motor learning approach emanating from the constraints-led perspective, and examine how it can substantiate a platform for a new pedagogical framework in physical education: nonlinear pedagogy. We aim to demonstrate that it is only through theoretically valid and objective empirical work of an applied nature that a conceptually sound nonlinear pedagogy model can continue to evolve and support research in physical education. We present some important implications for designing practices in games lessons, showing how a constraints-led perspective on motor learning could assist physical educators in understanding how to structure learning experiences for learners at different stages, with specific focus on understanding the design of games teaching programmes in physical education, using exemplars from Rugby Union and Cricket. 
Findings: Research evidence from recent studies examining movement models demonstrates that physical education teachers need a strong understanding of sport performance so that task constraints can be manipulated in ways that maintain information-movement couplings in a learning environment representative of real performance situations. Physical educators should also understand that movement variability may not necessarily be detrimental to learning and could be an important phenomenon prior to the acquisition of a stable and functional movement pattern. We highlight how the nonlinear pedagogical approach is student-centred and empowers individuals to become active learners via a more hands-off approach to learning. Summary: A constraints-based perspective has the potential to provide physical educators with a framework for understanding how performer, task and environmental constraints shape each individual's physical education. Understanding the underlying neurobiological processes present in a constraints-led perspective on skill acquisition and game play can raise physical educators' awareness that teaching is a dynamic 'art' interwoven with the 'science' of motor learning theories.

Relevance: 20.00%

Abstract:

Research examining post-trauma pathology indicates that negative outcomes can differ as a function of the type of trauma experienced. No comparable research has yet been published on positive post-trauma changes. Ninety-four survivors of trauma, forming three groups, completed the Posttraumatic Growth Inventory (PTGI) and the Impact of Event Scale-Revised (IES-R). Groups comprised survivors of (i) sexual abuse, (ii) motor vehicle accidents and (iii) bereavement. Results indicated differences in growth between the groups, with the bereaved reporting higher levels of growth than the other survivors, while sexual abuse survivors demonstrated higher levels of PTSD symptoms than the other groups. However, this did not preclude sexual abuse survivors from also reporting moderate levels of growth. Results are discussed in relation to fostering growth through clinical practice.
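Between-group comparisons of this kind (growth and symptom scores across three trauma groups) are commonly tested with a one-way ANOVA; the sketch below is a generic illustration with made-up scores, not the study's actual analysis:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For example, `one_way_anova_f([[1, 2, 3], [2, 3, 4]])` returns 1.5; the resulting F is referred to an F(k-1, n-k) distribution to judge significance.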

Relevance: 20.00%

Abstract:

Aims: To describe a local data linkage project to match hospital data with the Australian Institute of Health and Welfare (AIHW) National Death Index (NDI) to assess long-term outcomes of intensive care unit patients. Methods: Data were obtained from hospital intensive care and cardiac surgery databases on all patients aged 18 years and over admitted to either of two intensive care units at a tertiary-referral hospital between 1 January 1994 and 31 December 2005. Date of death was obtained from the AIHW NDI by probabilistic software matching, in addition to manual checking through hospital databases and other sources. Survival was calculated from time of ICU admission, with a censoring date of 14 February 2007. Data for patients with multiple hospital admissions requiring intensive care were analysed only from the first admission. Summary and descriptive statistics were used for preliminary data analysis. Kaplan-Meier survival analysis was used to analyse factors determining long-term survival. Results: During the study period, 21 415 unique patients had 22 552 hospital admissions that included an ICU admission; 19 058 surgical procedures were performed, with a total of 20 092 ICU admissions. There were 4936 deaths. Median follow-up was 6.2 years, totalling 134 203 patient-years. The casemix was predominantly cardiac surgery (80%), followed by cardiac medical (6%), and other medical (4%). The unadjusted survival at 1, 5 and 10 years was 97%, 84% and 70%, respectively. The 1-year survival ranged from 97% for cardiac surgery to 36% for cardiac arrest. An APACHE II score was available for 16 877 patients. In those discharged alive from hospital, the 1-, 5- and 10-year survival varied with discharge location. Conclusions: ICU-based linkage projects are feasible for determining long-term outcomes of ICU patients.
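The Kaplan-Meier analysis used above can be illustrated with a minimal product-limit estimator; this is a generic sketch of the method, not the study's software:

```python
def kaplan_meier(observations):
    """Product-limit survival estimate.
    observations: (time, event) pairs; event 1 = death, 0 = censored."""
    data = sorted(observations)
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = total = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            total += 1
            deaths += data[i][1]
            i += 1
        if deaths:                                 # curve steps only at event times
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= total                           # censored cases leave the risk set
    return curve
```

For observations [(1,1), (2,1), (3,0), (4,1), (5,0)] the curve is [(1, 0.8), (2, 0.6), (4, 0.3)]: censored times shrink the risk set without stepping the curve.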

Relevance: 20.00%

Abstract:

Principal Topic: There is increasing recognition that the organizational configurations of corporate venture units should depend on the types of ventures the unit seeks to develop (Burgelman, 1984; Hill and Birkinshaw, 2008). Distinctions have been made between internal and external as well as exploitative versus explorative ventures (Hill and Birkinshaw, 2008; Narayan et al., 2009; Schildt et al., 2005). Assuming that firms do not want to limit themselves to a single type of venture, but rather employ a portfolio of ventures, the logical consequence is that firms should employ multiple corporate venture units, each tailor-made for the type of venture it seeks to develop. Surprisingly, the literature has paid limited attention to the challenges of managing multiple corporate venture units in a single firm. Maintaining multiple venture units within one firm provides easier access to funding for new ideas (Hamel, 1999). It allows for freedom and flexibility to tie the organizational systems (Rice et al., 2000), autonomy (Hill and Rothaermel, 2003), and involvement of management (Day, 1994; Wadwha and Kotha, 2006) to the requirements of the individual ventures. Yet the strategic objectives of a venture may change when uncertainty around the venture is resolved (Burgelman, 1984). For example, firms may decide to spin in external ventures (Chesbrough, 2002) or spin out ventures that prove strategically unimportant (Burgelman, 1984). This suggests that ventures might need to be transferred between venture units, e.g. from a more internally driven corporate venture division to a corporate venture capital unit. Several studies have suggested that ventures require different managerial skills across their phases of development (Desouza et al., 2007; O'Connor and Ayers, 2005; Kazanjian and Drazin, 1990; Westerman et al., 2006). 
To facilitate effective transfer between venture units and manage the overall venturing process, it is important that firms set up and manage integrative linkages. Integrative linkages provide synergies and coordination between differentiated units (Lawrence and Lorsch, 1967). Prior findings pointed to the important role of senior management (Westerman et al., 2006; Gilbert, 2006) and of a shared organizational vision (Burgers et al., 2009) in coordinating venture units with mainstream businesses. We draw on these literatures to investigate the key question of how to integratively manage multiple venture units. ---------- Methodology/Key Propositions: To answer the research question, we employ a case study approach that provides unique insights into how firms can break up their venturing process. We selected three Fortune 500 companies that employ multiple venturing units, IBM, Royal Dutch/Shell and Nokia, and investigated and compared their approaches. It was important that the case companies differed somewhat in the types of venture units they employed as well as in the way they integrate and coordinate their venture units. The data are based on extensive interviews and a variety of internal and external company documents to triangulate our findings (Eisenhardt, 1989). The key proposition of the article is that firms can best manage their multiple venture units through an ambidextrous design of loosely coupled units. This provides venture units with sufficient flexibility to employ the organizational configurations that best support the type of venture they seek to develop, as well as sufficient integration to facilitate smooth transfer of ventures between venture units. Based on the case findings, we develop a generic framework for a new way of managing the venturing process through multiple corporate venture units. 
---------- Results and Implications: One of our main findings is that these firms tend to organize their venture units according to phases in the venture development process. That is, they tend to have venture units aimed at incubation of venture ideas as well as units aimed more at the commercialization of ventures into a new business unit for the firm or a start-up. The companies in our case studies tended to coordinate venture units through integrative management skills or a coordinative venture unit that spanned multiple phases. We believe this paper makes two significant contributions. First, we extend prior venturing literature by addressing how firms manage a portfolio of venture units, each achieving different strategic objectives. Second, our framework provides recommendations on how firms should manage such an approach towards venturing. This helps to increase the likelihood of success of their venturing programs.

Relevance: 20.00%

Abstract:

Several brain imaging studies have assumed that response conflict is present in Stroop tasks. However, this has not been demonstrated directly. We examined the time-course of stimulus and response conflict resolution in a numerical Stroop task by combining single-trial electromyography (EMG) and event-related brain potentials (ERPs). EMG enabled the direct tracking of response conflict, and the peak latency of the P300 ERP wave was used to index stimulus conflict. In correctly answered trials of the incongruent condition, EMG detected robust incorrect-response-hand activation, which appeared consistently in single trials. In 50–80% of the trials correct and incorrect response hand activation coincided temporally, while in 20–50% of the trials incorrect hand activation preceded correct hand activation. The EMG data provide robust direct evidence for response conflict. However, congruency effects also appeared in the peak latency of the P300 wave, which suggests that stimulus conflict also played a role in the Stroop paradigm. Findings are explained by the continuous flow model of information processing: partially processed task-irrelevant stimulus information can result in stimulus conflict and can prepare incorrect response activity. A robust congruency effect appeared in the amplitude of incongruent vs. congruent ERPs between 330 and 400 ms; this effect may be related to the activity of the anterior cingulate cortex.
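Single-trial detection of response-hand activation in EMG is often done with a baseline-threshold rule; the sketch below (threshold = mean + k·SD of the rectified baseline) is an assumed generic method, not the authors' pipeline:

```python
def detect_onset(signal, baseline_end, k=3.0):
    """Index of the first sample whose rectified amplitude exceeds
    baseline mean + k * SD; None if no crossing is found."""
    baseline = [abs(x) for x in signal[:baseline_end]]
    mean = sum(baseline) / len(baseline)
    sd = (sum((x - mean) ** 2 for x in baseline) / len(baseline)) ** 0.5
    threshold = mean + k * sd
    for i in range(baseline_end, len(signal)):
        if abs(signal[i]) > threshold:
            return i
    return None
```

Comparing the onset indices of the correct-hand and incorrect-hand channels then classifies a trial as temporally coincident activation or incorrect-hand-first activation.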

Relevance: 20.00%

Abstract:

This paper investigates the control of an HVDC link, fed from an AC source through a controlled rectifier and feeding an AC line through a controlled inverter. The overall objective is to maintain the maximum possible link voltage at the inverter while regulating the link current. In this paper the practical feedback design issues are investigated with a view to obtaining simple, robust designs that are easy to evaluate for safety and operability. The investigations are applicable to back-to-back links used for frequency decoupling and to long DC lines. The design issues discussed include: (i) a review of overall system dynamics to establish the time scale of different feedback loops and to highlight feedback design issues; (ii) the concept of using the inverter firing angle control to regulate link current when the rectifier firing angle controller saturates; and (iii) the design issues for the individual controllers, including robust design for varying line conditions and the trade-off between controller complexity and the reduction of nonlinearity and disturbance effects.
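The takeover scheme in (ii), where inverter firing-angle control regulates link current once the rectifier controller saturates, can be sketched with two limited PI controllers; the structure below is an illustrative simplification (the gains, limits and the inverter's constant-ceiling mode are assumptions, not the paper's design):

```python
class PIController:
    """PI controller with output limits and simple anti-windup:
    the integrator is frozen while the output is saturated."""
    def __init__(self, kp, ki, u_min, u_max):
        self.kp, self.ki = kp, ki
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.saturated = False

    def step(self, error, dt):
        u = self.kp * error + self.integral
        if u >= self.u_max:
            u, self.saturated = self.u_max, True
        elif u <= self.u_min:
            u, self.saturated = self.u_min, True
        else:
            self.saturated = False
            self.integral += self.ki * error * dt   # integrate only when unsaturated
        return u

def link_current_step(rectifier, inverter, i_ref, i_meas, dt):
    """One control step: the rectifier normally regulates link current;
    the inverter takes over current regulation when the rectifier saturates,
    otherwise it holds its ceiling (maximum link voltage)."""
    error = i_ref - i_meas
    alpha = rectifier.step(error, dt)
    if rectifier.saturated:
        gamma = inverter.step(error, dt)   # inverter picks up current control
    else:
        gamma = inverter.u_max             # inverter idles at maximum voltage
    return alpha, gamma
```

Freezing the integrator on saturation avoids windup during the handover, which is one of the practical design issues the paper raises.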

Relevance: 20.00%

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts: variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major- and minor-road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. 
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
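The varying-dispersion negative binomial (NB2) specification at the heart of this comparison can be written out directly; the sketch below, a per-observation log-likelihood with a log-linear dispersion function, is illustrative only and is not the paper's MCMC implementation:

```python
import math

def nb2_loglik(y, mu, alpha):
    """Log-likelihood of one NB2 observation with mean mu and dispersion
    alpha, so Var(y) = mu + alpha * mu**2; as alpha -> 0 it approaches Poisson."""
    r = 1.0 / alpha
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))

def dispersion(x, gamma):
    """Covariate-dependent dispersion, alpha_i = exp(x_i . gamma) -- the kind
    of dispersion function explored as an alternative to a fixed alpha."""
    return math.exp(sum(g * xi for g, xi in zip(gamma, x)))
```

With `gamma` set to all zeros the dispersion is constant and the specification collapses back to the usual fixed-dispersion model.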

Relevance: 20.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, or providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. 
A simulation experiment is then conducted to demonstrate how the crash process gives rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
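The simulation logic described above, independent Bernoulli trials with unequal probabilities ("Poisson trials") under low exposure, can be sketched as follows; the site count, trial count and probability range are illustrative choices, not the paper's parameters:

```python
import random

def simulate_crash_counts(n_sites, n_trials, p_range, seed=0):
    """Crash count per site = sum of independent Bernoulli trials
    (vehicle passages) with unequal probabilities: 'Poisson trials'."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        probs = [rng.uniform(*p_range) for _ in range(n_trials)]
        counts.append(sum(rng.random() < p for p in probs))
    return counts

# Low exposure: 50 passages per site with tiny crash probabilities.
counts = simulate_crash_counts(n_sites=1000, n_trials=50, p_range=(0.0, 0.002))
zero_share = counts.count(0) / len(counts)
```

With an expected count of roughly 0.05 per site, around 95% of sites record zero crashes, even though no "perfectly safe" state exists anywhere in the generating process.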

Relevance: 20.00%

Abstract:

Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, which are suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10, respectively. A sixth model, for total particle mass, was proposed but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. 
This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or which have insufficient measurement data upon which to derive emission factors of their own. Recommendations and perspectives In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for traffic modelling and health impact assessments.
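As a generic illustration of how published factors of differing precision can be combined (the study itself selected factors from regression-model outputs, so this pooling rule is an assumption, not its method), an inverse-variance weighted mean favours exactly the low-standard-error estimates described above:

```python
def pooled_emission_factor(factors):
    """Inverse-variance weighted mean of (estimate, standard_error) pairs;
    returns the pooled estimate and its standard error."""
    weights = [1.0 / se ** 2 for _, se in factors]
    total = sum(weights)
    mean = sum(w * ef for (ef, _), w in zip(factors, weights)) / total
    return mean, (1.0 / total) ** 0.5
```

For example, `pooled_emission_factor([(10.0, 1.0), (20.0, 2.0)])` gives (12.0, ~0.894): the tighter estimate dominates the pooled value.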

Relevance: 20.00%

Abstract:

Measurements in the exhaust plume of a petrol-driven motor car showed that molecular cluster ions of both signs were present in approximately equal amounts. The emission rate increased sharply with engine speed while the charge symmetry remained unchanged. Measurements at the kerbside of nine motorways and five city roads showed that the mean total cluster ion concentration near city roads (603 cm-3) was about one-half of that near motorways (1211 cm-3) and about twice as high as that in the urban background (269 cm-3). Both positive and negative ion concentrations near a motorway showed a significant linear increase with traffic density (R2=0.3 at p<0.05) and correlated well with each other in real time (R2=0.87 at p<0.01). Heavy-duty diesel vehicles comprised the main source of ions near busy roads. Measurements were conducted as a function of downwind distance from two motorways carrying around 120-150 vehicles per minute. Total traffic-related cluster ion concentrations decreased rapidly with distance, falling by one-half from the closest approach of 2 m to 5 m from the kerb. Measured concentrations decreased to background levels at about 15 m from the kerb when the wind speed was 1.3 m s-1, this distance being greater at higher wind speeds. The number and net charge concentrations of aerosol particles were also measured. Unlike particles, which were carried downwind to distances of a few hundred metres, cluster ions emitted by motor vehicles were not present at more than a few tens of metres from the road.
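The reported fall-off, halving between 2 m and 5 m and reaching background near 15 m, is consistent with a decaying exponential over a constant background; the log-linear fit below is an illustrative reconstruction, not the authors' analysis:

```python
import math

def fit_decay(distances, concentrations, background):
    """Fit C(d) = background + A * exp(-k * d) by ordinary least squares
    on log(C - background); returns (A, k)."""
    ys = [math.log(c - background) for c in concentrations]
    n = len(distances)
    xbar = sum(distances) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in distances)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(distances, ys))
    slope = sxy / sxx
    return math.exp(ybar - slope * xbar), -slope
```

A halving distance of 3 m, as reported between 2 m and 5 m, corresponds to k = ln 2 / 3, roughly 0.23 per metre.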

Relevance: 20.00%

Abstract:

Background/objectives The provision of the patient bed-bath is a fundamental nursing care activity, yet few quantitative data and no qualitative data are available on registered nurses' (RNs) clinical practice in this domain in the intensive care unit (ICU). The aim of this study was to describe ICU RNs' current practice with respect to the timing, frequency and duration of the patient bed-bath and the cleansing and emollient agents used. Methods The study utilised a two-phase sequential explanatory mixed-method design. Phase one used a questionnaire to survey RNs and phase two employed semi-structured focus group (FG) interviews with RNs. Data were collected over 28 days across four Australian metropolitan ICUs. Ethical approval was granted by the relevant hospital and university human research ethics committees. RNs were asked to complete a questionnaire following each episode of care (i.e. bed-bath) and then to attend one of three FG interviews: RNs with less than 2 years ICU experience; RNs with 2–5 years ICU experience; and RNs with greater than 5 years ICU experience. Results During the 28-day study period the four ICUs had 77.25 beds open. In phase one a total of 539 questionnaires were returned, representing 30.5% of episodes of patient bed-baths (based on 1767 bed occupancies and one bed-bath per patient per day). In 349 bed-bath episodes, 54.7% of patients were mechanically ventilated. The bed-bath was given between 02.00 and 06.00 h in 161 episodes (30%), took 15–30 min to complete (n = 195, 36.2%) and was completed within the last 8 h in 304 episodes (56.8%). Cleansing agents used were predominantly pH-balanced soap or liquid soap and water (n = 379, 71%) in comparison to chlorhexidine-impregnated sponges/cloths (n = 86, 16.1%) or other agents such as pre-packaged washcloths (n = 65, 12.2%). In 347 episodes (64.4%) emollients were not applied after the bed-bath. In phase two 12 FGs were conducted (three FGs at each ICU) with a total of 42 RN participants. 
Thematic analysis of FG transcripts across the three levels of RN ICU experience highlighted a transition in patient hygiene practice philosophy, from 'shades of grey: falling in line' for inexperienced clinicians to experienced clinicians' concrete beliefs about patient bed-bath needs. Conclusions This study identified variation in the processes and products used in patient hygiene practices in four ICUs. Further study is required to determine the appropriate timing of patient hygiene activities and the cleansing agents best suited to maintaining skin integrity, in order to improve patient outcomes.

Relevance: 20.00%

Abstract:

Research investigating the transactional approach to the work stressor-employee adjustment relationship has described many negative main effects between perceived stressors in the workplace and employee outcomes. A considerable amount of literature, theoretical and empirical, also describes potential moderators of this relationship. Organizational identification has been established as a significant predictor of employee job-related attitudes. To date, research has neglected investigation of the potential moderating effect of organizational identification in the work stressor-employee adjustment relationship. On the basis of identity, subjective fit and sense of belonging literature it was predicted that higher perceptions of identification at multiple levels of the organization would mitigate the negative effect of work stressors on employee adjustment. It was expected, further, that more proximal, lower order identifications would be more prevalent and potent as buffers of stressors on strain. Predictions were tested with an employee sample from five organizations (N = 267). Hierarchical moderated multiple regression analyses revealed some support for the stress-buffering effects of identification in the prediction of job satisfaction and organizational commitment, particularly for more proximal (i.e., work unit) identification. These positive stress-buffering effects, however, were present for low identifiers in some situations. The present study represents an extension of the application of organizational identity theory by identifying the effects of organizational and workgroup identification on employee outcomes in the nonprofit context. Our findings will contribute to a better understanding of the dynamics in nonprofit organizations and therefore contribute to the development of strategy and interventions to deal with identity-based issues in nonprofits.
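The moderation test above rests on a stressor × identification product term entered after the main effects; forming that term from centred predictors can be sketched as follows (a generic illustration, not the study's code):

```python
def centre(values):
    """Mean-centre a predictor to reduce collinearity with its product term."""
    mean = sum(values) / len(values)
    return [v - mean for v in values]

def interaction_term(stressor, identification):
    """Element-wise product of centred predictors: the moderator term
    tested in the final step of a hierarchical moderated regression."""
    return [s * i for s, i in zip(centre(stressor), centre(identification))]
```

A significant coefficient on this term, over and above the main effects, is what indicates that identification moderates (here, buffers) the stressor-outcome relationship.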