911 results for time-to-rehospitalization
Abstract:
Although it has been argued that LMX is a phenomenon that develops over time, the existing LMX literature is largely cross-sectional in nature. Yet, there is a great need for unraveling how LMX develops over time. To address this issue in the LMX literature, we examine the relationships of LMX with two variables known for changing over time: job performance and justice perceptions. On the basis of current empirical findings, a simulation deductively shows that LMX develops over time, but differently in early stages versus more mature stages. Our findings also indicate that performance and justice trends affect LMX. Implications for LMX theory, and for longitudinal research on LMX, performance, and justice are discussed.
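As a rough illustration of what such a deductive simulation could look like, the sketch below lets LMX quality adjust toward recent performance and justice perceptions, with a faster adjustment rate in the early relationship stage; the update rule, rates, and stage cutoff are assumptions made here for illustration, not the authors' model.

```python
# Toy LMX trajectory: quality adjusts toward a blend of performance and
# justice perceptions, faster during the early (formation) stage.
import random

def simulate_lmx(weeks=40, early_stage=10, seed=1):
    random.seed(seed)
    lmx = 0.5                                  # relationship quality in [0, 1]
    history = []
    for week in range(weeks):
        performance = min(1.0, 0.4 + 0.01 * week + random.gauss(0, 0.05))
        justice     = min(1.0, 0.6 + random.gauss(0, 0.05))
        rate = 0.30 if week < early_stage else 0.05   # faster change early on
        lmx += rate * (0.5 * performance + 0.5 * justice - lmx)
        history.append(round(lmx, 3))
    return history

print(simulate_lmx())
```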
Abstract:
It has been established that Wingate-based high-intensity training (HIT), consisting of 4 to 6 x 30-s all-out sprints interspersed with 4-min recovery, is an effective training paradigm. Despite the increased utilisation of Wingate-based HIT to bring about training adaptations, the majority of previous studies have been conducted over a relatively short timeframe (2 to 6 weeks), and the influence of activity during the recovery period, intervention duration and sprint length has been overlooked. In study 1, the dose response of recovery intensity on performance during typical Wingate-based HIT (4 x 30-s all-out cycle sprints separated by 4-min recovery) was examined: active recovery (cycling at 20 to 40% of V̇O2peak) improved sprint performance across successive sprints by 6 to 12% compared to passive recovery (remaining still), while increasing the aerobic contribution to sprint performance by ~15%. In the following study, 5 to 7% greater endurance performance adaptations were achieved with active recovery (40% V̇O2peak) following 2 weeks of Wingate-based HIT. In the final study, a shorter sprint protocol (4 to 6 x 15-s sprints interspersed with 2 min of recovery) was shown to be as effective as the typical 30-s Wingate-based HIT in improving cardiorespiratory function and endurance performance over 9 weeks, with the improvements in V̇O2peak being completed within 3 weeks, whereas exercise capacity (time to exhaustion) increased throughout the 9 weeks. In conclusion, the studies demonstrate that active recovery at 40% V̇O2peak significantly enhances endurance adaptations to HIT. Further, sprint duration does not seem to be a driving factor in the magnitude of change, with 15-s sprints providing similar adaptations to 30-s sprints. Taken together, this suggests that the recovery mode should be considered to ensure maximal adaptation to HIT, and that the practicality of the training can be enhanced by reducing sprint duration without diminishing overall training adaptations.
Abstract:
Purpose The aim of this study was to test the effects of sprint interval training (SIT) on cardiorespiratory fitness and aerobic performance measures in young females. Methods Eight healthy, untrained females (age 21 ± 1 years; height 165 ± 5 cm; body mass 63 ± 6 kg) completed cycling peak oxygen uptake (V̇O2peak), 10-km cycling time trial (TT) and critical power (CP) tests pre- and post-SIT. The SIT protocol comprised 4 × 30-s "all-out" cycling efforts against 7% body mass interspersed with 4 min of active recovery, performed twice per week for 4 weeks (eight sessions in total). Results There was no significant difference in V̇O2peak following SIT compared to the control period (control period: 31.7 ± 3.0 ml kg−1 min−1; post-SIT: 30.9 ± 4.5 ml kg−1 min−1; p > 0.05), but SIT significantly improved time to exhaustion (TTE) (control period: 710 ± 101 s; post-SIT: 798 ± 127 s; p = 0.00), 10-km cycling TT (control period: 1055 ± 129 s; post-SIT: 997 ± 110 s; p = 0.004) and CP (control period: 1.8 ± 0.3 W kg−1; post-SIT: 2.3 ± 0.6 W kg−1; p = 0.01). Conclusions These results demonstrate that young untrained females are responsive to SIT as measured by TTE, 10-km cycling TT and CP tests. However, eight sessions of SIT over 4 weeks are not enough to provide a sufficient training stimulus to increase V̇O2peak.
Abstract:
This dissertation develops an explanation of damage and reliability of critical components and structures within the second law of thermodynamics. The approach relies on the fundamentals of irreversible thermodynamics, specifically the concept of entropy generation due to materials degradation as an index of damage. All failure mechanisms that cause degradation, damage accumulation and ultimate failure share a common feature, namely energy dissipation. Energy dissipation, as a fundamental measure of irreversibility in a thermodynamic treatment of non-equilibrium processes, leads to and can be expressed in terms of entropy generation. The dissertation proposes a theory of damage that relates entropy generation to energy dissipation via generalized thermodynamic forces and thermodynamic fluxes, and that formally describes the resulting damage. Following the proposed theory of entropic damage, an approach to reliability and integrity characterization based on thermodynamic entropy is discussed. It is shown that the variability in the amount of thermodynamic-based damage, together with uncertainties about the parameters of a distribution model describing that variability, leads to a more consistent and broader definition of the well-known time-to-failure distribution in reliability engineering. As such, it is shown that the reliability function can be derived from the thermodynamic laws rather than estimated from observed failure histories. Furthermore, exploiting the advantages of entropy generation and accumulation as a damage index over common observable markers of damage such as crack size, a method is proposed to frame prognostics and health management (PHM) in terms of entropic damage. The proposed entropic damage theory of reliability and integrity is then demonstrated through experimental validation: the corrosion-fatigue entropy generation function is derived, evaluated and employed for structural integrity and reliability assessment and for remaining useful life (RUL) prediction of tested Aluminum 7075-T651 specimens.
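Schematically, the chain of ideas above can be written as follows, with notation introduced here for illustration (J_i thermodynamic fluxes, X_i their conjugate forces, D(t) the accumulated entropic damage, D_c a critical damage threshold); the dissertation's exact formulation may differ.

```latex
\begin{align}
  \dot{S}_{\mathrm{gen}} &= \sum_i J_i X_i \;\ge\; 0
    && \text{entropy generation from fluxes } J_i \text{ and forces } X_i \\
  D(t) &\propto \int_0^{t} \dot{S}_{\mathrm{gen}}(\tau)\,\mathrm{d}\tau
    && \text{accumulated entropic damage} \\
  R(t) &= \Pr\{\, D(t) < D_c \,\}
    && \text{reliability from the distribution of damage}
\end{align}
```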
Abstract:
Introduction: Formaldehyde is a compound with a wide range of applications and is commonly used in anatomy and pathology laboratories. At room temperature it quickly volatilizes into a pungent, suffocating gas, and its inhalation has been correlated with nuclear alterations in different tissues. We aimed to investigate whether exposure to this compound was correlated with the appearance of cytotoxic and genotoxic features in the nasal epithelial cells of students enrolled in a human anatomy course. Material and Methods: This prospective study periodically collected nasal mucosa cells from 17 volunteers from two different undergraduate programs with different workloads of practical lessons in an anatomy laboratory: 30 and 90 hours per semester. Cells were stained according to the Feulgen method and nuclear morphology was analyzed to detect possible damage. Dunn's post hoc test was used in the statistical analysis. Pearson's correlation was performed for gender, age and questionnaire responses. Results: Epithelial cells showed indicators of cytotoxicity and mutagenicity. Students with a more extensive workload in the anatomy laboratory displayed a more severe profile, with an increase in karyorrhexis (p < 0.05) over time. The micronucleus analysis showed a difference between the first and second collections (p < 0.01), although it was not maintained over time. Students with a less extensive workload displayed no differences in most cytological features. Although karyorrhexis was present in a greater number of cells, no significant difference was observed for this group between any time points, and the same was observed for karyolysis and micronuclei (p > 0.05). Conclusion: Individuals exposed to formaldehyde for short periods of time are subject to the toxic action of this gas. Karyorrhexis was the most frequently observed cytotoxic feature, and micronuclei increased between the first two time points. The patterns observed between the student groups suggest a negative effect due to exposure time.
Abstract:
Background: This paper describes the results of a feasibility study for a randomised controlled trial (RCT). Methods: Twenty-nine members of the UK Dermatology Clinical Trials Network (UK DCTN) expressed an interest in recruiting for this study. Of these, 17 obtained full ethics and Research & Development (R&D) approval, and 15 successfully recruited patients into the study. A total of 70 participants with a diagnosis of cellulitis of the leg were enrolled over a 5-month period. These participants were largely recruited from medical admissions wards, although some were identified from dermatology, orthopaedic, geriatric and general surgery wards. Data were collected on patient demographics, clinical features and willingness to take part in a future RCT. Results: Despite cellulitis being a relatively common condition, patients were difficult to locate through our network of UK DCTN clinicians, largely because patients were rarely seen by dermatologists and admissions were not co-ordinated centrally. In addition, the impact of the proposed exclusion criteria was high; only 26 (37%) of those enrolled in the study fulfilled all of the inclusion criteria for the subsequent RCT and were willing to be randomised to treatment. Of the 70 participants identified during the study as having cellulitis of the leg (as confirmed by a dermatologist), only 59 (84%) had all three of the defining features: i) erythema, ii) oedema, and iii) warmth with acute pain/tenderness upon examination. Twenty-two (32%) patients had experienced a previous episode of cellulitis within the last 3 years. The median time to recurrence (estimated as the time since the most recent previous attack) was 205 days (95% CI 102 to 308). Service users were generally supportive of the trial, although several expressed concerns about taking antibiotics for lengthy periods, and felt that multiple morbidity/old age would limit entry into a 3-year study. Conclusion: This pilot study has been crucial in highlighting some key issues for the conduct of a future RCT. As a result of these findings, changes have been made to i) the planned recruitment strategy, ii) the proposed inclusion criteria and iii) the definition of cellulitis for use in the future trial.
Abstract:
Background: Intrathecal adjuvants are added to local anaesthetics to improve the quality of neuraxial blockade and prolong the duration of analgesia during spinal anaesthesia. Used intrathecally, fentanyl improves the quality of spinal blockade compared to plain bupivacaine and confers a short duration of post-operative analgesia. Intrathecal midazolam has also been used as an adjuvant and shown to improve the quality of spinal anaesthesia and prolong the duration of post-operative analgesia. No studies have compared intrathecal fentanyl with bupivacaine against intrathecal 2 mg midazolam with bupivacaine. Objective: To compare the effect of intrathecal 2 mg midazolam to intrathecal 20 micrograms fentanyl, when added to 2.6 ml of 0.5% hyperbaric bupivacaine, on post-operative pain in patients undergoing lower limb orthopaedic surgery under spinal anaesthesia. Methods: A total of 40 patients undergoing lower limb orthopaedic surgery under spinal anaesthesia were randomized to two groups. Group 1: 2.6 ml of 0.5% hyperbaric bupivacaine with 0.4 ml (20 micrograms) fentanyl. Group 2: 2.6 ml of 0.5% hyperbaric bupivacaine with 0.4 ml (2 mg) midazolam. Results: The duration of effective analgesia was longer in the midazolam group (384.05 minutes) than in the fentanyl group (342.6 minutes), but the difference was not significant (P = 0.4047). The time to onset was significantly longer in the midazolam group (17.1 minutes) than in the fentanyl group (13.2 minutes) (P = 0.023). The visual analogue score at rescue was significantly lower in the midazolam group (5.55) than in the fentanyl group (6.35) (P = 0.043). Conclusion: On the basis of the results of this study, there was no significant difference in the duration of effective analgesia between adjuvant intrathecal 2 mg midazolam and intrathecal 20 micrograms fentanyl for patients undergoing lower limb orthopaedic surgery.
Abstract:
The high rate of teacher attrition in urban schools is well documented. Though it may not seem like a problem, in Carter County this equates to hundreds of teachers who need to be replaced annually. Since school year (SY) 2007-08, Carter County has lost over 7,100 teachers, approximately half (50.1%) of whom resigned, often going to neighboring, higher-paying jurisdictions as suggested by exit survey data (SY2016-2020 Strategic Plan). Included in this study is a range of practices principals use to retain teachers. While the role of the principal is recognized as a critical element in teacher retention, few studies explore the specific practices principals implement to retain teachers and how they use their time to accomplish this task. Through interviews, observations, document analysis and reflective notes, the study identifies the practices four elementary school principals of high- and relatively low-attrition schools use to support teacher retention. In doing so, the study uses a qualitative cross-case analysis approach. The researcher examined the following leadership practices of the principal and their impact on teacher retention: (a) providing leadership, (b) supporting new teachers, (c) training and mentoring teaching staff, (d) creating opportunities for collaboration, (e) creating a positive school climate, and (f) promoting teacher autonomy. The following research questions served as a foundational guide for the development and implementation of this study: 1. How do principals prioritize addressing teacher attrition or retention relative to all of their other responsibilities? How do they allocate their time to this challenge? 2. What do principals in schools with low attrition rates do to promote retention that principals in high-attrition schools do not? What specific practices or interventions are principals in these two types of schools utilizing to retain teachers? Is there evidence to support their use of the practices? The findings that emerged from the data revealed that the various practices principals use to influence and support teachers do not differ between the four schools.
Abstract:
New generation embedded systems demand high performance, efficiency and flexibility. Reconfigurable hardware can provide all these features. However, the costly reconfiguration process and the lack of management support have prevented a broader use of these resources. To solve these issues we have developed a scheduler that deals with task-graphs at run-time, steering their execution on the reconfigurable resources while carrying out both prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment, task-graphs are analyzed at design-time to extract useful information. This information is used at run-time to obtain near-optimal schedules, escaping from locally optimal decisions, while only carrying out simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In our experiments the scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specially designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.
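A toy sketch of the scheduling idea is given below: tasks of a (linearised) task-graph are mapped onto reconfigurable units, reconfigurations are prefetched onto idle units while another task is still computing, and the replacement choice prefers units that already hold the required bitstream. The data structures, delays, and heuristics are assumptions made for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    bitstream: str   # configuration the task needs
    exec_time: int   # execution time in cycles

def simulate(tasks, n_units=2, reconfig_delay=100):
    loaded = [None] * n_units    # bitstream currently resident on each unit
    free_at = [0] * n_units      # cycle at which each unit becomes free
    clock = 0                    # finish time of the previously issued task
    for task in tasks:
        if task.bitstream in loaded:
            unit = loaded.index(task.bitstream)
            ready = free_at[unit]                 # reuse: no reconfiguration needed
        else:
            unit = min(range(n_units), key=lambda u: free_at[u])
            # Prefetch: reconfiguration starts as soon as the unit is idle and
            # overlaps with the task still running on another unit.
            ready = free_at[unit] + reconfig_delay
            loaded[unit] = task.bitstream
        start = max(clock, ready)                 # simple chain dependency
        finish = start + task.exec_time
        free_at[unit] = finish
        clock = finish
        print(f"{task.name}: unit {unit}, start {start}, finish {finish}")
    return max(free_at)

if __name__ == "__main__":
    graph = [Task("A", "fir", 300), Task("B", "fft", 250),
             Task("C", "fir", 300), Task("D", "fft", 250)]
    print("makespan:", simulate(graph))
```

In this toy run the reconfiguration for the second task is hidden behind the first task's execution, and the last two tasks reuse already-loaded bitstreams, which is the kind of delay hiding the abstract describes.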
Abstract:
Part 16: Performance Measurement Systems
Abstract:
Over the last few years, football has entered a period of accelerated access to large amounts of match analysis data. Social network analysis has been adopted to reveal the structure and organization of the web of interactions, such as the players' passing distribution tendencies. In this study we investigated the influence of ball possession characteristics on the competitive success of Spanish La Liga teams. The sample was composed of OPTA passing distribution raw data (n = 269,055 passes) obtained from 380 matches involving all 20 teams of the 2012/2013 season. We then generated 760 adjacency matrices and their corresponding social networks using Node XL software. For each network we calculated three team performance measures to evaluate ball possession tendencies: graph density, average clustering and passing intensity. Three levels of competitive success were determined using two-step cluster analysis based on two input variables: the total points scored by each team and the ratio of goals scored to goals conceded. Our analyses revealed significant differences between competitive performance levels on all three team performance measures (p < .001). Bottom-ranked teams had fewer connected players (graph density) and triangulations (average clustering) than intermediate and top-ranked teams. In contrast, all three clusters diverged in terms of passing intensity, with top-ranked teams completing a higher number of passes per unit of possession time than intermediate and bottom-ranked teams. Finally, similarities and dissimilarities in team signatures of play between the 20 teams were displayed using Cohen's effect size. In sum, the findings suggest that competitive performance was influenced by the density and connectivity of the teams, mainly due to the way teams use their possession time to give intensity to their game.
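For concreteness, the sketch below computes the three possession measures from a toy pass-count matrix using networkx; the matrix, the assumed possession time, and the exact metric definitions are illustrative and may differ from the study's OPTA/Node XL operationalisation.

```python
import numpy as np
import networkx as nx

passes = np.array([        # passes[i][j] = completed passes from player i to player j
    [0, 12, 4],
    [9,  0, 7],
    [0,  6, 0],
])
possession_seconds = 1500  # team possession time in the match (assumed)

G = nx.from_numpy_array(passes, create_using=nx.DiGraph)

graph_density     = nx.density(G)                             # share of possible passing links used
avg_clustering    = nx.average_clustering(G.to_undirected())  # triangulation between players
passing_intensity = passes.sum() / possession_seconds         # passes per second of possession

print(graph_density, avg_clustering, passing_intensity)
```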
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we don't yet know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
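For orientation, the textbook radial-velocity semi-amplitude relation that drives such detectability trade-offs is reproduced below (K is the semi-amplitude, P the orbital period, m_p sin i the minimum planet mass, M_star the stellar mass, e the eccentricity); it is quoted as standard background rather than taken from the thesis.

```latex
\begin{equation}
  K \;=\; \left(\frac{2\pi G}{P}\right)^{1/3}
          \frac{m_p \sin i}{\left(M_\star + m_p\right)^{2/3}}
          \frac{1}{\sqrt{1 - e^2}}
\end{equation}
```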
We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, as well as keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system's performance from our testing and development at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, which is critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal and pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromeda, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
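A toy, single-speckle rendering of such a speckle-nulling loop is sketched below: the speckle is probed with deformable-mirror ripples at four phases, its complex field is estimated from the intensity modulation, and a partial correction is applied each iteration. The interference model, noise level, and gain are assumptions for illustration, not the instrument code.

```python
import numpy as np

rng = np.random.default_rng(1)
E_speckle = 0.8 * np.exp(1j * rng.uniform(0, 2 * np.pi))   # unknown speckle field
probe_amp = 0.3                                            # DM probe ripple amplitude

def measure(probe_phase):
    """Noisy focal-plane intensity with a DM probe ripple added to the speckle."""
    E = E_speckle + probe_amp * np.exp(1j * probe_phase)
    return abs(E) ** 2 * (1 + 0.05 * rng.normal())          # ~5% photometric noise

for it in range(6):
    I0, I90, I180, I270 = (measure(p) for p in (0, np.pi/2, np.pi, 3*np.pi/2))
    re = (I0 - I180) / (4 * probe_amp)      # Re(E_speckle) from the modulation
    im = (I90 - I270) / (4 * probe_amp)     # Im(E_speckle)
    # DM correction: subtract the estimated field with a conservative gain.
    E_speckle = E_speckle - 0.7 * (re + 1j * im)
    print(f"iteration {it}: speckle intensity {abs(E_speckle)**2:.3e}")
```

With exact estimates the speckle would cancel in one step; the measurement noise and sub-unity gain are what make the contrast improve over a handful of iterations, qualitatively matching the behaviour described above.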
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
Abstract:
World War II profoundly impacted Florida. The military geography of the State is essential to an understanding of the war. The geostrategic concerns of place and space determined that Florida would become a statewide military base. Florida's attributes of place, such as climate and topography, determined its use as a military academy hosting over two million soldiers, nearly 15 percent of the GI Army, the largest force the US ever raised. One in eight Floridians went into uniform. Equally, Florida's space on the planet made it central for both defensive and offensive strategies. The Second World War was a war of movement, and Florida was a major jump-off point for US force projection world-wide, especially of air power. Florida's demography facilitated its use as a base camp for the assembly and engagement of this military power. In 1940, less than two percent of the US population lived in Florida, a quiet, barely populated backwater of the United States.[1] But owing to its critical place and space, over the next few years it became a 65,000 square mile training ground, supply dump, and embarkation site vital to the US war effort. Because of its place astride some of the most important sea lanes in the Atlantic World, Florida was the scene of one of the few Western Hemisphere battles of the war. The militarization of Florida began long before Pearl Harbor. The pre-war buildup conformed to the US strategy of the war. The strategy of the US was then (and remains today) one of forward defense: harden the frontier, then take the battle to the enemy, rather than fight them in North America. The policy of "Europe First" focused the main US war effort on the defeat of Hitler's Germany, evaluated to be the most dangerous enemy. In Florida were established the military forces requiring the longest time to develop, and most needed to defeat the Axis: a naval aviation force for sea-borne hostilities, a heavy bombing force for reducing enemy industrial states, and an aerial logistics train for overseas supply of expeditionary campaigns. The unique Florida coastline made possible the seaborne invasion training demanded for US victory. The civilian population was employed assembling mass-produced first-generation container ships, while Florida hosted casualties, Prisoners-of-War, and transient personnel moving between the Atlantic and Pacific. By the end of hostilities and the lifting of Unlimited Emergency, officially on December 31, 1946, Florida had become a transportation nexus. Florida accommodated a return of demobilized soldiers and a migration of displaced persons, and evolved into a modern veterans' colonia. It was instrumental in fashioning the modern US military, while remaining a center of the active National Defense establishment. Those are the themes of this work. [1] US Census of Florida 1940. Table 4 – Race, By Nativity and Sex, For the State. 14.
Abstract:
During the past decade, postsecondary institutions have dramatically increased their academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to the reapportionment of course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect: the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate, to educational leaders, researchers, and institutional-research/business-intelligence professionals, the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
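As an illustration of the discrete-time survival approach mentioned above, the sketch below expands a toy student table into person-period records and fits a logistic model for the per-term hazard of withdrawal; the variable names, data, and single covariate are assumptions made here, not the study's specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

students = pd.DataFrame({
    "id":             [1, 2, 3, 4],
    "long_seat_time": [1, 1, 0, 0],   # 1 = long seat-time class format (assumed coding)
    "terms_observed": [2, 1, 4, 3],   # terms until withdrawal or censoring
    "withdrew":       [1, 1, 0, 1],   # 1 = withdrew in the final observed term
})

# Person-period expansion: one row per student per term at risk.
rows = []
for s in students.itertuples():
    for term in range(1, s.terms_observed + 1):
        rows.append({
            "term": term,
            "long_seat_time": s.long_seat_time,
            "event": int(s.withdrew and term == s.terms_observed),
        })
pp = pd.DataFrame(rows)

# Discrete-time hazard model: logit of withdrawal in each term.
model = smf.logit("event ~ term + long_seat_time", data=pp).fit(disp=0)
print(model.params)
```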
Abstract:
Maintenance of transport infrastructure assets is widely advocated as the key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life-cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. For immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and a possibility to make predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the different problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, identifying the sections in a road network that are weaker due to e.g. subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments that have a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
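As a small illustration of the kind of time-to-event modelling used in Paper I, the sketch below fits a Weibull accelerated-failure-time model to toy, right-censored pavement lifetimes with one traffic covariate using the lifelines library; the data, covariate, and library choice are assumptions made here, not the dissertation's actual model.

```python
import pandas as pd
from lifelines import WeibullAFTFitter

roads = pd.DataFrame({
    "lifetime_years": [12, 18, 9, 22, 15, 11, 25, 14],   # years to maintenance (or censoring)
    "resurfaced":     [1,  1,  1,  0,  1,  1,  0,  1],   # 1 = maintenance observed, 0 = censored
    "heavy_traffic":  [0.8, 0.3, 1.1, 0.2, 0.6, 0.9, 0.1, 0.7],  # scaled heavy-vehicle load (assumed)
})

aft = WeibullAFTFitter()
aft.fit(roads, duration_col="lifetime_years", event_col="resurfaced")
aft.print_summary()

# Median predicted lifetime for one segment, given its traffic covariate.
print(aft.predict_median(roads.iloc[[0]]))
```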