908 results for "Output performances"
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. This paper first presents a brief review of the principal inherent uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), because both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated with different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and the preferred combination of OMA techniques to cope with SHM-WSN uncertainties are recommended.
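For readers unfamiliar with FDD, its core step can be sketched in a few lines: estimate the cross-spectral density matrix of the measured channels and track its first singular value over frequency; peaks of that curve indicate natural frequencies, and the corresponding singular vectors approximate mode shapes. The following minimal sketch uses synthetic two-channel data (the 2 Hz mode, noise level and sampling rate are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy import signal

# Synthetic two-channel "measurement": a 2 Hz mode seen by both sensors,
# buried in noise (illustrative stand-in for bridge accelerations).
rng = np.random.default_rng(0)
fs = 100.0                      # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
mode = np.sin(2 * np.pi * 2.0 * t)
y = np.vstack([mode + 0.5 * rng.standard_normal(t.size),
               0.8 * mode + 0.5 * rng.standard_normal(t.size)])

# Build the cross-spectral density matrix G(f) for every frequency line.
nperseg = 1024
f, _ = signal.csd(y[0], y[0], fs=fs, nperseg=nperseg)
G = np.empty((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = signal.csd(y[i], y[j], fs=fs, nperseg=nperseg)

# FDD: the first singular value of G(f) peaks at the natural frequencies.
s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(f.size)])
f_peak = f[np.argmax(s1)]
print(f"identified frequency: {f_peak:.2f} Hz")   # close to the 2 Hz mode
```

Peak picking on the first singular value is the classical (manual) form of FDD; automated variants and the enhanced FDD frequency refinement are beyond this sketch.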
Abstract:
The purpose of the present study was to examine the influence of 3 different high-intensity interval training regimens on the first and second ventilatory thresholds (VT1 and VT2), anaerobic capacity (ANC), and plasma volume (PV) in well-trained endurance cyclists. Before and after 2 and 4 weeks of training, 38 well-trained cyclists (VO2peak = 64.5 +/- 5.2 ml·kg-1·min-1) performed (a) a progressive cycle test to measure VO2peak, peak power output (PPO), VT1, and VT2; (b) a time to exhaustion test (Tmax) at their VO2peak power output (Pmax); and (c) a 40-km time-trial (TT40). Subjects were assigned to 1 of 4 training groups (group 1: n = 8, 8 x 60% Tmax at Pmax, 1:2 work-recovery ratio; group 2: n = 9, 8 x 60% Tmax at Pmax, recovery at 65% maximum heart rate; group 3: n = 10, 12 x 30 seconds at 175% PPO, 4.5-minute recovery; control group: n = 11). The TT40 performance, VO2peak, VT1, VT2, and ANC were all significantly increased in groups 1, 2, and 3 (p < 0.05) but not in the control group. However, PV did not change in response to the 4-week training program. Changes in TT40 performance were modestly related to the changes in VO2peak, VT1, VT2, and ANC (r = 0.41, 0.34, 0.42, and 0.40, respectively; all p < 0.05). In conclusion, the improvements in TT40 performance were related to significant increases in VO2peak, VT1, VT2, and ANC but were not accompanied by significant changes in PV. Thus, peripheral adaptations rather than central adaptations are likely responsible for the improved performances witnessed in well-trained endurance athletes following various forms of high-intensity interval training programs.
Abstract:
Objectives: To investigate the frequency characteristics of the ground reaction force (GRF) recorded throughout the eccentric Achilles tendon rehabilitation programme described by Alfredson. Design: Controlled laboratory study, longitudinal. Methods: Nine healthy adult males performed six sets (15 repetitions per set) of eccentric ankle exercise. Ground reaction force was recorded throughout the exercise protocol. For each exercise repetition the frequency power spectrum of the resultant ground reaction force was calculated and normalised to total power. The magnitude of peak relative power within the 8-12 Hz bandwidth and the frequency at which this peak occurred were determined. Results: The magnitude of peak relative power within the 8-12 Hz bandwidth increased with each successive exercise set and following the 4th set (60 repetitions) of exercise the frequency at which peak relative power occurred shifted from 9 to 10 Hz. Conclusions: The increase in magnitude and frequency of ground reaction force vibrations with an increasing number of exercise repetitions is likely connected to changes in muscle activation with fatigue and tendon conditioning. This research illustrates the potential for the number of exercise repetitions performed to influence the tendons' mechanical environment, with implications for tendon remodelling and the clinical efficacy of eccentric rehabilitation programmes for Achilles tendinopathy.
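The processing pipeline described in this abstract (power spectrum of the resultant GRF, normalisation to total power, then peak relative power and its frequency within 8-12 Hz) can be sketched as follows. The synthetic force trace, sampling rate and amplitudes are invented for illustration, not the study's data:

```python
import numpy as np
from scipy import signal

# Synthetic resultant GRF trace: a body-weight offset plus a small 10 Hz
# vibration component and noise (stand-in for force-plate data).
rng = np.random.default_rng(1)
fs = 1000.0                          # assumed force-plate sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
grf = 800 + 5 * np.sin(2 * np.pi * 10 * t) + 2 * rng.standard_normal(t.size)

# Power spectrum of the detrended signal, normalised to total power,
# mirroring the abstract's "normalised to total power" step.
f, pxx = signal.welch(grf - grf.mean(), fs=fs, nperseg=4096)
rel_power = pxx / pxx.sum()

# Peak relative power within the 8-12 Hz bandwidth and its frequency.
band = (f >= 8) & (f <= 12)
peak_rel = rel_power[band].max()
peak_freq = f[band][rel_power[band].argmax()]
print(f"peak relative power {peak_rel:.3f} at {peak_freq:.1f} Hz")
```

Repeating this per exercise repetition, as the study did, would simply loop this computation over segmented repetitions of the recording.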
Abstract:
Airborne particles have been shown to be associated with a wide range of adverse health effects, which has led to a recent increase in medical research seeking better insight into these effects. However, accurate evaluation of the exposure-dose-response relationship is highly dependent on the ability to track actual exposure levels of people to airborne particles. This is quite a complex task, particularly in relation to submicrometer and ultrafine particles, which can vary quite significantly in terms of particle surface area and number concentrations. Therefore, suitable monitors that can be worn for measuring personal exposure to these particles are needed. This paper presents an evaluation of the metrological performance of six diffusion charger sensors, NanoTracer (Philips Aerasense) monitors, when measuring particle number and surface area concentrations, as well as particle number distribution mean, compared with reference instruments. Tests in the laboratory (generating monodisperse and polydisperse aerosols) and in the field (using natural ambient particles) were designed to evaluate the response of these devices under both steady-state and dynamic conditions. Results showed that the NanoTracers performed well when measuring steady-state aerosols; however, they strongly underestimated actual concentrations during dynamic response testing. The field experiments also showed that, when the majority of the particles were smaller than 20 nm, which occurs during particle formation events in the atmosphere, the NanoTracer underestimated number concentration quite significantly. Even though the NanoTracer can be used for personal monitoring of exposure to ultrafine particles, it has limitations which need to be considered in order to provide meaningful results.
Abstract:
Purpose Commencing selected workouts with low muscle glycogen availability augments several markers of training adaptation compared with undertaking the same sessions with normal glycogen content. However, low glycogen availability reduces the capacity to perform high-intensity (>85% of peak aerobic power (V·O2peak)) endurance exercise. We determined whether a low dose of caffeine could partially rescue the reduction in maximal self-selected power output observed when individuals commenced high-intensity interval training with low (LOW) compared with normal (NORM) glycogen availability. Methods Twelve endurance-trained cyclists/triathletes performed four experimental trials using a double-blind Latin square design. Muscle glycogen content was manipulated via exercise–diet interventions so that two experimental trials were commenced with LOW and two with NORM muscle glycogen availability. Sixty minutes before an experimental trial, subjects ingested a capsule containing anhydrous caffeine (CAFF, 3 mg·kg-1 body mass) or placebo (PLBO). Instantaneous power output was measured throughout high-intensity interval training (8 × 5-min bouts at maximum self-selected intensity with 1-min recovery). Results There were significant main effects for both preexercise glycogen content and caffeine ingestion on power output. LOW reduced power output by approximately 8% compared with NORM (P < 0.01), whereas caffeine increased power output by 2.8% and 3.5% for NORM and LOW, respectively (P < 0.01). Conclusion We conclude that caffeine enhanced power output independently of muscle glycogen concentration but could not fully restore power output to levels commensurate with that when subjects commenced exercise with normal glycogen availability.
However, the reported increase in power output does provide a likely performance benefit and may provide a means to further enhance the already augmented training response observed when selected sessions are commenced with reduced muscle glycogen availability. It has long been known that endurance training induces a multitude of metabolic and morphological adaptations that improve the resistance of the trained musculature to fatigue and enhance endurance capacity and/or exercise performance (13). Accumulating evidence now suggests that many of these adaptations can be modified by nutrient availability (9–11,21). Growing evidence suggests that training with reduced muscle glycogen using a “train twice every second day” compared with a more traditional “train once daily” approach can enhance the acute training response (29) and markers representative of endurance training adaptation after short-term (3–10 wk) training interventions (8,16,30). Of note is that the superior training adaptation in these previous studies was attained despite a reduction in maximal self-selected power output (16,30). The most obvious factor underlying the reduced intensity during a second training bout is the reduction in muscle glycogen availability. However, there is also the possibility that other metabolic and/or neural factors may be responsible for the power drop-off observed when two exercise bouts are performed in close proximity. Regardless of the precise mechanism(s), there remains the intriguing possibility that the magnitude of training adaptation previously reported in the face of a reduced training intensity (Hulston et al. (16) and Yeo et al.) might be further augmented, and/or other aspects of the training stimulus better preserved, if power output was not compromised. 
Caffeine ingestion is a possible strategy that might “rescue” the aforementioned reduction in power output that occurs when individuals commence high-intensity interval training (HIT) with low compared with normal glycogen availability. Recent evidence suggests that, at least in endurance-based events, the maximal benefits of caffeine are seen at small to moderate (2–3 mg·kg-1 body mass (BM)) doses (for reviews, see Refs. (3,24)). Accordingly, in this study, we aimed to determine the effect of a low dose of caffeine (3 mg·kg-1 BM) on maximal self-selected power output during HIT commenced with either normal (NORM) or low (LOW) muscle glycogen availability. We hypothesized that even under conditions of low glycogen availability, caffeine would increase maximal self-selected power output and thereby partially rescue the reduction in training intensity observed when individuals commence HIT with low glycogen availability.
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research examining the effects of uncertainties of generic WSN platforms and verifying the capability of SHM-oriented WSNs, particularly in demanding SHM applications such as modal analysis and damage identification of real civil structures. This article first reviews the major technical uncertainties of both generic and SHM-oriented WSN platforms and the efforts of the SHM research community to cope with them. Then, the effects of the most inherent WSN uncertainty on the first level of a common Output-only Modal-based Damage Identification (OMDI) approach are intensively investigated. Experimental accelerations collected by a wired sensory system on a benchmark civil structure are initially used as clean data before being contaminated with different levels of data pollutants to simulate practical uncertainties in both WSN platforms. Statistical analyses are comprehensively employed in order to uncover the distribution pattern of the uncertainty influence on the OMDI approach. The results of this research show that uncertainties of generic WSNs can seriously impact level 1 OMDI methods utilizing mode shapes. It also proves that SHM-WSN can substantially lessen the impact and obtain true structural information without resorting to costly computational solutions.
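The abstract does not name the mode-shape metric it uses, but a standard choice for level 1 (detection-level) comparisons of identified mode shapes is the Modal Assurance Criterion (MAC). A minimal sketch, with an invented mode shape and noise level standing in for WSN-corrupted data:

```python
import numpy as np

def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two mode-shape vectors (0..1)."""
    num = np.abs(phi_1.conj() @ phi_2) ** 2
    return num / ((phi_1.conj() @ phi_1).real * (phi_2.conj() @ phi_2).real)

# Reference mode shape and a noise-corrupted copy (simulated uncertainty).
rng = np.random.default_rng(2)
phi_ref = np.sin(np.pi * np.linspace(0, 1, 10))     # first bending mode
phi_noisy = phi_ref + 0.05 * rng.standard_normal(10)

print(f"MAC clean vs noisy: {mac(phi_ref, phi_noisy):.3f}")  # near 1.0
print(f"MAC vs unrelated:  {mac(phi_ref, rng.standard_normal(10)):.3f}")
```

A MAC close to 1 indicates consistent shapes; drops in MAC across repeated identifications are one way sensor-network uncertainty shows up in mode-shape-based damage indicators.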
Abstract:
BACKGROUND: Diabetes in South Asia represents a different disease entity in terms of its onset, progression, and complications. In the present study, we systematically analyzed the medical research output on diabetes in South Asia. METHODS: The online SciVerse Scopus database was searched using the search terms "diabetes" and "diabetes mellitus" in the article Title, Abstract or Keywords fields, in conjunction with the names of each regional country in the Author Affiliation field. RESULTS: In total, 8478 research articles were identified. Most were from India (85.1%) and Pakistan (9.6%) and the contribution to the global diabetes research output was 2.1%. Publications from South Asia increased markedly after 2007, with 58.7% of papers published between 2000 and 2010 being published after 2007. Most papers were Research Articles (75.9%) and Reviews (12.9%), with only 90 (1.1%) clinical trials. Publications predominantly appeared in local national journals. Indian authors and institutions had the most number of articles and the highest h-index. There were 136 (1.6%) intraregional collaborative studies. Only 39 articles (0.46%) had >100 citations. CONCLUSIONS: Regional research output on diabetes mellitus is unsatisfactory, with only a minimal contribution to global diabetes research. Publications are not highly cited and only a few randomized controlled trials have been performed. In the coming decades, scientists in the region must collaborate and focus on practical and culturally acceptable interventional studies on diabetes mellitus.
Abstract:
The producer has for many years been a central agent in recording studio sessions; the validation of this role was, in many ways, related to the producer’s physical presence in the studio, to a greater or lesser extent. However, improvements in the speed of digital networks have allowed studio sessions to be produced long-distance, in real-time, through communication programs such as Skype or REDIS. How does this impact on the role of the producer, a “nexus between the creative inspiration of the artist, the technology of the recording studio, and the commercial aspirations of the record company” (Howlett 2012)? From observations of a studio recording session in Lisbon produced through Skype from New York, this article focuses on the role of the producer in these relatively new recording contexts involving long distance media networks. Methodology involved participant observation carried out in Estúdios Namouche in Lisbon (where the session took place), as part of doctoral research. This ethnographic approach also included a number of semi-directed ethnographic interviews of the different actors in this scenario—musicians, recording engineers, composers and producers. As a theoretical framework, the research of De Zutter and Sawyer on Distributed Creativity is used, as the recording studio sets an example of “a cognitive system where […] tasks are not accomplished by separate individuals, but rather through the interactions of those individuals” (DeZutter 2009:4). Therefore, creativity often emerges as a result of this interaction.
Abstract:
Purpose This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs, and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 mm to 100 mm, using a nominal photon energy of 6 MV. Results According to the practical definition established in this project, field sizes < 15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0 % to 2.0 %, or field size uncertainties were 0.5 mm, field sizes < 12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus, the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effects. This was found to occur at field sizes < 12 mm. Source occlusion also caused a large change in OPF for field sizes < 8 mm.
Based on the results of this study, field sizes < 12 mm were considered to be theoretically very small for 6 MV beams. Conclusions Extremely careful experimental methodology, including the measurement of dosimetric field size at the same time as the output factor measurement for each field size setting and very precise detector alignment, is required for field sizes below at least 12 mm and, more conservatively, below 15 mm for 6 MV beams. These recommendations should be applied in addition to all the usual considerations for small field dosimetry, including careful detector selection.
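The practical definition above (a field is "very small" once a 1 mm field-size error shifts the OPF by more than the acceptable 1%) can be expressed in a few lines. The OPF curve below is invented for illustration; with these made-up values the criterion flags fields below roughly 12-15 mm, consistent in spirit with the paper's conclusion, but the numbers are not the paper's data:

```python
import numpy as np

# Illustrative 6 MV output-factor curve vs square field side (mm).
# Values are made up for the sketch, not measured data.
side = np.array([4, 6, 8, 10, 12, 15, 20, 30, 50, 100], dtype=float)
opf = np.array([0.67, 0.78, 0.85, 0.89, 0.92, 0.94, 0.96, 0.98, 1.00, 1.04])

def opf_at(s):
    """Linear interpolation of the OPF curve at field side s (mm)."""
    return np.interp(s, side, opf)

def is_very_small(s, err_mm=1.0, tol=0.01):
    """Practical criterion: does a +/- err_mm field-size error change
    the OPF by more than tol (relative)?"""
    change = abs(opf_at(s + err_mm) - opf_at(s - err_mm)) / 2 / opf_at(s)
    return change > tol

for s in [8, 10, 12, 15, 20]:
    print(f"{s:3d} mm: very small = {is_very_small(s)}")
```

With a smaller field-size uncertainty (`err_mm=0.5`) or a looser tolerance (`tol=0.02`), the threshold moves to smaller fields, mirroring the trade-off described in the Results.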
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions with the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that would result if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general.
We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
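The certainty-equivalence strategy discussed above can be sketched for a scalar constrained system: estimate the state with a Kalman filter, then apply a deterministic feedback law to the estimate and clip the result to the input constraint. All gains, noise levels and the constraint bound below are illustrative, not taken from the chapter:

```python
import numpy as np

# Certainty-equivalence (CE) sketch for a scalar linear system
#   x[k+1] = a*x[k] + b*u[k] + w[k],   y[k] = x[k] + v[k],
# with input constraint |u| <= u_max. All numbers are illustrative.
a, b, q, r = 1.2, 1.0, 0.1, 0.1          # dynamics and noise variances
u_max, K = 1.0, 0.9                      # input bound, fixed feedback gain
rng = np.random.default_rng(3)

x, x_hat, p = 2.0, 0.0, 1.0              # true state, estimate, est. variance
for k in range(50):
    y = x + np.sqrt(r) * rng.standard_normal()        # noisy measurement
    # Kalman measurement update of the state estimate.
    gain = p / (p + r)
    x_hat = x_hat + gain * (y - x_hat)
    p = (1 - gain) * p
    # CE: apply the deterministic law to the estimate, then clip (constraint).
    u = np.clip(-K * x_hat, -u_max, u_max)
    # True system and estimator time updates.
    x = a * x + b * u + np.sqrt(q) * rng.standard_normal()
    x_hat = a * x_hat + b * u
    p = a * a * p + q

print(f"final |x| = {abs(x):.2f}")       # regulated near the origin
```

The estimator is unaware of the input saturation, which is exactly the simplification CE makes; the chapter's point is that, for linear constrained systems, this simple policy often performs close to the truly optimal output-feedback policy.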
Abstract:
Introduction Total scatter factor (or output factor) in megavoltage photon dosimetry is a measure of relative dose relating a certain field size to a reference field size. The use of solid phantoms is well established for output factor measurements; however, to date these phantoms have not been tested with small fields. In this work, we evaluate the water equivalency of a number of solid phantoms for small field output factor measurements using the EGSnrc Monte Carlo code. Methods The following small square field sizes were simulated using BEAMnrc: 5, 6, 7, 8, 10 and 30 mm. Each simulated phantom geometry was created in DOSXYZnrc and consisted of a silicon diode (of length and width 1.5 mm and depth 0.5 mm) submersed in the phantom at a depth of 5 g/cm2. The source-to-detector distance was 100 cm for all simulations. The dose was scored in a single voxel at the location of the diode. Interaction probabilities and radiation transport parameters for each material were created using custom PEGS4 files. Results A comparison of the resultant output factors in the solid phantoms with the same factors in a water phantom is shown in Fig. 1. The statistical uncertainty in each point was less than or equal to 0.4 %. The results in Fig. 1 show that the density of the phantoms affected the output factor results, with higher density materials (such as PMMA) resulting in higher output factors. It was also calculated that scaling the depth for equivalent path length had a negligible effect on the output factor results at these field sizes. Discussion and conclusions Electron stopping power and photon mass energy absorption change minimally with small field size [1]. It can also be seen from Fig. 1 that the difference from water decreases with increasing field size. Therefore, the most likely cause of the observed discrepancies in output factors is differing electron disequilibrium as a function of phantom density.
When measuring small field output factors in a solid phantom, it is important that the density is very close to that of water.
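For reference, the output factor itself is just a ratio of doses, and the statistical uncertainty on it combines the two Monte Carlo uncertainties in quadrature. A minimal sketch with invented dose scores (the 0.4% figure echoes the uncertainty level quoted above, but the dose values are illustrative):

```python
import numpy as np

# Output factor: dose for the test field over dose for the 30 mm reference
# field. Dose scores and uncertainties are illustrative, not the paper's data.
d_field, u_field = 0.85, 0.004      # relative dose, 0.4% relative uncertainty
d_ref, u_ref = 1.00, 0.004

opf = d_field / d_ref
# Relative uncertainties of a ratio combine in quadrature.
u_opf = opf * np.sqrt(u_field**2 + u_ref**2)
print(f"OPF = {opf:.3f} +/- {u_opf:.3f}")
```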
Abstract:
Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap could be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm) and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a similar manner to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kQclin,Qmsr values [3], was used as the gold standard. Results The optimal air thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kQclin,Qmsr = 1.000 at all field sizes) to within 1 %. Discussion and conclusions The work of Charles et al. [2] has been proven experimentally.
An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. Applying a cap to create the new diode makes it dual-purpose: without the cap, it remains an unmodified electron diode.
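A "correction-less" diode in the sense used above is one whose implied kQclin,Qmsr factors stay within 1% of unity at every field size. That check can be sketched as follows; all readings and correction factors below are invented for illustration, not the study's measurements:

```python
import numpy as np

# Field sizes and illustrative readings -- not the paper's measured data.
side_mm = np.array([5, 10, 20, 30, 50])

# Gold standard: stereotactic field diode readings corrected with Monte
# Carlo kQclin,Qmsr factors (all values made up for the sketch).
sfd_ratio = np.array([0.72, 0.915, 0.975, 0.99, 1.00])
k_sfd = np.array([0.96, 0.99, 0.999, 1.000, 1.000])
opf_ref = sfd_ratio * k_sfd

# Candidate "correction-less" diode: its raw ratios are used as-is.
new_ratio = np.array([0.693, 0.905, 0.973, 0.99, 1.00])

# Implied correction factor of the new diode; |k - 1| <= 1% at every
# field size means no field-size-specific correction is needed.
k_new = opf_ref / new_ratio
print("implied k:", np.round(k_new, 3))
print("correction-less within 1%:", np.all(np.abs(k_new - 1.0) <= 0.01))
```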
Abstract:
Introduction Given the known challenges of obtaining accurate measurements of small radiation fields, and the increasing use of small field segments in IMRT beams, this study examined the possible effects of referencing inaccurate field output factors in the planning of IMRT treatments. Methods This study used the Brainlab iPlan treatment planning system to devise IMRT treatment plans for delivery using the Brainlab m3 microMLC (Brainlab, Feldkirchen, Germany). Four pairs of sample IMRT treatments were planned using volumes, beams and prescriptions that were based on a set of test plans described in AAPM TG 119’s recommendations for the commissioning of IMRT treatment planning systems [1]: • C1, a set of three 4 cm volumes with different prescription doses, was modified to reduce the size of the PTV to 2 cm across and to include an OAR dose constraint for one of the other volumes. • C2, a prostate treatment, was planned as described by the TG 119 report [1]. • C3, a head-and-neck treatment with a PTV larger than 10 cm across, was excluded from the study. • C4, an 8 cm long C-shaped PTV surrounding a cylindrical OAR, was planned as described in the TG 119 report [1] and then replanned with the length of the PTV reduced to 4 cm. Both plans in each pair used the same beam angles, collimator angles, dose reference points, prescriptions and constraints. However, one of each pair of plans had its beam modulation optimisation and dose calculation completed with reference to existing iPlan beam data and the other had its beam modulation optimisation and dose calculation completed with reference to revised beam data. The beam data revisions consisted of increasing the field output factor for a 0.6 × 0.6 cm² field by 17 % and increasing the field output factor for a 1.2 × 1.2 cm² field by 3 %.
Results The use of different beam data resulted in different optimisation results, with different microMLC apertures and segment weightings between the two plans for each treatment, which led to large differences (up to 30 %, with an average of 5 %) between reference point doses in each pair of plans. These point dose differences are more indicative of the modulation of the plans than of any clinically relevant changes to the overall PTV or OAR doses. By contrast, the differences in the maximum, minimum and mean doses to the PTVs and OARs were smaller (less than 1 % for all beams in three out of four pairs of treatment plans) but are more clinically important. Of the four test cases, only the shortened (4 cm) version of TG 119’s C4 plan showed substantial differences between the overall doses calculated in the volumes of interest using the different sets of beam data, thereby suggesting that treatment doses could be affected by changes to small field output factors. An analysis of the complexity of this pair of plans, using Crowe et al.’s TADA code [2], indicated that iPlan’s optimiser had produced IMRT segments comprising larger numbers of small microMLC leaf separations than in the other three test cases. Conclusion: The use of altered small field output factors can result in substantially altered doses when large numbers of small leaf apertures are used to modulate the beams, even when treating relatively large volumes.
Abstract:
We hypothesized that industry-based learning and teaching, especially through industry-assigned student projects or training programs, is an integral part of science, technology, engineering and mathematics (STEM) education. In this paper we show that industry-based student training and experience increase students’ academic performance independent of organizational parameters and contexts. The literature on industry-based student training focuses on employability and the industry dimension, and in many ways neglects the academic dimension. We observed that the association between academic attributes and the contributions of industry-based student training is central to the technological learning experience. We explored international initiatives and statistics collected on student projects in two categories: industry-based learning performance and on-campus performance. The data were collected from five universities in different industrialized countries: Australia (N=545), Norway (N=279), Germany (N=74), France (N=107) and Spain (N=802). We analyzed industry-based student training, along with company-assigned student projects, in comparison with on-campus performance. The data suggest a strong correlation between industry-based student training and improved performance profiles and increased motivation, showing that such training increases student academic performance independent of organizational parameters and contexts. The programs we examined were orthogonal to each other; however, the trends in students’ academic performance were identical. An isolated cohort among the reported countries that opposed our hypothesis warrants further investigation.
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first reveals that Data Synchronization Error (DSE) is the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), because both have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations have been made to generate multiple DSE-corrupted datasets to facilitate statistical analyses. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques and the use of channel projection for the time-domain OMA technique to cope with DSE are recommended.
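To make the effect of DSE concrete: a synchronization offset between two channels adds a frequency-dependent phase error to their cross-spectrum, which is what corrupts mode shapes identified from merged setups. A minimal simulation, in which one channel is delayed by a fixed number of samples (sampling rate, mode frequency, noise level and delay are all invented for the sketch):

```python
import numpy as np

# Simulate Data Synchronization Error (DSE) between two sensor channels by
# delaying one channel; the cross-channel phase at the mode frequency then
# no longer reflects the true (in-phase) mode.
rng = np.random.default_rng(4)
fs = 100.0
t = np.arange(0, 30, 1 / fs)
mode = np.sin(2 * np.pi * 2.0 * t)          # 2 Hz structural response
ch1 = mode + 0.1 * rng.standard_normal(t.size)
ch2 = mode + 0.1 * rng.standard_normal(t.size)

dse_samples = 5                              # 50 ms synchronization error
ch2_dse = np.roll(ch2, dse_samples)          # delayed copy of channel 2

def phase_at(x, y, f0):
    """Cross-channel phase (degrees) at frequency f0 via single-frequency
    Fourier coefficients."""
    c = np.exp(-2j * np.pi * f0 * t)
    return np.angle((x * c).sum().conj() * (y * c).sum(), deg=True)

# A 50 ms lag at 2 Hz corresponds to 0.05 s * 2 Hz * 360 deg = 36 degrees.
print(f"phase error without DSE: {phase_at(ch1, ch2, 2.0):+.1f} deg")
print(f"phase error with DSE:    {phase_at(ch1, ch2_dse, 2.0):+.1f} deg")
```

Since the phase error grows linearly with frequency, higher modes are distorted more for the same offset, which is one reason time-domain identification (SSI-data) needs the precautions noted above while peak-picking the FDD singular-value spectrum, which relies mainly on magnitudes, is more forgiving.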