746 results for High-bias breaking
Abstract:
Objectives To assess the effects of information interventions which orient patients and their carers/family to a cancer care facility and the services available within the facility. Design Systematic review of randomised controlled trials (RCTs), cluster RCTs and quasi-RCTs. Data sources MEDLINE, CINAHL, PsycINFO, EMBASE and the Cochrane Central Register of Controlled Trials. Methods We included studies evaluating the effect of an orientation intervention, compared with a control group which received usual care, or trials comparing one orientation intervention with another. Results Four RCTs of 610 participants met the criteria for inclusion. Findings from two RCTs demonstrated significant benefits of the orientation intervention in relation to reduced levels of distress (mean difference (MD): −8.96, 95% confidence interval (95% CI): −11.79 to −6.13), but non-significant benefits in relation to state anxiety levels (MD: −9.77, 95% CI: −24.96 to 5.41). There are insufficient data on the other outcomes of interest. Conclusions This review has demonstrated the feasibility and some potential benefits of orientation interventions. There was a low level of evidence to suggest that orientation interventions can reduce distress in patients. However, other outcomes, including patient knowledge recall/satisfaction, remain inconclusive. The majority of trials were subject to a high risk of bias and were likely to be insufficiently powered. Further well-conducted, adequately powered RCTs are required to provide evidence for determining the most appropriate intensity, nature, mode and resources for such interventions. Patient- and carer-focused outcomes should be included.
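The distress result above is a mean difference with a normal-approximation confidence interval. A minimal sketch of how such an MD and 95% CI are computed from group summary statistics (the numbers below are illustrative, not data from the reviewed trials):

```python
import math

def mean_difference_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Mean difference between two independent groups with a 95% CI
    (normal approximation, as used for MD outcomes in meta-analysis)."""
    md = m1 - m2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return md, md - z * se, md + z * se

# Illustrative numbers only -- not data from the reviewed trials.
md, low, high = mean_difference_ci(22.0, 8.0, 50, 31.0, 9.0, 50)
print(f"MD = {md:.2f}, 95% CI: {low:.2f} to {high:.2f}")
```

A CI lying entirely below zero, as for the distress outcome, is what makes the difference statistically significant at the 5% level.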
Abstract:
Positive and negative small ions, aerosol ion and number concentration and dc electric fields were monitored at an overhead high-voltage power line site. We show that the emission of corona ions was not spatially uniform along the lines and occurred from discrete components such as a particular set of spacers. Maximum ion concentrations and atmospheric dc electric fields were observed at a point 20 m downwind of the lines. It was estimated that less than 7% of the total number of aerosol particles was charged. The electrical parameters decreased steadily with further downwind distance but remained significantly higher than background.
Abstract:
A novel concept for producing high dc voltage for pulsed-power applications is proposed in this paper. The topology consists of an LC resonant circuit supplied through a tuned alternating waveform that is produced by an inverter. The control scheme is based on the detection of variations in the resonant frequency and adjustment of the switching signal patterns for the inverter to produce a square waveform with exactly the same frequency. Therefore, the capacitor voltage oscillates divergently with an increasing amplitude. A simple one-stage capacitor-diode voltage multiplier (CDVM) connected to the resonant capacitor then rectifies the alternating voltage and gives a dc level equal to twice the input voltage amplitude. The produced high voltage then appears in the form of high-voltage pulses across the load. A basic model is simulated on the Simulink platform of MATLAB, and the results are included in the paper.
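The two key relations in the abstract, tracking the LC resonant frequency and doubling the capacitor voltage through a one-stage CDVM, can be sketched numerically. The component values and voltage amplitude below are assumptions for illustration, not the paper's circuit parameters:

```python
import math

# Illustrative component values -- not the paper's circuit parameters.
L = 10e-3   # resonant inductance (H), assumed
C = 1e-6    # resonant capacitance (F), assumed

# The inverter's square wave must track the LC resonant frequency:
f0 = 1 / (2 * math.pi * math.sqrt(L * C))

# An ideal one-stage capacitor-diode voltage multiplier (CDVM)
# rectifies an ac input of amplitude V_amp to roughly twice that
# dc level (neglecting diode drops and loading):
V_amp = 5e3        # resonant-capacitor voltage amplitude (V), assumed
V_dc = 2 * V_amp

print(f"f0 = {f0:.1f} Hz, ideal CDVM output = {V_dc/1e3:.0f} kV")
```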
Abstract:
In this report we take a look at what separates high-potential emerging and young start-ups from others. We compare the characteristics, intentions and behaviours of start-ups that we judge to be 'high potential' with other start-ups. We utilise the first two years of data from the CAUSEE study. We also compare Australian start-ups with a similar study conducted in the US.
Abstract:
Applying ice or other forms of topical cooling is a popular method of treating sports injuries. It is commonplace for athletes to return to competitive activity shortly or immediately after the application of a cold treatment. In this article, we examine the effect of local tissue cooling on outcomes relating to functional performance and discuss their relevance to the sporting environment. A computerized literature search, citation tracking and hand search was performed up to April 2011. Eligible studies were trials involving healthy human participants, describing the effects of cooling on outcomes relating to functional performance. Two reviewers independently assessed the validity of included trials and calculated effect sizes. Thirty-five trials met the inclusion criteria; all had a high risk of bias. The mean sample size was 19. Meta-analyses were not undertaken due to clinical heterogeneity. The majority of studies used cooling durations >20 minutes. Strength (peak torque/force) was reported by 25 studies, with approximately 75% recording a decrease in strength immediately following cooling. There was evidence from six studies that cooling adversely affected speed, power and agility-based running tasks; two studies found this was negated with a short rewarming period. There was conflicting evidence on the effect of cooling on isolated muscular endurance. A small number of studies found that cooling decreased upper limb dexterity and accuracy. The current evidence base suggests that athletes will probably be at a performance disadvantage if they return to activity immediately after cooling. This is based on cooling for longer than 20 minutes, which may exceed the durations employed in some sporting environments. In addition, some of the reported changes were clinically small and may only be relevant in elite sport.
Until better evidence is available, practitioners should use short cooling applications and/or undertake a progressive warm up prior to returning to play.
Abstract:
In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.
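The autocorrelated and uncorrelated models of rate change discussed above differ in how each branch's rate is drawn. A minimal sketch of the two sampling schemes (parameter values are illustrative assumptions, not those used in the simulations):

```python
import math
import random

random.seed(0)  # reproducible sketch

def autocorrelated_lognormal_rates(parent_rate, n_children, sigma=0.2):
    """Autocorrelated model: each child branch's rate is drawn
    lognormally around its parent's rate."""
    return [parent_rate * math.exp(random.gauss(0.0, sigma))
            for _ in range(n_children)]

def uncorrelated_exponential_rates(mean_rate, n_branches):
    """Uncorrelated model: every branch's rate is drawn independently
    from an exponential distribution."""
    return [random.expovariate(1.0 / mean_rate) for _ in range(n_branches)]

auto_rates = autocorrelated_lognormal_rates(0.01, 4)
unco_rates = uncorrelated_exponential_rates(0.01, 4)
```

Under the autocorrelated scheme, rates drift gradually along the tree; under the uncorrelated scheme, adjacent branches can differ sharply, which is what distinguishes the inference models compared in the study.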
Abstract:
Sequence data often have competing signals that are detected by network programs or Lento plots. Such data can be formed by generating sequences on more than one tree and combining the results: a mixture model. We report that with such mixture models, the estimates of edge (branch) lengths from maximum likelihood (ML) methods that assume a single tree are biased. Based on the observed number of competing signals in real data, such a bias of ML is expected to occur frequently. Because network methods can recover competing signals more accurately, there is a need for ML methods allowing a network. A fundamental problem is that mixture models can have more parameters than can be recovered from the data, so that some mixtures are not, in principle, identifiable. We recommend that network programs be incorporated into best-practice analysis, along with ML and Bayesian trees.
Abstract:
Despite recent methodological advances in inferring the time-scale of biological evolution from molecular data, the fundamental question of whether our substitution models are sufficiently well specified to accurately estimate branch-lengths has received little attention. I examine this implicit assumption of all molecular dating methods, on a vertebrate mitochondrial protein-coding dataset. Comparison with analyses in which the data are RY-coded (AG → R; CT → Y) suggests that even rates-across-sites maximum likelihood greatly under-compensates for multiple substitutions among the standard (ACGT) NT-coded data, which has been subject to greater phylogenetic signal erosion. Accordingly, the fossil record indicates that branch-lengths inferred from the NT-coded data translate into divergence time overestimates when calibrated from deeper in the tree. Intriguingly, RY-coding led to the opposite result. The underlying NT and RY substitution model misspecifications likely relate respectively to “hidden” rate heterogeneity and changes in substitution processes across the tree, for which I provide simulated examples. Given the magnitude of the inferred molecular dating errors, branch-length estimation biases may partly explain current conflicts with some palaeontological dating estimates.
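The RY-coding transformation referenced above (AG → R; CT → Y) is straightforward to apply to a nucleotide alignment. A minimal sketch:

```python
# RY-coding collapses purines (A, G) to R and pyrimidines (C, T) to Y,
# discarding the transition substitutions that saturate fastest.
RY = {"A": "R", "G": "R", "C": "Y", "T": "Y"}

def ry_code(seq):
    """Recode a nucleotide sequence; gaps and ambiguity codes are
    left unchanged."""
    return "".join(RY.get(base, base) for base in seq.upper())

print(ry_code("ATGC-N"))  # RYRY-N
```

Only transversions survive the recoding, which is why RY-coded data are less affected by the hidden rate heterogeneity that erodes phylogenetic signal in standard NT-coded data.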
Abstract:
At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models where an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought, and a highly contextual associative mode of thought, conducive to combining concepts in new ways and ‘breaking out of a rut’. When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts which highlights the manner in which different contexts can lead to modern humans attributing very different meanings to the interpretation of one concept.
Abstract:
In Strong v Woolworth Ltd (t/as Big W) (2012) 285 ALR 420 the appellant was injured when she fell at a shopping centre outside the respondent’s premises. The appellant was disabled, having had her right leg amputated above the knee and therefore walked with crutches. One of the crutches came into contact with a hot potato chip which was on the floor, causing the crutch to slip and the appellant to fall. The appellant sued in negligence, alleging that the respondent was in breach of its duty of care by failing to institute and maintain a cleaning system to detect spillages and foreign objects within its sidewalk sales area. The issue before the High Court was whether it could be established on the balance of probabilities as to when the hot chip had fallen onto the ground so as to prove causation in fact...
Abstract:
A highly sensitive fiber Bragg grating (FBG) strain sensor with automatic temperature compensation is demonstrated. The FBG is axially linked with a stick, and their free ends are fixed to the measured object. When the measured strain changes, the stick does not change in length, but the FBG does. When the temperature changes, the stick changes in length to pull the FBG and so realize temperature compensation. In experiments, 1.45 times the strain sensitivity of a bare FBG, with temperature compensation of less than 0.1 nm Bragg wavelength drift over a 100 °C shift, is achieved.
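The compensation principle can be illustrated with the standard first-order FBG response, in which the Bragg wavelength shifts linearly with strain and temperature. The coefficients below are typical textbook values for a bare FBG near 1550 nm, not the paper's calibration:

```python
# First-order FBG response: the Bragg wavelength shifts linearly with
# strain and temperature. Coefficients are typical values for a bare
# FBG near 1550 nm (assumptions, not the paper's calibration).
K_STRAIN = 1.2e-3   # nm per microstrain (~1.2 pm/ue), typical
K_TEMP = 0.010      # nm per kelvin (~10 pm/K), typical

def bragg_shift(microstrain, delta_T):
    """Wavelength shift (nm) of a bare FBG."""
    return K_STRAIN * microstrain + K_TEMP * delta_T

# Uncompensated, a 100 K change alone drifts the wavelength by ~1 nm;
# the stick arrangement cancels this mechanically, leaving <0.1 nm.
thermal_drift = bragg_shift(0, 100)
```

The order-of-magnitude gap between ~1 nm of uncompensated thermal drift and the <0.1 nm reported above is what makes the mechanical compensation worthwhile.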
Abstract:
At cryogenic temperatures, a fiber Bragg grating (FBG) temperature sensor with controllable sensitivity and variable measurement range is demonstrated using a bimetal configuration. In experiments, sensitivities of -51.2, -86.4, and -520 pm/K are achieved by varying the lengths of the metals. Measurement ranges of 293-290.5, 283-280.5, and 259-256.5 K are achieved by shortening the gap between the metals.
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to the users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems when operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for future wide-area ITS service deployment. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performances of the RTK and PPP techniques were evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments.
RTK solutions achieved an RMS precision of 0.09 to 0.2 m in static tests and 0.2 to 0.3 m in kinematic tests, while PPP reported 0.5 to 1.5 m in static tests and 1 to 1.8 m in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level-accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although their overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared with the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network. The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput and packet retransmission rate. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was also found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results.
The experimental results from the static and kinematic field tests also showed that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential parameter. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meter. The results showed the positioning accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
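The motivation for less frequent correction transmission is easy to see with a back-of-envelope data-volume estimate. The message size below is an assumption for illustration, not a value measured in the experiments:

```python
# Back-of-envelope daily data volume for correction broadcasting.
# The message size is an assumption for illustration, not a value
# measured in the experiments.
MSG_BYTES = 500          # assumed size of one correction message
SECONDS_PER_DAY = 86_400

def daily_volume_mb(interval_s):
    """Correction-data volume (MB/day) at a given transmission interval."""
    return MSG_BYTES * (SECONDS_PER_DAY / interval_s) / 1e6

for interval in (1, 5, 20, 60):
    print(f"{interval:>2} s interval -> {daily_volume_mb(interval):6.2f} MB/day")
```

Per-user volume scales inversely with the interval, so stretching the interval from 1 s to 20 s cuts the communication load twentyfold, which is why the 20-second result above matters for wide-area deployment.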
Abstract:
Sustainability is an issue for everyone. For instance, the higher education sector is being asked to take an active part in creating a sustainable future, due to its moral responsibility, social obligation, and its own need to adapt to the changing higher education environment. By either signing declarations or making public statements, many universities are expressing their desire to become role models for enhancing sustainability. However, too often they have not delivered as much as they had intended. This is particularly evident in the lack of physical implementation of sustainable practices in the campus environment. Real projects such as green technologies on campus have the potential to rectify the problem in addition to improving building performance. Despite being relatively recent innovations, Green Roofs and Living Walls have been widely recognized because of their substantial benefits, such as runoff water reduction, noise insulation, and the promotion of biodiversity. While they can be found in commercial and residential buildings, they appear only infrequently on campuses, as universities have been very slow to implement sustainability innovations. There has been very little research examining the fundamental problems from the organizational perspective. To address this deficiency, the researchers designed and carried out 24 semi-structured interviews to investigate the general organizational environment of Australian universities, with the intention of identifying organizational obstacles to the delivery of Green Roof and Living Wall projects. This research revealed that the organizational environment of Australian universities still leaves considerable room for improvement in accommodating sustainability practices.
Some of the main organizational barriers to the adoption of sustainable innovations were identified, including a lack of awareness and knowledge, the absence of strong supportive leadership, a weak sustainability-rooted culture and several management challenges. This led to the development of a set of strategies to help optimize the organizational environment for the purpose of better decision making for Green Roof and Living Wall implementation.
Abstract:
Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations of current approaches in the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM. Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean BMI (±SD) = 34.2±7.0 kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% of energy intake, body weight of 0.7 kg and waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group.
The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in the interpretation of results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reported dietary intake in this instance. This study provided an example of the methodological challenges experienced in measuring changes in absolute diet using a FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes. Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age=64.7±3.8 years; BMI=33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam; however, forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications to the method were made to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.
Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age=24.7±9.1 years; BMI=21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced the mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than their type, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age=23.3±5.1 years; BMI=20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with the resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57kJ/day vs.
274kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake. Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM for assessing energy and macronutrient intake were evaluated in a sample of 10 adults (6 males; age=61.2±6.9 years; BMI=31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with an EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR method, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found across energy and macronutrients, except fat (r=0.24), between the two dietary measures. High agreement was observed between dietitians for estimates of energy and macronutrient intake derived for both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods. This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record.
The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
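The EI:TEE comparison used in Study 4 reduces to a simple ratio check of reported intake against the doubly-labelled-water measurement. A minimal sketch, with an illustrative plausibility cutoff that is not the thesis's criterion:

```python
# Sketch of the EI:TEE plausibility check: energy intake (EI) from a
# dietary record is compared against total energy expenditure (TEE)
# measured by doubly labelled water. The cutoffs below are an
# illustrative bound, not the criterion used in the thesis.
def ei_tee_ratio(ei_mj, tee_mj):
    """Ratio of reported energy intake to measured expenditure."""
    return ei_mj / tee_mj

def plausible(ratio, low=0.76, high=1.24):
    """Flag a record as plausible if EI is within about +/-24% of TEE."""
    return low <= ratio <= high

r = ei_tee_ratio(8.0, 10.0)   # illustrative values, not study data
print(f"EI:TEE = {r:.2f}, plausible: {plausible(r)}")
```

A ratio well below 1, such as the group-level 0.76 reported above, indicates systematic under-reporting relative to measured energy expenditure.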