1000 results for bending loss
Abstract:
BACKGROUND: Web-based programs are a potential medium for supporting weight loss because of their accessibility and wide reach. Research is warranted to determine the shorter- and longer-term effects of these programs in relation to weight loss and other health outcomes.
OBJECTIVE: The aim was to evaluate the effects of a Web-based component of a weight loss service (Imperative Health) in an overweight/obese population at risk of cardiovascular disease (CVD) using a randomized controlled design and a true control group.
METHODS: A total of 65 overweight/obese adults at high risk of CVD were randomly allocated to 1 of 2 groups. Group 1 (n=32) was provided with the Web-based program, which supported positive dietary and physical activity changes and assisted in managing weight. Group 2 (n=33) continued with their usual self-care. Assessments were conducted face-to-face. The primary outcome was between-group change in weight at 3 months. Secondary outcomes included between-group changes in anthropometric measurements, blood pressure, lipid measurements, physical activity, and energy intake at 3, 6, and 12 months. Interviews were conducted to explore participants' views of the Web-based program.
RESULTS: Retention rates for the intervention and control groups were 78% (25/32) vs 97% (32/33) at 3 months, 66% (21/32) vs 94% (31/33) at 6 months, and 53% (17/32) vs 88% (29/33) at 12 months. Intention-to-treat analysis, using the baseline-observation-carried-forward imputation method, revealed that the intervention group lost more weight relative to the control group at 3 months (mean -3.41, 95% CI -4.70 to -2.13 kg vs mean -0.52, 95% CI -1.55 to 0.52 kg, P<.001) and at 6 months (mean -3.47, 95% CI -4.95 to -1.98 kg vs mean -0.81, 95% CI -2.23 to 0.61 kg, P=.02), but not at 12 months (mean -2.38, 95% CI -3.48 to -0.97 kg vs mean -1.80, 95% CI -3.15 to -0.44 kg, P=.77). More intervention group participants than control group participants lost ≥5% of their baseline body weight at 3 months (34%, 11/32 vs 3%, 1/33, P<.001) and 6 months (41%, 13/32 vs 18%, 6/33, P=.047), but not at 12 months (22%, 7/32 vs 21%, 7/33, P=.95). The intervention group showed improvements in total cholesterol and triglycerides and adopted more positive dietary and physical activity behaviors for up to 3 months versus control; however, these improvements were not sustained.
CONCLUSIONS: Although the intervention group had high attrition, this study provides evidence that this Web-based program can be used to initiate clinically relevant weight loss and to lower CVD risk for up to 3-6 months, based on the proportion of intervention group participants losing ≥5% of their body weight versus the control group. It also highlights a need to augment Web-based programs with further interventions, such as in-person support, to enhance engagement and maintain these changes.
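The baseline-observation-carried-forward (BOCF) imputation named in the results is simple to make concrete. Below is a minimal sketch in Python with illustrative data, not the trial's; the assumption built into BOCF is that a participant who drops out has reverted to their baseline weight.

```python
# A minimal sketch of baseline-observation-carried-forward (BOCF) imputation.
# The weights below are illustrative placeholders, not the trial's data.
import numpy as np

def bocf(baseline, follow_up):
    """Replace missing follow-up values (NaN) with the baseline value."""
    baseline = np.asarray(baseline, dtype=float)
    follow_up = np.asarray(follow_up, dtype=float)
    return np.where(np.isnan(follow_up), baseline, follow_up)

baseline = np.array([92.0, 105.3, 88.1, 110.6])   # weights in kg
month3 = np.array([89.5, np.nan, 84.9, np.nan])   # two dropouts at follow-up
imputed = bocf(baseline, month3)
change = imputed - baseline                       # dropouts contribute 0 change
print(change)                                     # [-2.5  0.  -3.2  0. ]
```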
Abstract:
In this paper, our previous work on Principal Component Analysis (PCA) based fault detection is extended to the dynamic monitoring and detection of loss-of-main in power systems using wide-area synchrophasor measurements. In the previous work, a static PCA model was built and verified to be capable of detecting and extracting system faulty events; however, the false alarm rate was high. To address this problem, this paper uses the well-known 'time lag shift' method to include the dynamic behavior of the PCA model, based on synchronized measurements from Phasor Measurement Units (PMUs); the resulting method is named Dynamic Principal Component Analysis (DPCA). Compared with the static PCA approach, as well as the traditional passive mechanisms of loss-of-main detection, the proposed DPCA procedure describes how the synchrophasors are linearly auto- and cross-correlated, based on conducting a singular value decomposition of the augmented time-lagged synchrophasor matrix. Similar to the static PCA method, two statistics, namely T² and Q, with confidence limits are calculated to form intuitive charts for engineers or operators to monitor the loss-of-main situation in real time. The effectiveness of the proposed methodology is evaluated on the loss-of-main monitoring of a real system, where historical data are recorded from PMUs installed at several locations in the UK/Ireland power system.
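The lag-augmentation and T²/Q monitoring steps described above can be sketched compactly. The following Python sketch is illustrative only: the lag order, number of retained components, and data are placeholders, and the paper's confidence-limit calculations are not reproduced.

```python
# A minimal sketch of dynamic PCA monitoring via a time-lag-augmented matrix.
import numpy as np

def augment_with_lags(X, lags):
    """Stack time-lagged copies of X column-wise: [X(t), X(t-1), ..., X(t-lags)]."""
    n = X.shape[0] - lags
    return np.hstack([X[lags - k : lags - k + n] for k in range(lags + 1)])

def fit_dpca(X, lags=2, n_components=3):
    """Fit a dynamic PCA model via SVD of the mean-centred, lag-augmented matrix."""
    Xa = augment_with_lags(X, lags)
    mu, sigma = Xa.mean(axis=0), Xa.std(axis=0)
    Xs = (Xa - mu) / sigma
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_components].T                            # retained loadings
    lam = (S[:n_components] ** 2) / (Xs.shape[0] - 1)  # retained eigenvalues
    return dict(mu=mu, sigma=sigma, P=P, lam=lam, lags=lags)

def monitor(model, X):
    """Compute T^2 and Q (squared prediction error) for new samples."""
    Xa = augment_with_lags(X, model["lags"])
    Xs = (Xa - model["mu"]) / model["sigma"]
    T = Xs @ model["P"]                        # scores in the model subspace
    t2 = np.sum(T**2 / model["lam"], axis=1)   # Hotelling's T^2
    resid = Xs - T @ model["P"].T              # residual subspace
    q = np.sum(resid**2, axis=1)               # Q statistic
    return t2, q

# Illustrative use with synthetic "normal operation" data standing in for PMUs:
rng = np.random.default_rng(0)
model = fit_dpca(rng.normal(size=(500, 6)))
t2, q = monitor(model, rng.normal(size=(100, 6)))
```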
Abstract:
To determine the effect of microbial metabolites on the release of root exudates from perennial ryegrass, seedlings were pulse-labelled with [14C]-CO2 in the presence of a range of soil micro-organisms. Microbial inoculants were spatially separated from roots by Millipore membranes so that root infection did not occur; using this technique, only microbial metabolites could affect root exudation. The effect of microbial metabolites on carbon assimilation, distribution, and root exudation was determined for 15 microbial species. Assimilation of the pulse label varied by over 3.5-fold, depending on the inoculant. Distribution of the label between roots and shoots also varied with inoculant, but the carbon pool most sensitive to inoculation was root exudation. In the absence of a microbial inoculant, only 1% of the assimilated label was exuded. Inoculation of the microcosms always increased exudation, but the percentage exuded varied greatly, within the range of 3-34%. © 1995 Kluwer Academic Publishers.
Abstract:
Single-zone modelling is used to assess three 1D impeller loss model collections, with an automotive turbocharger centrifugal compressor used for evaluation. The individual 1D losses are presented relative to each other at three tip speeds to provide a visual description of each author's perception of the relative importance of each loss. The losses are compared through their resulting predictions of pressure ratio and efficiency, which are in turn compared with test data; from this comparison, a combination of the 1D loss collections is identified as providing the best performance prediction. 3D CFD simulations have also been carried out for the same geometry using a single-passage model. A method of extracting 1D losses from CFD is described and used to draw further comparisons with the 1D losses. A 1D scroll volute model has been added to the single-passage CFD results, and good agreement with the test data is achieved. Shortcomings in the existing 1D loss models are identified as a result of the comparisons with the 3D CFD losses. Further comparisons are drawn between the predicted 1D data, the 3D CFD simulation results, and the test data using a nondimensional method to highlight where the current errors in the 1D prediction lie.
Abstract:
Single-zone modelling is used to assess different collections of 1D impeller loss models. Three collections of loss models have been identified in the literature, and the background to each is discussed. Each collection is evaluated using three modern automotive turbocharger-style centrifugal compressors, and comparisons of performance across the collections are made. An empirical data set taken from standard hot gas stand tests for each turbocharger is used as a baseline for comparison. Compressor range is predicted in this study: impeller diffusion ratio is shown to be a useful method of predicting compressor surge in 1D, and choke is predicted using basic compressible flow theory, as sketched below. The compressor designer can use this as a guide to identify the most compatible collection of losses for turbocharger compressor design applications; the analysis indicates the most appropriate collection for the design of automotive turbocharger centrifugal compressors.
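For the choke-side prediction, basic compressible flow theory gives the maximum mass flow a throat can pass once it reaches sonic conditions. A minimal sketch follows; the throat area and inlet conditions are illustrative placeholders, not values from the study.

```python
# A minimal sketch of choke prediction from basic compressible flow theory:
# the choked mass flow through a throat at a given stagnation state.
import math

def choked_mass_flow(A_throat, T0, p0, gamma=1.4, R=287.0):
    """Maximum (choked) mass flow [kg/s] through a throat of area A_throat [m^2]
    at stagnation temperature T0 [K] and stagnation pressure p0 [Pa]."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return A_throat * p0 / math.sqrt(R * T0) * math.sqrt(gamma) * term

# Example: a ~600 mm^2 inducer throat at standard inlet conditions (illustrative)
m_dot_choke = choked_mass_flow(A_throat=6e-4, T0=298.0, p0=101325.0)
print(f"choked mass flow = {m_dot_choke:.3f} kg/s")   # about 0.14 kg/s
```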
Abstract:
High Voltage Direct Current (HVDC) electric power transmission is a promising technology for integrating offshore wind farms and interconnecting power grids in different regions. To maintain the DC voltage, droop control has been widely used. Transmission line loss constitutes an important part of the total power loss in a multi-terminal HVDC (MTDC) scheme. In this paper, the relation between droop controller design and transmission loss is investigated, and different MTDC layout configurations are compared to examine the effect of droop controller design on the transmission loss.
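The coupling between droop control and line loss can be made concrete with a minimal sketch: a converter's power order falls as the local DC voltage rises above its setpoint, and the power actually sent determines the I²R loss on the line. All gains, setpoints, and line parameters below are illustrative, not taken from the paper.

```python
# A minimal sketch of a P-V droop characteristic and the resulting line loss.
def droop_power(v_dc, v_ref=400e3, p_ref=200e6, k_droop=0.05,
                p_base=200e6, v_base=400e3):
    """P-V droop: the converter power order falls as DC voltage rises.
    k_droop is the per-unit voltage deviation that changes power by 1 pu."""
    dv_pu = (v_dc - v_ref) / v_base
    return p_ref - (dv_pu / k_droop) * p_base

def line_loss(p_sent, v_dc, r_line):
    """I^2 R loss on a DC line carrying p_sent [W] at voltage v_dc [V]."""
    i = p_sent / v_dc
    return i**2 * r_line

p = droop_power(v_dc=402e3)                          # backs off as voltage rises
loss = line_loss(p_sent=p, v_dc=402e3, r_line=1.5)   # 1.5 ohm cable (illustrative)
print(f"power order {p/1e6:.1f} MW, line loss {loss/1e6:.3f} MW")
```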
Abstract:
Public concern over biodiversity loss is often rationalized as a threat to ecosystem functioning, but biodiversity-ecosystem functioning (BEF) relations are hard to quantify empirically at large scales. We use a realistic marine food-web model, resolving species over five trophic levels, to study how total fish production changes with species richness. This complex model predicts that BEF relations, on average, follow simple Michaelis-Menten curves when species are randomly deleted. These are shaped mainly by the release of fish from predation, rather than the release from competition expected from simpler communities. Ordering species deletions by decreasing body mass or trophic level, representing 'fishing down the food web', accentuates prey-release effects and results in unimodal relationships. In contrast, simultaneous unselective harvesting diminishes these effects and produces an almost linear BEF relation, with maximum multispecies fisheries yield at approximately 40% of initial species richness. These findings have important implications for the valuation of marine biodiversity.
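The Michaelis-Menten form reported for the average BEF relation is a saturating curve in species richness. A minimal sketch, with illustrative parameter values only:

```python
# A minimal sketch of the Michaelis-Menten BEF form; parameters are illustrative.
import numpy as np

def bef_michaelis_menten(richness, f_max, k_half):
    """Ecosystem function (e.g., total fish production) vs species richness:
    a saturating curve approaching f_max, with half-saturation at k_half species."""
    richness = np.asarray(richness, dtype=float)
    return f_max * richness / (k_half + richness)

S = np.arange(0, 61)                               # species richness 0..60
F = bef_michaelis_menten(S, f_max=1.0, k_half=10.0)
# F rises steeply at low richness and saturates: in a species-rich community,
# the first losses cost little function, while later losses cost much more.
```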
Abstract:
The piezoresistance effect is defined as the change in electrical resistance due to applied mechanical stress. Silicon has a relatively large piezoresistance effect, which has been known since 1954. A four-point bending setup is proposed and designed to analyze the piezoresistance effect in p-type silicon; this setup applies uniform, uniaxial stress along the <110> crystal direction. The main aim of this work is to investigate the piezoresistive characteristics of p-type resistors as a function of doping concentration using COMSOL Multiphysics. Simulation results are compared with experimental data.
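The first-order relation underlying such an analysis is dR/R = pi_l * sigma for uniaxial stress along the resistor. A minimal sketch for p-type silicon along <110>, using Smith's 1954 low-doped room-temperature coefficients; the doping dependence that is the paper's focus is not modelled here.

```python
# A minimal sketch of first-order piezoresistance for p-type Si along <110>.
# Fundamental coefficients (Smith, 1954; low doping, room temperature), in 1/Pa:
PI_11, PI_12, PI_44 = 6.6e-11, -1.1e-11, 138.1e-11

# Longitudinal coefficient with current and stress both along <110>
pi_l_110 = (PI_11 + PI_12 + PI_44) / 2.0

sigma = 100e6                  # 100 MPa uniaxial stress from the bending rig
dR_over_R = pi_l_110 * sigma   # first-order relative resistance change
print(f"pi_l<110> = {pi_l_110:.3e} /Pa, dR/R = {dR_over_R*100:.2f} %")
# Prints roughly 7.2 %, illustrating why p-type <110> resistors make good gauges.
```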
Abstract:
PTF11iqb was initially classified as a Type IIn event caught very early after explosion. It showed narrow Wolf-Rayet (WR) spectral features on day 2, but the narrow emission weakened quickly and the spectrum morphed to resemble those of Types II-L and II-P. At late times, H-alpha emission exhibited a complex, multipeaked profile reminiscent of SN 1998S. In terms of spectroscopic evolution, we find that PTF11iqb was a near twin of SN 1998S, although with weaker interaction with circumstellar material (CSM) at early times and stronger CSM interaction at late times. We interpret the spectral changes as caused by early interaction with asymmetric CSM that is quickly (by day 20) enveloped by the expanding SN ejecta photosphere, but then revealed again after the end of the plateau when the photosphere recedes. The light curve can be matched with a simple model for weak CSM interaction added to the light curve of a normal SN II-P. This plateau requires that the progenitor had an extended H envelope like a red supergiant, consistent with the slow progenitor wind speed indicated by the narrow emission. The cool supergiant progenitor is significant because PTF11iqb showed WR features in its early spectrum, meaning that the presence of such WR features in an early SN spectrum does not necessarily indicate a WR-like progenitor. [abridged] Overall, PTF11iqb bridges SNe IIn with the weaker pre-SN mass loss seen in SNe II-L and II-P, implying a continuum between these types.
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified so that they can be priced and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800 000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge, riverine flooding, and wildfire. Such simulations are both computation- and data-intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss (PML), are computed by taking both primary and secondary uncertainty into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading, and careful hand-tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units (GPUs) and Intel Xeon Phi many-core accelerators.
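The simulation structure described above, primary uncertainty over event occurrence and secondary uncertainty over loss severity, aggregated per trial and summarized as a quantile, can be sketched briefly. The trial count and distributions below are illustrative placeholders, not the reference model's.

```python
# A minimal sketch of a catastrophe-risk simulation loop and a PML quantile.
import numpy as np

rng = np.random.default_rng(42)

def simulate_year_losses(n_trials=10_000, mean_events=1000):
    """Return the aggregate portfolio loss for each simulated year (trial).
    The paper uses up to 800 000 trials; this default is scaled down."""
    totals = np.empty(n_trials)
    for t in range(n_trials):
        # Primary uncertainty: how many events occur this simulated year
        n_events = rng.poisson(mean_events)
        # Secondary uncertainty: loss per event (lognormal placeholder severity)
        losses = rng.lognormal(mean=10.0, sigma=1.5, size=n_events)
        totals[t] = losses.sum()
    return totals

year_losses = simulate_year_losses()
pml_250yr = np.quantile(year_losses, 1 - 1.0 / 250)   # 250-year return period
print(f"PML (250-year): {pml_250yr:,.0f}")
```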
Abstract:
OBJECTIVES:
To compare the methods used by 3 large randomized trials of glaucoma treatment to estimate the incidence of visual field progression, by applying these methods to a common data set of annually obtained visual field measurements from patients with glaucoma followed up for an average of 6 years.
METHODS:
The methods used by the Advanced Glaucoma Intervention Study (AGIS), the Collaborative Initial Glaucoma Treatment Study (CIGTS), and the Early Manifest Glaucoma Trial (EMGT) were applied to 67 eyes of 56 patients with glaucoma enrolled in a 10-year natural history study of glaucoma using Program 30-2 of the Humphrey Field Analyzer (Humphrey Instruments, San Leandro, Calif). The incidence of apparent visual field progression was estimated for each method. The extent of agreement between the methods was calculated, and time to apparent progression was compared.
RESULTS:
The proportion of patients progressing was 11%, 22%, and 23% with the AGIS, CIGTS, and EMGT methods, respectively. Clinical assessment identified 23% of patients as having progressed, but only half of these were also identified by the CIGTS or EMGT methods. The CIGTS and EMGT methods had comparable incidence rates, but only half of those identified by 1 method were also identified by the other.
CONCLUSIONS:
The EMGT and CIGTS methods produced rates of apparent progression that were twice those of the AGIS method. Although EMGT, CIGTS, and clinical assessment rates were comparable, they did not identify the same patients as having had field progression.
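The pairwise agreement comparisons reported above can be quantified with a chance-corrected statistic; Cohen's kappa is one common choice, though the paper's exact agreement measure is not specified here. A minimal sketch with hypothetical progression flags:

```python
# A minimal sketch of chance-corrected agreement (Cohen's kappa) between two
# binary progression classifications. All flags below are hypothetical.
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary classifications."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                          # observed agreement
    p_yes = a.mean() * b.mean()                   # chance both flag progression
    p_no = (1 - a.mean()) * (1 - b.mean())        # chance both flag stability
    pe = p_yes + p_no                             # total chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical progression flags for 67 eyes under two methods:
rng = np.random.default_rng(1)
method_a = rng.random(67) < 0.22
method_b = rng.random(67) < 0.23
print(f"kappa = {cohens_kappa(method_a, method_b):.2f}")
```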
Abstract:
PURPOSE: Subjects with significant peripheral field loss (PFL) self-report difficulty in street crossing. In this study, we compared the traffic gap judgment ability of fully sighted and PFL subjects to determine whether accuracy in identifying crossable gaps was adversely affected by field loss. Moreover, we explored the contribution of visual and nonvisual factors to traffic gap judgment ability. METHODS: Eight subjects with significant PFL as a result of advanced retinitis pigmentosa or glaucoma, with binocular visual field <20 degrees, and five age-matched subjects with normal vision (NV) were recruited. All subjects were required to judge when they perceived it was safe to cross at a 2-way, 4-lane street while standing on the curb. Eye movements were recorded by an eye tracker as the subjects performed the decision task. Movies of the eye-on-scene were made offline, and fixation patterns were classified as either relevant or irrelevant. Subjects' street-crossing behavior, habitual approach to street crossing, and perceived difficulties were assessed. RESULTS: Compared with the NV subjects, the PFL subjects identified 12% fewer crossable gaps while making 23% more errors by identifying a gap as crossable when it was too short (p < 0.05). The differences in traffic gap judgment ability of the PFL subjects might be explained by their significantly smaller fixation area (p = 0.006) and the fewer fixations allocated to relevant tasks (p = 0.001). The subjects' habitual approach to street crossing and perceived difficulties in street crossing were significantly correlated with traffic gap judgment performance (r > 0.60). CONCLUSIONS: As a consequence of significant field loss, limited visual information about the traffic environment can be acquired, resulting in significantly reduced performance in judging safe crossable gaps. This poor traffic gap judgment ability in the PFL subjects raises important concerns for their safety when attempting to cross the street.
Abstract:
OBJECTIVES:
To describe a modified manual cataract extraction technique, sutureless large-incision manual cataract extraction (SLIMCE), and to report its clinical outcomes.
METHODS:
Case notes of 50 consecutive patients who underwent cataract surgery performed using the SLIMCE technique were retrospectively reviewed. Clinical outcomes 3 months after surgery were analyzed, including postoperative uncorrected visual acuity, best-corrected visual acuity, intraoperative and postoperative complications, endothelial cell loss, and surgically induced astigmatism assessed using the vector analysis method.
RESULTS:
At the 3-month follow-up, all 50 patients had postoperative best-corrected visual acuity of at least 20/60, and 37 patients (74%) had visual acuity of at least 20/30. Uncorrected visual acuity was at least 20/60 in 28 patients (56%) and was between 20/80 and 20/200 in 22 patients (44%). No significant intraoperative complications were encountered, and sutureless wounds were achieved in all but 2 patients. At the 3-month follow-up, endothelial cell loss was 3.9%, and the mean surgically induced astigmatism was 0.69 diopter.
CONCLUSIONS:
SLIMCE is a safe and effective manual cataract extraction technique with low rates of surgically induced astigmatism and endothelial cell loss. In view of its low cost, SLIMCE may have a potential role in reducing cataract blindness in developing countries.
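The vector analysis of surgically induced astigmatism mentioned in the methods is commonly done with a doubled-angle decomposition: each cylinder is mapped to Cartesian components at twice its axis, preoperative and postoperative vectors are subtracted, and the magnitude of the difference is the SIA. A minimal sketch with hypothetical pre/post readings; the paper's exact variant is not reproduced here.

```python
# A minimal sketch of doubled-angle vector analysis of surgically induced
# astigmatism (SIA). Pre/post cylinder readings below are hypothetical.
import math

def to_vector(cyl, axis_deg):
    """Doubled-angle Cartesian components of a cylinder (diopters, degrees)."""
    theta = math.radians(2 * axis_deg)
    return cyl * math.cos(theta), cyl * math.sin(theta)

def sia(pre_cyl, pre_axis, post_cyl, post_axis):
    """Magnitude of the surgically induced astigmatism vector, in diopters."""
    x0, y0 = to_vector(pre_cyl, pre_axis)
    x1, y1 = to_vector(post_cyl, post_axis)
    return math.hypot(x1 - x0, y1 - y0)

print(f"SIA = {sia(1.00, 90, 1.25, 80):.2f} D")   # about 0.46 D for these readings
```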