919 results for wood load
Abstract:
The tests used to obtain the stiffness properties of wood are performed with two loading cycles, as defined by the Brazilian standard ABNT NBR 7190 (Design of Timber Structures). However, reducing the number of cycles would decrease the operating time of the testing machine, resulting in lower electricity consumption during the tests. This research aimed to investigate, with the aid of analysis of variance (ANOVA), the influence of the use of three load cycles on the modulus of elasticity in compression parallel to the grain (Ec0), in tension parallel to the grain (Et0), in bending (Em) and in compression perpendicular to the grain (Ec90) of Angico Preto (Anadenanthera macrocarpa) wood. For each combination of number of cycles and stiffness property, 12 samples were manufactured, totaling 144 specimens. The ANOVA results revealed statistical equivalence between the stiffness properties for the load cycle numbers evaluated, indicating that it is possible to carry out the tests with a single load cycle, saving time and energy in the operation of the equipment.
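A minimal sketch of the one-way ANOVA comparison described above, using scipy and purely hypothetical stiffness values (not data from the study):

```python
# Hypothetical sketch: one-way ANOVA comparing a stiffness property (e.g. Ec0)
# obtained with 1, 2 and 3 load cycles.  Values below are illustrative only.
from scipy import stats

ec0_1_cycle  = [13100, 12850, 13420, 12990, 13210, 13050]   # MPa, hypothetical
ec0_2_cycles = [13060, 12900, 13380, 13010, 13150, 13120]
ec0_3_cycles = [13010, 12950, 13300, 13080, 13190, 13000]

f_stat, p_value = stats.f_oneway(ec0_1_cycle, ec0_2_cycles, ec0_3_cycles)
# A p-value above the chosen significance level (e.g. 0.05) corresponds to the
# statistical equivalence between cycle counts reported in the abstract.
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```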
Abstract:
Fire safety has become an important part of structural design due to the ever-increasing loss of property and lives during fires. Conventionally, the fire rating of load-bearing wall systems made of light gauge steel frames (LSF) is determined using fire tests based on the standard time-temperature curve given in ISO 834 (ISO, 1999). That curve originated from the application of wood-burning furnaces in the early 1900s. However, modern commercial and residential buildings make use of thermoplastic materials, which means considerably higher fuel loads. Hence, a detailed fire research study into the performance of LSF walls was undertaken using real fire curves developed from Eurocode parametric curves (ECS, 2002) and Barnett's BFD curves (Barnett, 2002), based on both full-scale fire tests and numerical studies. It covered LSF walls without any insulation and the recently developed externally insulated composite panel system. This paper presents the details of the numerical studies and their results, together with brief details of the development of the real building fire curves and the experimental studies.
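For reference, the ISO 834 standard curve and the heating phase of the Eurocode (EN 1991-1-2, Annex A) parametric curve can be written down directly; the sketch below uses the published formulas, but the gamma value is only an illustrative assumption, not one of the compartments tested in the paper:

```python
import math

def iso834(t_min: float) -> float:
    """ISO 834 standard time-temperature curve (gas temperature, deg C)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def eurocode_parametric_heating(t_min: float, gamma: float) -> float:
    """Heating phase of the Eurocode (EN 1991-1-2, Annex A) parametric curve.

    gamma depends on the compartment opening factor O and thermal inertia b:
    gamma = ((O / b) / (0.04 / 1160)) ** 2.  gamma = 1 is an illustrative
    assumption under which the curve approximates the standard fire.
    """
    t_star = (t_min / 60.0) * gamma          # t* in hours
    return 20.0 + 1325.0 * (1.0 - 0.324 * math.exp(-0.2 * t_star)
                                 - 0.204 * math.exp(-1.7 * t_star)
                                 - 0.472 * math.exp(-19.0 * t_star))

for t in (5, 15, 30, 60):
    print(t, round(iso834(t)), round(eurocode_parametric_heating(t, gamma=1.0)))
```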
Abstract:
Fire resistance has become an important part of structural design due to the ever-increasing loss of property and lives every year. Conventionally, the fire rating of load-bearing light gauge steel frame (LSF) walls is determined using standard fire tests based on the time-temperature curve given in ISO 834 [1]. Full-scale fire testing based on this standard time-temperature curve originated from the application of wood-burning furnaces in the early 1900s, and it is questionable whether it truly represents the fuel loads in modern buildings. Hence, a detailed fire research study into the performance of LSF walls was undertaken using real design fires based on Eurocode parametric curves [2] and Barnett's 'BFD' curves [3]. This paper presents the development of these real fire curves and the results of a full-scale experimental study into the structural and fire behaviour of load-bearing LSF stud wall systems.
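Barnett's BFD curve is commonly written with three parameters (maximum temperature rise, time at maximum, and a shape constant); the sketch below uses that common form with illustrative parameter values, not those of the tested walls:

```python
import math

def bfd_curve(t_min, t_ambient=20.0, t_max_rise=900.0, t_at_max=30.0, shape=1.6):
    """Barnett 'BFD' compartment fire curve (gas temperature, deg C).

    T = Ta + Tm * exp(-z),  z = (ln t - ln tm)^2 / sc
    where Tm is the maximum temperature rise above ambient, tm the time (min)
    at which it occurs, and sc a shape constant.  Parameter values here are
    illustrative assumptions only.
    """
    if t_min <= 0:
        return t_ambient
    z = (math.log(t_min) - math.log(t_at_max)) ** 2 / shape
    return t_ambient + t_max_rise * math.exp(-z)

print([round(bfd_curve(t)) for t in (5, 15, 30, 60, 120)])
```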
Abstract:
Male sex-biased parasitism (SBP) occurs across a range of mammalian taxa, and two contrasting sets of hypotheses have been suggested for its establishment. The first invokes body size per se and suggests that larger individuals are either a larger target for parasites, trade off growth at the expense of immunity, or cope better with parasitism than smaller individuals. The second suggests a sex-specific handicap whereby males have reduced immunocompetence compared to females due to the immunosuppressive effects of testosterone. The current study investigated whether sex-biased parasitism is driven by host 'body size' or 'sex' using a rodent-tick (Apodemus sylvaticus–Ixodes ricinus) system. Moreover, the presence or absence of large mammals at study sites was used to control the presence of immature ticks infesting wood mice, allowing the impacts of parasitism on host body mass and female reproduction to be assessed. As expected, male mice had greater tick loads than females, and analyses suggested this sex bias was driven by body mass as opposed to sex. It is therefore likely that larger individuals are a larger target for parasites, trade off growth at the expense of immunity, or adapt behavioural responses to parasitism based on their body size. Parasite load had no effect on host body mass or female reproductive output, suggesting individuals may alter behaviour or life history strategies to compensate for costs incurred through parasitism. Overall, this study lends support to the 'body size' hypothesis for the formation of sex-biased parasitism.
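A hedged sketch of the kind of count model one might use to separate a body-mass effect from a sex effect on tick load; statsmodels is assumed, the data are fabricated, and the study's own analysis may well differ:

```python
# Illustrative only: a negative binomial GLM of tick counts on body mass and sex,
# the kind of model used to ask whether male-biased tick loads are explained by
# body mass rather than sex per se.  Data below are fabricated for the sketch.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

mice = pd.DataFrame({
    "ticks":     [0, 2, 5, 1, 8, 3, 12, 0, 6, 4, 9, 2],
    "body_mass": [17, 19, 24, 18, 27, 21, 29, 16, 25, 22, 28, 20],   # grams
    "sex":       ["F", "F", "M", "F", "M", "F", "M", "F", "M", "M", "M", "F"],
})

model = smf.glm("ticks ~ body_mass + sex",
                data=mice,
                family=sm.families.NegativeBinomial()).fit()
# If body_mass stays informative while sex does not, the pattern is consistent
# with the 'body size' rather than the 'sex-specific handicap' hypothesis.
print(model.summary())
```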
Abstract:
Emissions from residential combustion appliances vary significantly depending on firing behaviour and combustion conditions, in addition to combustion technology and fuel quality. Although wood pellet combustion in residential heating boilers is efficient, the combustion conditions during the start-up and stop phases are not optimal and produce significantly higher emissions, such as carbon monoxide and hydrocarbons, from incomplete combustion. The emissions from the start-up and stop phases of pellet boilers are not fully taken into account in the test methods for ecolabels, which primarily focus on emissions during operation at full load and part load. The objective of the thesis is to investigate the emission characteristics during realistic operation of residential wood pellet boilers in order to identify when the major part of the annual emissions occurs. Emissions from four residential wood pellet boilers were measured and characterized for three operating phases (start-up, steady and stop). Emissions from realistic operation of combined solar and wood pellet heating systems were continuously measured to investigate the influence of the start-up and stop phases on total annual emissions. Measured emission data from the pellet devices were used to build an emission model to predict annual emission factors from the dynamic operation of the heating system using the simulation software TRNSYS. Start-up emissions were found to vary with ignition type, supply of air and fuel, and the time to complete the phase. Stop emissions are influenced by fan operation characteristics and the cleaning routine. Start-up and stop phases under realistic operating conditions contribute 80-95% of the annual carbon monoxide (CO) emissions, 60-90% of the total hydrocarbons (TOC), 10-20% of the nitrogen oxides (NO) and 30-40% of the particle emissions. Annual emission factors from realistic operation of the tested residential heating system with a top-fed wood pellet boiler can be between 190 and 400 mg/MJ for CO, between 60 and 95 mg/MJ for NO, between 6 and 25 mg/MJ for TOC, between 30 and 116 mg/MJ for particulate matter and between 2×10¹³ /MJ and 4×10¹³ /MJ for the number of particles. If the boiler has a cleaning sequence with compressed air, as in boiler B2, the annual CO emission factor can be up to 550 mg/MJ. Average CO, TOC and particle emissions under realistic annual conditions were greater than the limit values of two ecolabels. These results highlight the importance of the start-up and stop phases in annual emission factors (especially for CO and TOC). Since a large or dominating part of the annual emissions in real operation arises from the start-up and stop sequences, the test methods required by the ecolabels should take these emissions into account; this would encourage boiler manufacturers to minimize annual emissions. The annual emissions of a residential pellet heating system can be reduced by optimizing the number of start-ups of the pellet boiler. It is possible to reduce the number of start-ups by up to 85% by optimizing the system design and its controller, for example by switching off the boiler pump after the boiler stops, using two temperature sensors for boiler ON/OFF control, optimizing the positions of the connections to the storage tank, increasing the mixing valve temperature in the boiler circuit and decreasing the pump flow rate. With an 85% reduction in start-ups, CO and TOC emission factors were reduced by 75%, while a 13% increase in NO and a 15% increase in particle emissions were observed.
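A minimal sketch of how phase-resolved emissions combine into an annual emission factor. All numbers below are placeholders rather than measured values, and the thesis itself uses a TRNSYS-based dynamic model rather than this simple weighting:

```python
# Toy annual CO emission factor from phase-resolved emissions.
# Numbers are placeholders, not measurements from the thesis.
n_startups_per_year = 3000          # depends on system design and control
co_per_startup_mg   = 3000.0        # mg CO emitted per start-up event
co_per_stop_mg      = 300.0         # mg CO emitted per stop event
co_steady_mg_per_mj = 30.0          # mg/MJ during steady operation
annual_heat_mj      = 60000.0       # annual delivered heat

co_transient_mg = n_startups_per_year * (co_per_startup_mg + co_per_stop_mg)
co_steady_mg    = co_steady_mg_per_mj * annual_heat_mj

annual_ef = (co_transient_mg + co_steady_mg) / annual_heat_mj   # mg/MJ
share_transient = co_transient_mg / (co_transient_mg + co_steady_mg)
print(f"annual CO EF ≈ {annual_ef:.0f} mg/MJ, "
      f"{share_transient:.0%} from start/stop phases")
```

With these placeholder inputs the start/stop phases dominate the annual factor, which is why reducing the number of start-ups reduces the annual CO and TOC emission factors so strongly.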
Abstract:
In low-income settings, treatment failure is often identified using CD4 cell count monitoring. Consequently, patients remain on a failing regimen, resulting in a higher risk of transmission. We investigated the benefit of routine viral load monitoring for reducing HIV transmission.
Abstract:
Objectives To determine the improvement in positive predictive value of immunological failure criteria for identifying virological failure in HIV-infected children on antiretroviral therapy (ART) when a single targeted viral load measurement is performed in children identified as having immunological failure. Methods Analysis of data from children (<16 years at ART initiation) at South African ART sites at which CD4 count/per cent and HIV-RNA monitoring are performed 6-monthly. Immunological failure was defined according to both WHO 2010 and United States Department of Health and Human Services (DHHS) 2008 criteria. Confirmed virological failure was defined as HIV-RNA >5000 copies/ml on two consecutive occasions <365 days apart in a child on ART for ≥18 months. Results Among 2798 children on ART for ≥18 months [median (IQR) age 50 (21-84) months at ART initiation], the cumulative probability of confirmed virological failure by 42 months on ART was 6.3%. Using targeted viral load after meeting DHHS immunological failure criteria rather than DHHS immunological failure criteria alone increased positive predictive value from 28% to 82%. Targeted viral load improved the positive predictive value of WHO 2010 criteria for identifying confirmed virological failure from 49% to 82%. Conclusion The addition of a single viral load measurement in children identified as failing immunologically will prevent most switches to second-line treatment in virologically suppressed children.
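The gain from adding a single targeted viral load follows directly from the definition of positive predictive value; a small sketch with made-up counts (not the cohort's data):

```python
# Illustrative calculation (counts are invented, not the study's data):
# positive predictive value (PPV) of immunological failure criteria alone,
# versus the criteria followed by one confirmatory viral load in flagged children.
def ppv(true_pos: int, false_pos: int) -> float:
    return true_pos / (true_pos + false_pos)

# Step 1: immunological criteria flag 100 children, of whom 28 truly have
# virological failure -> PPV = 28%.
print(ppv(28, 72))                      # 0.28

# Step 2: a targeted viral load in those 100 children clears most of the
# virologically suppressed ones; suppose 26 of the 28 failures and 6 of the
# 72 suppressed children have HIV-RNA above the threshold.
print(ppv(26, 6))                       # ~0.81
```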
Abstract:
High flexural strength and stiffness can be achieved by forming a thin panel into a wave shape perpendicular to the bending direction. The use of corrugated shapes to gain flexural strength and stiffness is common in metal and reinforced plastic products; however, there is no commercial production of corrugated wood composite panels. This research focuses on the application of corrugated shapes to wood strand composite panels. Beam theory, classical plate theory and finite element models were used to analyze the bending behavior of corrugated panels. The most promising shallow corrugated panel configuration was identified based on structural performance and compatibility with construction practices. The corrugation profile selected has a wavelength of 8”, a channel depth of ¾”, a sidewall angle of 45 degrees and a panel thickness of 3/8”. 16”x16” panels were produced using random mats and 3-layer aligned mats with surface flakes parallel to the channels. Strong-axis and weak-axis bending tests were conducted. The test results indicate that flake orientation has little effect on the strong-axis bending stiffness. The 3/8” thick random-mat corrugated panels exhibit bending stiffness (400,000 lbs-in²/ft) and bending strength (3,000 in-lbs/ft) higher than 23/32” or 3/4” thick APA Rated Sturd-I-Floor with a 24” o.c. span rating. Shear and bearing test results show that the corrugated panel can withstand more than 50 psf of uniform load at 48” joist spacings. Molding trials on 16”x16” panels provided data for full-size panel production. Full-size 4’x8’ shallow corrugated panels were produced with only minor changes to the current oriented strandboard manufacturing process. Panel testing was done to simulate floor loading during construction, without a top underlayment layer, and during occupancy, with an underlayment over the panel to form a composite deck. Flexural tests were performed in single-span and two-span bending with line loads applied at mid-span. The average strong-axis bending stiffness and bending strength of the full-size corrugated panels (without the underlayment) were over 400,000 lbs-in²/ft and 3,000 in-lbs/ft, respectively. The composite deck system, which consisted of OSB sheathing (15/32” thick) nail-glued (using 3d ring-shank nails and AFG-01 subfloor adhesive) to the corrugated subfloor, achieved about 60% of the full composite stiffness, resulting in about 3 times the bending stiffness of the corrugated subfloor (1,250,000 lbs-in²/ft). Based on the LRFD design criteria, the corrugated composite floor system can carry 40 psf of unfactored uniform load, limited by the L/480 deflection limit state, at 48” joist spacings (see the back-of-envelope check below). Four 10-ft long composite T-beam specimens were built and tested for composite action and load sharing between a 24” wide corrugated deck system and the supporting I-joist. The average bending stiffness of the composite T-beam was 1.6 times higher than the bending stiffness of the I-joist. An 8-ft x 12-ft mock-up floor was built to evaluate construction procedures; assembly of the composite floor system is relatively simple. The corrugated composite floor system might be able to offset the lower labor costs of the single-layer Sturd-I-Floor through material savings, but no conclusive result can be drawn in terms of construction costs without an in-depth cost analysis of the two systems. The shallow corrugated composite floor system might be a potential alternative to the Sturd-I-Floor in the near future because of the excellent flexural stiffness it provides.
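A rough back-of-envelope of the L/480 serviceability check mentioned above, using assumptions of ours (a 1-ft-wide strip treated as a three-span continuous beam with a standard beam-table deflection coefficient), not the study's actual design procedure:

```python
# Back-of-envelope serviceability check for the composite deck described above.
# Assumptions (ours, not the study's): 1-ft-wide strip, three equal continuous
# spans all loaded, deflection = 0.0069 w L^4 / EI from standard beam tables,
# composite action represented only through the EI value quoted in the abstract.
EI = 1_250_000.0        # lb-in^2 per ft of width (composite deck, from abstract)
span = 48.0             # in, joist spacing
w_psf = 40.0            # psf uniform load
w = w_psf / 12.0        # lb/in on a 1-ft-wide strip

deflection = 0.0069 * w * span**4 / EI
limit = span / 480.0
print(f"deflection ≈ {deflection:.3f} in, L/480 limit = {limit:.3f} in")
# ≈ 0.098 in vs 0.100 in: consistent with the reported 40 psf capacity being
# governed by the L/480 deflection limit state at 48-in joist spacings.
```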
Abstract:
Strain rate significantly affects the strength of a material. The Split-Hopkinson Pressure Bar (SHPB) was initially used to study the effects of high strain rate (~10³ 1/s) testing of metals. Later modifications to the original technique allowed for the study of brittle materials such as ceramics, concrete and rock. While material properties of wood at static and creep strain rates are readily available, data on the dynamic properties of wood are sparse. Previous work using the SHPB technique with wood has been limited in scope to variability of only a few conditions, and tests of the applicability of SHPB theory to wood have not been performed. Tests were conducted using a large-diameter (3.0 inch (75 mm)) SHPB. The strain rate and total strain applied to a specimen depend on the striker bar length and its velocity at impact. Pulse shapers are used to further modify the strain rate and change the shape of the strain pulse. A series of tests was used to determine the test conditions necessary to produce a strain rate, total strain and pulse shape appropriate for testing wood specimens. Hard maple, consisting of sugar maple (Acer saccharum) and black maple (Acer nigrum), and eastern white pine (Pinus strobus) specimens were used to represent a dense hardwood and a low-density softwood. Specimens were machined to diameters of 2.5 and 3.0 inches, and a range of lengths was tested to determine the appropriate specimen dimensions. Longitudinal specimens of 1.5 inch length and radial and tangential specimens of 0.5 inch length were found to be most suitable for SHPB testing. Stress-strain curves were generated from the SHPB data and validated with 6061-T6 aluminum and wood specimens. Stress was indirectly corroborated with gaged aluminum specimens. Specimen strain was assessed with strain gages, digital image analysis and measurement of residual strain to confirm the strain calculated from the SHPB data. The SHPB was found to be a useful tool for accurately assessing the material properties of wood under high strain rates (70 to 340 1/s) and short load durations (70 to 150 μs to compressive failure).
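The classical one-wave SHPB relations turn the bar strain signals into specimen strain rate, strain and stress; the sketch below states them with synthetic pulses and assumed bar and specimen dimensions (not the study's data):

```python
# Classical (one-wave) SHPB data reduction: specimen strain rate from the
# reflected pulse, specimen stress from the transmitted pulse.  Signals and
# dimensions below are synthetic placeholders, not the study's measurements.
import numpy as np

E_bar   = 200e9            # Pa, elastic modulus of the pressure bars (steel)
rho_bar = 7850.0           # kg/m^3
c0      = np.sqrt(E_bar / rho_bar)      # bar wave speed, m/s
A_bar   = np.pi * (0.075 / 2) ** 2      # m^2, 75 mm diameter bar
A_spec  = np.pi * (0.064 / 2) ** 2      # m^2, specimen cross-section (assumed)
L_spec  = 0.038                          # m, specimen length (assumed)

t = np.linspace(0.0, 150e-6, 600)                       # s
eps_reflected   = -1.0e-3 * np.sin(np.pi * t / 150e-6)  # synthetic pulse
eps_transmitted =  0.3e-3 * np.sin(np.pi * t / 150e-6)  # synthetic pulse

strain_rate = -2.0 * c0 * eps_reflected / L_spec            # 1/s
strain      = np.cumsum(strain_rate) * (t[1] - t[0])        # integrate in time
stress      = E_bar * (A_bar / A_spec) * eps_transmitted    # Pa

print(f"peak strain rate ≈ {strain_rate.max():.0f} 1/s, "
      f"peak stress ≈ {stress.max()/1e6:.0f} MPa")
```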
Abstract:
BACKGROUND: In high-income countries, viral load is routinely measured to detect failure of antiretroviral therapy (ART) and guide switching to second-line ART. Viral load monitoring is not generally available in resource-limited settings. We examined switching from nonnucleoside reverse transcriptase inhibitor (NNRTI)-based first-line regimens to protease inhibitor-based regimens in Africa, South America and Asia. DESIGN AND METHODS: Multicohort study of 17 ART programmes. All sites monitored CD4 cell count and had access to second-line ART and 10 sites monitored viral load. We compared times to switching, CD4 cell counts at switching and obtained adjusted hazard ratios for switching (aHRs) with 95% confidence intervals (CIs) from random-effects Weibull models. RESULTS: A total of 20 113 patients, including 6369 (31.7%) patients from 10 programmes with access to viral load monitoring, were analysed; 576 patients (2.9%) switched. Low CD4 cell counts at ART initiation were associated with switching in all programmes. Median time to switching was 16.3 months [interquartile range (IQR) 10.1-26.6] in programmes with viral load monitoring and 21.8 months (IQR 14.0-21.8) in programmes without viral load monitoring (P < 0.001). Median CD4 cell counts at switching were 161 cells/μl (IQR 77-265) in programmes with viral load monitoring and 102 cells/μl (44-181) in programmes without viral load monitoring (P < 0.001). Switching was more common in programmes with viral load monitoring during months 7-18 after starting ART (aHR 1.38; 95% CI 0.97-1.98), similar during months 19-30 (aHR 0.97; 95% CI 0.58-1.60) and less common during months 31-42 (aHR 0.29; 95% CI 0.11-0.79). CONCLUSION: In resource-limited settings, switching to second-line regimens tends to occur earlier and at higher CD4 cell counts in ART programmes with viral load monitoring compared with programmes without viral load monitoring.
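As an illustration of the parametric form underlying such time-to-event models, the sketch below shows the Weibull hazard and how an adjusted hazard ratio scales it; the parameter values are arbitrary, and the paper's random-effects Weibull regression is not reproduced here:

```python
# Toy illustration of the Weibull hazard used in switching-time models and how a
# hazard ratio acts on it.  Shape and scale values are arbitrary placeholders.
import numpy as np

def weibull_hazard(t, shape, scale):
    """h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.array([6.0, 12.0, 18.0, 24.0, 36.0])          # months since ART start
baseline = weibull_hazard(t, shape=1.3, scale=60.0)   # programmes without VL
hr_7_18 = 1.38                                        # reported aHR, months 7-18
with_vl = baseline * hr_7_18                          # proportional-hazards view
print(np.round(baseline, 4), np.round(with_vl, 4))
```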
Abstract:
BACKGROUND Monitoring of HIV viral load in patients on combination antiretroviral therapy (ART) is not generally available in resource-limited settings. We examined the cost-effectiveness of qualitative point-of-care viral load tests (POC-VL) in sub-Saharan Africa. DESIGN Mathematical model based on longitudinal data from the Gugulethu and Khayelitsha township ART programmes in Cape Town, South Africa. METHODS Cohorts of patients on ART monitored by POC-VL, CD4 cell count or clinically were simulated. Scenario A considered the more accurate detection of treatment failure with POC-VL only, and Scenario B also considered the effect on HIV transmission. Scenario C further assumed that the risk of virologic failure is halved with POC-VL due to improved adherence. We estimated the change in costs per quality-adjusted life-year gained (incremental cost-effectiveness ratios, ICERs) of POC-VL compared with CD4 and clinical monitoring. RESULTS POC-VL tests with detection limits less than 1000 copies/ml increased costs due to unnecessary switches to second-line ART, without improving survival. Assuming POC-VL unit costs between US$5 and US$20 and detection limits between 1000 and 10,000 copies/ml, the ICER of POC-VL was US$4010-US$9230 compared with clinical and US$5960-US$25540 compared with CD4 cell count monitoring. In Scenario B, the corresponding ICERs were US$2450-US$5830 and US$2230-US$10380. In Scenario C, the ICER ranged between US$960 and US$2500 compared with clinical monitoring and between cost-saving and US$2460 compared with CD4 monitoring. CONCLUSION The cost-effectiveness of POC-VL for monitoring ART is improved by a higher detection limit, by taking the reduction in new HIV infections into account and assuming that failure of first-line ART is reduced due to targeted adherence counselling.
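The incremental cost-effectiveness ratios quoted above follow from the standard ICER definition; a tiny worked sketch with placeholder costs and QALYs (not the model's output):

```python
# Incremental cost-effectiveness ratio (ICER) of POC-VL versus a comparator
# monitoring strategy.  Costs and QALYs below are placeholders, not model output.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """(incremental cost) / (incremental QALYs gained)."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. suppose POC-VL adds US$250 per patient and 0.05 QALYs relative to
# clinical monitoring (placeholder numbers):
print(icer(cost_new=1250.0, qaly_new=3.05, cost_old=1000.0, qaly_old=3.00))
# -> 5000.0 US$ per QALY gained; a negative incremental cost would mean
#    POC-VL is cost-saving, as in one of the Scenario C comparisons.
```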
Abstract:
OBJECTIVES Many paediatric antiretroviral therapy (ART) programmes in Southern Africa rely on CD4⁺ to monitor ART. We assessed the benefit of replacing CD4⁺ by viral load monitoring. DESIGN A mathematical modelling study. METHODS A simulation model of HIV progression over 5 years in children on ART, parameterized by data from seven South African cohorts. We simulated treatment programmes with 6-monthly CD4⁺ or 6- or 12-monthly viral load monitoring. We compared mortality, second-line ART use, immunological failure and time spent on failing ART. In further analyses, we varied the rate of virological failure, and assumed that the rate is higher with CD4⁺ than with viral load monitoring. RESULTS About 7% of children were predicted to die within 5 years, independent of the monitoring strategy. Compared with CD4⁺ monitoring, 12-monthly viral load monitoring reduced the 5-year risk of immunological failure from 1.6 to 1.0% and the mean time spent on failing ART from 6.6 to 3.6 months; 1% of children with CD4⁺ compared with 12% with viral load monitoring switched to second-line ART. Differences became larger when assuming higher rates of virological failure. When assuming higher virological failure rates with CD4⁺ than with viral load monitoring, up to 4.2% of children with CD4⁺ compared with 1.5% with viral load monitoring experienced immunological failure; the mean time spent on failing ART was 27.3 months with CD4⁺ monitoring and 6.0 months with viral load monitoring. CONCLUSION Viral load monitoring did not affect 5-year mortality, but reduced time on failing ART, improved immunological response and increased switching to second-line ART.
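A deliberately simplified discrete-time sketch of the comparison, not the paper's cohort-parameterized model: failure is picked up at the next scheduled viral load test, but only after an additional lag under CD4 monitoring. All rates and lags below are invented:

```python
# Toy simulation contrasting time spent on failing ART under 6-monthly CD4
# versus 12-monthly viral load monitoring.  All rates and lags are invented;
# the paper's model is far more detailed.
import random

random.seed(1)
MONTHS, N = 60, 10_000
P_FAIL_PER_MONTH = 0.004          # invented monthly risk of virological failure
CD4_LAG_MONTHS = 18               # invented delay before failure shows in CD4

def months_on_failing_art(visit_interval, extra_lag):
    fail = next((m for m in range(MONTHS) if random.random() < P_FAIL_PER_MONTH),
                None)
    if fail is None:
        return 0
    detectable = fail + extra_lag                       # when monitoring can see it
    visits = range(visit_interval, MONTHS + 1, visit_interval)
    detected = next((v for v in visits if v >= detectable), MONTHS)
    return detected - fail

vl  = sum(months_on_failing_art(12, 0) for _ in range(N)) / N
cd4 = sum(months_on_failing_art(6, CD4_LAG_MONTHS) for _ in range(N)) / N
print(f"mean months on failing ART: VL {vl:.1f}, CD4 {cd4:.1f}")
```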
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful to monitor ART in settings where VL capacity is limited.
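A sketch of how such a targeted-testing rule can be evaluated: test the X% of patients with the highest predicted risk, then compute PPV, sensitivity and ROC AUC. The risks and outcomes below are simulated and scikit-learn is assumed; this is not the cohorts' data or the paper's modelling code:

```python
# Evaluating a targeted viral load testing rule on synthetic data: test the X%
# of patients with the highest model-predicted risk of virological failure and
# report PPV, sensitivity and ROC AUC.  Data are simulated, not cohort data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
risk = rng.beta(1.2, 12.0, size=n)                 # predicted failure risk
failed = rng.random(n) < risk                      # simulated true failure

for frac_tested in (0.10, 0.20, 0.40):
    cutoff = np.quantile(risk, 1.0 - frac_tested)  # test only the riskiest frac
    tested = risk >= cutoff
    ppv = failed[tested].mean()
    sensitivity = failed[tested].sum() / failed.sum()
    print(f"test {frac_tested:.0%}: PPV={ppv:.2f}, sens={sensitivity:.2f}")

print("AUC =", round(roc_auc_score(failed, risk), 2))
```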