Abstract:
PURPOSE: To examine the relationship between contact lens (CL) case contamination and various potential predictive factors. METHODS: 74 subjects were fitted with lotrafilcon B (CIBA Vision) CLs on a daily wear basis for 1 month. Subjects were randomly assigned one of two polyhexamethylene biguanide (PHMB) preserved disinfecting solutions with the corresponding regular lens case. Clinical evaluations were conducted at lens delivery and after 1 month, when cases were collected for microbial culture. A CL care non-compliance score was determined through administration of a questionnaire, and the volume of solution used was calculated for each subject. Data were examined using backward stepwise binary logistic regression. RESULTS: 68% of cases were contaminated; 35% were moderately or heavily contaminated and 36% contained gram-negative bacteria. Case contamination was significantly associated with subjective dryness symptoms (OR 4.22, CI 1.37–13.01) (P<0.05). There was no association between contamination and subject age, ethnicity, gender, average wearing time, amount of solution used, non-compliance score, CL power or subjective redness (P>0.05). The effect of lens care system on case contamination approached significance (P=0.07). Failure to rinse the case with disinfecting solution following CL insertion (OR 2.51, CI 0.52–12.09) and not air drying the case (OR 2.31, CI 0.39–13.35) were positively correlated with contamination; however, these associations did not reach statistical significance. CONCLUSIONS: Our results suggest that case contamination may influence subjective comfort. It is difficult to predict the development of case contamination from a variety of clinical factors. The efficacy of CL solutions, bacterial resistance to disinfection and biofilm formation are likely to play a role. Further evaluation of these factors will improve our understanding of the development of case contamination and its clinical impact.
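The headline odds ratio above can be illustrated with a minimal logistic-regression sketch. This is not the study's analysis (which used backward stepwise selection over many predictors); it fits a single hypothetical binary predictor by Newton-Raphson on simulated data whose true odds ratio is set near the reported 4.22:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Fit binary logistic regression by Newton-Raphson; exp(beta[k])
    is the odds ratio for predictor k.  Minimal sketch only -- no
    stepwise selection, which the study also used."""
    Xd = np.column_stack([np.ones(len(X)), X])       # prepend intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))         # predicted probabilities
        grad = Xd.T @ (y - p)                        # score vector
        hess = Xd.T @ (Xd * (p * (1 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)          # Newton step
    return beta

# Hypothetical data: a binary "dryness symptoms" flag predicting a binary
# "case contaminated" outcome, simulated with a true odds ratio near 4.2.
rng = np.random.default_rng(1)
dryness = rng.integers(0, 2, 500).astype(float)
logit = -0.5 + 1.44 * dryness                        # log(4.22) ~ 1.44
contaminated = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)
beta = logistic_fit(dryness[:, None], contaminated)
print(f"estimated odds ratio: {np.exp(beta[1]):.2f}")
```

The exponential of the fitted coefficient recovers the odds ratio up to sampling noise; the scatter mirrors the wide confidence interval quoted above.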
Abstract:
Metallic materials exposed to oxygen-enriched atmospheres – as commonly used in the medical, aerospace, aviation and numerous chemical processing industries – represent a significant fire hazard which must be addressed during design, maintenance and operation. Hence, accurate knowledge of metallic materials' flammability is required. Reduced gravity (i.e. space-based) operations present additional unique concerns, where the absence of gravity must also be taken into account. The flammability of metallic materials has historically been quantified using three standardised test methods developed by NASA, ASTM and ISO. These tests typically involve the forceful (promoted) ignition of a test sample (typically a 3.2 mm diameter cylindrical rod) in pressurised oxygen. A test sample is defined as flammable when it undergoes burning that is independent of the ignition process utilised. In the standardised tests, this is indicated by the propagation of burning further than a defined amount, or "burn criterion". The burn criterion in use at the onset of this project was arbitrarily selected, did not accurately reflect the length a sample must burn in order to be burning independently of the ignition event and, in some cases, required complete consumption of the test sample for a metallic material to be considered flammable. It has been demonstrated that a) a metallic material's propensity to support burning is altered by any increase in test sample temperature greater than ~250–300 °C and b) promoted ignition causes an increase in temperature of the test sample in the region closest to the igniter, a region referred to as the Heat Affected Zone (HAZ). If a test sample continues to burn past the HAZ (where the HAZ is defined as the region of the test sample above the igniter that undergoes an increase in temperature of greater than or equal to 250 °C by the end of the ignition event), it is burning independently of the igniter, and should be considered flammable.
The extent of the HAZ, therefore, can be used to justify the selection of the burn criterion. A two-dimensional mathematical model was developed in order to predict the extent of the HAZ created in a standard test sample by a typical igniter. The model was validated against previous theoretical and experimental work performed in collaboration with NASA, and then used to predict the extent of the HAZ for different metallic materials in several configurations. The predicted extent of the HAZ varied significantly, ranging from ~2–27 mm depending on the test sample's thermal properties and the test conditions (i.e. pressure). The magnitude of the HAZ was found to increase with increasing thermal diffusivity and with decreasing pressure (the latter due to slower ignition times). Based upon the findings of this work, a new burn criterion requiring 30 mm of the test sample to be consumed (from the top of the ignition promoter) was recommended and validated. This new burn criterion was subsequently included in the latest revisions of the ASTM G124 and NASA 6001B international test standards that are used to evaluate metallic material flammability in oxygen. These revisions also have the added benefit of enabling reduced gravity metallic material flammability testing to be conducted in strict accordance with the ASTM G124 standard, allowing measurement and comparison of the relative flammability (i.e. Lowest Burn Pressure (LBP), Highest No-Burn Pressure (HNBP) and average Regression Rate of the Melting Interface (RRMI)) of metallic materials in normal and reduced gravity, as well as determination of the applicability of normal gravity test results to reduced gravity use environments. This is important, as currently most space-based applications typically use normal gravity information in order to qualify systems and/or components for reduced gravity use. This is shown here to be non-conservative for metallic materials, which are more flammable in reduced gravity.
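The HAZ concept can be illustrated with a deliberately simplified one-dimensional conduction sketch (the actual model was two-dimensional): a rod held at a high temperature at one end for the duration of the ignition event, with the HAZ taken as the length over which the temperature rise reaches 250 °C. All parameter values below (diffusivity, end temperature, ignition duration) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

alpha = 4e-6          # assumed thermal diffusivity, m^2/s (order of steel)
L, nx = 0.05, 201     # rod length (m) and number of grid nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha      # explicit scheme, stability ratio 0.4 < 0.5
t_ign = 5.0                   # assumed ignition-event duration, s

T = np.zeros(nx)              # temperature rise above ambient, deg C
for _ in range(int(t_ign / dt)):
    T[0] = 1400.0             # heated end held at an assumed 1400 C rise
    T[1:-1] += 0.4 * (T[2:] - 2 * T[1:-1] + T[:-2])   # diffusion update
    T[-1] = 0.0               # far end stays at ambient

haz = dx * np.argmax(T < 250.0)   # first node with a rise below 250 C
print(f"estimated HAZ extent: {haz * 1000:.1f} mm")
```

With these assumed values the estimate falls inside the ~2–27 mm range reported above; real predictions depend on the material properties and test pressure.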
The flammability of two metallic materials, Inconel® 718 and 316 stainless steel (both commonly used to manufacture components for oxygen service in both terrestrial and space-based systems), was evaluated in normal and reduced gravity using the new ASTM G124-10 test standard. This allowed direct comparison of the flammability of the two metallic materials in normal and reduced gravity. The results of this work clearly show, for the first time, that metallic materials are more flammable in reduced gravity than in normal gravity when testing is conducted as described in the ASTM G124-10 test standard. This was shown to be the case in terms of both higher regression rates (i.e. faster consumption of the test sample – fuel) and burning at lower pressures in reduced gravity. Specifically, it was found that the LBP for 3.2 mm diameter Inconel® 718 and 316 stainless steel test samples decreased by 50% from 3.45 MPa (500 psia) in normal gravity to 1.72 MPa (250 psia) in reduced gravity for the Inconel® 718, and 25% from 3.45 MPa (500 psia) in normal gravity to 2.76 MPa (400 psia) in reduced gravity for the 316 stainless steel. The average RRMI increased by factors of 2.2 (27.2 mm/s in 2.24 MPa (325 psia) oxygen in reduced gravity compared to 12.8 mm/s in 4.48 MPa (650 psia) oxygen in normal gravity) for the Inconel® 718 and 1.6 (15.0 mm/s in 2.76 MPa (400 psia) oxygen in reduced gravity compared to 9.5 mm/s in 5.17 MPa (750 psia) oxygen in normal gravity) for the 316 stainless steel. Reasons for the increased flammability of metallic materials in reduced gravity compared to normal gravity are discussed, based upon the observations made during reduced gravity testing and previous work. Finally, the implications (for fire safety and engineering applications) of these results are presented and discussed, in particular examining methods for mitigating the risk of a fire in reduced gravity.
Practical improvements to simultaneous computation of multi-view geometry and radial lens distortion
Abstract:
This paper discusses practical issues related to the use of the division model for lens distortion in multi-view geometry computation. A data normalisation strategy is presented, which has been absent from previous discussions on the topic. The convergence properties of the Rectangular Quadric Eigenvalue Problem solution for computing division model distortion are examined. It is shown that the existing method can require more than 1000 iterations when dealing with severe distortion. A method is presented for accelerating convergence to less than 10 iterations for any amount of distortion. The new method is shown to produce equivalent or better results than the existing method with up to two orders of magnitude reduction in iterations. Through detailed simulation it is found that the number of data points used to compute geometry and lens distortion has a strong influence on convergence speed and solution accuracy. It is recommended that more than the minimal number of data points be used when computing geometry using a robust estimator such as RANSAC. Adding two to four extra samples improves the convergence rate and accuracy sufficiently to compensate for the increased number of samples required by the RANSAC process.
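For reference, the one-parameter division model discussed here maps a distorted point x_d (relative to the distortion centre) to an undistorted point x_u = x_d / (1 + λ‖x_d‖²). A minimal sketch of that mapping, together with a standard Hartley-style isotropic normalisation of the kind the paper argues should accompany it, might look like the following (function names and values are illustrative, not from the paper):

```python
import numpy as np

def undistort_division(points, lam, centre=(0.0, 0.0)):
    """One-parameter division model: x_u = x_d / (1 + lam * r_d**2),
    with r_d measured from the distortion centre.  `points` is (N, 2)."""
    p = np.asarray(points, dtype=float) - centre
    r2 = np.sum(p**2, axis=1, keepdims=True)
    return p / (1.0 + lam * r2) + centre

def normalise(points):
    """Hartley-style isotropic normalisation: translate the centroid to
    the origin, then scale the mean distance from it to sqrt(2)."""
    p = np.asarray(points, dtype=float)
    c = p.mean(axis=0)
    scale = np.sqrt(2.0) / np.mean(np.linalg.norm(p - c, axis=1))
    return (p - c) * scale, c, scale

# Barrel distortion (negative lam) pushes undistorted points outward.
pts = np.array([[120.0, 80.0], [-60.0, 40.0]])
print(undistort_division(pts, -5e-7))
```

Normalising the data before estimation keeps the quantities entering the eigenvalue problem well conditioned, which is the practical point the paper makes.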
Abstract:
Objective: To comprehensively measure the burden of hepatitis B, liver cirrhosis and liver cancer in Shandong province, using disability-adjusted life years (DALYs) to estimate the disease burden attributable to hepatitis B virus (HBV) infection. Methods: Based on the mortality data for hepatitis B, liver cirrhosis and liver cancer derived from the third National Sampling Retrospective Survey for Causes of Death during 2004 and 2005, the incidence data for hepatitis B, and the prevalence and disability weights of liver cancer obtained from the Shandong Cancer Prevalence Sampling Survey in 2007, we calculated the years of life lost (YLLs), years lived with disability (YLDs) and DALYs for the three diseases, following the procedures developed for the global burden of disease (GBD) study to ensure comparability. Results: The total burdens for hepatitis B, liver cirrhosis and liver cancer in Shandong province in 2005 were 211 616 (39 377 YLLs and 172 239 YLDs), 16 783 (13 497 YLLs and 3286 YLDs) and 247 795 (240 236 YLLs and 7559 YLDs) DALYs respectively, and the burden for men was 2.19, 2.36 and 3.16 times that for women, respectively. The burden of hepatitis B was mainly due to disability (81.39%), whereas most of the burden of liver cirrhosis and liver cancer was due to premature death (80.42% and 96.95%, respectively). The per-patient burdens for hepatitis B, liver cirrhosis and liver cancer were 4.8, 13.73 and 11.11 DALYs, respectively. Conclusion: Hepatitis B, liver cirrhosis and liver cancer cause a considerable burden to people living in Shandong province, indicating that control of hepatitis B virus infection would bring huge potential benefits.
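The decomposition used above, DALYs = YLLs + YLDs, can be checked directly against the reported figures:

```python
# Sanity check of the reported Shandong figures: DALYs = YLLs + YLDs,
# and the split of the burden between disability (YLDs) and premature
# death (YLLs).
burden = {
    "hepatitis B":     (39_377, 172_239),   # (YLLs, YLDs)
    "liver cirrhosis": (13_497, 3_286),
    "liver cancer":    (240_236, 7_559),
}
for disease, (yll, yld) in burden.items():
    dalys = yll + yld
    print(f"{disease}: {dalys} DALYs, "
          f"{100 * yld / dalys:.2f}% disability, "
          f"{100 * yll / dalys:.2f}% premature death")
```

All three totals and the disability/premature-death splits reproduce the percentages quoted in the abstract (81.39%, 80.42% and 96.95%).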
Abstract:
As the international community struggles to find a cost-effective solution to mitigate climate change and reduce greenhouse gas emissions, carbon capture and storage (CCS) has emerged as a project mechanism with the potential to assist in transitioning society towards its low carbon future. Being a politically attractive option, legal regimes to promote and approve CCS have proceeded at an accelerated pace in multiple jurisdictions including the European Union and Australia. This acceleration and emphasis on the swift commercial deployment of CCS projects has left the legal community in the undesirable position of having to advise on the strengths and weaknesses of the key features of these regimes once they have been passed and become operational. This is an area where environmental law principles are tested to their very limit. On the one hand, implementation of this new technology should proceed in a precautionary manner to avoid adverse impacts on the atmosphere, local community and broader environment. On the other hand, excessive regulatory restrictions will stifle innovation and act as a barrier to the swift deployment of CCS projects around the world. Finding the balance between precaution and innovation is no easy feat. This is an area where lawyers, academics, regulators and industry representatives can benefit from the sharing of collective experiences, both positive and negative, across the jurisdictions. This exemplary book appears to have been collated with this philosophy in mind and provides an insightful addition to the global dialogue on establishing effective national and international regimes for the implementation of CCS projects...
Abstract:
As civil infrastructure such as bridges ages, there is a concern for safety and a need for cost-effective and reliable monitoring tools. Different diagnostic techniques are available nowadays for structural health monitoring (SHM) of bridges. Acoustic emission is one such technique with the potential to predict failure. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). The AE technique involves recording the stress waves by means of sensors and subsequently analysing the recorded signals, which then convey information about the nature of the source. AE can be used as a local SHM technique to monitor specific regions with a visible presence of cracks or crack-prone areas, such as welded regions and bolted joints, or as a global technique to monitor the whole structure. The strength of the AE technique lies in its ability to detect active crack activity, thus helping to prioritise maintenance work by focusing on active rather than dormant cracks. In spite of being a promising tool, some challenges still stand in the way of successful application of the AE technique. One is the generation of large amounts of data during testing; hence effective data analysis and management are necessary, especially for long-term monitoring. Complications also arise because a number of spurious sources can give AE signals; therefore, different source discrimination strategies are necessary to distinguish genuine signals from spurious ones. Another major challenge is the quantification of damage level by appropriate analysis of data. Intensity analysis using severity and historic indices, as well as b-value analysis, are important methods; they will be discussed and applied to the analysis of laboratory experimental data in this paper.
Abstract:
Acoustic emission (AE) analysis is one of several diagnostic techniques available nowadays for structural health monitoring (SHM) of engineering structures. Its advantages over other techniques include high sensitivity to crack growth and the capability of monitoring a structure in real time. The phenomenon of rapid release of energy within a material by crack initiation or growth, in the form of stress waves, is known as acoustic emission (AE). In the AE technique, these stress waves are recorded by means of suitable sensors placed on the surface of a structure. Recorded signals are subsequently analysed to gather information about the nature of the source. By enabling early detection of crack growth, the AE technique helps in planning timely retrofitting, other maintenance jobs, or even replacement of the structure if required. In spite of being a promising tool, some challenges still stand in the way of successful application of the AE technique. A large amount of data is generated during AE testing, hence effective data analysis is necessary, especially for long-term monitoring. Appropriate analysis of AE data for quantification of damage level is an area that has received considerable attention. Various approaches available for damage quantification and severity assessment are discussed in this paper, with special focus on civil infrastructure such as bridges. One method, called improved b-value analysis, is used to analyse data collected from laboratory testing.
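For context, conventional amplitude-based b-value analysis fits a Gutenberg–Richter-type relation log10 N = a − b·(A/20) to the cumulative number N of AE hits with amplitude at least A dB; the b-value is the negative slope. The sketch below implements this standard form on synthetic data (the improved b-value analysis discussed in the paper modifies this procedure; everything here is illustrative):

```python
import numpy as np

def ae_b_value(amplitudes_db, bin_width=5.0, min_count=10):
    """Estimate the AE b-value as the negative slope of a least-squares
    fit of log10(N) against A/20, where N is the cumulative number of
    hits with amplitude >= A (in dB).  Standard form, for illustration."""
    amps = np.asarray(amplitudes_db, dtype=float)
    levels = np.arange(amps.min(), amps.max(), bin_width)
    counts = np.array([(amps >= lv).sum() for lv in levels])
    keep = counts >= min_count            # drop poorly populated tail bins
    slope, _ = np.polyfit(levels[keep] / 20.0, np.log10(counts[keep]), 1)
    return -slope

# Synthetic demo: exponentially distributed amplitudes constructed so the
# true b-value is 1 (illustrative data, not from the paper's experiments).
rng = np.random.default_rng(0)
amps = 40.0 + rng.exponential(scale=20.0 / np.log(10), size=20000)
print(f"estimated b-value: {ae_b_value(amps):.2f}")
```

Falling b-values in such an analysis are commonly read as a sign of macro-crack growth, which is why the quantity is attractive for damage severity assessment.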
Abstract:
The time-consuming and labour-intensive task of identifying individuals in surveillance video is often challenged by poor resolution and the sheer volume of stored video. Faces or identifying marks such as tattoos are often too coarse for direct matching by machine or human vision. Object tracking and super-resolution can be combined to facilitate the automated detection and enhancement of areas of interest. The object tracking process enables the automatic detection of people of interest, greatly reducing the amount of data for super-resolution. Smaller regions such as faces can also be tracked. A number of instances of such regions can then be utilized to obtain a super-resolved version for matching. The performance improvement from super-resolution is demonstrated using a face verification task. It is shown that there is a consistent improvement of approximately 7% in verification accuracy, using both Eigenface and Elastic Bunch Graph Matching approaches for automatic face verification, starting from faces with an eye-to-eye distance of 14 pixels. The visual improvement in image fidelity of super-resolved images over low-resolution and interpolated images is demonstrated on a small database. Current research and future directions in this area are also summarized.
Abstract:
This study examined the potential for Fe mobilization and greenhouse gas (GHG, e.g. CO2 and CH4) evolution in SEQ soils associated with a range of plantation forestry practices and water-logged conditions. Intact, 30-cm-deep soil cores collected from representative sites were saturated and incubated for 35 days in the laboratory, with leachate and headspace gas samples periodically collected. Minimal Fe dissolution was observed in well-drained sand soils associated with mature, first-rotation Pinus and organic Fe complexation, whereas progressive Fe dissolution occurred over 14 days in clear-felled and replanted Pinus soils with low organic matter and non-crystalline Fe fractions. Both CO2 and CH4 effluxes were lower in clear-felled and replanted soils than in mature, first-rotation Pinus soils, although variations in total GHG effluxes associated with different forestry practices were not statistically significant. Fe dissolution and GHG evolution in low-lying, water-logged soils adjacent to riparian and estuarine, native-vegetation buffer zones were influenced by mineral and physical soil properties. The highest levels of dissolved Fe and GHG effluxes resulted from saturation of riparian loam soils with high Fe and clay content, as well as abundant organic material and Fe-metabolizing bacteria. Results indicate that Pinus forestry practices such as clear-felling and replanting may elevate Fe mobilization while decreasing CO2 and CH4 emissions from well-drained SEQ plantation soils upon heavy flooding. Prolonged water-logging accelerates bacterially mediated Fe cycling in low-lying, clay-rich soils, leading to substantial Fe dissolution, organic matter mineralization, and CH4 production in riparian native-vegetation buffer zones.
Abstract:
In this study we have found that the NMR detectability of 39K in rat thigh muscle may be substantially higher (up to 100% of total tissue potassium) than previously reported values of around 40%. The signal was found to consist of two superimposed components, one broad and one narrow, of approximately equal area. Investigations involving improvements in spectral parameters such as signal-to-noise ratio and baseline roll, together with computer simulations of spectra, show that the quality of the spectra has a major effect on the amount of signal detected, largely through loss of detectability of the broad signal component. In particular, lower-field spectrometers using conventional probes and detection methods generally have poorer signal-to-noise ratios and worse baseline-roll artifacts, which make detection of a broad component of the muscle signal difficult.
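The loss of the broad component can be illustrated with a toy dead-time calculation: a broad spectral line corresponds to a rapidly decaying time-domain signal, so a receiver dead time comparable to its decay constant removes most of the broad signal while barely affecting the narrow one. The decay constants below are illustrative assumptions, not measurements from the study:

```python
import math

# Illustrative decay constants (ms): the broad line decays ~20x faster.
T2_broad, T2_narrow, dead_time = 1.0, 20.0, 1.0
frac_broad = math.exp(-dead_time / T2_broad)    # signal left after dead time
frac_narrow = math.exp(-dead_time / T2_narrow)
print(f"broad component retained:  {frac_broad:.0%}")   # ~37%
print(f"narrow component retained: {frac_narrow:.0%}")  # ~95%
```

With roughly equal areas in the two components, losing most of the broad one caps apparent detectability near the ~40–50% figures reported by earlier studies.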
Abstract:
The temporal variations in CO2, CH4 and N2O fluxes were measured over two consecutive years, from February 2007 to March 2009, from a subtropical rainforest in south-eastern Queensland, Australia, using an automated sampling system. A concurrent study using an additional 30 manual chambers examined the spatial variability of emissions across three nearby remnant rainforest sites with similar vegetation and climatic conditions. Interannual variation in the fluxes of all gases over the 2 years was minimal, despite large discrepancies in rainfall, whereas a pronounced seasonal variation could only be observed for CO2 fluxes. High infiltration, drainage and subsequent high soil aeration under the rainforest limited N2O loss while promoting substantial CH4 uptake. The average annual N2O loss of 0.5 ± 0.1 kg N2O-N ha−1 over the 2-year measurement period was at the lower end of reported fluxes from rainforest soils. The rainforest soil functioned as a sink for atmospheric CH4 throughout the entire 2-year period, despite periods of substantial rainfall. A clear linear correlation between soil moisture and CH4 uptake was found. Rates of uptake ranged from greater than 15 g CH4-C ha−1 day−1 during extended dry periods to less than 2–5 g CH4-C ha−1 day−1 when soil water content was high. The calculated annual CH4 uptake at the site was 3.65 kg CH4-C ha−1 yr−1. This is amongst the highest reported for rainforest systems, reiterating the ability of aerated subtropical rainforests to act as substantial sinks of CH4. The spatial study showed N2O fluxes almost eight times higher, and CH4 uptake reduced by over one-third, as the clay content of the rainforest soil increased from 12% to more than 23%. This demonstrates that for some rainforest ecosystems, soil texture and related water infiltration and drainage capacity constraints may play a more important role in controlling fluxes than either vegetation or seasonal variability.
Abstract:
Timed-release cryptography addresses the problem of “sending messages into the future”: information is encrypted so that it can only be decrypted after a certain amount of time, either (a) with the help of a trusted third party time server, or (b) after a party performs the required number of sequential operations. We generalise the latter case to what we call effort-release public key encryption (ER-PKE), where only the party holding the private key corresponding to the public key can decrypt, and only after performing a certain amount of computation which may or may not be parallelisable. Effort-release PKE generalises both the sequential-operation-based timed-release encryption of Rivest, Shamir, and Wagner, and also the encapsulated key escrow techniques of Bellare and Goldwasser. We give a generic construction for ER-PKE based on the use of moderately hard computational problems called puzzles. Our approach extends the KEM/DEM framework for public key encryption by introducing a difficulty notion for KEMs which results in effort-release PKE. When the puzzle used in our generic construction is non-parallelisable, we recover timed-release cryptography, with the addition that only the designated receiver (in the public key setting) can decrypt.
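The sequential-operation branch (b) is the Rivest-Shamir-Wagner time-lock puzzle. A toy sketch (tiny parameters, XOR masking in place of a real cipher) shows both the creator's φ(n) shortcut and the solver's inherently sequential squarings:

```python
# Toy sketch of the Rivest-Shamir-Wagner time-lock puzzle underlying the
# sequential-operation branch of timed-release encryption.  Parameters are
# tiny and the payload is XOR-masked for illustration; a real scheme needs
# a large RSA modulus and a proper symmetric cipher.

p, q = 1000003, 1000033           # secret primes (toy-sized)
n = p * q
phi = (p - 1) * (q - 1)
t = 10_000                        # required number of sequential squarings
a = 2
secret = 123456789                # value to be released "after time t"

# Creation: knowing phi(n) lets the creator compute a^(2^t) mod n quickly.
mask = pow(a, pow(2, t, phi), n)
puzzle = (n, a, t, secret ^ mask)   # published; p, q, phi remain secret

# Solving: without phi(n), the t squarings must be performed one by one.
n_, a_, t_, c_ = puzzle
x = a_ % n_
for _ in range(t_):
    x = (x * x) % n_                # inherently sequential step
recovered = c_ ^ x
print(recovered)  # -> 123456789
```

Because each squaring depends on the previous result, decryption time is governed by t; swapping in a parallelisable puzzle within the same KEM/DEM-style construction yields the more general effort-release notion described above.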