944 results for Simple methods
Abstract:
Background Large segmental defects in bone do not heal well and present clinical challenges. This study investigated modulation of the mechanical environment as a means of improving bone healing in the presence of bone morphogenetic protein (BMP)-2. Although the influence of mechanical forces on the healing of fractures is well established, no previous studies, to our knowledge, have described their influence on the healing of large segmental defects. We hypothesized that bone-healing would be improved by initial, low-stiffness fixation of the defect, followed by high-stiffness fixation during the healing process. We call this reverse dynamization. Methods A rat model of a critical-sized femoral defect was used. External fixators were constructed to provide different degrees of stiffness and, importantly, the ability to change stiffness during the healing process in vivo. Healing of the critical-sized defects was initiated by the implantation of 11 mg of recombinant human BMP (rhBMP)-2 on a collagen sponge. Groups of rats receiving BMP-2 were allowed to heal with low, medium, and high-stiffness fixators, as well as under conditions of reverse dynamization, in which the stiffness was changed from low to high at two weeks. Healing was assessed at eight weeks with use of radiographs, histological analysis, microcomputed tomography, dual x-ray absorptiometry, and mechanical testing. Results Under constant stiffness, the low-stiffness fixator produced the best healing after eight weeks. However, reverse dynamization provided considerable improvement, resulting in a marked acceleration of the healing process by all of the criteria of this study. The histological data suggest that this was the result of intramembranous, rather than endochondral, ossification. Conclusions Reverse dynamization accelerated healing in the presence of BMP-2 in the rat femur and is worthy of further investigation as a means of improving the healing of large segmental bone defects. 
Clinical Relevance These data provide the basis of a novel, simple, and inexpensive way to improve the healing of critical-sized defects in long bones. Reverse dynamization may also be applicable to other circumstances in which bone healing is problematic.
Abstract:
Cleaning of sugar mill evaporators is an expensive exercise. Identifying the scale components assists in determining which chemical cleaning agents would result in effective evaporator cleaning. The current methods (based on x-ray diffraction techniques, ion exchange/high performance liquid chromatography and thermogravimetry/differential thermal analysis) used for scale characterisation are difficult, time consuming and expensive, and cannot be performed in a conventional analytical laboratory or by mill staff. The present study has examined the use of simple descriptor tests for the characterisation of Australian sugar mill evaporator scales. Scale samples were obtained from seven Australian sugar mill evaporators by mechanical means. The appearance, texture and colour of the scale were noted before the samples were characterised using x-ray fluorescence and x-ray powder diffraction to determine the compounds present. A number of commercial analytical test kits were used to determine the phosphate and calcium contents of scale samples. Dissolution experiments were carried out on the scale samples with selected cleaning agents to provide relevant information about the effect the cleaning agents have on different evaporator scales. Results have shown that a prediction of the scale composition can be made simply by identifying the colour and appearance of the scale, determining the elemental composition, and knowing from which effect the scale originates. These descriptors and dissolution experiments on scale samples can be used to provide factory staff with a rapid on-site process to predict the most effective chemicals for chemical cleaning of the evaporators.
Abstract:
Several analytical methods for Dynamic System Optimum (DSO) assignment have been proposed, but they fall into essentially two kinds. This chapter attempts to establish DSO by equilibrating the path dynamic marginal time (DMT). The authors analyze the path DMT for a single path with tandem bottlenecks and show that the path DMT is not the simple summation of the DMT associated with each bottleneck along the path. Next, the authors examine the DMT of several paths passing through a common bottleneck. It is shown that the externality at the bottleneck is shared by the paths in proportion to their demand from the current time until the queue vanishes. This sharing of the externality is caused by the departure rate shift under first in, first out (FIFO), and the externality propagates to the downstream bottlenecks. However, the externalities propagated downstream are cancelled out if downstream bottlenecks exist. Therefore, the authors conclude that the path DMT can be evaluated without considering the propagation of the externalities, just as in the evaluation of the path DMT for a single path passing through a series of bottlenecks between the origin and destination. Based on the DMT analysis, the authors finally propose a heuristic solution algorithm and verify it by comparing the numerical solution with the analytical one.
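The central quantity above, the dynamic marginal time at a bottleneck, can be illustrated with a minimal deterministic point-queue sketch. This is an assumed, generic formulation for illustration only (function names, the inflow profile and the unit-step time discretization are not from the chapter): the marginal time of a vehicle entering at time t is its free-flow time, plus its own queue delay, plus the externality it imposes, which under a deterministic queue lasts until the queue next vanishes.

```python
# Minimal sketch, assuming a deterministic point queue with fixed capacity
# mu and discrete unit time steps (illustrative, not the chapter's model).

def queue_lengths(inflow, mu):
    """Queue length after each time step for a deterministic point queue."""
    q, out = 0.0, []
    for a in inflow:
        q = max(0.0, q + a - mu)  # queue grows when arrivals exceed capacity
        out.append(q)
    return out

def dmt(inflow, mu, t, free_flow=1.0):
    """Dynamic marginal time for entry at step t at one bottleneck:
    free-flow time + own queue delay + externality, where the externality
    lasts from t until the queue first vanishes (each later vehicle is
    delayed by 1/mu, and mu vehicles are served per step)."""
    q = queue_lengths(inflow, mu)
    own_delay = q[t] / mu
    congested = 0
    for s in range(t, len(q)):
        if q[s] <= 0.0:
            break
        congested += 1
    return free_flow + own_delay + congested
```

A vehicle entering while the queue is still building bears both its own delay and a long externality tail; one entering after the queue has cleared contributes only its free-flow time.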
Abstract:
Filamentous fungi are important organisms for basic discovery, industry, and human health. Their natural growth environments are extremely variable, a fact reflected by the numerous methods developed for their isolation and cultivation. Fungal culture in the laboratory is usually carried out on agar plates, shake flasks, and bench top fermenters starting with an inoculum that typically features fungal spores. Here we discuss the most popular methods for the isolation and cultivation of filamentous fungi for various purposes with the emphasis on enzyme production and molecular microbiology.
Abstract:
Introduction Due to their high spatial resolution, diodes are often used for small field relative output factor measurements. However, a field-size-specific correction factor [1] is required to correct for diode detector over-response at small field sizes. A recent Monte Carlo based study has shown that it is possible to design a diode detector that produces measured relative output factors equivalent to those in water. This is accomplished by introducing an air gap at the upstream end of the diode [2]. The aim of this study was to physically construct this diode by placing an ‘air cap’ on the end of a commercially available diode (the PTW 60016 electron diode). The output factors subsequently measured with the new diode design were compared to current benchmark small field output factor measurements. Methods A water-tight ‘cap’ was constructed so that it could be placed over the upstream end of the diode. The cap was able to be offset from the end of the diode, thus creating an air gap. The air gap width was the same as the diode width (7 mm) and the thickness of the air gap could be varied. Output factor measurements were made using square field sizes of side length from 5 to 50 mm, using a 6 MV photon beam. The set of output factor measurements was repeated with the air gap thickness set to 0, 0.5, 1.0 and 1.5 mm. The optimal air gap thickness was found in a similar manner to that proposed by Charles et al. [2]. An IBA stereotactic field diode, corrected using Monte Carlo calculated kq,clin,kq,msr values [3], was used as the gold standard. Results The optimal air gap thickness required for the PTW 60016 electron diode was 1.0 mm. This was close to the Monte Carlo predicted value of 1.15 mm [2]. The sensitivity of the new diode design was independent of field size (kq,clin,kq,msr = 1.000 at all field sizes) to within 1%. Discussion and conclusions The work of Charles et al. [2] has been proven experimentally.
An existing commercial diode has been converted into a correction-less small field diode by the simple addition of an ‘air cap’. The method of applying a cap to create the new diode leads to the diode being dual purpose, as without the cap it is still an unmodified electron diode.
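The correction scheme used as the gold standard above can be sketched in one line of arithmetic: the relative output factor is the ratio of detector readings (clinical field over reference field) multiplied by a field-size-specific correction factor, and a "correction-less" diode is one whose factor is 1.000 at every field size. The function name and numerical values below are illustrative assumptions, not data from the study.

```python
# Illustrative sketch (not the study's data): small-field relative output
# factor obtained from a detector reading ratio and a field-size-specific
# correction factor k. When k = 1.000 at every field size, the raw reading
# ratio already equals the output factor in water ('correction-less').

def corrected_output_factor(reading_clin, reading_msr, k):
    """Reading ratio (clinical field / machine-specific reference field)
    scaled by the correction factor k for that field size."""
    return (reading_clin / reading_msr) * k
```

For an over-responding diode k is below 1 at the smallest fields; the air cap's purpose, as described above, is to push k to unity so the raw ratio can be used directly.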
Abstract:
Light gauge steel roofing systems made of thin profiled roof sheeting and battens are used commonly in residential, industrial and commercial buildings. Their critical design load combination is that due to wind uplift forces that occur during high wind events such as tropical cyclones and thunderstorms. However, premature local failures at their screw connections have been a concern for many decades since cyclone Tracy devastated Darwin in 1974. Extensive research that followed cyclone Tracy on the pull-through and pull-out failures of roof sheeting to batten connections has significantly improved the safety of roof sheeting. However, this has made the batten to rafter/truss connection the weakest, and recent wind damage investigations have shown the failures of these connections and the resulting loss of entire roof structures. Therefore an experimental research program using both small scale and full scale air-box tests is currently under way to investigate the pull-through failures of thin-walled steel battens under high wind uplift forces. Tests have demonstrated the occurrence of pull-through failures in the bottom flanges of steel battens and the need to develop simple test and design methods as a function of many critical parameters such as steel batten geometry, thickness and grade, screw fastener sizes and other fastening details. This paper presents the details of local failures that occur in light gauge roofing systems, a review of the current design and test methods for steel battens and associated shortcomings, and the test results obtained to date on pull-through failures of battens from small scale and full scale tests. Finally, it proposes the use of suitable small scale test methods that can be used by both researchers and manufacturers of such screw-fastened light gauge steel batten systems.
Abstract:
In 2009, BJSM's first editorial argued that ‘Physical inactivity is the greatest public health problem of the 21st century’.1 The data supporting that claim have not yet been challenged. Now, 5 years after BJSM published its first dedicated ‘Physical Activity is Medicine’ theme issue (http://bjsm.bmj.com/content/43/1.toc) we are pleased to highlight 23 new contributions from six countries. This issue contains an analysis of the cost of physical inactivity from the US Centre for Diseases Control.2 We also report the cost-effectiveness of one particular physical activity intervention for adults.3
Abstract:
Tricalcium aluminate, hydrocalumite and residual lime have been identified as reversion-contributing compounds after the seawater neutralisation of bauxite refinery residues. The formation of these compounds during the neutralisation process is dependent on the concentration of residual lime, pH and aluminate concentrations in the residue slurry. Therefore, the effect of calcium hydroxide (Ca(OH)2) in bauxite refinery liquors was analysed and the degree of reversion monitored. This investigation found that the dissolution of tricalcium aluminate, hydrocalumite and Ca(OH)2 caused reversion and continued to increase the pH of the neutralised residue until a state of equilibrium was reached at a solution pH of 10.5. The dissolution mechanism for each compound has been described and used to demonstrate the implications that this has on reversion in seawater-neutralised Bayer liquor. This investigation describes the limiting factors for the dissolution and formation of these trigger compounds, as well as confirming the formation of Bayer hydrotalcite (a mixture of Mg6Al2(OH)16(CO3^2-,SO4^2-)·xH2O and Mg8Al2(OH)12(CO3^2-,SO4^2-)·xH2O) as the primary mechanism for reducing reversion during the neutralisation process. This knowledge then allowed a simple but effective method (addition of magnesium chloride or an increased seawater-to-Bayer-liquor ratio) to be devised to reduce reversion occurring after the neutralisation of Bayer liquors. Both methods utilise the formation of Bayer hydrotalcite to permanently (i.e. stable in neutralised residue) remove hydroxyl (OH^-) and aluminate (Al(OH)4^-) ions from solution.
Abstract:
Integer ambiguity resolution is an indispensable procedure for all high precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. The integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis and numerical results. The major contributions of this paper can be summarized as follows: (1) The ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces. (2) The probability basis of these two popular tests is analyzed for the first time. The difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability. (3) The limitations of the two tests are identified for the first time. Both tests may under-evaluate the failure risk if the model is not strong enough or the float ambiguities fall in a particular region. (4) Extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of frequency number, observation noise and satellite geometry, but depends on the success rate and failure rate tolerance. A smaller failure rate tolerance leads to a larger performance discrepancy.
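As a minimal sketch of the two validation tests compared above (an assumed, generic form, not the paper's implementation): both compare the smallest and second-smallest values of the quadratic form of the float-minus-integer ambiguity residuals over the candidate integer vectors, and accept the best candidate only if it is sufficiently better than the runner-up.

```python
# Hedged sketch of the two ambiguity validation tests. q_best and q_second
# stand for the smallest and second-smallest squared-norm residuals of the
# float ambiguities with respect to candidate integer vectors (in the
# metric of the ambiguity covariance); thresholds are user-chosen here,
# not the fixed-failure-rate values discussed in the literature.

def ratio_test(q_best, q_second, threshold):
    """Accept the best integer candidate if the runner-up is worse by at
    least a multiplicative factor 'threshold' (threshold > 1)."""
    return q_second / q_best >= threshold

def difference_test(q_best, q_second, threshold):
    """Accept if the runner-up is worse by at least an additive margin
    'threshold' (threshold > 0)."""
    return q_second - q_best >= threshold
```

The geometric contrast stated in point (1) follows from these forms: a multiplicative bound on quadratic forms carves out ellipsoidal boundaries, while an additive bound carves out half-space boundaries.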
Abstract:
Protocols for bioassessment often relate changes in summary metrics that describe aspects of biotic assemblage structure and function to environmental stress. Biotic assessment using multimetric indices now forms the basis for setting regulatory standards for stream quality and a range of other goals related to water resource management in the USA and elsewhere. Biotic metrics are typically interpreted with reference to the expected natural state to evaluate whether a site is degraded. It is critical that natural variation in biotic metrics along environmental gradients is adequately accounted for, in order to quantify human disturbance-induced change. A common approach, used in the Index of Biotic Integrity (IBI), is to examine scatter plots of variation in a given metric along a single stream-size surrogate and fit a line (drawn by eye) to form the upper bound, and hence define the maximum likely value of a given metric at a site of a given environmental characteristic (termed the 'maximum species richness line', MSRL). In this paper we examine whether the use of a single environmental descriptor and the MSRL is appropriate for defining the reference condition for a biotic metric (fish species richness) and for detecting human disturbance gradients in rivers of south-eastern Queensland, Australia. We compare the accuracy and precision of the MSRL approach based on single environmental predictors with three regression-based prediction methods (Simple Linear Regression, Generalised Linear Modelling and Regression Tree modelling) that use (either singly or in combination) a set of landscape- and local-scale environmental variables as predictors of species richness. We compare the frequency of classification errors from each method against set biocriteria and contrast the ability of each method to accurately reflect human disturbance gradients at a large set of test sites.
The results of this study suggest that the MSRL based upon variation in a single environmental descriptor could not accurately predict species richness at minimally disturbed sites when compared with SLRs based on equivalent environmental variables. Regression-based modelling incorporating multiple environmental variables as predictors explained natural variation in species richness more accurately than did simple models using single environmental predictors. Prediction error arising from the MSRL was substantially higher than for the regression methods and led to an increased frequency of Type I errors (incorrectly classifying a site as disturbed). We suggest that the problems with the MSRL arise from the inherent scoring procedure used and that it is limited to predicting variation in the dependent variable along a single environmental gradient.
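The contrast between the two approaches can be sketched as follows. All coefficients, function names and the 0.5 biocriterion below are invented for illustration; the study's fitted models and biocriteria are not reproduced here. The MSRL scores an observed richness against an upper-bound (maximum) line and flags the site if the score falls below a biocriterion, whereas a regression predicts the expected richness at a reference site.

```python
# Hedged sketch of MSRL-style scoring vs a regression-based expectation.
# Linear coefficients and the biocriterion are illustrative assumptions.

def msrl_expected(stream_size, slope=2.0, intercept=5.0):
    """Upper-bound (maximum likely) species richness for a stream size,
    i.e. the eye-fitted maximum species richness line."""
    return slope * stream_size + intercept

def msrl_score(observed, stream_size):
    """IBI-style score: fraction of the maximum richness attained."""
    return observed / msrl_expected(stream_size)

def slr_expected(stream_size, slope=1.5, intercept=2.0):
    """Regression-based *expected* richness at a minimally disturbed site."""
    return slope * stream_size + intercept

def msrl_flags_disturbed(observed, stream_size, criterion=0.5):
    """Flag a site as disturbed when its score falls below the criterion."""
    return msrl_score(observed, stream_size) < criterion
```

Because the MSRL line sits above the typical (expected) value, undisturbed sites with naturally modest richness can fall below the criterion, which is consistent with the inflated Type I error rate reported above.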
Abstract:
1. Biodiversity, water quality and ecosystem processes in streams are known to be influenced by the terrestrial landscape over a range of spatial and temporal scales. Lumped attributes (i.e. per cent land use) are often used to characterise the condition of the catchment; however, they are not spatially explicit and do not account for the disproportionate influence of land located near the stream or connected by overland flow. 2. We compared seven landscape representation metrics to determine whether accounting for the spatial proximity and hydrological effects of land use can be used to account for additional variability in indicators of stream ecosystem health. The landscape metrics included the following: a lumped metric, four inverse-distance-weighted (IDW) metrics based on distance to the stream or survey site and two modified IDW metrics that also accounted for the level of hydrologic activity (HA-IDW). Ecosystem health data were obtained from the Ecological Health Monitoring Programme in Southeast Queensland, Australia and included measures of fish, invertebrates, physicochemistry and nutrients collected during two seasons over 4 years. Linear models were fitted to the stream indicators and landscape metrics, by season, and compared using an information-theoretic approach. 3. Although no single metric was most suitable for modelling all stream indicators, lumped metrics rarely performed as well as other metric types. Metrics based on proximity to the stream (IDW and HA-IDW) were more suitable for modelling fish indicators, while the HA-IDW metric based on proximity to the survey site generally outperformed others for invertebrates, irrespective of season. There was consistent support for metrics based on proximity to the survey site (IDW or HA-IDW) for all physicochemical indicators during the dry season, while a HA-IDW metric based on proximity to the stream was suitable for five of the six physicochemical indicators in the post-wet season. 
Only one nutrient indicator was tested and results showed that catchment area had a significant effect on the relationship between land use metrics and algal stable isotope ratios in both seasons. 4. Spatially explicit methods of landscape representation can clearly improve the predictive ability of many empirical models currently used to study the relationship between landscape, habitat and stream condition. A comparison of different metrics may provide clues about causal pathways and mechanistic processes behind correlative relationships and could be used to target restoration efforts strategically.
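The lumped and inverse-distance-weighted (IDW) metric types compared above can be sketched with a toy catchment. The weighting form, data layout and numbers are illustrative assumptions, not the study's GIS implementation: each catchment cell carries a land-use indicator (1 = land use of interest) and a distance to the stream, and the IDW metric up-weights cells closer to the stream.

```python
# Hedged sketch: lumped vs inverse-distance-weighted land-use metrics.
# 'cells' is a list of (land_use_indicator, distance_to_stream) pairs;
# the 1/(1+d)^power weight is one common IDW choice, assumed here.

def lumped_metric(cells):
    """Per cent of catchment cells under the land use of interest,
    ignoring where in the catchment they sit."""
    return 100.0 * sum(u for u, _ in cells) / len(cells)

def idw_metric(cells, power=1.0):
    """Distance-weighted per cent land use: cells near the stream
    contribute more, reflecting their disproportionate influence."""
    weighted = [(u, 1.0 / (1.0 + d) ** power) for u, d in cells]
    total_w = sum(w for _, w in weighted)
    return 100.0 * sum(u * w for u, w in weighted) / total_w
```

With one developed cell on the stream bank and one undeveloped cell far away, the lumped metric reports 50% while the IDW metric reports a much higher effective value, which is the behaviour that lets spatially explicit metrics out-perform lumped ones in the models above. A hydrologically weighted variant (HA-IDW) would further scale each weight by the cell's level of hydrologic activity.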
Abstract:
This thesis developed a new method for measuring extremely low amounts of organic and biological molecules using surface-enhanced Raman spectroscopy (SERS). The method has many potential applications, e.g. medical diagnosis, public health, food provenance, anti-doping, forensics and homeland security. Method development used caffeine as the small-molecule example and erythropoietin (EPO) as the large molecule. The method is much more sensitive and specific than currently used methods, as well as rapid, simple and cost-effective, and can be used to detect target molecules in beverages and biological fluids without the usual preparation steps.
Abstract:
Social contexts are possible information sources that can foster connections between mobile application users, but they are also minefields of privacy concerns and have great potential for misinterpretation. This research establishes a framework for guiding the design of context-aware mobile social applications from a socio-technical perspective. Agile ridesharing was chosen as the test domain for the research because its success relies upon effectively connecting people through mobile technologies.
Abstract:
A newspaper numbers game based on simple arithmetic relationships is discussed. Its potential to give students of elementary algebra practice in semi-ad hoc reasoning and to build general arithmetic reasoning skills is explored.