978 results for Mechanical Measurements
Abstract:
Problems associated with processing the whole sugarcane crop can be minimised by removing impurities during the clarification stage. As a first step, it is important to understand the colloidal chemistry of juice particles at a molecular level to assist the development of strategies for effective clarification performance. This paper presents the composition and surface characteristics of colloidal particles originating from various juice types, determined using scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX), X-ray photoelectron spectroscopy (XPS) and zeta potential measurements. The results indicate that three types of colloidal particles are present, viz. an aluminosilicate compound, silica and iron oxide, with the latter two being abundant. Proteins, polysaccharides and organic acids were identified on the surface of particles in juice. The overall particle charge varies from -2 mV to -6 mV. Compared with juice expressed from burnt cane, the zeta potential values of particles originating from the whole crop were more negative, which in part explains why these juices are difficult to clarify.
Abstract:
There is a growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, utilizing the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), the reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. A linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the full range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions for treatment plans calculated on simulated image data from a 15 MU head-and-neck reconstruction filter MV CBCT image, using either the ED calibration curve matched to those image parameters or one derived with a 15 MU pelvis reconstruction filter, showed only small and clinically insignificant differences. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to minimise uncertainties in dose reporting, parameter-specific MV CBCT ED calibrations could be carried out.
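The pixel-value-to-ED calibration described above reduces to a least-squares linear fit. The sketch below illustrates the idea; the pixel values and densities are invented for illustration and are not measurements from the MVision system or the Gammex phantom.

```python
import numpy as np

# Illustrative calibration data: mean reconstructed MV CBCT pixel value in
# each insert of an ED phantom, paired with the insert's known relative
# electron density (values invented, not measured data).
pixel_values = np.array([130.0, 430.0, 1140.0, 1570.0, 2150.0])
electron_density = np.array([0.292, 0.501, 0.998, 1.299, 1.705])

# Fit the linear calibration ED = a * pixel + b by least squares.
a, b = np.polyfit(pixel_values, electron_density, 1)

def pixel_to_ed(pixel):
    """Map a reconstructed CT pixel value to relative electron density."""
    return a * pixel + b
```

Since the abstract reports that the reconstruction filter and surrounding solid water shift the calibration, such a fit would in principle be repeated per parameter set when parameter-specific calibrations are desired.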
Abstract:
OBJECTIVES: To examine the effect of thermal agents on the range of movement (ROM) and mechanical properties of soft tissue and to discuss their clinical relevance. DATA SOURCES: Electronic databases (Cochrane Central Register of Controlled Trials, MEDLINE, and EMBASE) were searched from their earliest available record up to May 2011 using Medical Subject Headings and key words. We also undertook related-articles searches and read the reference lists of all incoming articles. STUDY SELECTION: Studies involving human participants describing the effects of thermal interventions on ROM and/or mechanical properties of soft tissue. Two reviewers independently screened studies against the eligibility criteria. DATA EXTRACTION: Data were extracted independently by 2 review authors using a customized form. Methodologic quality was also assessed by 2 authors independently, using the Cochrane risk of bias tool. DATA SYNTHESIS: Thirty-six studies, comprising a total of 1301 healthy participants, satisfied the inclusion criteria. There was a high risk of bias across all studies. Meta-analyses were not undertaken because of clinical heterogeneity; however, effect sizes were calculated. There were conflicting data on the effect of cold on joint ROM, accessory joint movement, and passive stiffness. There was limited evidence to determine whether acute cold applications enhance the effects of stretching, and further evidence is required. There was evidence that heat increases ROM, and that a combination of heat and stretching is more effective than stretching alone. CONCLUSIONS: Heat is an effective adjunct to developmental and therapeutic stretching techniques and should be the treatment of choice for enhancing ROM in a clinical or sporting setting. The effects of heat or ice on other important mechanical properties (e.g., passive stiffness) remain equivocal and should be the focus of future study.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspension data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model the condition indicators, the operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature indicates that a number of covariate-based hazard models have been developed, all of them based on the principal theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in one model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related, and yet more imperative, question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also describes the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component.
Operating environment indicators in this model are failure accelerators and/or decelerators: they are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nought in EHM, the condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, which is a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
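The structure of the semi-parametric form described above, a Weibull-type baseline hazard reshaped by a condition indicator, with an operating environment indicator entering a multiplicative covariate function, can be sketched as follows. The functional forms and parameter values here are assumptions for illustration, not the thesis's exact formulation of EHM.

```python
import math

def weibull_baseline(t, beta=2.0, eta=1000.0):
    """Weibull baseline hazard h0(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def ehm_hazard(t, condition, environment, alpha=0.5, gamma=0.3):
    """Illustrative covariate-based hazard in the spirit of EHM:
    - the baseline is a function of both time and the condition indicator
      (a response variable reflecting the asset's degradation level);
    - the operating environment indicator (an explanatory variable) enters
      a multiplicative exponential term that accelerates (gamma * w > 0)
      or decelerates (gamma * w < 0) failure relative to the baseline.
    """
    baseline = weibull_baseline(t) * (1.0 + alpha * condition)
    return baseline * math.exp(gamma * environment)

# A degraded asset under environmental stress has a higher hazard than a
# healthy one in a benign environment at the same operating time.
h_healthy = ehm_hazard(t=500.0, condition=0.0, environment=0.0)
h_stressed = ehm_hazard(t=500.0, condition=1.0, environment=1.0)
```

With zero covariates the hazard reduces to the pure Weibull baseline, which is how a PHM-style exponential link behaves; the condition-dependent baseline is what distinguishes the EHM idea from a plain proportional hazards formulation.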
Abstract:
Nanoscale MgO powder was synthesized from magnesite ore by a wet chemical method. Acid dissolution was used to obtain a solution from which magnesium-containing complexes were precipitated by either oxalic acid or ammonium hydroxide. The transformation of the precipitates to the oxide was monitored by thermal analysis and XRD, and the transformed powders were studied by electron microscopy. The MgO powders were added as dopants to Bi2SrCa2CuO8 powders, and high-temperature superconductor thick films were deposited on silver. The addition of suitable MgO powder increased the critical current density, J(c), from 8,900 Acm(-2) to 13,900 Acm(-2), measured at 77 K and 0 T. The effect of the MgO addition was evaluated by XRD, electron microscopy and critical current density measurements. (C) 1998 Elsevier Science B.V.
Abstract:
Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the “gold standard” for predicting dose deposition in the patient. In this study, software has been developed that enables the transfer of treatment plan information from the treatment planning system to a Monte Carlo dose calculation engine. A database of commissioned linear accelerator models (Elekta Precise and Varian 2100CD at various energies) has been developed using the EGSnrc/BEAMnrc Monte Carlo suite. Planned beam descriptions and CT images can be exported from the treatment planning system using the DICOM framework. The information in these files is combined with an appropriate linear accelerator model to allow the accurate calculation of the radiation field incident on a modelled patient geometry. The Monte Carlo dose calculation results are combined according to the monitor units specified in the exported plan. The result is a 3D dose distribution that can be used to verify treatment planning system calculations. The software, MCDTK (Monte Carlo Dicom ToolKit), has been developed in the Java programming language and produces BEAMnrc and DOSXYZnrc input files, ready for submission on a high-performance computing cluster. The code has been tested with the Eclipse (Varian Medical Systems), Oncentra MasterPlan (Nucletron B.V.) and Pinnacle3 (Philips Medical Systems) planning systems. In this study, the software was validated against measurements in homogeneous and heterogeneous phantoms.
Monte Carlo models are commissioned through comparison with quality assurance measurements made using a large square field incident on a homogeneous volume of water. This study aims to provide a valuable confirmation that Monte Carlo calculations match experimental measurements for complex fields and heterogeneous media.
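The combination step described above, summing per-beam Monte Carlo dose grids according to the monitor units specified in the exported plan, reduces to a weighted sum of 3D arrays. This sketch assumes dose grids normalised to dose per MU; it is not the actual MCDTK code.

```python
import numpy as np

def combine_doses(beam_doses, monitor_units):
    """Combine per-beam Monte Carlo dose grids (Gy per MU) into a single
    3D plan dose by weighting each grid with its planned monitor units."""
    total = np.zeros_like(beam_doses[0])
    for dose_per_mu, mu in zip(beam_doses, monitor_units):
        total += dose_per_mu * mu
    return total

# Two illustrative 2x2x2 per-beam dose grids (Gy/MU) and their planned MUs.
beam_a = np.full((2, 2, 2), 0.01)
beam_b = np.full((2, 2, 2), 0.02)
plan_dose = combine_doses([beam_a, beam_b], [100, 50])
```

In practice each grid would come from a DOSXYZnrc calculation on the same voxelised patient geometry, so the arrays share one coordinate system and can be summed voxel-wise.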
Abstract:
Herein, the mechanical properties of graphene, including Young’s modulus, fracture stress and fracture strain, have been investigated by molecular dynamics simulations. The simulation results show that the mechanical properties of graphene are sensitive to temperature changes but insensitive to the number of layers in multilayer graphene. Increasing temperature has a significant adverse effect on the mechanical properties of graphene, whereas the adverse effect of increasing the layer number is marginal. Isotope substitutions in graphene play a negligible role in modifying its mechanical properties.
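A common way to extract Young's modulus from molecular dynamics results such as these is a linear fit to the small-strain region of the stress-strain curve. The sample points below are invented for illustration, chosen near the ~1 TPa modulus widely reported for graphene; they are not data from the simulations above.

```python
import numpy as np

# Illustrative stress-strain samples from the small-strain region of a
# simulated graphene tensile test (strain dimensionless, stress in TPa).
strain = np.array([0.000, 0.005, 0.010, 0.015, 0.020])
stress = np.array([0.000, 0.0051, 0.0100, 0.0152, 0.0199])

# Young's modulus is the slope of the initial linear region.
youngs_modulus, intercept = np.polyfit(strain, stress, 1)
```

Repeating the same fit on runs at different temperatures (or layer counts) is how the temperature sensitivity and layer insensitivity reported above would be quantified.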
Abstract:
Many continuum mechanical models, such as liquid drop models and solid models, have been developed for studying the biomechanics of single living cells. However, these models do not fully capture the behaviour of single living cells, such as swelling behaviour and drag effects. The porohyperelastic (PHE) model, which can capture those aspects, is therefore a good candidate for studying cell behaviour (chondrocytes in this study). In this research, an FEM model of a single chondrocyte was developed using the PHE model to simulate Atomic Force Microscopy (AFM) experimental results at varying strain rates. This material model was compared with a viscoelastic model to demonstrate the advantages of the PHE model. The results show that the maximum applied force in the PHE model is lower at lower strain rates. This is because at very high strain rates the mobile fluid does not have enough time to exude, and because the permeability of the membrane is lower than that of the protoplasm of the chondrocyte. This behaviour is barely observed in the viscoelastic model. Thus, the PHE model is the better model for cell biomechanics studies.
Abstract:
None of the currently used tonometers produces estimated IOP values that are free of errors. Measurement unreliability arises from the indirect measurement of corneal deformation and the fact that pressure calculations are based on population-averaged parameters of the anterior segment. Reliable IOP values are crucial for understanding and monitoring a number of eye pathologies, e.g. glaucoma. We have combined high-speed swept source OCT with an air-puff chamber. The system provides direct measurement of the deformation of the cornea and the anterior surface of the lens. This paper describes in detail the performance of the air-puff ssOCT instrument. We present different approaches to data presentation and analysis. Changes in deformation amplitude appear to be a good indicator of IOP changes. However, it seems that additional information on corneal biomechanics is necessary in order to provide accurate intraocular pressure values. We believe that such information could be extracted from the data provided by the air-puff ssOCT.
Abstract:
Background: Hyperpolarised helium MRI (He3 MRI) is a new technique that enables imaging of the air distribution within the lungs. This allows accurate determination of the ventilation distribution in vivo. The technique has the disadvantages of requiring an expensive helium isotope, complex apparatus and moving the patient to a compatible MRI scanner. Electrical impedance tomography (EIT) is a non-invasive bedside technique that allows constant monitoring of lung impedance, which depends on changes in air space capacity in the lung. We used He3 MRI measurements of ventilation distribution as the gold standard for the assessment of EIT. Methods: Seven rats were ventilated in the supine, prone, left lateral and right lateral positions, with 70% helium/30% oxygen for the EIT measurements and pure helium for He3 MRI. The same ventilator and settings were used for both measurements. Image dimensions, the geometric centre and the global inhomogeneity index were calculated. Results: EIT images were smaller, of lower resolution and contained less anatomical detail than those from He3 MRI. However, both methods could measure position-induced changes in lung ventilation, as assessed by the geometric centre. The global inhomogeneity indices were comparable between the techniques. Conclusion: EIT is a suitable technique for monitoring ventilation distribution and inhomogeneity, as assessed by comparison with He3 MRI.
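The geometric centre used above to quantify positional shifts in ventilation is essentially the impedance-change-weighted centroid of the EIT image. A minimal sketch of that computation, on an invented toy image rather than real EIT data:

```python
import numpy as np

def geometric_centre(image):
    """Geometric centre of ventilation: the signal-weighted centroid of a
    2D EIT (or ventilation) image, returned as (row, column). Shifts in
    this centroid between body positions indicate redistributed
    ventilation."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# Ventilation concentrated in the bottom row pulls the centre downward.
img = np.zeros((4, 4))
img[3, :] = 1.0
centre = geometric_centre(img)  # (3.0, 1.5)
```

Because the measure is a normalised centroid, it can be compared between modalities of very different resolution, which is what makes it usable for validating EIT against He3 MRI.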
Abstract:
In this work, the thermal expansion properties of carbon nanotube (CNT)-reinforced nanocomposites with CNT contents ranging from 1 to 15 wt% were evaluated using a multi-scale numerical approach, in which the effects of two parameters, temperature and CNT content, were investigated extensively. For all CNT contents, the results clearly revealed thermal contraction over a wide low-temperature range (30°C to 62°C) and thermal expansion in the high-temperature range (62°C to 120°C). At any specified CNT content, the thermal expansion properties vary with temperature: as temperature increases, the thermal expansion rate increases linearly. However, at a specified temperature, the absolute value of the thermal expansion rate decreases nonlinearly as the CNT content increases. Moreover, the results of the present multi-scale numerical model were in good agreement with the corresponding theoretical analyses and experimental measurements in this work, indicating that this multi-scale numerical approach provides a powerful tool for evaluating the thermal expansion properties of any type of CNT/polymer nanocomposite, and thereby promotes the understanding of the thermal behaviour of CNT/polymer nanocomposites for their applications in temperature sensors, nanoelectronic devices, etc.
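The temperature dependence described above, contraction below roughly 62°C and expansion above it with a linearly increasing rate, can be captured by a one-parameter linear model. Only the sign structure follows the abstract; the slope is an invented placeholder, not a fitted material constant.

```python
def thermal_expansion_rate(temperature_c, slope=1e-7, crossover_c=62.0):
    """Illustrative linear model of the thermal expansion rate of a
    CNT/polymer nanocomposite: negative (thermal contraction) below the
    crossover temperature near 62 C and positive (thermal expansion)
    above it. The slope is a placeholder value, not from the paper."""
    return slope * (temperature_c - crossover_c)
```

In the paper's terms, the slope itself would additionally decrease in magnitude (nonlinearly) as the CNT content rises.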
Abstract:
Graphene, one of the allotropes of carbon (alongside diamond, carbon nanotube and fullerene), is a monolayer of carbon atoms in a honeycomb lattice, discovered in 2004. The Nobel Prize in Physics 2010 was awarded to Andre Geim and Konstantin Novoselov for their groundbreaking work on two-dimensional (2D) graphene [1]. Since its discovery, the research community has shown great interest in this novel material owing to its intriguing electrical, mechanical and thermal properties. It has been confirmed that graphene possesses very peculiar electrical properties, such as the anomalous quantum Hall effect and high electron mobility at room temperature (250,000 cm2/Vs). Graphene also has exceptional mechanical properties: it is one of the stiffest (modulus ~1 TPa) and strongest (strength ~100 GPa) materials. In addition, it has exceptional thermal conductivity (5,000 Wm-1K-1). Owing to these exceptional properties, graphene has demonstrated its potential for broad applications in micro- and nano-devices, various sensors, electrodes, solar cells, energy storage devices and nanocomposites. In particular, the excellent mechanical properties of graphene make it attractive for the development of next-generation nanocomposites and hybrid materials...
Abstract:
Particulate matter is common in our environment and has been linked to human health problems, particularly in the ultrafine size range. A range of chemical species have been associated with particulate matter, and of special concern are the hazardous chemicals that can accentuate health problems. If the sources of such particles can be identified, then strategies can be developed for the reduction of air pollution and, consequently, the improvement of quality of life. In this investigation, particle number size distribution data and the concentrations of chemical species were obtained at two sites in Brisbane, Australia. Source apportionment was used to determine the sources (or factors) responsible for the particle size distribution data. The apportionment was performed by Positive Matrix Factorisation (PMF) and Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS), and the results were compared with information from the gaseous chemical composition analysis. Although PCA/APCS resolved more sources, the results of the PMF analysis appear to be more reliable. The six common sources identified by both methods were: traffic 1, traffic 2, local traffic, biomass burning, and two unassigned factors. Thus, motor-vehicle-related activities had the most impact on the data, with the average contribution from nearly all sources to the measured concentrations higher during peak traffic hours and on weekdays. Further analyses incorporated the meteorological measurements into the PMF results to determine the direction of the sources relative to the measurement sites; this indicated that traffic on the nearby road and intersection was responsible for most of the factors. The described methodology, which utilised a combination of three types of data related to particulate matter to determine the sources, could assist the future development of particle emission control and reduction strategies.
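PMF belongs to the family of non-negative matrix factorisations: the data matrix (samples by measured species or size bins) is decomposed into non-negative source contributions and source profiles. A minimal multiplicative-update NMF on synthetic data conveys the core idea; full PMF additionally weights residuals by measurement uncertainties, which this sketch omits.

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Minimal non-negative matrix factorisation by multiplicative
    updates: X (samples x species) ~= G (contributions) @ F (profiles),
    with G and F kept element-wise non-negative throughout."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 1e-3
    F = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

# Two synthetic "sources" mixed into four observations of three species.
true_F = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]])
true_G = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [2.0, 1.0]])
X = true_G @ true_F
G, F = nmf(X, k=2)
reconstruction_error = np.linalg.norm(X - G @ F)
```

The non-negativity constraint is what makes the recovered factors physically interpretable as source profiles, which is the advantage PMF has over unconstrained PCA-based apportionment.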
Abstract:
This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity, a cutter suction dredger, and Brisbane, a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely, at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and 40 to 50 measurements were possible in one day. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique delivers consistent results when measuring EFs for particle number (PN), NOx, SO2 and PM2.5, within a targeted dilution factor range of 50-1000 suitable for remote sampling. The EFs for NOx, SO2 and PM2.5 were found to be independent of dilution, for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2-9.6 × 10^15 (kg-fuel)^-1; NOx: 35-72 g(NO2).(kg-fuel)^-1; SO2: 0.6-1.1 g(SO2).(kg-fuel)^-1; and PM2.5: 0.7-6.1 g(PM2.5).(kg-fuel)^-1. For the Brisbane they were PN: 1.0-1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4-8.0 g(NO2).(kg-fuel)^-1; SO2: 1.3-1.7 g(SO2).(kg-fuel)^-1; and PM2.5: 1.2-5.6 g(PM2.5).(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels’ engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided.
The size distributions were found to be consistently unimodal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94-131 nm in the case of the Amity, and 58-80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
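The fuel-based EF calculation described in the abstract above reduces to a ratio of background-subtracted plume concentrations, scaled by the mass of CO2 emitted per kilogram of fuel burned. The 3200 g CO2 per kg fuel figure below is an assumed typical value for marine diesel, not a number taken from the paper.

```python
def emission_factor(delta_pollutant, delta_co2, g_co2_per_kg_fuel=3200.0):
    """Fuel-based emission factor in g pollutant per kg fuel.
    delta_pollutant and delta_co2 are background-subtracted plume mass
    concentrations in the same units (e.g. ug/m3); g_co2_per_kg_fuel is
    the CO2 emitted per kg of fuel burned (assumed ~3200 g/kg here)."""
    return (delta_pollutant / delta_co2) * g_co2_per_kg_fuel

# Example: SO2 10 ug/m3 above background while CO2 is 40,000 ug/m3 above.
ef_so2 = emission_factor(10.0, 40000.0)  # 0.8 g(SO2) per kg fuel
```

Because the pollutant and the co-emitted CO2 dilute identically in the plume, their ratio, and hence the EF, is insensitive to the dilution factor, which is consistent with the dilution-independence reported for NOx, SO2 and PM2.5.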