448 results for Area measurement.


Relevance:

20.00%

Publisher:

Abstract:

Electrostatic discharge is the sudden, brief electric current that flashes between two objects at different voltages. It is a serious issue in contexts ranging from solid-state electronics to spectacular and dangerous lightning strikes (arc flashes). The research herein presents work on the experimental simulation and measurement of the energy in an electrostatic discharge. The energy released in these discharges has been linked to ignition and burning in a number of documented disasters and can be enormously hazardous in many other industrial scenarios. Simulations of electrostatic discharges were designed to IEC-standard specifications. Energy estimation in such tests is typically based on the residual voltage/charge on the discharge capacitor; this research instead examines the voltage and current in the actual spark in order to obtain a more precise comparative measurement of the energy dissipated.
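
As a worked illustration of the two measurement approaches contrasted above, the following minimal sketch (synthetic traces and invented component values, not data from the study) compares the capacitor-residual estimate, E = 0.5*C*(V0^2 - Vr^2), with direct integration of the spark power, E = integral of v(t)*i(t) dt:

```python
import numpy as np

def residual_energy(c_farads, v_initial, v_residual):
    """Energy estimate from the discharge capacitor's residual voltage."""
    return 0.5 * c_farads * (v_initial**2 - v_residual**2)

def spark_energy(v_spark, i_spark, dt):
    """Energy dissipated in the spark: trapezoidal integral of v(t)*i(t)."""
    return np.trapz(v_spark * i_spark, dx=dt)

# Synthetic example traces (hypothetical values, 1 ns sampling)
dt = 1e-9
t = np.arange(0.0, 200e-9, dt)
v = 5000.0 * np.exp(-t / 50e-9)   # decaying spark voltage, V
i = 20.0 * np.exp(-t / 50e-9)     # decaying spark current, A

print(f"Capacitor estimate: {residual_energy(150e-12, 5000.0, 500.0) * 1e3:.3f} mJ")
print(f"Spark integral:     {spark_energy(v, i, dt) * 1e3:.3f} mJ")
```

The two numbers differ because the capacitor estimate attributes all lost charge to the spark, whereas the direct integral counts only the energy actually dissipated in the gap.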

Relevance:

20.00%

Publisher:

Abstract:

Burkholderia pseudomallei, the causative agent of melioidosis, is associated with soil. This study used a geographic information system (GIS) to determine the spatial distribution of clinical cases of melioidosis in the endemic suburban region of Townsville, Australia. A total of 65 cases over the period 1996–2008 were plotted by residential address. Two distinct groupings were found: one around the base of a hill in the city centre, and the other following the old course of a major waterway in the region. Both groups (accounting for 43 of the 65 cases examined) are in areas expected to have particularly wet topsoils following intense rainfall, owing to soil type or landscape position.
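
A minimal sketch of the kind of spatial-grouping analysis described above (not the authors' actual method; the coordinates and clustering parameters are invented for illustration) applies density-based clustering to case locations:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical case locations as (easting, northing) in metres
rng = np.random.default_rng(0)
hill_cluster = rng.normal(loc=(2000.0, 3000.0), scale=150.0, size=(20, 2))
creek_cluster = rng.normal(loc=(5000.0, 1000.0), scale=200.0, size=(23, 2))
scattered = rng.uniform(low=0.0, high=8000.0, size=(22, 2))
cases = np.vstack([hill_cluster, creek_cluster, scattered])

# Group cases lying within 500 m of at least 5 neighbouring cases
labels = DBSCAN(eps=500.0, min_samples=5).fit_predict(cases)
for label in sorted(set(labels)):
    tag = "noise" if label == -1 else f"cluster {label}"
    print(f"{tag}: {np.sum(labels == label)} cases")
```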

Relevance:

20.00%

Publisher:

Abstract:

The link between measured sub-saturated hygroscopicity and the cloud activation potential of secondary organic aerosol particles, produced by the chamber photo-oxidation of α-pinene in the presence or absence of ammonium sulphate seed aerosol, was investigated using two models of varying complexity. A simple single-hygroscopicity-parameter model and a more complex model (incorporating surface effects) were used to assess the detail required to predict cloud condensation nucleus (CCN) activity from sub-saturated water uptake. Sub-saturated water uptake measured by three hygroscopicity tandem differential mobility analyser (HTDMA) instruments was used to determine the water activity for use in the models. The predicted CCN activity was compared to the activation potential measured with a continuous-flow CCN counter. Reconciliation of the more complex model formulation with measured cloud activation could be achieved with widely different assumed surface tension behaviours of the growing droplet; the outcome was entirely determined by the instrument used as the source of water activity data. This unreliable derivation of water activity as a function of solute concentration from sub-saturated hygroscopicity data indicates a limitation in the use of such data for predicting the cloud condensation nucleus behaviour of particles with a significant organic fraction. Similarly, the ability of the simpler single-parameter model to predict cloud activation behaviour depended on the instrument used to measure sub-saturated hygroscopicity and on the relative humidity used to provide the model input. However, agreement was observed for inorganic salt solution particles, which were measured by all instruments in agreement with theory. Given the differences in HTDMA data from validated and extensively used instruments, the detail required to predict CCN activity from sub-saturated hygroscopicity cannot be stated with certainty. To narrow the gap between measurements of hygroscopic growth and CCN activity, the processes involved must be understood and the instrumentation extensively quality assured. Owing to the differences in HTDMA data, it is impossible to say from the results presented here whether: (i) surface tension suppression occurs; (ii) bulk-to-surface partitioning is important; or (iii) the water activity coefficient changes significantly as a function of solute concentration.
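
For reference, single-hygroscopicity-parameter approaches of the kind mentioned above are commonly implemented via κ-Köhler theory (Petters and Kreidenweis, 2007). A minimal sketch (assuming the surface tension of pure water; the dry diameter and κ value are illustrative, with κ ≈ 0.1 typical of α-pinene SOA) computes the equilibrium saturation ratio over a growing droplet and its critical supersaturation:

```python
import numpy as np

def kohler_saturation_ratio(d_wet, d_dry, kappa, temp_k=298.15,
                            surface_tension=0.072):
    """Equilibrium saturation ratio S(D) from single-parameter Koehler
    theory: S = a_w * Kelvin term, with
    a_w = (D^3 - Dd^3) / (D^3 - Dd^3 * (1 - kappa))."""
    m_w = 0.018015   # molar mass of water, kg/mol
    rho_w = 997.0    # density of water, kg/m^3
    r_gas = 8.314    # J/(mol K)
    a_w = (d_wet**3 - d_dry**3) / (d_wet**3 - d_dry**3 * (1.0 - kappa))
    kelvin = np.exp(4.0 * surface_tension * m_w /
                    (r_gas * temp_k * rho_w * d_wet))
    return a_w * kelvin

# Critical supersaturation: maximum of S(D) over wet diameters
d_dry = 100e-9   # 100 nm dry particle
kappa = 0.1      # illustrative value for alpha-pinene SOA
d_wet = np.logspace(np.log10(d_dry * 1.01), -5.0, 2000)
s_crit = kohler_saturation_ratio(d_wet, d_dry, kappa).max()
print(f"Critical supersaturation: {(s_crit - 1.0) * 100.0:.3f}%")
```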

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work was to quantify exposure to particles emitted by wood-fired ovens in pizzerias. Overall, 15 microenvironments were chosen and analyzed in a 14-month experimental campaign. Particle number concentration and distribution were measured simultaneously using a Condensation Particle Counter (CPC), a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). Surface area and mass distributions and concentrations, as well as estimates of lung-deposited surface area and PM1, were evaluated using the SMPS-APS system with dosimetric models, taking into account the presence of aggregates on the basis of the Idealized Aggregate (IA) theory. The fraction of inhaled particles deposited in the respiratory system and different fractions of particulate matter were also measured by means of a Nanoparticle Surface Area Monitor (NSAM) and a photometer (DustTrak DRX), respectively. In this way, supplementary data were obtained while monitoring trends inside the pizzerias. We found that surface area and PM1 particle concentrations in pizzerias can be very high, especially when compared to other critical microenvironments, such as transport hubs. During pizza cooking under normal ventilation conditions, concentrations were found to be up to 74, 70 and 23 times higher than background levels for number, surface area and PM1, respectively. A key parameter is the oven shape factor, defined as the ratio between the size of the face opening in respect
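
As context for the lung-deposited surface area estimate mentioned above, a minimal sketch (not the study's dosimetric model; the size distribution is invented, and the deposition curve is the commonly used Hinds (1999) fit to the ICRP alveolar deposition data) weights a measured number-size distribution by surface area and alveolar deposition fraction:

```python
import numpy as np

def alveolar_deposition_fraction(dp_um):
    """Hinds (1999) fit to the ICRP alveolar deposition curve; dp in microns."""
    ln_d = np.log(dp_um)
    return (0.0155 / dp_um) * (np.exp(-0.416 * (ln_d + 2.84) ** 2)
                               + 19.11 * np.exp(-0.482 * (ln_d - 1.362) ** 2))

def lung_deposited_surface_area(diam_um, number_conc):
    """LDSA (um^2/cm^3): per-bin particle surface area weighted by the
    alveolar deposition fraction, summed over the size distribution."""
    surface = np.pi * diam_um ** 2   # um^2 per particle
    return np.sum(number_conc * surface * alveolar_deposition_fraction(diam_um))

# Hypothetical SMPS size distribution during pizza cooking (per cm^3 per bin)
diam_um = np.logspace(np.log10(0.01), np.log10(0.5), 20)
number_conc = 3e5 * np.exp(-0.5 * (np.log(diam_um / 0.06) / 0.7) ** 2)
print(f"LDSA ~ {lung_deposited_surface_area(diam_um, number_conc):.1f} um^2/cm^3")
```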

Relevance:

20.00%

Publisher:

Abstract:

Gibson and Tarrant discuss the range of interdependent factors needed to manage organisational resilience. Over the last few years there has been considerable interest in the idea of resilience across all areas of society. Like any new field, this has produced a vast array of definitions, processes, management systems and measurement tools, which together have clouded the concept of resilience. Many of us have forgotten that ultimately resilience is not just about ‘bouncing back from adversity’ but is more broadly concerned with adaptive capacity and with how we better understand and address uncertainty in our internal and external environments. The basis of organisational resilience is a fundamental understanding and treatment of risk, particularly non-routine or disruption-related risk. This paper presents a number of conceptual models of organisational resilience that we have developed to demonstrate the range of interdependent factors that need to be considered in the management of such risk. These conceptual models illustrate that effective resilience is built upon a range of different strategies that enhance both ‘hard’ and ‘soft’ organisational capabilities. They emphasise that there is no quick fix: no single process, management system or software application will create resilience.

Relevance:

20.00%

Publisher:

Abstract:

The neutron logging method has been widely used for field measurement of soil moisture content. This non-destructive method permits the measurement of in-situ soil moisture content at various depths without the need to bury any sensor. Twenty-three sites located around regional Melbourne were selected for long-term monitoring of soil moisture content using a neutron probe. Soil samples collected during installation were used for site characterisation and neutron probe calibration. A linear relationship was obtained between the corrected neutron probe reading and moisture content, for both the individual and the combined data from seven sites. It is observed that the linear relationship developed from the combined data can be used for all sites with an average accuracy of about 80%. Monitoring of the variation of soil moisture content with depth over six months at two sites is presented in this paper.
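
A minimal sketch of the calibration step described above (the count ratios and gravimetric moisture values are invented, not the study's data) fits the linear relationship between corrected neutron probe readings and volumetric moisture content:

```python
import numpy as np

# Hypothetical calibration pairs: corrected count ratio vs. volumetric
# moisture content (m^3/m^3) from soil samples taken during installation
count_ratio = np.array([0.45, 0.62, 0.88, 1.05, 1.30, 1.52, 1.76])
moisture = np.array([0.08, 0.12, 0.18, 0.22, 0.28, 0.33, 0.38])

# Least-squares fit: moisture = slope * count_ratio + intercept
slope, intercept = np.polyfit(count_ratio, moisture, deg=1)
predicted = slope * count_ratio + intercept
r_squared = 1.0 - (np.sum((moisture - predicted) ** 2)
                   / np.sum((moisture - moisture.mean()) ** 2))

print(f"moisture = {slope:.3f} * ratio + {intercept:.3f}, r^2 = {r_squared:.3f}")
```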

Relevance:

20.00%

Publisher:

Abstract:

Social enterprises are diverse in their missions, business structures and industry orientations. Like all businesses, social enterprises face a range of strategic and operational challenges and use a range of strategies to access resources in support of their venture. This exploratory study examined the strategic management issues faced by Australian social enterprises and the ways in which they respond to them. The research was based on a comprehensive literature review and semi-structured interviews with 11 representatives of eight social enterprises based in Victoria and Queensland. The sample included mature social enterprises and those within two years of start-up. In addition to the research report, the outputs of the project include a series of six short documentaries, available on YouTube at http://www.youtube.com/user/SocialEnterpriseQUT#p/u. The research reported here suggests that social enterprises are sophisticated in utilizing processes of network bricolage (Baker et al. 2003) to mobilize resources in support of their goals. Access to network resources can be both enabling and constraining as social enterprises mature. In terms of formal business planning strategies, all participating social enterprises had utilized these either at the outset or at the point of maturation of their business operations. These planning activities were used to support internal operations, to provide a mechanism for managing collective entrepreneurship, and to communicate to external stakeholders about the legitimacy and performance of the social enterprises. Further research is required to assess the impacts of such planning activities and the ways in which they are used over time. Business structures and governance arrangements varied amongst participating enterprises according to mission and values; capital needs; and the experiences and culture of founding organizations and individuals. In different ways, participants indicated that business structures and governance arrangements are important means of conferring legitimacy on social enterprise, signifying responsible business practice and strong social purpose to both external and internal stakeholders. Almost all participants in the study described ongoing tensions in balancing social purpose and business objectives. It is not clear, however, whether these tensions were problematic (in the sense of eroding mission or business opportunities) or productive (in the sense of strengthening mission and business practices through iterative processes of reflection and action). Longitudinal research on the ways in which social enterprises negotiate mission fulfillment and business sustainability would enhance our knowledge in this area. Finally, despite growing emphasis on measuring social impact amongst institutions, including governments and philanthropy, that influence the operating environment of social enterprise, relatively little priority was placed on this activity. The participants in our study noted the complexities of effectively measuring social impact, as well as the operational difficulties of undertaking such measurement within the day-to-day realities of running small to medium businesses. It is clear that impact measurement remains a vexed issue for a number of our respondents. This study suggests that both the value and the practicality of social impact measurement require further debate and critically informed evidence if impact measurement is to benefit social enterprises and the communities they serve.

Relevance:

20.00%

Publisher:

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, owing to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results in a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow; in this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that of stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries, such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of bioimpedance techniques for detecting the presence of aortic stenosis. The experimental and theoretical impedance of blood were shown to inversely follow the blood velocity during pulsatile flow, with correlations of −0.72 and −0.74, respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental; r² = 0.94 theoretical). The relationship between impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius; this had not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal, suggesting that the impedance captures many of the fluctuations of the velocity signal. Application of a theoretical steady-flow model to pulsatile flow, presented here, verified that the steady-flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady-flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180–250 s) was consistently larger than that determined for control subjects (τ = 50–130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique, using the time decay constant for screening of aortic stenosis, provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
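
As an illustration of the decay-constant analysis described above (synthetic data, not the thesis code; the model form Z(t) = Z_inf + dZ*exp(-t/tau) is one plausible parameterisation of the deceleration-phase relaxation), a minimal sketch fits the time decay constant from a noisy impedance trace:

```python
import numpy as np
from scipy.optimize import curve_fit

def impedance_decay(t, z_inf, delta_z, tau):
    """Exponential relaxation of blood impedance after peak flow."""
    return z_inf + delta_z * np.exp(-t / tau)

# Synthetic deceleration-phase measurement (hypothetical values)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 400)              # time, s
true = impedance_decay(t, 95.0, 5.0, 30.0)    # ohms, tau = 30 s
measured = true + rng.normal(scale=0.05, size=t.size)

params, _ = curve_fit(impedance_decay, t, measured, p0=(90.0, 1.0, 10.0))
print(f"Fitted time decay constant tau = {params[2]:.1f} s")
```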

Relevance:

20.00%

Publisher:

Abstract:

Training designed to support and strengthen higher-order mental abilities now often involves immersion in virtual reality, where dangerous real-world scenarios can be safely replicated. However, despite the growing popularity of advanced training simulations, methods for evaluating their use rely heavily on subjective measures or on analysis of final outcomes. Without dynamic, objective performance measures, the outcome of training, in terms of impact on cognitive skills and the ability to transfer newly acquired skills to the real world, is unknown. The relationship between affective intensity and cognitive learning provides a potential new approach to ensure that the cognitive processes occurring prior to final outcomes, such as problem-solving and decision-making, are adequately evaluated. This paper describes the technical aspects of pilot work recently undertaken to develop a new measurement tool designed to objectively track individual affect levels during simulation-based training.

Relevance:

20.00%

Publisher:

Abstract:

The process of offsetting land against the unavoidable disturbance of development sites in Queensland will benefit from a method that allows the best possible selection of alternative lands. With site selection now advocated through a combination of Regional Ecosystem and Land Capability classifications state-wide, a case study has determined methods of assessing the functional lift – that is, measures of net environmental gain – of such action. Outcomes with potentially high functional lift are identified that offer promise not only for endangered ecosystems but also for managing adjacent conservation reserves.

Relevance:

20.00%

Publisher:

Abstract:

Microstructural changes (fabric, forces and composition) due to hydrocarbon contamination in a clay soil were studied using a scanning electron microscope (micro-fabric analysis), an atomic force microscope (force measurement) and a sedimentation bench test (particle size measurement). Non-polluted and polluted glacial till from north-eastern Poland (the area of a fuel terminal) were used for the study. Electrostatic repulsive forces for the polluted sample were much lower than for the non-polluted sample. In comparison to the non-polluted sample, the polluted sample exhibited a lower electric charge, attractive forces on approach and strong adhesion on retrieval. The results of the sedimentation tests indicate that clay particles form larger aggregates and settle out of suspension rapidly in diesel oil. In the non-polluted soil, the fabric is strongly aggregated and densely packed: face-to-face and edge-to-edge contacts dominate, the clay film tightly adheres to the surfaces of larger grains, and interparticle pores are more common. In the polluted soil, the clay matrix is less aggregated and loosely packed: edge-to-face contacts dominate, and inter-micro-aggregate pores are more frequent. Substantial differences were observed in the morphometric and geometric parameters of the pore space. The polluted soil micro-fabric proved to be more isotropic and less oriented than that of the non-polluted soil. The polluted soil, in which electrostatic forces were suppressed by hydrocarbon interaction, displays more open porosity and larger voids than the non-polluted soil, which is characterised by strong electrostatic interaction between clay particles.

Relevance:

20.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance, and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced by the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filtering and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assessing the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to show a clear difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques: lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified in this clinical study was a lack of sensitivity for quantifying the build-up (formation) phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistic is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model developed for the build-up phase provided insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, existing techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
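
A minimal sketch of the kind of polar block-processing metric described above (not the thesis implementation; the interpolation grid, block size and choice of per-block variance as the statistic are illustrative) resamples a Placido-ring image into polar coordinates, where the rings become quasi-straight lines, and scores pattern regularity:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(image, centre, n_radii=128, n_angles=256, block=16):
    """Resample a Placido-disk image to polar coordinates (rings become
    quasi-straight lines), then score surface regularity as the mean
    intensity variance over rectangular blocks."""
    cy, cx = centre
    radii = np.linspace(5.0, min(cy, cx) - 5.0, n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    coords = np.array([cy + r * np.sin(a), cx + r * np.cos(a)])
    polar = map_coordinates(image, coords, order=1)

    # Block statistics over the polar image
    h = (polar.shape[0] // block) * block
    w = (polar.shape[1] // block) * block
    blocks = polar[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3)).mean()

# Synthetic concentric-ring image; a degraded tear film would distort the rings
y, x = np.mgrid[0:256, 0:256]
rings = 0.5 + 0.5 * np.cos(0.4 * np.hypot(y - 128.0, x - 128.0))
print(f"TFSQ-style metric (smooth rings): {polar_block_metric(rings, (128, 128)):.4f}")
```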

Relevance:

20.00%

Publisher:

Abstract:

A Wireless Sensor Network (WSN) is a set of sensors integrated with a physical environment. These sensors are small in size and capable of sensing physical phenomena and processing the resulting data. Owing to their short radio range, they communicate in a multihop manner to form an ad hoc network capable of reporting network activities to a data collection sink. Recent advances in WSNs have led to several new promising applications, including habitat monitoring, military target tracking, natural disaster relief and health monitoring. A current sensor node, such as the MICA2, uses a 16-bit, 8 MHz Texas Instruments MSP430 micro-controller with only 10 KB of RAM, 128 KB of program space and 512 KB of external flash memory to store measurement data, and is powered by two AA batteries. Due to these constrained specifications and a lack of tamper-resistant hardware, devising security protocols for WSNs is complex. Previous studies show that data transmission consumes much more energy than computation. Data aggregation can greatly help to reduce this consumption by eliminating redundant data. However, aggregators are under threat from various types of attack; among them, node compromise is usually considered one of the most challenging for the security of WSNs. In a node compromise attack, an adversary physically tampers with a node in order to extract its cryptographic secrets. This attack can be very harmful, depending on the security architecture of the network. For example, when an aggregator node is compromised, it is easy for the adversary to change the aggregation result and inject false data into the WSN. The contributions of this thesis to the area of secure data aggregation are manifold. First, we define security for data aggregation in WSNs; in contrast with existing secure data aggregation definitions, the proposed definition covers the unique characteristics of WSNs. Second, we analyze the relationship between the security services and adversarial models considered in existing secure data aggregation work, in order to provide a general framework of required security services. Third, we analyze existing cryptography-based and reputation-based secure data aggregation schemes; this analysis covers the security services provided by these schemes and their robustness against attacks. Fourth, we propose a robust reputation-based secure data aggregation scheme for WSNs. This scheme minimizes the use of heavy cryptographic mechanisms; its security advantages are realized by integrating aggregation functionalities with (i) a reputation system, (ii) estimation theory and (iii) a change detection mechanism. We show that this addition helps defend against most of the security attacks discussed in this thesis, including the On-Off attack. Finally, we propose a secure key management scheme for distributing essential pairwise and group keys among the sensor nodes. The design combines Lamport's reverse hash chain with a standard hash chain to provide both past and future key secrecy. The proposal avoids delivering the whole value of a new group key during group key updates; instead, only half of the value is transmitted from the network manager to the sensor nodes. In this way, the compromise of a pairwise key alone does not lead to the compromise of the group key. The new pairwise key in our scheme is determined by Diffie-Hellman based key agreement.
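
A minimal sketch of the reverse hash chain idea underpinning such a key management scheme (an illustration of Lamport's construction, not the thesis protocol) precomputes a chain and releases keys in reverse order, so each newly disclosed key can be verified against the previously disclosed one:

```python
import hashlib

def build_chain(seed: bytes, length: int) -> list[bytes]:
    """Precompute k_0 .. k_{n-1} with k_i = H(k_{i-1}); keys are later
    disclosed in reverse order (k_{n-1} first)."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(new_key: bytes, previous_key: bytes) -> bool:
    """A newly disclosed key is valid if hashing it yields the key
    disclosed before it (one step closer to the end of the chain)."""
    return hashlib.sha256(new_key).digest() == previous_key

chain = build_chain(b"network-manager-secret", 100)
anchor = chain[-1]                  # distributed to nodes at deployment
for key in reversed(chain[:-1]):    # keys released over successive epochs
    assert verify(key, anchor)
    anchor = key
print("all disclosed keys verified against the chain anchor")
```

Because the hash is one-way, a node holding the current anchor can authenticate each later disclosure but cannot derive keys that have not yet been released, which is what gives the scheme its future key secrecy.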

Relevance:

20.00%

Publisher:

Abstract:

Organoclays were synthesised through ion exchange of a single surfactant for sodium ions, and characterised by a range of methods including X-ray diffraction (XRD), BET analysis, X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FT-IR) and transmission electron microscopy (TEM). The change in surface properties of montmorillonite and of organoclays intercalated with the surfactant tetradecyltrimethylammonium bromide (TDTMA) was determined using XRD, through the change in basal spacing and the expansion caused by adsorbed p-nitrophenol. Changes in interlayer spacing were observed by TEM. In addition, specific surface area and pore volume were measured and calculated using the BET method; the results suggest that the surfactant loading is highly important in determining the sorption mechanism onto organoclays. The XPS results provided the chemical composition of montmorillonite and the organoclays, and high-resolution XPS spectra gave the chemical states of the prepared organoclays via binding energies. TGA and FT-IR were used to confirm the intercalation of the surfactant. The data collected from these various techniques enable an understanding of the changes in structure and surface properties. This study is of importance in providing mechanisms for the adsorption of organic molecules, especially at contaminated environmental sites and in polluted waters.
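
For reference, the basal spacing reported from XRD follows directly from Bragg's law, d = n*lambda / (2*sin(theta)). A minimal sketch (the 001 peak positions are illustrative values typical of sodium montmorillonite and a TDTMA organoclay, not the study's measurements; Cu K-alpha radiation is assumed) converts a reflection angle to a d-spacing:

```python
import math

def basal_spacing_angstrom(two_theta_deg, wavelength_angstrom=1.5406, order=1):
    """Bragg's law: d = n * lambda / (2 * sin(theta)), with 2-theta in degrees."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# Illustrative 001 peak positions (degrees 2-theta, Cu K-alpha)
for label, two_theta in [("montmorillonite", 7.1), ("TDTMA organoclay", 4.9)]:
    print(f"{label}: d(001) = {basal_spacing_angstrom(two_theta):.2f} A")
```

The shift of the 001 peak to lower angles after intercalation corresponds to the increase in basal spacing as the surfactant props the layers apart.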

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION. Following anterior thoracoscopic instrumentation and fusion for the treatment of thoracic AIS, implant-related complications have been reported in as many as 20.8% of cases. Currently, the magnitudes of the forces applied to the spine during anterior scoliosis surgery are unknown. The aim of this study was to measure the segmental compressive forces applied during anterior single rod instrumentation in a series of adolescent idiopathic scoliosis patients. METHODS. A force transducer was designed, constructed and retrofitted to a surgical cable compression tool routinely used to apply segmental compression during anterior scoliosis correction. Transducer output was continuously logged during the compression of each spinal joint, and the output at completion was converted to an applied compression force using calibration data. The angle between adjacent vertebral body screws was also measured on intra-operative frontal plane fluoroscope images taken both before and after each joint compression; the difference in angle between the two images was calculated as an estimate of the correction achieved at each spinal joint. RESULTS. Force measurements were obtained for 15 scoliosis patients (aged 11–19 years) with single thoracic curves (Cobb angles 47°–67°). In total, 95 spinal joints were instrumented. The average force applied to a single joint was 540 N (±229 N), ranging between 88 N and 1018 N. Experimental error in the force measurement, determined from transducer calibration, was ±43 N. A trend toward higher forces at joints close to the apex of the scoliosis was observed. The average joint correction angle measured by fluoroscope imaging was 4.8° (±2.6°; range 0°–12.6°). CONCLUSION. This study has quantified, in vivo, the intra-operative correction forces applied by the surgeon during anterior single rod instrumentation. These data provide a useful contribution towards an improved understanding of the biomechanics of scoliosis correction. In particular, they will be used as input for developing patient-specific finite element simulations of scoliosis correction surgery.
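
A minimal sketch of the post-processing implied by the methods above (the calibration coefficients, transducer outputs and screw angles are invented for illustration, not the study's data) converts logged transducer output to force via a linear calibration and estimates per-joint correction from the before/after fluoroscope angles:

```python
import numpy as np

# Hypothetical linear calibration from bench loading of the transducer:
# force (N) = gain * output (mV) + offset, with fit uncertainty ~ +/- 43 N
GAIN_N_PER_MV = 12.5
OFFSET_N = -8.0

def output_to_force(output_mv):
    """Convert logged transducer output (mV) to applied compression force (N)."""
    return GAIN_N_PER_MV * np.asarray(output_mv) + OFFSET_N

# Logged outputs at completion of compression for five joints (mV)
outputs = [35.2, 51.8, 78.4, 60.1, 42.7]
forces = output_to_force(outputs)

# Screw angles (degrees) from frontal fluoroscope images, before and after
before = np.array([14.2, 12.8, 10.5, 11.9, 13.4])
after = np.array([9.1, 7.6, 3.2, 6.8, 10.0])
correction = before - after

for f, c in zip(forces, correction):
    print(f"applied force {f:6.1f} N -> correction {c:4.1f} deg")
```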