460 results for Magnitude


Relevance: 10.00%

Abstract:

From 27 January to 8 February during the summer of 2009, southern Australia experienced one of the nation's most severe heatwaves. Governments, councils, utilities, hospitals, emergency response organisations and the community were largely underprepared for an extreme event of this magnitude. This case study targets the experience and challenges faced by decision makers and policy makers and focuses on the major metropolitan areas affected by the heatwave: Melbourne and Adelaide. The study examines the 2009 heatwave's characteristics; its impacts (on human health, infrastructure and human services); the degree of adaptive capacity (vulnerability and resilience) of various sectors, communities and individuals; and the reactive responses of government, emergency and associated services, and their effectiveness. Barriers and challenges to adaptation and increasing resilience are also identified, and further areas for research are suggested. This study does not include details of the heatwave's effects beyond Victoria and South Australia, its economic impacts, or Victoria's 'Black Saturday' bushfires.

Relevance: 10.00%

Abstract:

Through international agreement to the United Nations Framework Convention on Climate Change and the Kyoto Protocol, the global community has acknowledged that climate change is a global problem and sought to achieve reductions in global emissions, within a sufficient timeframe, to avoid dangerous anthropogenic interference with the climate system. The sheer magnitude of emissions reductions required within such an urgent timeframe presents a challenge to conventional regulatory approaches, both internationally and within Australia. The phenomenon of climate change is temporally and geographically challenging, and it is scientifically complex and uncertain. The purpose of this paper is to analyse the current Australian legal response to climate change and to examine the legal measures which have been proposed to promote carbon trading, energy efficiency, renewable energy, and carbon sequestration initiatives across Australia. As this paper illustrates, the current Australian approach is clearly ineffective, and the law as it stands is overwhelmingly inadequate to address Australia's emissions and meet the enormity of the challenges posed by climate change. Consequently, the government should look towards a more effective legal framework to achieve rapid and urgent transformations in the selection of energy sources, energy use and sequestration initiatives across the Australian community.

Relevance: 10.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. 
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. 
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. 
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to an application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths. 
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
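The oscillation-counting approach to temperature measurement described above can be sketched in a few lines; the calibration constant (temperature change per full oscillation) is a hypothetical value here, since the thesis's actual figure for lithium niobate is not given in this abstract.

```python
import numpy as np

def temperature_change(intensity, dt_per_oscillation):
    """Estimate the temperature change of a birefringent crystal from
    the intensity trace of a probe beam recorded during heating.

    Each full intensity oscillation corresponds to a fixed temperature
    step (dt_per_oscillation), a calibration constant that depends on
    the crystal and wavelength -- assumed known for this sketch.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()
    # Count zero crossings of the mean-removed trace; two crossings
    # make one full oscillation.
    crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))
    return (crossings / 2) * dt_per_oscillation

# Synthetic trace with 6 full oscillations and an assumed calibration
# of 0.8 degrees C per oscillation -> ~4.8 degrees C total change.
t = np.linspace(0, 1, 2000)
trace = np.cos(2 * np.pi * 6 * t)
print(temperature_change(trace, 0.8))
```

A real trace would be noisier, so peak counting or a phase-unwrapping fit would be more robust than raw zero crossings; the principle is the same.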

Relevance: 10.00%

Abstract:

Increases in atmospheric concentrations of the greenhouse gases (GHGs) carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) due to human activities have been linked to climate change. GHG emissions from land use change and agriculture have been identified as significant contributors to both Australia's and the global GHG budget. This is expected to increase over the coming decades as rates of agricultural intensification and land use change accelerate to support population growth and food production. Limited data exists on CO2, CH4 and N2O trace gas fluxes from subtropical or tropical soils and land uses. To develop effective mitigation strategies, a full global warming potential (GWP) accounting methodology is required that includes emissions of the three primary greenhouse gases. Mitigation strategies that focus on one gas only can inadvertently increase emissions of another. For this reason, detailed inventories of GHGs from soils and vegetation under individual land uses are urgently required for subtropical Australia. This study aimed to quantify GHG emissions over two consecutive years from three major land uses: a well-established, unfertilized subtropical grass-legume pasture; a 30-year-old lychee orchard; and a remnant subtropical gallery rainforest, all located near Mooloolah, Queensland. GHG fluxes were measured using a combination of high resolution automated sampling, coarser spatial manual sampling and laboratory incubations. Comparison between the land uses revealed that land use change can have a substantial impact on the GWP of a landscape long after the deforestation event. The conversion of rainforest to agricultural land resulted in as much as a 17-fold increase in GWP, from 251 kg CO2 eq. ha-1 yr-1 in the rainforest, to 889 kg CO2 eq. ha-1 yr-1 in the pasture, to 2538 kg CO2 eq. ha-1 yr-1 in the lychee plantation. 
This increase resulted from altered N cycling and a reduction in the aerobic capacity of the soil in the pasture and lychee systems, enhancing denitrification and nitrification events, and reducing atmospheric CH4 uptake in the soil. High infiltration, drainage and subsequent soil aeration under the rainforest limited N2O loss, as well as promoting CH4 uptake of 11.2 g CH4-C ha-1 day-1. This was among the highest reported for rainforest systems, indicating that aerated subtropical rainforests can act as a substantial sink of CH4. Interannual climatic variation resulted in significantly higher N2O emissions from the pasture during 2008 (5.7 g N2O-N ha-1 day-1) compared to 2007 (3.9 g N2O-N ha-1 day-1), despite receiving nearly 500 mm less rainfall. Nitrous oxide emissions from the pasture were highest during the summer months and were highly episodic, related more to the magnitude and distribution of rain events than to soil moisture alone. Mean N2O emissions from the lychee plantation increased from an average of 4.0 g N2O-N ha-1 day-1 to 19.8 g N2O-N ha-1 day-1 following a split application of N fertilizer (560 kg N ha-1, equivalent to 1 kg N tree-1). The timing of the split application was found to be critical to N2O emissions, with over twice as much lost following an application in spring (emission factor (EF): 1.79%) compared to autumn (EF: 0.91%). This was attributed to the hot and moist climatic conditions and a reduction in plant N uptake during the spring creating conditions conducive to N2O loss. These findings demonstrate that land use change in subtropical Australia can be a significant source of GHGs. Moreover, the study shows that modifying the timing of fertilizer application can be an efficient way of reducing GHG emissions from subtropical horticulture.
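The full-GWP accounting this study calls for is, at its core, a unit-conversion exercise: per-gas fluxes are scaled by molecular mass and a GWP factor, then summed as CO2-equivalents. The sketch below uses IPCC AR4 100-year GWP factors (25 for CH4, 298 for N2O), which are general conventions rather than figures from this study, and an illustrative combination of fluxes rather than the study's actual budget (which also includes CO2 fluxes).

```python
# 100-year GWP factors from IPCC AR4 (assumed; later reports revise them).
GWP_CH4, GWP_N2O = 25.0, 298.0

def annual_co2_eq(n2o_n_g_ha_day, ch4_c_uptake_g_ha_day):
    """Return net kg CO2-eq per hectare per year.

    n2o_n_g_ha_day:        N2O emission as g N2O-N ha-1 day-1
    ch4_c_uptake_g_ha_day: CH4 uptake as g CH4-C ha-1 day-1 (a sink)
    """
    n2o = n2o_n_g_ha_day * (44.0 / 28.0)          # N2O-N -> full N2O mass
    ch4 = ch4_c_uptake_g_ha_day * (16.0 / 12.0)   # CH4-C -> full CH4 mass
    daily_g = n2o * GWP_N2O - ch4 * GWP_CH4       # uptake offsets emissions
    return daily_g * 365.0 / 1000.0               # g/day -> kg/yr

# Rainforest-like figures: modest N2O loss (1.0 g N2O-N ha-1 day-1,
# invented for illustration) against the reported CH4 uptake of
# 11.2 g CH4-C ha-1 day-1.
print(round(annual_co2_eq(1.0, 11.2), 1))  # -> 34.7
```

The sign convention makes the trade-off explicit: a land use change that both raises N2O emissions and suppresses CH4 uptake is penalised twice, which is why single-gas mitigation strategies can mislead.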

Relevance: 10.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with the minimal cross sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. 
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r2 = 0.98, experimental and r2 = 0.94, theoretical). 
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but with different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. 
Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180–250 s) is consistently larger than that determined for control subjects (τ = 50–130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. 
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
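Since the diagnostic quantity throughout this work is the time decay constant τ, it may help to see how such a constant is typically extracted from a deceleration-phase trace. The following is a minimal sketch assuming a single-exponential model with a known settled impedance, not the thesis's actual fitting procedure.

```python
import numpy as np

def fit_decay_constant(t, z, z_inf):
    """Estimate the time decay constant tau from impedance samples
    during the deceleration phase, assuming the model
        z(t) = z_inf + A * exp(-t / tau),
    where z_inf (the settled impedance) is assumed known. A log-linear
    least-squares fit of the residual then recovers tau.
    """
    y = np.log(np.asarray(z) - z_inf)   # log residual decays linearly in t
    slope, _intercept = np.polyfit(t, y, 1)
    return -1.0 / slope

# Synthetic deceleration-phase trace with tau = 30 s (within the
# 10-50 s range reported for the rigid-tube experiments).
t = np.linspace(0, 60, 200)
z = 100.0 + 5.0 * np.exp(-t / 30.0)
print(round(fit_decay_constant(t, z, 100.0), 1))  # -> 30.0
```

With measurement noise, a nonlinear fit that also estimates z_inf would be preferable, but the log-linear form makes the meaning of τ transparent.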

Relevance: 10.00%

Abstract:

For the first time in human history, large volumes of spoken audio are being broadcast, made available on the internet, archived, and monitored for surveillance every day. New technologies are urgently required to unlock these vast and powerful stores of information. Spoken Term Detection (STD) systems provide access to speech collections by detecting individual occurrences of specified search terms. The aim of this work is to develop improved STD solutions based on phonetic indexing. In particular, this work aims to develop phonetic STD systems for applications that require open-vocabulary search, fast indexing and search speeds, and accurate term detection. Within this scope, novel contributions are made within two research themes: accommodating phone recognition errors, and modelling uncertainty with probabilistic scores. A state-of-the-art Dynamic Match Lattice Spotting (DMLS) system is used to address the problem of accommodating phone recognition errors with approximate phone sequence matching. Extensive experimentation on the use of DMLS is carried out and a number of novel enhancements are developed that provide for faster indexing, faster search, and improved accuracy. Firstly, a novel comparison of methods for deriving a phone error cost model is presented to improve STD accuracy, resulting in up to a 33% improvement in the Figure of Merit. A method is also presented for drastically increasing the speed of DMLS search by at least an order of magnitude with no loss in search accuracy. An investigation is then presented of the effects of increasing indexing speed for DMLS, by using simpler modelling during phone decoding, with results highlighting the trade-off between indexing speed, search speed and search accuracy. The Figure of Merit is further improved by up to 25% using a novel proposal to utilise word-level language modelling during DMLS indexing. 
Analysis shows that this use of language modelling can, however, be unhelpful or even disadvantageous for terms with a very low language model probability. The DMLS approach to STD involves generating an index of phone sequences using phone recognition. An alternative approach to phonetic STD is also investigated that instead indexes probabilistic acoustic scores in the form of a posterior-feature matrix. A state-of-the-art system is described and its use for STD is explored through several experiments on spontaneous conversational telephone speech. A novel technique and framework is proposed for discriminatively training such a system to directly maximise the Figure of Merit. This results in a 13% improvement in the Figure of Merit on held-out data. The framework is also found to be particularly useful for index compression in conjunction with the proposed optimisation technique, providing for a substantial index compression factor in addition to an overall gain in the Figure of Merit. These contributions significantly advance the state-of-the-art in phonetic STD, by improving the utility of such systems in a wide range of applications.
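The approximate phone-sequence matching at the heart of DMLS can be illustrated with a weighted edit distance: a target phone sequence is scored against a decoded sequence, and a hit is declared when the cost falls below a threshold. The cost values below are invented for illustration; they are not the trained phone error cost model described above.

```python
def match_cost(target, decoded, sub_cost=None, ins_cost=1.0, del_cost=1.0):
    """Weighted Levenshtein distance between two phone sequences.

    sub_cost maps (target_phone, decoded_phone) pairs to substitution
    costs; unknown pairs default to 1.0. Lower costs for acoustically
    confusable pairs let likely recognition errors through cheaply.
    """
    sub_cost = sub_cost or {}
    n, m = len(target), len(decoded)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + del_cost
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = 0.0 if target[i - 1] == decoded[j - 1] else \
                sub_cost.get((target[i - 1], decoded[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + del_cost,     # deletion
                          d[i][j - 1] + ins_cost,     # insertion
                          d[i - 1][j - 1] + s)        # match/substitution
    return d[n][m]

# Hypothetical confusable pair: /p/ misrecognised as /b/ is cheap.
confusable = {("p", "b"): 0.3}
print(match_cost(["k", "ae", "p"], ["k", "ae", "b"], confusable))  # -> 0.3
```

In a real DMLS system the search is performed over a lattice of phone sequences rather than a single decoded string, and the costs come from a trained error model, but the dynamic-programming core is the same.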

Relevance: 10.00%

Abstract:

Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the executive components of response organization and execution. Here we use the numerical Stroop paradigm (NSP) and ERPs to study possible executive interference in numerical processing tasks in 6–8-year-old children. In the NSP, the numerical magnitude of the digits is task-relevant and the physical size of the digits is task-irrelevant. We show that younger children are highly susceptible to interference from irrelevant physical information such as digit size, but that access to the numerical representation is almost as fast in young children as in adults. We argue that the developmental trajectories for executive function and numerical processing may act together to determine numerical development in young children.
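The interference measure underlying the NSP can be made concrete with a toy computation: each trial pairs two digits whose numerical magnitude (task-relevant) and physical size (task-irrelevant) either agree (congruent) or conflict (incongruent), and interference is the slowing on incongruent trials. The trial data below are invented for illustration.

```python
def interference_ms(trials):
    """Mean reaction-time cost of incongruent trials in a numerical
    Stroop task.

    trials: list of (congruent, rt_ms) pairs, where congruent is True
    when the numerically larger digit is also physically larger.
    """
    cong = [rt for congruent, rt in trials if congruent]
    incong = [rt for congruent, rt in trials if not congruent]
    return sum(incong) / len(incong) - sum(cong) / len(cong)

# Hypothetical child data: a large interference effect would indicate
# susceptibility to the irrelevant physical-size dimension.
trials = [(True, 520), (True, 540), (False, 610), (False, 630)]
print(interference_ms(trials))  # -> 90.0
```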

Relevance: 10.00%

Abstract:

This paper sets out to examine, from published literature and crash data analyses, whether alcohol in bicycle crashes is an issue about which we should be concerned. It discusses factors that have the potential to increase the number of bicycle crashes in which alcohol is involved (such as growth in the size and diversity of the cyclist population, and the balance and coordination demands of cycling) and factors which may reduce the importance of alcohol in bicycle crashes (such as time-of-day factors and child riders). It also examines data availability issues that contribute to difficulties in determining the true magnitude of the issue. Methods: This paper reviews previous research and reports analyses of data from Queensland, Australia, that examine the role of alcohol in police-reported road crashes. In Queensland it is an offence to ride a bicycle or drive a motor vehicle with a BAC exceeding 0.05% (or lower for novice and professional drivers). Results: In the five years 2003-2007, alcohol was reported as involved in 165 bicycle crashes (4%). The bicycle rider was coded as "under the influence" or "over the prescribed BAC limit" in 15 single-unit crashes (12%). In multi-vehicle bicycle crashes, alcohol involvement was reported for 16 cyclists (0.4%) and 110 operators of other vehicles (3%). Additional analyses, including characteristics of the cyclist crashes involving alcohol and the importance of missing data, will be discussed in the paper. Conclusion: The increase in participation in cycling and the vulnerability of cyclists to injuries support the need to examine the role of alcohol in bicycle crashes. Current data suggest that alcohol on the part of the vehicle driver is a larger concern than alcohol on the part of the cyclist, but improvements in data collection are needed before more precise conclusions can be drawn.

Relevance: 10.00%

Abstract:

For several reasons, the Fourier phase domain is less favored than the magnitude domain in signal processing and modeling of speech. To correctly analyze the phase, several factors must be considered and compensated for, including the effects of the step size, the windowing function and other processing parameters. Building on a review of these factors, this paper investigates a spectral representation based on the Instantaneous Frequency Deviation, but in which the step size between processing frames is used in calculating phase changes, rather than the traditional single-sample interval. Reflecting these longer intervals, the term delta-phase spectrum is used to distinguish this from instantaneous derivatives. Experiments show that mel-frequency cepstral coefficient features derived from the delta-phase spectrum (termed Mel-Frequency Delta-Phase features) can produce broadly similar performance to equivalent magnitude domain features for both voice activity detection and speaker recognition tasks. Further, it is shown that the fusion of the magnitude and phase representations yields performance benefits over either in isolation.
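The step-size-compensated phase change described above can be sketched as follows: take the frame-to-frame STFT phase difference and subtract the phase advance each bin is expected to accumulate over one hop. The frame length, hop and window below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def delta_phase_spectrum(x, frame_len=256, hop=128):
    """Sketch of a delta-phase spectrum: frame-to-frame STFT phase
    change, compensated for the linear-phase advance a bin-centre
    sinusoid accumulates over one hop (2*pi*k*hop/N), then wrapped
    to (-pi, pi].
    """
    win = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * win
              for i in range(0, len(x) - frame_len, hop)]
    spec = np.fft.rfft(np.array(frames), axis=1)
    phase = np.angle(spec)
    k = np.arange(spec.shape[1])
    expected = 2 * np.pi * k * hop / frame_len   # expected advance per hop
    dphi = np.diff(phase, axis=0) - expected     # deviation from it
    return np.angle(np.exp(1j * dphi))           # wrap to (-pi, pi]

# Sanity check: a pure tone at an exact bin centre should show
# (near-)zero delta-phase in that bin.
fs, f = 8000, 1000.0
t = np.arange(fs) / fs
dp = delta_phase_spectrum(np.sin(2 * np.pi * f * t))
bin_idx = int(f * 256 / fs)  # bin 32
print(abs(dp[:, bin_idx]).max() < 1e-6)  # -> True
```

Conventional MFCC extraction would then replace the magnitude spectrum with this delta-phase representation before the mel filterbank and cepstral transform.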

Relevance: 10.00%

Abstract:

Large igneous provinces (LIPs) are sites of the most frequently recurring, largest volume basaltic and silicic eruptions in Earth history. These large-volume (>1000 km³ dense rock equivalent) and large-magnitude (>M8) eruptions produce areally extensive (10⁴–10⁵ km²) basaltic lava flow fields and silicic ignimbrites that are the main building blocks of LIPs. Available information on the largest eruptive units comes primarily from the Columbia River and Deccan provinces for the dimensions of flood basalt eruptions, and the Paraná–Etendeka and Afro-Arabian provinces for the silicic ignimbrite eruptions. In addition, three large-volume (675–2000 km³) silicic lava flows have also been mapped out in the Proterozoic Gawler Range province (Australia), an interpreted LIP remnant. Magma volumes of >1000 km³ have also been emplaced as high-level basaltic and rhyolitic sills in LIPs. The data sets indicate comparable eruption magnitudes between the basaltic and silicic eruptions, but due to considerable volumes residing as co-ignimbrite ash deposits, the current volume constraints for the silicic ignimbrite eruptions may be considerably underestimated. Magma composition thus appears to be no barrier to the volume of magma emitted during an individual eruption. Despite this general similarity in magnitude, flood basaltic and silicic eruptions are very different in terms of eruption style, duration, intensity, vent configuration, and emplacement style. Flood basaltic eruptions are dominantly effusive and Hawaiian–Strombolian in style, with magma discharge rates of ~10⁶–10⁸ kg s⁻¹ and eruption durations estimated at years to tens of years, emplacing dominantly compound pahoehoe lava flow fields. Effusive and fissural eruptions have also emplaced some large-volume silicic lavas, but discharge rates are unknown, and may be up to an order of magnitude greater than those of flood basalt lava eruptions for emplacement to be on realistic time scales (<10 years). 
Most silicic eruptions, however, are moderately to highly explosive, producing co-current pyroclastic fountains (rarely Plinian) with discharge rates of 10⁹–10¹¹ kg s⁻¹ that emplace welded to rheomorphic ignimbrites. At present, durations for the large-magnitude silicic eruptions are unconstrained; at discharge rates of 10⁹ kg s⁻¹, equivalent to the peak of the 1991 Mt Pinatubo eruption, the largest silicic eruptions would take many months to evacuate >5000 km³ of magma. The generally simple deposit structure is more suggestive of short-duration (hours to days) and high-intensity (~10¹¹ kg s⁻¹) eruptions, perhaps with hiatuses in some cases. These extreme discharge rates would be facilitated by multiple point, fissure and/or ring fracture venting of magma. Eruption frequencies are much elevated for large-magnitude eruptions of both magma types during LIP-forming episodes. However, in basalt-dominated provinces (continental and ocean basin flood basalt provinces, oceanic plateaus, volcanic rifted margins), large-magnitude (>M8) basaltic eruptions have much shorter recurrence intervals of 10³–10⁴ years, whereas similar magnitude silicic eruptions may have recurrence intervals of up to 10⁵ years. The Paraná–Etendeka province was the site of at least nine >M8 silicic eruptions over an ~1 Myr period at ~132 Ma; a similar eruption frequency, although with fewer silicic eruptions, is also observed for the Afro-Arabian province. The huge volumes of basaltic and silicic magma erupted in quick succession during LIP events raise several unresolved issues in terms of the locus of magma generation and storage (if any) in the crust prior to eruption, and the paths and rates of ascent from magma reservoirs to the surface.
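The duration arithmetic behind the "many months" versus "hours to days" contrast can be checked directly. The magma density used below is an assumed typical value for silicic magma, not a figure from the paper.

```python
# Back-of-envelope eruption duration: mass of erupted magma divided by
# mass discharge rate. Density of 2300 kg/m^3 is an assumed value for
# silicic magma.

def eruption_duration_days(volume_km3, discharge_kg_s, density=2300.0):
    mass_kg = volume_km3 * 1e9 * density   # 1 km^3 = 1e9 m^3
    return mass_kg / discharge_kg_s / 86400.0

# >5000 km^3 at the Pinatubo-peak rate of 1e9 kg/s: ~133 days,
# i.e. the "many months" quoted above.
print(round(eruption_duration_days(5000, 1e9)))   # -> 133
# The same volume at the inferred high intensity of ~1e11 kg/s:
# roughly a day, consistent with "hours to days".
print(round(eruption_duration_days(5000, 1e11)))  # -> 1
```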

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Cell-based therapies for bone regeneration are an exciting emerging technology, but the availability of osteogenic cells is limited and an ideal cell source has not been identified. Amniotic fluid-derived stem (AFS) cells and bone marrow-derived mesenchymal stem cells (MSCs) were compared to determine their osteogenic differentiation capacity in both 2D and 3D environments. In 2D culture, the AFS cells produced more mineralized matrix but delayed peaks in osteogenic markers. Cells were also cultured on 3D scaffolds constructed of poly-ε-caprolactone for 15 weeks. MSCs differentiated more quickly than AFS cells on 3D scaffolds, but mineralized matrix production slowed considerably after 5 weeks. In contrast, the rate of AFS cell mineralization continued to increase out to 15 weeks, at which time AFS constructs contained 5-fold more mineralized matrix than MSC constructs. Therefore, cell source should be taken into consideration in cell therapy applications: MSCs would be a good choice for immediate matrix production, whereas AFS cells would continue robust mineralization for an extended period of time. This study demonstrates that stem cell source can dramatically influence the magnitude and rate of osteogenic differentiation in vitro.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The Queensland Department of Main Roads uses Weigh-in-Motion (WiM) devices to covertly monitor (at highway speed) axle mass, axle configurations and speed of heavy vehicles on the road network. Such data is critical for the planning and design of the road network. Some of the data appears excessively variable. The current work considers the nature, magnitude and possible causes of WiM data variability. Over fifty possible causes of variation in WiM data have been identified in the literature. Data exploration has highlighted five basic types of variability, specifically:
• cycling, both diurnal and annual;
• consistent but unreasonable data;
• data jumps;
• variations between data from opposite sides of the one road; and
• non-systematic variations.
This work is part of wider research into procedures to eliminate or mitigate the influence of WiM data variability.
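As a rough illustration of how two of these variability types might be screened for, the following sketch flags diurnal cycling and day-on-day data jumps in a mass series. It is a hypothetical procedure, not the Department's actual method; the data layout and thresholds are assumptions.

```python
# Hypothetical screening of a WiM axle-mass series for two variability types
# listed above: diurnal cycling and abrupt data jumps. Thresholds are assumed.
from statistics import mean

def hourly_profile(records):
    """records: list of (hour_of_day, axle_mass_t). Returns mean mass per hour."""
    by_hour = {}
    for hour, mass in records:
        by_hour.setdefault(hour, []).append(mass)
    return {h: mean(v) for h, v in sorted(by_hour.items())}

def has_diurnal_cycle(records, rel_threshold=0.1):
    """Flag cycling if the hourly means spread more than rel_threshold of the overall mean."""
    profile = hourly_profile(records)
    overall = mean(m for _, m in records)
    spread = max(profile.values()) - min(profile.values())
    return spread / overall > rel_threshold

def find_jumps(daily_means, rel_threshold=0.15):
    """Indices where the daily mean steps by more than rel_threshold day-on-day."""
    return [i for i in range(1, len(daily_means))
            if abs(daily_means[i] - daily_means[i - 1]) / daily_means[i - 1] > rel_threshold]
```

A series with lighter night-time traffic would trip `has_diurnal_cycle`, while a sudden calibration shift would surface as an index in `find_jumps`.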

Relevância:

10.00% 10.00%

Publicador:

Resumo:

A review of the literature related to issues involved in irrigation-induced agricultural development (IIAD) reveals that: (1) the magnitude, sensitivity and distribution of the social welfare effects of IIAD are not fully analysed; (2) the impacts of excessive pesticide use on farmers' health are not adequately explained; (3) no analysis estimates the relationship between farm-level efficiency and overuse of agro-chemical inputs under imperfect markets; and (4) the method of incorporating groundwater extraction costs is misleading. This PhD thesis investigates these issues by using primary data, along with secondary data from Sri Lanka. The overall findings of the thesis can be summarised as follows. First, the thesis demonstrates that Sri Lanka has gained a positive welfare change as a result of introducing new irrigation technology. The change in the consumer surplus is Rs. 48,236 million, while the change in the producer surplus is Rs. 14,274 million between 1970 and 2006. The results also show that the long-run benefits and costs of IIAD depend critically on the magnitude of the expansion of the irrigated area, as well as the competition faced by traditional farmers (agricultural crowding-out effects). The traditional sector's ability to compete with the modern sector depends on productivity improvements, reducing production costs and future structural changes (spillover effects). Second, the thesis findings on pesticides used in agriculture show that, on average, a farmer incurs a cost of approximately Rs. 590 to 800 per month during a typical cultivation period due to exposure to pesticides. The value of the average loss in earnings per farmer for the 'hospitalised' sample is Rs. 475 per month, while it is approximately Rs. 345 per month for the 'general' farmers group during a typical cultivation season. However, the average willingness to pay (WTP) to avoid exposure to pesticides is approximately Rs. 950 and Rs. 620 for the 'hospitalised' and 'general' farmers' samples respectively. The estimated percentage contributions to WTP from health costs, lost earnings, mitigating expenditure and disutility are 29, 50, 5 and 16 per cent respectively for 'hospitalised' farmers, and 32, 55, 8 and 5 per cent respectively for 'general' farmers. It is also shown that, given market imperfections for most agricultural inputs, farmers are overusing pesticides in the expectation of higher future returns. This has led to an increase in inefficiency in farming practices that is not recognised by the farmers. Third, it is found that various groundwater depletion studies in the economics literature have provided misleading optimal water extraction quantities, owing to a failure to incorporate all production costs in the relevant models. It is only by incorporating quality changes alongside quantity deterioration that socially optimal levels can be derived. Empirical results clearly show that the benefits per hectare per month, considering both the avoided costs of deepening agro-wells by five feet from the existing average and the avoided costs of maintaining the water salinity level at 1.8 mmhos/cm, are approximately Rs. 4,350 for farmers in the Anuradhapura district and Rs. 5,600 for farmers in the Matale district.
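The WTP decomposition reported above can be checked for internal consistency with straightforward arithmetic. The sketch below (function and variable names are illustrative, not from the thesis) recovers the rupee value of each component from the stated shares:

```python
# Reconstruct rupee values of the WTP components from the stated percentage shares.
def wtp_components(total_wtp, shares):
    """shares: dict of component -> percentage; returns rupee values per component."""
    assert abs(sum(shares.values()) - 100) < 1e-9  # shares must total 100%
    return {k: total_wtp * pct / 100.0 for k, pct in shares.items()}

hospitalised = wtp_components(950, {"health costs": 29, "lost earnings": 50,
                                    "mitigating expenditure": 5, "disutility": 16})
general = wtp_components(620, {"health costs": 32, "lost earnings": 55,
                               "mitigating expenditure": 8, "disutility": 5})

print(hospitalised["lost earnings"])    # 475.0, matching the Rs. 475/month figure
print(round(general["lost earnings"]))  # 341, close to the Rs. 345/month figure
```

Note that 50% of the Rs. 950 WTP reproduces the Rs. 475 lost-earnings figure exactly, which supports the internal consistency of the reported decomposition.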

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Adequate blood supply and sufficient mechanical stability are necessary for timely fracture healing. Damage to vessels impairs blood supply, hindering the transport of oxygen, an essential metabolite for the cells involved in repair. The degree of mechanical stability determines the mechanical conditions in the healing tissues; these conditions can influence tissue differentiation and may also inhibit revascularization. Knowledge of the actual conditions in a healing fracture in vivo is extremely limited. This study aimed to quantify the pressure, oxygen tension and temperature in the external callus during the early phase of bone healing. Six Merino-mix sheep underwent a tibial osteotomy. The tibia was stabilized with a standard mono-lateral external fixator. A multi-parameter catheter was placed adjacent to the osteotomy gap on the medial aspect of the tibia. Measurements of oxygen tension and temperature were performed for ten days post-operatively, and measurements of pressure were performed during gait on days three and seven. The ground reaction force and the interfragmentary movements (IFM) were measured simultaneously. The maximum pressure during gait increased (p=0.028) from day three (41.3 [29.2-44.1] mm Hg) to day seven (71.8 [61.8-84.8] mm Hg). During the same interval, there was no change (p=0.92) in the peak ground reaction force or in the interfragmentary movement (compression: p=0.59; axial rotation: p=0.11). Oxygen tension in the haematoma (74.1 mm Hg [68.6-78.5]) was initially high post-operatively and decreased steadily over the first five days. The temperature increased over the first four days before reaching a plateau at approximately 38.5 °C on day four. This study is the first to report pressure, oxygen tension and temperature in the early callus tissues. The magnitude of pressure increased even though weight bearing and IFM remained unchanged. Oxygen tensions were initially high in the haematoma and fell gradually, with a low-oxygen environment first established after four to five days. This study illustrates that in bone healing the local environment for cells cannot be considered constant with regard to oxygen tension, pressure and temperature.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Bone loss may result from remodelling initiated by implant stress protection. Quantifying remodelling requires bone density distributions, which can be obtained from computed tomography (CT) scans. Pre-operative scans of large animals, however, are rarely possible. This study aimed to determine whether the contra-lateral bone is a suitable control for the purpose of quantifying bone remodelling. CT scans of 8 pairs of ovine tibiae were used to determine the likeness of left and right bones. The deviation between the outer surfaces of the bone pairs was used to quantify geometric similarity. The density differences were determined by dividing the bones into discrete volumes along the shaft of the tibia. Density differences were also determined for fractured and contra-lateral bone pairs to determine the magnitude of implant-related remodelling. Left and right ovine tibiae were found to have a high degree of similarity, with differences of less than 1.0 mm in the outer surface deviation and density differences of less than 5% in over 90% of the shaft region. The density differences (10–40%) resulting from implant-related bone remodelling were greater than the left-right differences. Therefore, for the purpose of quantifying bone remodelling in sheep, the contra-lateral tibia may be considered an alternative to a pre-operative control.
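The left-right density comparison could be carried out along the lines of the following sketch, an illustrative reconstruction with assumed segment densities, not the study's actual pipeline:

```python
# Illustrative left-right comparison: divide each shaft into discrete volumes
# and report the percentage density difference per segment. Values are assumed.
def segment_density_diff(left_densities, right_densities):
    """Per-segment % difference relative to the left bone's density."""
    return [abs(l - r) / l * 100.0 for l, r in zip(left_densities, right_densities)]

def fraction_within(diffs_pct, limit_pct=5.0):
    """Fraction of shaft segments whose difference is at or under limit_pct."""
    return sum(d <= limit_pct for d in diffs_pct) / len(diffs_pct)

left = [1.10, 1.12, 1.15, 1.18, 1.20]   # g/cm^3, illustrative segment densities
right = [1.08, 1.13, 1.14, 1.19, 1.21]
diffs = segment_density_diff(left, right)
print(fraction_within(diffs))  # 1.0 here: all segments within 5%, like the >90% reported
```

With real scan data, `fraction_within` would correspond to the proportion of the shaft region (over 90% in the study) showing less than 5% left-right density difference.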