582 results for Feasibility Studies.
Abstract:
This paper presents a preliminary benefit analysis of the airborne GPS occultation technique for the Australian region. The simulation studies are based on current domestic commercial flights between major Australian airports. With knowledge of the GPS satellite ephemeris data, the occultation events for any particular flight can be determined. Preliminary analysis shows that high-resolution occultation observations can be achieved with this approach; for instance, about 15 occultation events occur during a Perth-to-Sydney flight. The simulation results agree with results published by other researchers for a different region. Occultation observations during off-peak hours, however, may be limited by reduced flight activity. --------- The high-resolution occultation observations obtainable from an airborne GPS occultation system provide an opportunity to improve current global numerical weather prediction (NWP) models and ultimately to improve the accuracy of weather forecasting. More intensive research efforts and experimental demonstrations are required to establish the technical feasibility of the airborne GPS technology.
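As an illustrative aside (not the authors' simulation), an occultation event can be screened geometrically: given Earth-centred (ECEF) positions of a GPS satellite and the airborne receiver, the straight-line ray between them is occulting when its point of closest approach to the Earth's centre lies between the two endpoints and below the aircraft's altitude. A minimal sketch assuming a spherical Earth, straight-line propagation and positions in km; the function names and the 12 km ceiling are illustrative choices, not from the paper:

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius, km

def tangent_height(sat_pos, rx_pos):
    """Height (km) above the Earth's surface of the closest approach of
    the satellite-to-receiver line of sight to the Earth's centre.
    Returns None when the closest point does not lie strictly between
    the two endpoints (i.e. the satellite is above the receiver's
    horizon). Positions are ECEF vectors in km."""
    sat = np.asarray(sat_pos, dtype=float)
    rx = np.asarray(rx_pos, dtype=float)
    d = sat - rx
    t = -rx.dot(d) / d.dot(d)  # line parameter of the point closest to the origin
    if not 0.0 < t < 1.0:
        return None
    return float(np.linalg.norm(rx + t * d)) - R_EARTH

def is_occultation(sat_pos, rx_pos, max_height_km=12.0):
    """Flag an occultation geometry: the ray grazes the atmosphere below
    max_height_km without striking the surface. For an airborne receiver
    the geometric tangent height is bounded above by flight altitude."""
    h = tangent_height(sat_pos, rx_pos)
    return h is not None and 0.0 < h < max_height_km
```

Counting such geometries over a flight track and the visible GPS constellation, sampled along the route, gives the per-flight event tallies the abstract refers to.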
Abstract:
Virtual 3D models of long bones are increasingly being used for implant design and research applications. The current gold standard for the acquisition of such data is Computed Tomography (CT) scanning. Due to radiation exposure, CT is generally limited to the imaging of clinical cases and cadaver specimens. Magnetic Resonance Imaging (MRI) does not involve ionising radiation and can therefore be used to image selected healthy human volunteers for research purposes. The feasibility of MRI as an alternative to CT for the acquisition of morphological bone data of the lower extremity has been demonstrated in recent studies [1, 2]. Current limitations of MRI include long scanning times and difficulties with image segmentation in certain anatomical regions due to poor contrast between bone and surrounding muscle tissue. Higher field strength scanners promise faster imaging times or better image quality. In this study, image quality at 1.5T is quantitatively compared with image quality at 3T. --------- The femora of five human volunteers were scanned using 1.5T and 3T MRI scanners from the same manufacturer (Siemens) with similar imaging protocols. A 3D FLASH sequence was used with TE = 4.66 ms, flip angle = 15° and voxel size = 0.5 × 0.5 × 1 mm. PA-matrix and body-matrix coils were used to cover the lower limb and pelvis, respectively. The signal-to-noise ratio (SNR) [3] and contrast-to-noise ratio (CNR) [3] of axial images from the proximal, shaft and distal regions were used to assess the quality of the images from the 1.5T and 3T scanners. The SNR was calculated for muscle and bone marrow in the axial images. The CNR was calculated for the muscle-to-cortex and cortex-to-bone-marrow interfaces, respectively. --------- Preliminary results (one volunteer) show that the SNR of muscle for the shaft and distal regions was higher in 3T images (11.65 and 17.60) than in 1.5T images (8.12 and 8.11).
For the proximal region, the SNR of muscle was higher in 1.5T images (7.52) than in 3T images (6.78). The SNR of bone marrow was slightly higher in 1.5T images for both the proximal and shaft regions, while it was lower in the distal region compared with 3T images. The CNR between muscle and bone in all three regions was higher in 3T images (4.14, 6.55 and 12.99) than in 1.5T images (2.49, 3.25 and 9.89). The CNR between bone marrow and bone was slightly higher in 1.5T images (4.87, 12.89 and 10.07) than in 3T images (3.74, 10.83 and 10.15). These results show that the 3T images generated higher contrast between bone and muscle tissue than the 1.5T images. It is expected that this improvement in image contrast will significantly reduce the time required for the largely manual segmentation of the MR images. Future work will focus on optimizing the 3T imaging protocol to reduce chemical shift and susceptibility artifacts.
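The SNR and CNR measures cited above [3] are commonly computed from region-of-interest (ROI) statistics: SNR as the mean ROI intensity divided by the standard deviation of a background (air) noise region, and CNR as the absolute difference of two tissue means over the same noise estimate. A minimal sketch under that common definition (the exact formulation in reference [3] may differ):

```python
import numpy as np

def snr(roi, noise):
    """Signal-to-noise ratio: mean ROI intensity over the standard
    deviation of a background (air) noise region."""
    return np.mean(roi) / np.std(noise)

def cnr(roi_a, roi_b, noise):
    """Contrast-to-noise ratio between two tissue ROIs, e.g. the
    muscle-to-cortex or cortex-to-bone-marrow interface."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise)
```

In practice the ROIs would be pixel arrays drawn on the axial slices for each region (proximal, shaft, distal) and each tissue (muscle, cortex, bone marrow).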
Abstract:
Dynamic and controlled rate thermal analysis (CRTA) has been used to characterise alunites of formula [M(Al)3(SO4)2(OH)6], where M is one of the cations K+, Na+ or NH4+. Thermal decomposition occurs in a series of steps: (a) dehydration, (b) well-defined dehydroxylation and (c) desulphation. CRTA offers better resolution and a more detailed interpretation of the water-formation processes by approaching equilibrium conditions of decomposition through the elimination of the slow transfer of heat to the sample as a parameter controlling the decomposition. Constant-rate decomposition processes of water formation reveal the subtle nature of dehydration and dehydroxylation.
Abstract:
The existence of any film genre depends on the effective operation of distribution networks. Contingencies of distribution play an important role in determining the content of individual texts and the characteristics of film genres; they enable new genres to emerge at the same time as they impose limits on generic change. This article sets out an alternative way of doing genre studies, based on an analysis of distributive circuits rather than film texts or generic categories. Our objective is to provide a conceptual framework that can account for the multiple ways in which distribution networks leave their traces on film texts and audience expectations, with specific reference to international horror networks, and to offer some preliminary suggestions as to how distribution analysis can be integrated into existing genre studies methodologies.
Abstract:
This presentation is part of the UDIA (Qld) Property Development Essentials program, a two-day introductory course designed for new entrants to the property industry. The course provides practical advice and direction for those looking to take their first steps into the development industry. This presentation identifies economic factors and their influence on land acquisitions, and provides an understanding of property development and business cycles and their impacts on acquisition strategies (long- v. short-term projects).
Abstract:
A mottramite mineral sample originating from the Tsumeb Corporation Mine, Tsumeb, Otavi, Namibia, is used in the present work. The mineral contains vanadium and copper to the extent of 22.73% and 16.84% by weight as V2O5 and CuO, respectively. An EPR study of the sample confirms the presence of Cu(II) with g = 2.2. The optical absorption spectrum of mottramite indicates that Cu(II) is present in a rhombic environment. The NIR results are attributed to water fundamentals.
Abstract:
This paper will explore how a general education can contribute successfully to vocational outcomes, using both Participatory Action Research (PAR) and Program Theory methodology. The paper will focus on the development aspects of ‘marrying’ vocational and general education, including engagement processes; student, teacher, institute and employer preparation; and the pathway possibilities that emerge. Successful cases presented include: the Healthy Futures program (pathways into the Health and Allied industries); the Accounting Pathways program (simultaneously studying a general Accounting subject and a Certificate III vocational qualification); and the Sustainable Sciences initiative (development of a vocational qualification that focuses on the emerging renewable energy industry and is linked to school science programs). The case studies have been selected because they are unique in character and application and can be used as a basis for future program development in other settings or curriculum areas.
Abstract:
Digital communication has transformed literacy practices and assumed great importance in the functioning of workplace, recreational, and community contexts. This article reviews a decade of empirical work of the New Literacy Studies, identifying the shift toward research of digital literacy applications. The article engages with the central theoretical, methodological, and pragmatic challenges in the tradition of New Literacy Studies, while highlighting the distinctive trends in the digital strand. It identifies common patterns across new literacy practices through cross-comparisons of ethnographic research in digital media environments. It examines ways in which this research is taking into account power and pedagogy in normative contexts of literacy learning using the new media. Recommendations are given to strengthen the links between New Literacy Studies research and literacy curriculum, assessment, and accountability in the 21st century.
Abstract:
The 2010 Native American Indigenous Studies Conference was held at The Westin La Paloma Resort, Tucson, Arizona, USA from 20 to 22 May. The conference was scholarly and interdisciplinary and intended for Indigenous and non-Indigenous scholars who work in American Indian/Native American/First Nations/Aboriginal/Indigenous Studies. The 2010 gathering attracted 768 registrations from the USA, Canada, Hawaii, Mexico, New Zealand, Australia and other countries. This paper is a personal reflection on and overview of the 2010 Conference.
Abstract:
A voglite mineral sample from the Volrite Canyon #1 mine, Frey Point, White Canyon Mine District, San Juan County, Utah, USA is used in the present study. An EPR study on the powdered sample confirms the presence of Mn(II) and Cu(II). The optical absorption spectral results are due to Cu(II) in a distorted octahedral environment. The NIR results indicate the presence of water fundamentals.
Abstract:
The radiation chemistry and the grafting of a fluoropolymer, poly(tetrafluoroethylene-co-perfluoropropyl vinyl ether) (PFA), were investigated with the aim of developing a highly stable grafted support for use in solid phase organic chemistry (SPOC). A radiation-induced grafting method was used whereby the PFA was exposed to ionizing radiation to form free radicals capable of initiating graft copolymerization of styrene. To fully investigate this process, both the radiation chemistry of PFA and the grafting of styrene to PFA were examined. Radiation alone was found to have a detrimental effect on PFA when irradiated at 303 K. This was evident from the loss in mechanical properties due to chain scission reactions. Consequently, when radiation was used for the grafting reactions, the total radiation dose needed to be kept as low as possible. The radicals produced when PFA was exposed to radiation were examined using electron spin resonance spectroscopy. Both main-chain (–CF2–•CF–CF2–) and end-chain (–CF2–•CF2) radicals were identified. The stability of the majority of the main-chain radicals when the polymer was heated above the glass transition temperature suggested that they were present mainly in the crystalline regions of the polymer, while the end-chain radicals were predominantly located in the amorphous regions. The radical yield at 77 K was lower than that at 303 K, suggesting that cage recombination at low temperatures inhibited free radicals from stabilizing. High-speed MAS 19F NMR was used to identify the non-volatile products after irradiation of PFA over a wide temperature range. The major products observed over the irradiation temperature range 303 to 633 K included new saturated chain ends, short fluoromethyl side chains in both the amorphous and crystalline regions, and long branch points.
The proportion of the radiolytic products shifted from mainly chain scission products at low irradiation temperatures to extensive branching at higher irradiation temperatures. Calculations of G values revealed that net crosslinking only occurred when PFA was irradiated in the melt. Minor products after irradiation at elevated temperatures included internal and terminal double bonds and CF3 groups adjacent to double bonds. The volatile products after irradiation at 303 K included tetrafluoromethane (CF4) and oxygen-containing species from loss of the perfluoropropyl ether side chains of PFA, as identified by mass spectrometry and FTIR spectroscopy. The chemical changes induced by radiation exposure were accompanied by changes in the thermal properties of the polymer. Changes in the crystallinity and thermal stability of PFA after irradiation were examined using DSC and TGA techniques. The equilibrium melting temperature of untreated PFA was 599 K, as determined by extrapolation of the melting temperatures of imperfectly formed crystals. After low-temperature irradiation, radiation-induced crystallization was prevalent due to scission of strained tie molecules, loss of perfluoropropyl ether side chains, and lowering of the molecular weight, which promoted chain alignment and hence higher crystallinity. After irradiation at high temperatures, the presence of short and long branches hindered crystallization, lowering the overall crystallinity. The thermal stability of the PFA decreased with increasing radiation dose and temperature due to the introduction of defect groups. Styrene was graft copolymerized to PFA using γ-radiation as the initiation source with the aim of preparing a graft copolymer suitable as a support for SPOC. Various grafting conditions were studied, such as the total dose, dose rate, solvent effects and the addition of nitroxides to create “living” graft chains.
The effect of dose rate was examined when grafting styrene vapour to PFA using the simultaneous grafting method. The initial rate of grafting was found to be independent of the dose rate, which implied that the reaction was diffusion controlled. When the styrene was dissolved in various solvents for the grafting reaction, the graft yield was strongly dependent on the type and concentration of the solvent used. The greatest graft yield was observed when the solvent swelled both the grafted layers and the substrate. Microprobe Raman spectroscopy was used to map the penetration of the graft into the substrate. The grafted layer was found to contain both poly(styrene) (PS) and PFA and became thicker with increasing radiation dose and graft yield, which showed that grafting began at the surface and progressively penetrated the substrate as the grafted layer was swollen. The molecular weight of the grafted PS was estimated by measuring the molecular weight of the non-covalently bonded homopolymer formed in the grafted layers using SEC. The molecular weight of the occluded homopolymer was an order of magnitude greater than that of the free homopolymer formed in the surrounding solution, suggesting that the high viscosity in the grafted regions led to long PS grafts. When a nitroxide-mediated free radical polymerization was used, grafting occurred within the substrate and not on the surface, due to diffusion of styrene into the substrate at the high temperatures needed for the reaction to proceed. Loading tests were used to measure the capacity of the PS graft to be functionalized with aminomethyl groups and then further derivatized. These loading tests showed that samples grafted in a solution of styrene and methanol had a superior loading capacity over samples grafted using other solvents, due to the shallow penetration and hence better accessibility of the graft when methanol was used as the solvent.
Abstract:
Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. It has become a major medium for Software as a Service (SaaS) providers to host their SaaS, as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process stem from several factors, including the large size of the Cloud network, competing resource requirements among SaaS, interactions between SaaS components, and interactions between a SaaS and its data components. However, existing application placement methods for data centres do not consider the placement of a component's data. In addition, a Cloud network is much larger than the data centre networks discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt to address SaaS placement together with its data on a Cloud provider's servers. Experimental results demonstrate the feasibility and scalability of the GA.
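As an illustration of the penalty-based GA idea (a toy sketch with invented capacities, demands and traffic, not the paper's algorithm or a Cloud-scale instance): each gene assigns one SaaS component or data component to a server, communication between components placed on different servers is costed, and capacity violations are penalised in the fitness rather than forbidden outright:

```python
import random

# Toy instance (hypothetical numbers): server capacities, resource
# demand of each SaaS component, and pairwise traffic between
# components (including data components).
CAPACITY = [10, 10, 10]
DEMAND = [4, 6, 5, 3]
TRAFFIC = {(0, 1): 5, (1, 2): 3, (2, 3): 4}
PENALTY = 100  # weight per unit of capacity violation

def fitness(placement):
    """Lower is better: inter-server communication cost plus a penalty
    proportional to how far each server exceeds its capacity."""
    comm = sum(w for (a, b), w in TRAFFIC.items()
               if placement[a] != placement[b])
    load = [0] * len(CAPACITY)
    for comp, srv in enumerate(placement):
        load[srv] += DEMAND[comp]
    overflow = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    return comm + PENALTY * overflow

def evolve(pop_size=30, generations=200, seed=1):
    """Minimal GA: tournament selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    n, m = len(DEMAND), len(CAPACITY)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            return min(rng.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = tournament(), tournament()
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:  # mutation: reassign one component
                child[rng.randrange(n)] = rng.randrange(m)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
```

The penalty term lets infeasible placements survive early generations and guide the search, which is the usual motivation for penalty-based fitness over hard feasibility constraints.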
Abstract:
This paper presents the details of an experimental study of the shear behaviour and strength of a recently developed, cold-formed steel hollow flange channel beam known as the LiteSteel Beam (LSB). The new LSB sections with rectangular hollow flanges are produced using a patented manufacturing process involving simultaneous cold-forming and dual electric resistance welding. They are commonly used as flexural members in buildings. However, no research had been undertaken on the shear behaviour of LSBs. Therefore a detailed experimental study involving 36 shear tests was undertaken to investigate the shear behaviour of 10 different LSB sections. Simply supported test specimens of LSBs with aspect ratios of 1.0 and 1.5 were loaded at midspan until failure, using both single and back-to-back LSB arrangements. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic and elastic shear buckling) occurred in the tests. Comparison of the experimental results with corresponding predictions from the current Australian and North American cold-formed steel design rules showed that the current design rules are very conservative for the shear design of LSBs. Significant improvements to web shear buckling capacity occurred due to the presence of rectangular hollow flanges, while considerable post-buckling strength was also observed. Appropriate improvements have been proposed for the shear strength of LSBs based on the design equations in the North American Specification. When reduced-height web side plates or only one web side plate was used, the shear capacity of the LSB was reduced. Details of these tests and their results are also presented in this paper. Keywords: LiteSteel beam, Shear strength, Shear tests, Cold-formed steel structures, Direct strength method, Slender web, Hollow flanges.
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet to be developed port expansion area. This is in order to predict pollutant loads associated with stormwater flows from this area with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. 
This uniqueness in land use results in distinctive stormwater quality characteristics different from those of conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or as being similar to any typical urban land use. The approach adopted in this study was very different from conventional modelling studies, where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration process was involved, due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of the input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared with values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from the initial losses, most of the other parameters used in the SWMM modelling are generic to most modelling studies.
Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly because only one specific location, namely the Port of Brisbane, was considered, unlike the MUSIC model, where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not fully applicable to the analysis of water quality in Port land uses. Therefore, when using the parameters included in this report for MUSIC modelling, it is important to note that they may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
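One simple way to check whether a set of EMC values plausibly fits a normal distribution, as discussed above, is to compute the sample skewness alongside the mean and standard deviation: strongly right-skewed data (skewness well above zero) are poorly described by a normal distribution. A minimal sketch with hypothetical EMC values, not data from this study:

```python
import math
import statistics as st

# Hypothetical TSS event mean concentrations (mg/L) for one land use.
emc = [45.0, 60.0, 38.0, 210.0, 55.0, 72.0, 340.0, 50.0]

mean, sd = st.mean(emc), st.stdev(emc)  # the two MUSIC input statistics

def skewness(xs):
    """Adjusted sample skewness; values well above zero indicate
    right-skewed data for which a normal distribution is a poor fit."""
    n, m, s = len(xs), st.mean(xs), st.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

raw_skew = skewness(emc)                          # skew of the raw EMCs
log_skew = skewness([math.log(x) for x in emc])   # skew after log transform
```

If the log-transformed values are much closer to symmetric than the raw values, a lognormal description of the EMCs may be the more defensible choice when feeding mean and standard deviation into a source node.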
* The adoption of TSS as a surrogate parameter for other pollutants and the confirmation that the <150 μm particle size range was predominant in suspended solids for pollutant wash-off give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
Abstract:
In this chapter, we are particularly concerned with making visible the general principles underlying the transmission of Social Studies curriculum knowledge, and with considering it in light of a high-stakes mandated national assessment task. Specifically, we draw on Bernstein’s theoretical concept of pedagogic models as a tool for analysing orientations to teaching and learning. We introduce a case in point from the Australian context: one state Social Studies curriculum vis-à-vis one part of the Year Three national assessment measure for reading. We use our findings to consider the implications for the disciplinary knowledge of Social Studies in the communities in which we are undertaking our respective Australian Research Council Linkage project work (Glasswell et al.; Woods et al.). We propose that Social Studies disciplinary knowledge is being constituted, in part, through power struggles between different agencies responsible for the production and relay of official forms of state curriculum and national literacy assessment. This is particularly the case when assessment instruments are used to compare and contrast school results in highly visible web-based league tables (see, for example, http://myschoolaustralia.ning.com/).