385 results for Three-component Magma Mixing
Abstract:
This paper presents a novel evolutionary computation approach to three-dimensional path planning for unmanned aerial vehicles (UAVs) with tactical and kinematic constraints. A genetic algorithm (GA) is modified and extended for path planning. Two GAs are seeded at the initial and final positions with a common objective: to minimise their distance apart under the given UAV constraints. This is accomplished by the synchronous optimisation of subsequent control vectors. The proposed approach is called the synchronous genetic algorithm (SGA). The sequence of control vectors generated by the SGA constitutes a near-optimal path plan, and the resulting path exhibits no discontinuity when transitioning from curved to straight trajectories. Experiments show that the paths generated by the SGA are within 2% of the optimal solution. Such a path planner, when implemented on a hardware accelerator such as a field programmable gate array chip, can be used on board the UAV as a replanner, as well as in ground station systems to assist in high-precision planning and modelling of mission scenarios.
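The abstract gives only the SGA's outline, so the following is a minimal toy sketch of the core idea, not the authors' algorithm: two GA populations, seeded at the start and the goal, each encode a sequence of bounded 3-D control vectors and share a single fitness that closes the gap between the two path tips. All names and parameter values are hypothetical, and the real SGA additionally enforces tactical and kinematic constraints.

```python
# Toy bidirectional "synchronous" GA sketch; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
START = np.array([0.0, 0.0, 0.0])
GOAL = np.array([50.0, 40.0, 10.0])
N_CTRL, POP, GENS, MAX_STEP = 8, 60, 300, 6.0

def tip(origin, genome):
    """Integrate a genome of N_CTRL bounded 3-D control vectors into a path endpoint."""
    steps = np.clip(genome.reshape(N_CTRL, 3), -MAX_STEP, MAX_STEP)
    return origin + steps.sum(axis=0)

def evolve():
    # Two populations: forward paths from START, backward paths from GOAL.
    fwd = rng.uniform(-MAX_STEP, MAX_STEP, (POP, N_CTRL * 3))
    bwd = rng.uniform(-MAX_STEP, MAX_STEP, (POP, N_CTRL * 3))
    for _ in range(GENS):
        # Shared ("synchronous") objective: gap between paired path tips.
        gaps = np.array([np.linalg.norm(tip(START, f) - tip(GOAL, b))
                         for f, b in zip(fwd, bwd)])
        order = np.argsort(gaps)
        fwd, bwd = fwd[order], bwd[order]        # best pairs first (elitism)
        n_par = POP // 2
        for k in range(n_par, POP):              # refill the worse half
            i, j = rng.integers(0, n_par, 2)
            mask = rng.random(N_CTRL * 3) < 0.5  # uniform crossover
            fwd[k] = np.where(mask, fwd[i], fwd[j]) + rng.normal(0, 0.3, N_CTRL * 3)
            bwd[k] = np.where(mask, bwd[i], bwd[j]) + rng.normal(0, 0.3, N_CTRL * 3)
    return fwd[0], bwd[0]

best_fwd, best_bwd = evolve()
print("residual tip gap:",
      np.linalg.norm(tip(START, best_fwd) - tip(GOAL, best_bwd)))
```

Joining the best forward and (reversed) backward control sequences then yields one candidate start-to-goal plan; the published method's constraint handling and smoothness guarantees are not reproduced here.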
Abstract:
Twenty-first-century learners operate in organic, immersive environments. A pedagogy of student-centred learning is not a recipe for rooms. A contemporary learning environment is like a landscape that grows, morphs, and responds to the pressures of the context and micro-culture. There is no single adaptable solution, nor a suite of off-the-shelf answers; propositions must be customisable and infinitely variable. They must be indeterminate and changeable, based on the creation of learning places, not restrictive or constraining spaces. A sustainable solution will be un-fixed, responsive to the life cycle of the components and materials, and able to be manipulated by the users; it will create and construct its own history. Learning occurs as formal education with situational knowledge structures, but also as informal learning, active learning, blended learning, social learning, incidental learning, and unintended learning. These are not spatial concepts but socio-cultural patterns of discovery. Individual learning requirements must run free and need to be accommodated as the learner sees fit. The spatial solution must accommodate and enable a full array of learning situations. It is a system, not an object. The system has three major components:
1. The determinate landscape: an in-situ concrete 'plate' that is permanent. It predates the other components of the system and remains as a remnant/imprint/fossil after the other components have been relocated. It is a functional learning landscape in its own right, enabling a variety of experiences and activities.
2. The indeterminate landscape: a kit of pre-fabricated 2-D panels assembled in a unique manner at each site to suit the client and context, manufactured to the principles of design-for-disassembly. A symbiotic, barnacle-like system that attaches itself to the existing infrastructure through the determinate landscape, which acts as a fast-growth rhizome. A carapace of protective panels, infinitely variable to create enclosed, semi-enclosed, and open learning places.
3. The stations: pre-fabricated packages of highly-serviced space connected through the determinate landscape. There are four main types of stations: wet-room learning centres, dry-room learning centres, ablutions, and low-impact building services. Each is entirely customised at the factory and delivered to site; the stations can be retro-fitted to suit a new context during relocation.
Principles of design for disassembly:
Material principles • use recycled and recyclable materials • minimise the number of types of materials • no toxic materials • use lightweight materials • avoid secondary finishes • provide identification of material types
Component principles • minimise/standardise the number of types of components • use mechanical not chemical connections • design for use of common tools and equipment • provide easy access to all components • make component size suit the means of handling • provide built-in means of handling • design to realistic tolerances • use a minimum number of connectors and a minimum number of types
System principles • design for durability and repeated use • use prefabrication and mass production • provide spare components on site • sustain all assembly and material information
Abstract:
This report summarises the participatory action research (PAR) undertaken as part of the Homelessness Community Action Planning (HCAP) project implemented across seven regions in Queensland in 2011 and 2012. The HCAP is a component of the Queensland strategy for the National Partnership Agreement on Homelessness, and is funded for three years (2010-2013). The report identifies and analyses factors which facilitated or constrained the development of Government-NGO partnerships at regional and state levels in HCAP. The study supports the view that the HCAP partnership between the Queensland Government and the Community Services Sector is working and likely to be productive.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at future times is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics, and, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three available types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. Nevertheless, these non-homogeneous covariate data are modelled in the same way in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of asset health information into the modelling of hazard and reliability predictions, and captures the relationship between actual asset health and both condition and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component. Operating environment indicators in this model are failure accelerators and/or decelerators: they enter the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nought in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse and their analysis often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has also been developed; the development of EHM in these two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of the other existing covariate-based hazard models; the comparison demonstrates that both the semi-parametric and non-parametric EHMs outperform the existing models. Future research directions are also identified, including a new parameter estimation method for time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
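The thesis's equations are not reproduced in the abstract; a plausible general form consistent with the description above (a baseline hazard driven by both time and condition indicators, scaled by a positive covariate function of operating environment indicators) is sketched below, in our notation rather than the thesis's:

```latex
% Hedged sketch, not the thesis's exact notation.
% Z_c(t): condition indicators; Z_e(t): operating environment indicators.
h\bigl(t \mid Z_c(t), Z_e(t)\bigr)
  = h_0\bigl(t, Z_c(t)\bigr)\,\psi\bigl(\gamma^{\top} Z_e(t)\bigr),
  \qquad \psi(\cdot) > 0,
% with reliability obtained from the cumulative hazard in the usual way:
R(t) = \exp\Bigl(-\int_0^{t} h\bigl(u \mid Z_c(u), Z_e(u)\bigr)\,\mathrm{d}u\Bigr).
% In the semi-parametric form, h_0 could carry a Weibull shape in time, e.g.
h_0\bigl(t, Z_c(t)\bigr)
  = \frac{\beta}{\eta}\Bigl(\frac{t}{\eta}\Bigr)^{\beta-1}
    \phi\bigl(\alpha^{\top} Z_c(t)\bigr).
```

If psi(.) is identically 1, the operating environment contributes nothing (the "nought" case above), while the condition indicators still enter through the baseline hazard h_0.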
Abstract:
Infrastructure forms a vital component in supporting today's way of life and has a significant role in, and impact on, the economic, environmental and social outcomes of the region around it. The design, construction and operation of such assets are a multi-billion-dollar industry in Australia alone. Another issue that will play a major role in our way of life is climate change and the greater concept of sustainability. With limited resources and a changing natural world, it is necessary for infrastructure to be developed and maintained in a manner that is sustainable. In order to achieve infrastructure sustainability in operations it is necessary for there to be: a sustainability assessment scheme that provides a scientifically sound and realistic approach to measuring an asset's level of sustainability; and systems and tools to support the making of decisions that result in sustainable outcomes by providing feedback in a timely manner. Having these in place will then help drive the consideration of sustainability during the decision-making process for infrastructure operations and maintenance. In this paper we provide two main contributions: a comparison and review of sustainability assessment schemes for infrastructure and their suitability for use in the operations phase; and a review of decision support systems/tools in the area of infrastructure sustainability in operations. For this paper, sustainability covers not just the environment, but also financial/economic and societal/community aspects as well. This is often referred to as the Triple Bottom Line and forms one of the three dimensions of corporate sustainability [Stapledon, 2004].
Abstract:
Parabolic Trough Concentrators (PTCs) are the most proven solar collectors for solar thermal power plants, and are also suitable for concentrating photovoltaic (CPV) applications. PV cells are sensitive to the spatial uniformity of incident light and to the cell operating temperature, which requires CPV-PTC designs to be optimised both optically and thermally. Optical modelling can be performed using Monte Carlo Ray Tracing (MCRT), and conjugate heat transfer (CHT) modelling using computational fluid dynamics (CFD) to analyse the overall designs. This paper develops and evaluates a CHT simulation for a concentrating solar thermal PTC collector. It uses the ray-tracing work of Cheng et al. (2010) and thermal performance data for the LS-2 parabolic trough used in the SEGS III-VII plants from Dudley et al. (1994). This is a preliminary step towards developing models to compare the heat transfer performance of faceted absorbers for CPV applications. Reasonable agreement between the simulation results and the experimental data confirms the reliability of the numerical model. The model explores physical as well as computational issues for this particular kind of system modelling. The physical issues include the resulting non-uniformity of the boundary heat flux profile and the temperature profile around the tube, and the uneven heating of the heat transfer fluid (HTF). The numerical issues include, most importantly, the design of the computational domain(s) and the solution techniques for the turbulence quantities and the near-wall physics. The simulation confirmed that the optical simulation and the computational CHT simulation of the collector can be accomplished independently.
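The paper's MCRT and CFD models are not given here; as an illustration of the optical half only, the sketch below (our toy, with illustrative parameters, not Cheng et al.'s model) traces parallel rays off a 2-D parabolic mirror onto a tubular absorber at the focus and bins the hit angles, reproducing the kind of non-uniform circumferential flux profile the abstract mentions.

```python
# Toy 2-D Monte Carlo ray trace: parabolic mirror y = x^2/4F, tube at focus.
import numpy as np

F, HALF_AP, R_TUBE, N_RAYS, N_BINS = 1.0, 2.5, 0.05, 200_000, 36
rng = np.random.default_rng(1)

# Sample parallel vertical rays across the aperture.
x = rng.uniform(-HALF_AP, HALF_AP, N_RAYS)
y = x**2 / (4.0 * F)

# Unit surface normal of the parabola at each hit point.
nx, ny = -x / (2.0 * F), np.ones_like(x)
norm = np.hypot(nx, ny)
nx, ny = nx / norm, ny / norm

# Reflect the incoming direction d = (0, -1): r = d - 2 (d . n) n.
d_dot_n = -ny
rx = -2.0 * d_dot_n * nx
ry = -1.0 - 2.0 * d_dot_n * ny

# Intersect each reflected ray with the absorber tube centred on the focus (0, F).
ox, oy = x, y - F                  # ray origin relative to the tube centre
b = ox * rx + oy * ry              # |r| = 1, so t^2 + 2bt + c = 0
c = ox**2 + oy**2 - R_TUBE**2
disc = b**2 - c
hit = disc > 0.0
t = -b[hit] - np.sqrt(disc[hit])   # nearest intersection
theta = np.arctan2(oy[hit] + t * ry[hit], ox[hit] + t * rx[hit])

# Bin hit angles: the circumferential flux profile is strongly non-uniform.
flux, edges = np.histogram(theta, bins=N_BINS, range=(-np.pi, np.pi))
for lo, n in zip(edges[:-1], flux):
    print(f"{np.degrees(lo):7.1f} deg | {'#' * int(60 * n / flux.max())}")
```

In a full CHT study this binned profile would become the azimuthally varying heat-flux boundary condition on the absorber wall of the CFD model.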
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
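The paper's ontology and reasoning process are only summarised above; the sketch below is a minimal illustration, with hypothetical component attributes and weights, of the general pattern: score pairwise similarity over geometric and symbolic attributes, then group components whose similarity exceeds a threshold (a crude stand-in for the cluster analysis techniques mentioned).

```python
# Minimal component-similarity grouping sketch; attributes and weights are
# hypothetical, not the paper's ontology.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Component:
    name: str
    width: float    # geometric attributes (m)
    height: float
    material: str   # symbolic attribute

def similarity(a: Component, b: Component) -> float:
    """Score in [0, 1]: relative closeness of dimensions, exact material match."""
    geo = 1.0 - min(1.0, (abs(a.width - b.width) / max(a.width, b.width)
                          + abs(a.height - b.height) / max(a.height, b.height)) / 2)
    sym = 1.0 if a.material == b.material else 0.0
    return 0.7 * geo + 0.3 * sym    # weights stand in for "varied preferences"

def group(components, threshold=0.9):
    """Single-link grouping: components join a group via any similar member."""
    groups = [[c] for c in components]
    merged = True
    while merged:
        merged = False
        for g1, g2 in combinations(groups, 2):
            if any(similarity(a, b) >= threshold for a in g1 for b in g2):
                g1.extend(g2)
                groups.remove(g2)
                merged = True
                break
    return groups

walls = [Component("W1", 3.00, 2.7, "concrete"),
         Component("W2", 3.02, 2.7, "concrete"),
         Component("W3", 1.20, 2.7, "timber")]
for g in group(walls):
    print([c.name for c in g])   # W1 and W2 group together; W3 stands alone
```

A cost-estimating application of the kind described could then price one method per group rather than per component, which is where the efficiency gain comes from.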
Abstract:
Understanding the link between tectonically driven extensional faulting and volcanism is crucial from a hazard perspective in active volcanic environments, while ancient volcanic successions provide records of how volcanic eruption styles, compositions, magnitudes and frequencies can change in response to extension timing, distribution and intensity. Significantly, incorrect tectonic interpretations can be made when the spatial-temporal-compositional trends of, and source contributions to, magmatism are not properly considered. This study draws on the intimate relationships between volcanism and extension preserved in the Sierra Madre Occidental (SMO) and Gulf of California (GoC) regions of western Mexico. Here, a major Oligocene rhyolitic ignimbrite “flare-up” (>300,000 km³) switched to a dominantly bimodal and mixed effusive-explosive volcanic phase in the Early Miocene (~100,000 km³), associated with distributed extension and the opening of numerous grabens. Rhyolitic dome fields were emplaced along graben edges and at intersections of cross-graben and graben-parallel structures during the early stages of graben development. Concomitant with this change in rhyolite eruption style was a change in crustal source, revealed by zircon chronochemistry, with rapid rates of rhyolite magma generation due to remelting of mid- to upper-crustal, highly differentiated igneous rocks emplaced during earlier SMO magmatism. Extension became more focused ~18 Ma, resulting in volcanic activity being localised along the site of GoC opening. This localised volcanism (known as the Comondú “arc”) was dominantly effusive and andesite-dacite in composition. The compositional change resulted from increased mixing of basaltic and rhyolitic magmas rather than from fluid-flux melting of the mantle wedge above the subducting Guadalupe Plate. A poor understanding of the space-time relationships of volcanism and extension has thus led to incorrect past tectonic interpretations of Comondú-age volcanism.
Abstract:
Large Igneous Provinces are exceptional intraplate igneous events throughout Earth's history. Their significance and potential global impact are related to the total volume of magma intruded and released during these geologically brief events (peak eruptions often span only 1-5 Myrs), in which millions to tens of millions of cubic kilometres of magma are produced. In some cases, at least 1% of the Earth's surface has been directly covered in volcanic rock, equivalent to the size of small continents with comparable crustal thicknesses. Large Igneous Provinces are thus important, albeit episodic, events of new crust addition. However, most of the magmatism is basaltic, so contributions to crustal growth will not always be picked up in zircon geochronology studies, which better trace major episodes of extension-related silicic magmatism and the silicic Large Igneous Provinces. Much headway has been made in our understanding of these anomalous igneous events over the last 25 years, driving many new ideas and models. This includes their: 1) global spatial and temporal distribution, with a long-term average of one event approximately every 20 Myrs, but a clear clustering of events at times of supercontinent break-up; Large Igneous Provinces are thus an integral part of the Wilson cycle and are becoming an increasingly important tool in reconnecting dispersed continental fragments; 2) compositional diversity that in part reflects their crustal setting of ocean basins, and continental interiors and margins, where in the latter setting LIP magmatism can be silicic-dominant; 3) mineral and energy resources, with major PGE and precious metal resources being hosted in these provinces, and with magmatism impacting the hydrocarbon potential of volcanic basins and rifted margins by enhancing source rock maturation, providing fluid migration pathways, and forming traps; 4) biospheric, hydrospheric and atmospheric impacts, with Large Igneous Provinces now widely regarded as a key trigger mechanism for mass extinctions, although the exact kill mechanism(s) are still being resolved; 5) role in mantle geodynamics and the thermal evolution of the Earth, by potentially recording the transport of material from the lower mantle or core-mantle boundary to the Earth's surface and being a fundamental component in whole-mantle convection models; and 6) recognition on the inner planets, where the lack of plate tectonics and erosional processes, together with planetary antiquity, means that the very earliest record of LIP events during planetary evolution may be better preserved than on Earth.
Abstract:
California is home to multiple queer community archives created by community members outside of government, academic, and public archives. These archives are maintained by the communities and are important spaces not only for the preservation of records, but also as safe spaces to study, gather, and learn about the communities' histories. This article describes the histories of three such queer community archives (the Gay, Lesbian, Bisexual, Transgender Historical Society; the Lavender Library, Archives, and Cultural Exchange of Sacramento, Inc.; and the ONE National Gay & Lesbian Archives) in order to discuss the role of activism in community archives and the implications for re-examining the role of activism so as to incorporate communities into the heart of archival professional work. By understanding the impetus for creating and maintaining queer community archives, archivists can use this knowledge to foster more reflective practices and to be more inclusive through outreach, collaboration, and descriptive practices. This article extends our knowledge of community archives and provides evidence for the need to include communities in archival professional practice.
Abstract:
A novel multiple regression method (RM) is developed to predict identity-by-descent probabilities at a locus L (IBDL) among individuals without pedigree, given information on surrounding markers and population history. These IBDL probabilities are a function of the increase in linkage disequilibrium (LD) generated by drift in a homogeneous population over generations. Three parameters are sufficient to describe population history: effective population size (Ne), number of generations since foundation (T), and marker allele frequencies among founders (p). The IBDL probabilities are used in a simulation study to map a quantitative trait locus (QTL) via variance component estimation. RM is compared to a coalescent method (CM) in terms of power and robustness of QTL detection. Differences between RM and CM are small but significant. For example, RM is more powerful than CM in dioecious populations, but not in monoecious populations. Moreover, RM is more robust than CM when marker phases are unknown, when there is complete LD among founders, or when Ne is mis-specified, and less robust when p is mis-specified. CM utilises all marker haplotype information, whereas RM utilises the information contained in each individual marker and in all possible marker pairs, but not in higher-order interactions. RM consists of a family of models encompassing four different population structures and two ways of using marker information, which contrasts with the single model that must cater for all possible evolutionary scenarios in CM.
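The regression model itself is not given in the abstract. For background only, the classical approximation linking drift-generated LD at two loci to effective population size (Sved, 1971) — related to, but not the same as, the authors' RM — is:

```latex
% Equilibrium approximation (Sved, 1971), background only:
% expected squared allele-frequency correlation between two loci with
% recombination fraction c in a population of effective size N_e.
E[r^2] \approx \frac{1}{1 + 4 N_e c}
```

Sved derived this by relating r² to the probability that the two loci are jointly identical by descent, which is the same class of quantity the RM above predicts from marker data.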
Abstract:
Particulate matter is common in our environment and has been linked to human health problems, particularly in the ultrafine size range. A range of chemical species have been associated with particulate matter, and of special concern are the hazardous chemicals that can accentuate health problems. If the sources of such particles can be identified, then strategies can be developed for the reduction of air pollution and, consequently, the improvement of the quality of life. In this investigation, particle number size distribution data and the concentrations of chemical species were obtained at two sites in Brisbane, Australia. Source apportionment was used to determine the sources (or factors) responsible for the particle size distribution data. The apportionment was performed by Positive Matrix Factorisation (PMF) and Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS), and the results were compared with information from the gaseous chemical composition analysis. Although PCA/APCS resolved more sources, the results of the PMF analysis appear to be more reliable. Six common sources were identified by both methods: traffic 1, traffic 2, local traffic, biomass burning, and two unassigned factors. Thus motor-vehicle-related activities had the most impact on the data, with the average contribution from nearly all sources to the measured concentrations being higher during peak traffic hours and on weekdays. Further analyses incorporated the meteorological measurements into the PMF results to determine the direction of the sources relative to the measurement sites; this indicated that traffic on the nearby road and intersection was responsible for most of the factors. The described methodology, which utilised a combination of three types of data related to particulate matter to determine the sources, could assist the future development of particle emission control and reduction strategies.
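PMF is normally run with dedicated tools (e.g. US EPA PMF), which weight the fit by measurement uncertainties; as a rough, freely available stand-in, the sketch below uses scikit-learn's unweighted non-negative matrix factorisation on synthetic data to show the same structural idea: decomposing a samples x size-bins matrix into non-negative factor profiles and time-varying contributions.

```python
# Rough stand-in for PMF via NMF: X (hours x size bins) ~ G @ F, with
# G = hourly factor contributions, F = factor profiles. Data are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_hours, n_bins, n_factors = 500, 32, 6

# Synthetic non-negative "sources" mixed over time, plus small noise.
true_profiles = rng.gamma(2.0, 1.0, (n_factors, n_bins))
true_contrib = rng.gamma(1.5, 1.0, (n_hours, n_factors))
X = true_contrib @ true_profiles + rng.uniform(0.0, 0.1, (n_hours, n_bins))

model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # factor contributions per hour
F = model.components_        # factor profiles over size bins

# Average contribution per factor (cf. comparing peak vs off-peak hours).
print("mean factor contributions:", G.mean(axis=0).round(2))
```

In a real study the factor profiles F would be matched to source signatures (e.g. traffic modes, biomass burning) and G would be cross-tabulated with time of day and wind direction, as the abstract describes.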
Abstract:
Video-based training combined with flotation tank recovery may provide an additional stimulus for improving shooting in basketball. A pre-post controlled trial was conducted to assess the effectiveness of a 3 wk intervention combining video-based training and flotation tank recovery on three-point shooting performance in elite female basketball players. Players were assigned to an experimental (n=10) or a control group (n=9). The 3 wk intervention consisted of 2 x 30 min float sessions a week, each including 10 min of video-based training footage, followed by a 3 wk retention phase. A total of 100 three-point shots were taken from 5 designated positions on the court each week to assess three-point shooting performance. There was no clear difference in the mean change in the number of successful three-point shots between the groups (-3%; ±18%, mean; ±90% confidence limits). Video-based training combined with flotation recovery had little effect on three-point shooting performance.
Abstract:
Undergraduate programs can play an important role in the development of individuals seeking professional employment within statutory child protection agencies: both the coursework and the work-integrated learning (WIL) components of degrees have a role in this process. This paper uses a collective case study methodology to examine the perceptions and experiences of first-year practitioners within a specific statutory child protection agency in order to identify whether they felt prepared for their current role. The sample of 20 participants came from a range of discipline backgrounds, with just over half of the sample (55 per cent) having completed a WIL placement as part of their undergraduate studies. The results indicate that, while some participants were able to identify and articulate specific benefits from their undergraduate coursework studies, all participants who had undertaken a WIL placement as part of their degree believed the placement was beneficial for their current work.