306 results for One-meson-exchange model

in the Queensland University of Technology - ePrints Archive


Relevance: 100.00%

Abstract:

Standardised testing does not recognise the creativity and skills of marginalised youth. This paper presents the development of an innovative approach to assessment designed for the re-engagement of at-risk youth who have left formal schooling and are now in an alternative education institution. An electronic portfolio system (EPS) has been developed to capture, record and build on the broad range of students’ cultural and social capital. The assessment-as-a-field-of-exchange model draws on categories from sociological fields of capital and reconceptualises an eportfolio and social networking hybrid system as a sociocultural zone of learning and development. The EPS, and assessment for learning more generally, are conceptualised as social fields for the exchange of capital (Bourdieu, 1977, 1990). The research is underpinned by a sociocultural theoretical perspective that focuses on how students and teachers at the Flexible Learning Centre (FLC) develop and learn within the zone of proximal development (Vygotsky, 1978). The EPS is seen to be highly effective in fostering engagement and social interaction between students, teachers and institutions. It is argued throughout this paper that the EPS provides a structurally identifiable space, an arena of social activity, or a field of exchange. Within this field, the students, teachers and the FLC produce cultural capital exchanges. The term efield (exchange field) has been coined to refer to this constructed abstract space. Initial results from the trial show a general tendency towards engagement with the EPS and potential for the attainment of socially valued cultural capital in the form of school credentials.

Relevance: 100.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (dependent variables), whereas operating environment indicators act as explanatory variables (independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three available types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators are caused by the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators can be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and their association with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, sparse failure event data often involve complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM into these two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
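The hazard structure described above can be sketched numerically. The following toy model (with invented parameter values and functional forms, not the thesis's estimation method) treats the baseline hazard as a Weibull-style term reformed by a condition indicator, scales it by an exponential function of an operating-environment covariate, and integrates it to obtain reliability:

```python
import numpy as np

def baseline_hazard(t, condition, beta=1.5, eta=1000.0, alpha=0.002):
    # Weibull-style baseline in time, reformed by a condition indicator
    # (e.g. a vibration level): a higher reading raises the hazard.
    # beta, eta, alpha are hypothetical parameters for illustration.
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(alpha * condition)

def hazard(t, condition, env, gamma=0.3):
    # Operating-environment covariates (e.g. load) accelerate or
    # decelerate failure via a multiplicative exponential term.
    return baseline_hazard(t, condition) * np.exp(gamma * env)

def reliability(ts, conditions, envs):
    # R(t) = exp(-cumulative hazard), integrated with the trapezoid rule.
    h = hazard(ts, conditions, envs)
    H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(ts))])
    return np.exp(-H)

ts = np.linspace(0.0, 2000.0, 200)
conditions = 0.05 * ts          # degradation indicator grows with time
envs = np.ones_like(ts)         # constant unit load
R = reliability(ts, conditions, envs)
```

Reliability starts at 1 and decreases monotonically as the cumulative hazard grows, mirroring the qualitative behaviour a covariate-based hazard model is meant to capture.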

Relevance: 100.00%

Abstract:

This study was conducted within the context of a flexible education institution where conventional educational assessment practices and tests fail to recognise and assess the creativity and cultural capital of a cohort of marginalised young people. A new assessment model which included an electronic-portfolio-social-networking system (EPS) was developed and trialled to identify and exhibit evidence of students' learning. The study aimed to discern unique forms of cultural capital (Bourdieu, 1986) possessed by students who attend the Edmund Rice Education Australia Flexible Learning Centre Network (EREAFLCN). The EPS was trialled at the case study schools in an intervention and developed a space where students could make evident culturally specific forms of capital and funds of knowledge (Gonzalez, Moll, & Amanti, 2005). These resources were evaluated, modified and developed through dialogic processes utilising assessment for learning approaches (Qualifications and Curriculum Development Agency, 2009) in online and classroom settings. Students, peers and staff engaged in the recognition, judgement, revision and evaluation of students' cultural capital in a subfield of exchange (Bourdieu, 1990). The study developed the theory of assessment for learning as a field of exchange incorporating an online system as a teaching and assessment model. The term efield has been coined to describe this particular capital exchange model. A quasi-ethnographic approach was used to develop a collective case study (Stake, 1995). This case study involved an in-depth exploration of five students' forms of cultural capital and the ways in which this capital could be assessed and exchanged using the efield model. 
A comparative analysis of the five cases was conducted to identify the emergent issues of students' recognisable cultural capital resources and the processes of exchange that can be facilitated to acquire legitimate credentials for these students in the Australian field of education. The participants in the study were young people at two EREAFLC schools aged between 12 and 18 years. Data was collected through interviews, observations and examination of documents made available by the EREAFLCN. The data was coded and analysed using a theoretical framework based on Bourdieu's analytical tools and a sociocultural psychology theoretical perspective. Findings suggest that processes based on dialogic relationships can identify and recognise students' forms of cultural capital that are frequently misrecognised in mainstream school environments. The theory of assessment for learning as a field of exchange was developed into praxis and integrated in an intervention. The efield model was found to be an effective sociocultural tool in converting and exchanging students' capital resources for legitimated cultural and symbolic capital in the field of education.

Relevance: 100.00%

Abstract:

Bone morphogenetic proteins (BMPs) are distributed along a dorsal-ventral (DV) gradient in many developing embryos. The spatial distribution of this signaling ligand is critical for correct DV axis specification. In various species, BMP expression is spatially localized, and BMP gradient formation relies on BMP transport, which in turn requires interactions with the extracellular proteins Short gastrulation/Chordin (Chd) and Twisted gastrulation (Tsg). These binding interactions promote BMP movement and concomitantly inhibit BMP signaling. The protease Tolloid (Tld) cleaves Chd, which releases BMP from the complex and permits it to bind the BMP receptor and signal. In sea urchin embryos, BMP is produced in the ventral ectoderm, but signals in the dorsal ectoderm. The transport of BMP from the ventral ectoderm to the dorsal ectoderm in sea urchin embryos is not understood. Therefore, using information from a series of experiments, we adapt the mathematical model of Mizutani et al. (2005) and embed it as the reaction part of a one-dimensional reaction–diffusion model. We use it to study aspects of this transport process in sea urchin embryos. We demonstrate that the receptor-bound BMP concentration exhibits dorsally centered peaks of the same type as those observed experimentally when the ternary transport complex (Chd-Tsg-BMP) forms relatively quickly and BMP receptor binding is relatively slow. Similarly, dorsally centered peaks are created when the diffusivities of BMP, Chd, and Chd-Tsg are relatively low and that of Chd-Tsg-BMP is relatively high, and the model dynamics also suggest that Tld is a principal regulator of the system. At the end of this paper, we briefly compare the observed dynamics in the sea urchin model to a version that applies to the fly embryo, and we find that the same conditions can account for BMP transport in the two types of embryos only if Tld levels are reduced in sea urchin compared to fly.
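The shuttling mechanism described above (poorly diffusing free BMP, a fast-diffusing carrier complex, and release by cleavage) can be caricatured with a two-species reaction-diffusion scheme. All diffusivities and rate constants below are invented; this is a toy sketch, not the adapted Mizutani et al. network:

```python
import numpy as np

def simulate(n=101, steps=8000, dt=0.05, D_bmp=0.05, D_cpx=5.0,
             k_bind=1.0, k_cleave=0.2, k_decay=0.1):
    # Toy 1D shuttle model: free BMP (low diffusivity) binds into a
    # complex standing in for Chd-Tsg-BMP (high diffusivity); cleavage,
    # standing in for Tld activity, releases BMP again.
    dx = 1.0                                  # grid spacing (arbitrary units)
    x = np.arange(n) * dx                     # ventral at x=0, dorsal at x=100
    source = np.exp(-(x / 10.0) ** 2)         # BMP production in ventral ectoderm
    bmp = np.zeros(n)
    cpx = np.zeros(n)

    def lap(u):                               # Laplacian with zero-flux ends
        padded = np.concatenate([[u[1]], u, [u[-2]]])
        return (padded[:-2] - 2 * u + padded[2:]) / dx**2

    for _ in range(steps):                    # explicit Euler time stepping
        dbmp = (D_bmp * lap(bmp) + source
                - k_bind * bmp + k_cleave * cpx - k_decay * bmp)
        dcpx = D_cpx * lap(cpx) + k_bind * bmp - k_cleave * cpx
        bmp += dt * dbmp
        cpx += dt * dcpx
    return x, bmp, cpx

x, bmp, cpx = simulate()
# Despite the low diffusivity of free BMP, the fast-diffusing complex
# carries it across the domain, so BMP appears at the dorsal end.
```

The time step respects the explicit stability limit (D_cpx·dt/dx² = 0.25 ≤ 0.5), so concentrations remain non-negative throughout the run.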

Relevance: 100.00%

Abstract:

Over the last decade, system integration has grown in popularity as it allows organisations to streamline business processes. Traditionally, system integration has been conducted through point-to-point solutions: as a new integration scenario requirement arises, a custom solution is built between the relevant systems. Bus-based solutions are now preferred, whereby all systems communicate via an intermediary system such as an enterprise service bus, using a common data exchange model. This research investigates the use of a common data exchange model based on open standards, specifically MIMOSA OSA-EAI, for asset management system integration. A case study is conducted that involves the integration of processes between a SCADA, a maintenance decision support and a work management system. A diverse range of software platforms is employed in developing the final solution, all tied together through MIMOSA OSA-EAI-based XML web services. The lessons learned from the exercise are presented throughout the paper.

Relevance: 100.00%

Abstract:

Hydrogel polymers are used worldwide today for the manufacture of soft (or disposable) contact lenses, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998; Larsen, Huff et al., 1990; Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and to relate these measurements to the binding and mobility of the water and, ultimately, the potential for the hydrogel to dehydrate. As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and the common compositions of contact lenses manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989; Hills, Takacs et al., 1989), and the differences were related to differences in water binding and mobility. The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the water content.
A specific binding model confirmed that the water content was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their EWC. However, an alternative modified free-volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content. It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons, for those materials that contain such protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model. The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter that largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel whose water behaves, on average, more like bulk water, or is less strongly 'bound', compared with a lower-EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.
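The proton-exchange interpretation above can be caricatured with a two-site fast-exchange average: the observed relaxation rate is the population-weighted mean of a free pool and a bound/exchanging pool, so a higher-EWC material (smaller bound fraction) behaves more like bulk water. All numbers below are invented for illustration:

```python
def bound_fraction(ewc, w_bound=0.25):
    # Hypothetical assumption: a fixed mass of strongly 'bound' water per
    # gram of polymer, so a higher EWC means a smaller bound fraction of
    # the total water in the gel.
    polymer = 1.0 - ewc
    return min(1.0, w_bound * polymer / ewc)

def observed_R2(f_bound, R2_free=0.4, R2_bound=200.0):
    # Fast-exchange limit: the observed relaxation rate (1/T2, in 1/s)
    # is the population-weighted average of the two pools.
    return (1.0 - f_bound) * R2_free + f_bound * R2_bound

# A higher-EWC material relaxes more slowly (longer T2, closer to bulk water):
T2_low_ewc = 1.0 / observed_R2(bound_fraction(0.38))
T2_high_ewc = 1.0 / observed_R2(bound_fraction(0.70))
```

With these illustrative numbers the 70% EWC gel has a T2 several times longer than the 38% EWC gel, while both stay well below the T2 of pure water, reproducing the trend reported above.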

Relevance: 100.00%

Abstract:

The growth of solid tumours beyond a critical size is dependent upon angiogenesis, the formation of new blood vessels from an existing vasculature. Tumours may remain dormant at microscopic sizes for some years before switching to a mode in which growth of a supportive vasculature is initiated. The new blood vessels supply nutrients, oxygen, and access to routes by which tumour cells may travel to other sites within the host (metastasize). In recent decades an abundance of biological research has focused on tumour-induced angiogenesis in the hope that treatments targeted at the vasculature may result in a stabilisation or regression of the disease: a tantalizing prospect. The complex and fascinating process of angiogenesis has also attracted the interest of researchers in the field of mathematical biology, a discipline that is, for mathematics, relatively new. The challenge in mathematical biology is to produce a model that captures the essential elements and critical dependencies of a biological system. Such a model may ultimately be used as a predictive tool. In this thesis we examine a number of aspects of tumour-induced angiogenesis, focusing on growth of the neovasculature external to the tumour. Firstly we present a one-dimensional continuum model of tumour-induced angiogenesis in which elements of the immune system or other tumour-cytotoxins are delivered via the newly formed vessels. This model, based on observations from experiments by Judah Folkman et al., is able to show regression of the tumour for some parameter regimes. The modelling highlights a number of interesting aspects of the process that may be characterised further in the laboratory. The next model we present examines the initiation positions of blood vessel sprouts on an existing vessel, in a two-dimensional domain. 
This model hypothesises that a simple feedback inhibition mechanism may be used to describe the spacing of these sprouts, with the inhibitor being produced by breakdown of the existing vessel's basement membrane. Finally, we have developed a stochastic model of blood vessel growth and anastomosis in three dimensions. The model has been implemented in C++, includes an OpenGL interface, and uses a novel algorithm for calculating the proximity of the line segments representing a growing vessel. This choice of programming language and graphics interface allows for near-simultaneous calculation and visualisation of blood vessel networks using a contemporary personal computer. In addition, the visualised results may be transformed interactively, and drop-down menus facilitate changes in the parameter values. Visualisation of results is of vital importance in the communication of mathematical information to a wide audience, and we aim to incorporate this philosophy in the thesis. As biological research further uncovers the intriguing processes involved in tumour-induced angiogenesis, we conclude with a comment from mathematical biologist Jim Murray: "Mathematical biology is … the most exciting modern application of mathematics."
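The line-segment proximity test mentioned for the C++ implementation amounts to a minimum-distance computation between two segments. The thesis's own novel algorithm is not reproduced here, but a standard closest-point formulation, sketched in Python, looks like this:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    # Minimum distance between 3D segments [p1,q1] and [p2,q2], using the
    # standard closest-point parameterisation (clamping the closest points
    # of the infinite lines back onto the segments). A generic textbook
    # formulation, not the thesis's proximity algorithm.
    d1, d2 = q1 - p1, q2 - p2
    r = p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= 1e-12 and e <= 1e-12:            # both segments degenerate to points
        return float(np.linalg.norm(r))
    if a <= 1e-12:                           # first segment is a point
        s, t = 0.0, float(np.clip(f / e, 0.0, 1.0))
    else:
        c = d1 @ r
        if e <= 1e-12:                       # second segment is a point
            t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
        else:
            b = d1 @ d2
            denom = a * e - b * b            # zero when segments are parallel
            s = float(np.clip((b * f - c * e) / denom, 0.0, 1.0)) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:                      # clamp t, then recompute s
                t, s = 0.0, float(np.clip(-c / a, 0.0, 1.0))
            elif t > 1.0:
                t, s = 1.0, float(np.clip((b - c) / a, 0.0, 1.0))
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

# e.g. two parallel vessel segments one unit apart:
d = segment_distance(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))
```

In a vessel-growth simulation this test, applied to each candidate pair of segments, is what detects anastomosis (one sprout meeting another within a capture radius).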

Relevance: 100.00%

Abstract:

This paper investigates theoretically and numerically local heating effects in plasmon nanofocusing structures with a particular focus on the sharp free-standing metal wedges. The developed model separates plasmon propagation in the wedge from the resultant heating effects. Therefore, this model is only applicable where the temperature increments in a nanofocusing structure are sufficiently small not to result in significant variations of the metal permittivity in the wedge. The problem is reduced to a one-dimensional heating model with a distributed heat source resulting from plasmon dissipation in the metal wedge. A simple heat conduction equation governing the local heating effects in a nanofocusing structure is derived and solved numerically for plasmonic pulses of different lengths and reasonable energies. Both the possibility of achieving substantial local temperature increments in the wedge (with a significant self-influence of the heating plasmonic pulses), and the possibility of relatively weak heating (to ensure the validity of the previously developed nanofocusing theory) are demonstrated and discussed, including the future applications of the obtained results. Applicability conditions for the developed model are also derived and discussed.
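The reduced one-dimensional heating picture (a heat-conduction equation with a distributed source from plasmon dissipation concentrated near the wedge tip) can be sketched with an explicit finite-difference scheme. All material constants, the source profile, and the boundary conditions below are placeholder assumptions, not the paper's values:

```python
import numpy as np

def temperature_rise(n=201, L=1e-6, steps=5000,
                     kappa=1.2e-4,      # thermal diffusivity (m^2/s), assumed
                     q0=1e9,            # volumetric source / (rho*c) (K/s), assumed
                     decay=2e-7):       # source decay length (m), assumed
    # 1D heat equation dT/dt = kappa * d2T/dx2 + q(x), with the
    # dissipation source decaying away from the wedge tip at x=0.
    dx = L / (n - 1)
    dt = 0.4 * dx**2 / kappa            # below the explicit stability limit
    x = np.linspace(0.0, L, n)
    q = q0 * np.exp(-x / decay)         # plasmon dissipation near the tip
    T = np.zeros(n)                     # temperature increment above ambient
    for _ in range(steps):
        lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
        T = T + dt * (kappa * lap + q)
        T[0] = T[-1] = 0.0              # ends held at ambient temperature
    return x, T

x, T = temperature_rise()
```

For a short "pulse" (the few hundred picoseconds simulated here) the increment peaks close to the tip, where the dissipation is concentrated, which is the regime the applicability conditions in the paper are concerned with.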

Relevance: 100.00%

Abstract:

The Transport Layer Security (TLS) protocol is the most widely used security protocol on the Internet. It supports negotiation of a wide variety of cryptographic primitives through different cipher suites, various modes of client authentication, and additional features such as renegotiation. Despite its widespread use, only recently has the full TLS protocol been proven secure, and only the core cryptographic protocol with no additional features. These additional features have been the cause of several practical attacks on TLS. In 2009, Ray and Dispensa demonstrated how TLS renegotiation allows an attacker to splice together its own session with that of a victim, resulting in a man-in-the-middle attack on TLS-reliant applications such as HTTP. TLS was subsequently patched with two defence mechanisms for protection against this attack. We present the first formal treatment of renegotiation in secure channel establishment protocols. We add optional renegotiation to the authenticated and confidential channel establishment model of Jager et al., an adaptation of the Bellare–Rogaway authenticated key exchange model. We describe the attack of Ray and Dispensa on TLS within our model. We show generically that the proposed fixes for TLS offer good protection against renegotiation attacks, and give a simple new countermeasure that provides renegotiation security for TLS even in the face of stronger adversaries.

Relevance: 100.00%

Abstract:

The use of immobilised TiO2 for the purification of polluted water streams introduces the need to evaluate the effect of mechanisms such as the transport of pollutants from the bulk of the liquid to the catalyst surface and the transport phenomena inside the porous film. Experimental results of the effects of film thickness on the observed reaction rate, for both liquid-side and support-side illumination, are here compared with the predictions of a one-dimensional mathematical model of the porous photocatalytic slab. Good agreement was observed between the experimentally obtained photodegradation of phenol and its by-products and the corresponding model predictions. The results have confirmed that an optimal catalyst thickness exists and, for the films employed here, is 5 μm. Furthermore, the modelling results have highlighted that porosity and the intrinsic reaction kinetics are the parameters controlling the photocatalytic activity of the film: the former by influencing transport phenomena and light absorption characteristics, the latter by dictating the rate of reaction.
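The existence of an optimal thickness can be illustrated with a deliberately simplified toy: for support-side illumination, light decays into the film from one face (Beer-Lambert) while the pollutant penetrates from the opposite face, and the observed rate integrates their overlap. All parameters are invented; this is not the paper's model:

```python
import numpy as np

def observed_rate(L, alpha=0.5, lam=1.0, n=2000):
    # Light enters at x=0 with absorption coefficient alpha; pollutant
    # enters from the liquid at x=L with penetration depth lam. The
    # observed rate is the integral of their product over the film.
    x = np.linspace(0.0, L, n)
    f = np.exp(-alpha * x) * np.exp(-(L - x) / lam)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoidal rule

thicknesses = np.linspace(0.1, 10.0, 100)   # film thickness (arbitrary units)
rates = np.array([observed_rate(L) for L in thicknesses])
L_opt = thicknesses[np.argmax(rates)]       # a finite optimum appears
```

Too thin a film wastes light; too thick a film puts the illuminated region out of the pollutant's reach, so the rate passes through a maximum at an intermediate thickness (analytically at L = 2 ln 2 / alpha for these toy parameters).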

Relevance: 100.00%

Abstract:

Perfluorooctanoic acid (PFOA) and perfluorooctane sulfonic acid (PFOS) have been used since the 1950s for a variety of applications, including fluoropolymer processing, fire-fighting foams and surface treatments. Both PFOS and PFOA are polyfluoroalkyl chemicals (PFCs), man-made compounds that are persistent in the environment and in humans; some PFCs have shown adverse effects in laboratory animals. Here we describe the application of a simple one-compartment pharmacokinetic model to estimate total intakes of PFOA and PFOS for the general population of urban areas on the east coast of Australia. Key parameters for this model include the elimination rate constants and the volume of distribution within the body. A volume of distribution of 170 ml/kg bw was calibrated for PFOA using data from two communities in the United States where the residents' serum concentrations could be assumed to result primarily from a known and characterised source: drinking water contaminated with PFOA by a single fluoropolymer manufacturing facility. For PFOS, a value of 230 ml/kg bw was used, based on adjustment of the PFOA value. Applying measured Australian serum data to the model gave mean ± standard deviation intake estimates of PFOA of 1.6 ± 0.3 ng/kg bw/day for males and females >12 years of age combined, based on samples collected in 2002-2003, and 1.3 ± 0.2 ng/kg bw/day based on samples collected in 2006-2007. Mean intakes of PFOS were 2.7 ± 0.5 ng/kg bw/day for males and females >12 years of age combined, based on samples collected in 2002-2003, and 2.4 ± 0.5 ng/kg bw/day for the 2006-2007 samples. An ANOVA for PFOA intake demonstrated significant differences by age group (p=0.03), sex (p=0.001) and date of collection (p<0.001). Estimated intake rates were highest in those aged >60 years, higher in males than in females, and higher in 2002-2003 than in 2006-2007.
The same pattern was seen for PFOS intake, with significant differences by age group (p<0.001), sex (p=0.001) and date of collection (p=0.016).
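The back-calculation described above follows from the steady-state mass balance of a one-compartment model with first-order elimination: intake = C_serum × k_e × V_d, with k_e = ln 2 / t_half. A minimal sketch, in which the half-life value and serum level are illustrative assumptions while the V_d values follow the text:

```python
import math

def daily_intake(serum_ng_per_ml, half_life_years, vd_ml_per_kg):
    # Steady-state one-compartment back-calculation:
    # intake (ng/kg bw/day) = C_serum * k_e * V_d.
    k_e = math.log(2) / (half_life_years * 365.25)   # elimination rate, per day
    return serum_ng_per_ml * k_e * vd_ml_per_kg

# Hypothetical PFOA serum level of 7 ng/ml, assumed elimination half-life
# of 3.8 years, and the V_d of 170 ml/kg bw calibrated in the text:
intake_pfoa = daily_intake(7.0, 3.8, 170.0)
```

The same function with V_d = 230 ml/kg bw and a PFOS half-life would give the corresponding PFOS estimate; the intake scales linearly with the serum concentration and inversely with the assumed half-life.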

Relevance: 100.00%

Abstract:

Purpose: The purpose of this review is to address important methodological issues related to conducting accelerometer-based assessments of physical activity in free-living individuals. Methods: We review the extant scientific literature for empirical information related to the following issues: product selection, number of accelerometers needed, placement of accelerometers, epoch length, and days of monitoring required to estimate habitual physical activity. We also discuss the various options related to distributing and collecting monitors, and strategies to enhance compliance with the monitoring protocol. Results: No definitive evidence currently exists to indicate that one make and model of accelerometer is more valid and reliable than another. Selection of an accelerometer therefore remains primarily an issue of practicality, technical support, and comparability with other studies. Studies employing multiple accelerometers to estimate energy expenditure report only marginal improvements in explanatory power. Accelerometers are best placed on the hip or the lower back. Although the issue of epoch length has not been studied in adults, the use of count cut-points based on 1-min time intervals may be inappropriate in children and may result in underestimation of physical activity. Among adults, 3–5 d of monitoring is required to reliably estimate habitual physical activity. Among children and adolescents, the number of monitoring days required ranges from 4 to 9 d, making it difficult to draw a definitive conclusion for this population. Face-to-face distribution and collection of accelerometers is probably the best option in field-based research, but delivery and return by express carrier or registered mail is a viable option. Conclusion: Accelerometer-based activity assessment requires careful planning and the use of appropriate strategies to increase compliance.

Relevance: 100.00%

Abstract:

Multiscale numerical modeling of the species balance and transport in the ionized gas phase and on the nanostructured solid surface, complemented by a heat exchange model, is used to demonstrate the possibility of minimizing the Gibbs–Thomson effect in low-temperature, low-pressure, chemically active plasma-assisted growth of uniform arrays of very thin Si nanowires, which is impossible otherwise. It is shown that plasma-specific effects drastically shorten and decrease the dispersion of the incubation times for the nucleation of nanowires on non-uniform Au catalyst nanoparticle arrays. The fast nucleation makes it possible to avoid the common problem of small catalyst nanoparticles being buried by amorphous silicon. These results explain a multitude of experimental observations on chemically active plasma-assisted Si nanowire growth and can be used for the synthesis of a range of inorganic nanowires for environmental, biomedical, energy conversion, and optoelectronic applications.

Relevance: 100.00%

Abstract:

The use of bat detectors to monitor bat activity is common. Although several papers have compared the performance of different brands, none have dealt with the effect of different habitats nor have they compared narrow- and broad-band detectors. In this study the performance of four brands of ultrasonic bat detector, including three narrowband and one broad-band model, were compared for their ability to detect a 40 kHz continuous sound of variable amplitude along 100 metre transects. Transects were laid out in two contrasting bat habitat types: grassland and forest. Results showed that the different brands of detector differed in their ability to detect the source in terms of maximum and minimum detectable distance of the source. The rate of sound degradation with distance as measured by each brand was also different. Significant differences were also found in the performance of different brands in open grassland versus deep forest. No significant differences were found within any brand of detector. Though not as sensitive as narrow-band detectors, broad-band models hold an advantage in their ability to identify species where several species are found sympatrically.