903 results for Principle component
Abstract:
The school environment plays an important role in shaping adolescent outcomes, and research increasingly demonstrates the need to target the school social context in health promotion programs. This paper describes the research process undertaken to design a school connectedness component of an injury prevention program for early adolescents, Skills for Preventing Injury in Youth (SPIY). The connectedness component takes the form of a professional development workshop for teachers on increasing students’ connectedness to school, and this paper describes the research process used to construct program material. It also describes the methods used to encourage teachers’ implementation of connectedness strategies following program delivery. A multi-stage process of data collection included: (i) surveys with 540 Grade 9 students to examine links between school connectedness and risk-related injury, (ii) a systematic literature review of previously-evaluated school connectedness programs to determine key strategies that encourage implementation fidelity and program effectiveness, and (iii) interviews with 14 high school teachers to understand current use of connectedness strategies and ideas for program design. Findings from each stage are discussed in terms of how results informed the program design. The survey data provided information from which to frame program content, and the results of the systematic review demonstrated effective program strategies. The teacher interview data also provided program content incorporating target participants’ views and aligning with their priorities, which is important to ensure effective implementation of program strategies. A comprehensive design process provides an understanding of methods for, and may encourage, teachers’ future implementation of program strategies.
Abstract:
Late discovery is a term used to describe the experience of discovering the truth of one’s genetic origins as an adult. Following discovery, late discoverers face a lack of recognition and acknowledgment of their concerns from family, friends, community and institutions. They experience pain, anger, loss, grief and frustration. This presentation shares the findings of the first qualitative study of both late discovery of adoptive and donor insemination offspring (heterosexual couple use only) experiences. It is also the first study of late discovery experiences undertaken from an ethical perspective. While this study recruited new participants, it also included an ethical re-analysis of existing late discovery accounts across both practices. The findings of this study (a) draw links between past adoption and current donor insemination (heterosexual couple only) practices, (b) reveal that late discoverers are demanding acknowledgment and recognition of the particularity of their experiences, and (c) offer insights into conceptual understandings of the ‘best interests of the child’ principle. These insights derive from the lived experiences of those whose biological and social worlds have been sundered and secrecy and denial of difference used to conceal this. It suggests that acknowledging the equal moral status of the child may be useful in strengthening conceptual understandings of the ‘best interests of the child’ principle. This equal moral status involves ensuring that personal autonomy and the ability to exercise free will are protected; that the integrity of the relationships of trust expected and demanded between parent/s and children is defended and supported; and that equal access to normative socio-cultural practices, that is, non-fictionalised birth certificates and open records, is guaranteed.
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains, and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
Localized planar patterns arise in many reaction-diffusion models. Most of the paradigm equations that have been studied so far are two-component models. While stationary localized structures are often found to be stable in such systems, travelling patterns either do not exist or are found to be unstable. In contrast, numerical simulations indicate that localized travelling structures can be stable in three-component systems. As a first step towards explaining this phenomenon, a planar singularly perturbed three-component reaction-diffusion system that arises in the context of gas-discharge systems is analysed in this paper. Using geometric singular perturbation theory, the existence and stability regions of radially symmetric stationary spot solutions are delineated and, in particular, stable spots are shown to exist in appropriate parameter regimes. This result opens up the possibility of identifying and analysing drift and Hopf bifurcations, and their criticality, from the stationary spots described here.
Abstract:
The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime for which it was initially developed to the semistrong interaction regime. The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of $N=1,2,3,4$, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
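For orientation, a commonly used nondimensionalised form of this paradigm model (the scaling and parameter names below follow related work on the same system and are quoted here only as an illustrative sketch) is

\[
\begin{aligned}
U_t &= U_{xx} + U - U^3 - \varepsilon(\alpha V + \beta W + \gamma),\\
\tau V_t &= \frac{1}{\varepsilon^{2}}\,V_{xx} + U - V,\\
\theta W_t &= \frac{D^{2}}{\varepsilon^{2}}\,W_{xx} + U - W,
\end{aligned}
\]

with $0 < \varepsilon \ll 1$, a bistable cubic activator $U$, and two linear inhibitors $V$ and $W$ acting on long spatial scales; the semistrong regime discussed above corresponds to fronts whose activator is near equilibrium between interfaces while both inhibitors remain far from it.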
Abstract:
In this article, we analyze the three-component reaction-diffusion system originally developed by Schenk et al. (PRL 78:3781–3784, 1997). The system consists of bistable activator-inhibitor equations with an additional inhibitor that diffuses more rapidly than the standard inhibitor (or recovery variable). It has been used by several authors as a prototype three-component system that generates rich pulse dynamics and interactions, and this richness is the main motivation for the analysis we present. We demonstrate the existence of stationary one-pulse and two-pulse solutions, and travelling one-pulse solutions, on the real line, and we determine the parameter regimes in which they exist. Also, for one-pulse solutions, we analyze various bifurcations, including the saddle-node bifurcation in which they are created, as well as the bifurcation from a stationary to a travelling pulse, which we show can be either subcritical or supercritical. For two-pulse solutions, we show that the third component is essential, since the reduced bistable two-component system does not support them. We also analyze the saddle-node bifurcation in which two-pulse solutions are created. The analytical method used to construct all of these pulse solutions is geometric singular perturbation theory, which allows us to show that these solutions lie in the transverse intersections of invariant manifolds in the phase space of the associated six-dimensional travelling wave system. Finally, as we illustrate with numerical simulations, these solutions form the backbone of the rich pulse dynamics this system exhibits, including pulse replication, pulse annihilation, breathing pulses, and pulse scattering, among others.
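As a sketch of the phase-space reduction mentioned above (using the scaled form quoted after the previous abstract; the notation is illustrative), the travelling-wave ansatz $(U,V,W)(x,t) = (u,v,w)(\xi)$ with $\xi = x - ct$ turns the three second-order equations into the six-dimensional first-order system

\[
\begin{aligned}
u' &= p, & p' &= -c\,p - u + u^{3} + \varepsilon(\alpha v + \beta w + \gamma),\\
v' &= q, & q' &= \varepsilon^{2}\,(v - u - c\tau q),\\
w' &= r, & r' &= \frac{\varepsilon^{2}}{D^{2}}\,(w - u - c\theta r),
\end{aligned}
\]

whose homoclinic and heteroclinic orbits, found as transverse intersections of invariant manifolds, correspond to the pulse and front solutions constructed in the abstract (with $c = 0$ for the stationary ones).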
Abstract:
In this article, we analyze the stability and the associated bifurcations of several types of pulse solutions in a singularly perturbed three-component reaction-diffusion equation that has its origin as a model for gas discharge dynamics. Due to the richness and complexity of the dynamics generated by this model, it has in recent years become a paradigm model for the study of pulse interactions. A mathematical analysis of pulse interactions is based on detailed information on the existence and stability of isolated pulse solutions. The existence of these isolated pulse solutions was established in previous work. Here, the pulse solutions are studied via an Evans function associated with the linearized stability problem. Evans functions for stability problems in singularly perturbed reaction-diffusion models can be decomposed into a fast and a slow component, and their zeroes can be determined explicitly by the NLEP method. In the context of the present model, we have extended the NLEP method so that it can be applied to multi-pulse and multi-front solutions of singularly perturbed reaction-diffusion equations with more than one slow component. The bulk of this article is devoted to the analysis of the stability characteristics and the bifurcations of the pulse solutions. Our methods enable us to obtain explicit, analytical information on the various types of bifurcations, such as saddle-node bifurcations, Hopf bifurcations in which breathing pulse solutions are created, and bifurcations into travelling pulse solutions, which can be both subcritical and supercritical.
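A brief note on the stability criterion behind this approach, sketched in standard Evans-function terms rather than the specific construction of this paper: the Evans function $\mathcal{D}(\lambda)$ is analytic and vanishes exactly at the eigenvalues of the linearisation about the pulse, and in the singularly perturbed setting it factors, to leading order, as

\[
\mathcal{D}(\lambda) \approx \mathcal{D}_{\mathrm{fast}}(\lambda)\,\mathcal{D}_{\mathrm{slow}}(\lambda),
\]

with the NLEP analysis locating the zeroes of the slow factor explicitly. Spectral stability then requires that, apart from the translation eigenvalue at $\lambda = 0$, no zeroes lie in the closed right half-plane.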
Abstract:
We investigate regions of bistability between different travelling and stationary structures in a planar singularly perturbed three-component reaction-diffusion system that arises in the context of gas discharge systems. In previous work, we delineated the existence and stability regions of stationary localized spots in this system. Here, we complement this analysis by establishing the stability regions of planar travelling fronts and stationary stripes. Taken together, these results imply that stable fronts and spots can coexist in three-component systems. Numerical simulations indicate that the stable fronts never move towards stable spots but instead move away from them.
Abstract:
Schizophrenia is often characterised by diminished self-experience. This article describes the development and principles of a manual for a psychotherapeutic treatment model that aims to enhance self-experience in people diagnosed with schizophrenia. Metacognitive Narrative Psychotherapy draws upon dialogical theory of self and the work of Lysaker and colleagues, in conjunction with narrative principles of therapy as operationalised by Vromans. To date, no manual for a metacognitive narrative approach to the treatment of schizophrenia exists. After a brief description of narrative understandings of schizophrenia, the development of the manual is described. Five general phases of treatment are outlined: (1) developing a therapeutic relationship; (2) eliciting narratives; (3) enhancing metacognitive capacity; (4) enriching narratives; and (5) living enriched narratives. Proscribed practices are also described. Examples of therapeutic interventions and dialogue are provided to further explain the application of interventions in-session. The manual has been piloted in a study investigating the effectiveness of Metacognitive Narrative Psychotherapy in the treatment of people diagnosed with schizophrenia spectrum disorders.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the principles of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models do not fully utilise all three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model to produce more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and more pressing question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also captures the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore they update and revise the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil, and wear in a component. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators could be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that this model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, is not required in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimated results with those of other existing covariate-based hazard models. The comparison results demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
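To make the contrast with PHM concrete, a standard Proportional Hazard Model and a hazard of the general shape described above for EHM can be written as follows (an illustrative sketch only: the exact functional form, link function $\psi$, and parameterisation used in the thesis are not stated in this abstract):

\[
h_{\mathrm{PHM}}(t \mid \mathbf{z}) = h_0(t)\,\exp\bigl(\boldsymbol{\beta}^{\top}\mathbf{z}\bigr),
\qquad
h_{\mathrm{EHM}}\bigl(t \mid \mathbf{z}_c(t), \mathbf{z}_e(t)\bigr) = h_0\bigl(t, \mathbf{z}_c(t)\bigr)\,\psi\bigl(\boldsymbol{\gamma}^{\top}\mathbf{z}_e(t)\bigr),
\]

where $\mathbf{z}_c$ denotes the condition indicators entering the baseline hazard and $\mathbf{z}_e$ the operating environment indicators entering the covariate function, matching the two distinct roles the abstract assigns to these covariates.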
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
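As a minimal illustration of the kind of similarity grouping such a framework automates (a sketch only: the attribute names, weights, and threshold below are hypothetical and do not reproduce the paper's ontology or prototype), components described by geometric and symbolic attributes can be compared pairwise and greedily grouped in a few lines of Python:

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    length_mm: float
    depth_mm: float
    material: str

def similarity(a: Component, b: Component) -> float:
    """Blend geometric closeness with a symbolic (material) match; weights are illustrative."""
    geometric = 1.0 - min(1.0, abs(a.length_mm - b.length_mm) / 10000.0
                               + abs(a.depth_mm - b.depth_mm) / 1000.0)
    symbolic = 1.0 if a.material == b.material else 0.0
    return 0.7 * geometric + 0.3 * symbolic

def group(components, threshold=0.8):
    """Greedy single-link grouping: add a component to the first group with a sufficiently similar member."""
    groups = []
    for c in components:
        for g in groups:
            if any(similarity(c, m) >= threshold for m in g):
                g.append(c)
                break
        else:
            groups.append([c])
    return groups

beams = [Component("B1", 6000, 450, "concrete"),
         Component("B2", 6050, 450, "concrete"),
         Component("B3", 9000, 600, "steel")]
for i, g in enumerate(group(beams), 1):
    print(f"group {i}: {[c.name for c in g]}")

In a real product model the attribute vectors would come from the geometric, topological, and symbolic features the ontology characterises, and the degree of similarity within each group could feed directly into the cost estimating application.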
Abstract:
The lack of an obvious “band gap” is a formidable hurdle for making a nanotransistor from graphene. Here, we use density functional calculations to demonstrate for the first time that porosity such as evidenced in recently synthesized porous graphene (http://www.sciencedaily.com/releases/2009/11/091120084337.htm) opens a band gap. The size of the band gap (3.2 eV) is comparable to most popular photocatalytic titania and graphitic C3N4 materials. In addition, the adsorption of hydrogen on Li-decorated porous graphene is much stronger than that in regular Li-doped graphene due to the natural separation of Li cations, leading to a potential hydrogen storage gravimetric capacity of 12 wt %. In light of the most recent experimental progress on controlled synthesis, these results uncover new potential for the practical application of porous graphene in nanoelectronics and clean energy.
Abstract:
Undergraduate programs can play an important role in the development of individuals wanting professional employment within statutory child protection agencies: both the coursework and the work-integrated learning (WIL) components of degrees have a role in this process. This paper uses a collective case study methodology to examine the perceptions and experiences of first year practitioners within a specific statutory child protection agency in order to identify whether they felt prepared for their current role. The sample of 20 participants came from a range of discipline backgrounds, with just over half of the sample (55 per cent) completing a WIL placement as part of their undergraduate studies. The results indicate that while some participants were able to identify and articulate specific benefits from their undergraduate coursework studies, all participants who had undertaken a WIL placement as part of their degree believed the WIL placement was beneficial for their current work.
Abstract:
Most existing research on maintenance optimisation for multi-component systems only considers the lifetime distribution of the components. When the condition-based maintenance (CBM) strategy is adopted for multi-component systems, the strategy structure becomes complex due to the large number of component states and their combinations. Consequently, some predetermined maintenance strategy structures are often assumed before the maintenance optimisation of a multi-component system in a CBM context. Developing these predetermined strategy structures requires expert experience, and the optimality of these strategies is often not proven. This paper proposes a maintenance optimisation method that does not require any predetermined strategy structure for a two-component series system. The proposed method is developed based on the semi-Markov decision process (SMDP). A simulation study shows that the proposed method can identify the optimal maintenance strategy adaptively for different maintenance costs and parameters of degradation processes. The optimal maintenance strategy structure is also investigated in the simulation study, which provides a reference for further research in maintenance optimisation of multi-component systems.
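To illustrate the flavour of optimising a CBM strategy without a predetermined structure (a deliberately simplified sketch: the paper formulates the problem as an SMDP, whereas this toy example uses a discrete-time Markov decision process with invented degradation probabilities and costs), value iteration over the joint component states yields a maintenance action per state rather than a pre-assumed threshold rule:

import itertools

LEVELS = [0, 1, 2, 3]                       # 0 = as new ... 3 = failed (hypothetical)
ACTIONS = ["none", "replace_1", "replace_2", "replace_both"]
C_REPLACE, C_SETUP, C_DOWNTIME = 100.0, 50.0, 400.0   # invented costs
GAMMA = 0.95                                # discount factor

def apply_action(state, action):
    """Replacement resets the chosen component(s) to the 'as new' level."""
    s1, s2 = state
    if action in ("replace_1", "replace_both"):
        s1 = 0
    if action in ("replace_2", "replace_both"):
        s2 = 0
    return s1, s2

def degrade(level):
    """Per-component degradation: stay put or move one level up; failure is absorbing."""
    return {3: 1.0} if level == 3 else {level: 0.6, level + 1: 0.4}

def step_cost(state, action):
    cost = 0.0
    if action != "none":
        cost += C_SETUP + C_REPLACE * (2 if action == "replace_both" else 1)
    s1, s2 = apply_action(state, action)
    if s1 == 3 or s2 == 3:                  # series system: down if any component is failed
        cost += C_DOWNTIME
    return cost

def transitions(state, action):
    """Joint transition distribution after maintenance has been applied."""
    s1, s2 = apply_action(state, action)
    return {(n1, n2): p1 * p2
            for n1, p1 in degrade(s1).items()
            for n2, p2 in degrade(s2).items()}

states = list(itertools.product(LEVELS, LEVELS))
V = {s: 0.0 for s in states}
for _ in range(500):                        # value iteration
    V = {s: min(step_cost(s, a)
                + GAMMA * sum(p * V[n] for n, p in transitions(s, a).items())
                for a in ACTIONS)
         for s in states}

policy = {s: min(ACTIONS, key=lambda a, s=s: step_cost(s, a)
                 + GAMMA * sum(p * V[n] for n, p in transitions(s, a).items()))
          for s in states}
for s in states:
    print(s, "->", policy[s])

The point of the sketch is that the optimal action emerges per joint state from the optimisation itself, which mirrors the paper's argument against assuming a maintenance strategy structure in advance.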
Abstract:
This paper provides an overview of the regulatory developments in the UK which impact on the use of in vitro fertilization (IVF) and embryo screening techniques for the creation of “saviour siblings.” Prior to the changes implemented under the Human Fertilisation and Embryology Act 2008, this specific use of IVF was not addressed by the legislative framework and regulated only by way of policy issued by the Human Fertilisation and Embryology Authority (HFEA). Following the implementation of the statutory reforms, a number of restrictive conditions are now imposed on the face of the legislation. This paper considers whether there is any justification for restricting access to IVF and pre-implantation tissue typing for the creation of “saviour siblings.” The analysis is undertaken by examining the normative factors that have guided the development of the UK regulatory approach prior to the 2008 legislative reforms. The approach adopted in relation to the “saviour sibling” issue is compared to more general HFEA policy, which has prioritized the notion of reproductive choice and determined that restrictions on access are only justified on the basis of harm considerations.