948 results for Model driven developments
Abstract:
The neurodevelopmental hypothesis (NDH) of schizophrenia suggests that a disruption of brain development during early life underlies the later emergence of psychosis during adulthood. The aim of this review is to chart the challenges and subsequent refinements to this hypothesis, with particular reference to the static versus progressive nature of the putative neurobiological processes underlying the NDH. A non-systematic literature review was undertaken, with an emphasis on major review papers relevant to the NDH. Weaknesses in the explanatory power of the NDH have led to a new generation of more refined hypotheses in recent years. In particular, recent versions of the hypothesis have incorporated evidence from structural neuroimaging which suggests changes in brain volumes after the onset of schizophrenia. More detailed models that incorporate progressive neurobiological processes have replaced early versions of the NDH, which were based on a 'static encephalopathy'. In addition, recent models have suggested that two or more 'hits' are required over the lifespan rather than only one early-life event. Animal models are providing important insights into the sequelae of disturbed early brain development. The NDH has provided great impetus to the schizophrenia research community. Recent versions of the hypothesis have encouraged more focused and testable hypotheses.
Abstract:
The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. © 2002 Elsevier Science Ltd. All rights reserved.
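The weighted-average aggregation of fuzzy membership grades described in this abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the geo-variables, the linear membership functions, the cell values, and the weights are all hypothetical (in the paper, weights are fitted from training cells).

```python
import numpy as np

def linear_membership(x, lo, hi):
    """Piecewise-linear membership: 0 at or below lo, 1 at or above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical geo-variable values for three grid cells
geochem = np.array([10.0, 55.0, 80.0])              # e.g. Sn anomaly (ppm)
magnetic = np.array([5.0, 40.0, 90.0])              # anomaly amplitude
dist_to_fault = np.array([900.0, 400.0, 50.0])      # metres; closer is better

# Membership grades of each geo-variable, treated as a fuzzy set per cell
mu = np.vstack([
    linear_membership(geochem, 0.0, 100.0),
    linear_membership(magnetic, 0.0, 100.0),
    1.0 - linear_membership(dist_to_fault, 0.0, 1000.0),
])

# Weighted-average aggregation operator; in the paper the weights would be
# derived from training areas, here they are chosen for illustration
weights = np.array([0.5, 0.3, 0.2])
mfi = weights @ mu   # membership grades of the aggregated set = the MFI
```

Because the weights sum to one and each membership grade lies in [0, 1], the resulting MFI is itself a valid membership grade per cell, which is what lets it be read directly as a favourability index.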
Abstract:
While binge drinking (episodic or irregular consumption of excessive amounts of alcohol) is recognised as a serious problem affecting our youth, to date there has been a lack of psychological theory and thus of theoretically driven research into this problem. The current paper develops a cognitive model using the key constructs of alcohol expectancies (AEs) and drinking refusal self-efficacy (DRSE) to explain the acquisition and maintenance of binge drinking. It is suggested that the four combinations of AE and DRSE can explain four drinking styles: normal/social drinkers, binge drinkers, regular heavy drinkers, and problem drinkers or alcoholics. Since AE and DRSE are cognitive constructs and therefore modifiable, the cognitive model can facilitate the design of intervention and prevention strategies for binge drinking. © 2003 Elsevier Ltd. All rights reserved.
Abstract:
A systematic goal-driven top-down modelling methodology is proposed that is capable of developing a multiscale model of a process system for given diagnostic purposes. The diagnostic goal-set and the symptoms are extracted from HAZOP analysis results, where the possible actions to be performed in a fault situation are also described. The multiscale dynamic model is realized in the form of a hierarchical coloured Petri net by using a novel substitution place-transition pair. Multiscale simulation that focuses automatically on the fault areas is used to predict the effect of the proposed preventive actions. The notions and procedures are illustrated on some simple case studies including a heat exchanger network and a more complex wet granulation process.
Abstract:
Sorghum is the main dryland summer crop in NE Australia and a number of agricultural businesses would benefit from an ability to forecast production likelihood at regional scale. In this study we sought to develop a simple agro-climatic modelling approach for predicting shire (statistical local area) sorghum yield. Actual shire yield data, available for the period 1983-1997 from the Australian Bureau of Statistics, were used to train the model. Shire yield was related to a water stress index (SI) that was derived from the agro-climatic model. The model involved a simple fallow and crop water balance that was driven by climate data available at recording stations within each shire. Parameters defining the soil water holding capacity, maximum number of sowings (MXNS) in any year, planting rainfall requirement, and critical period for stress during the crop cycle were optimised as part of the model fitting procedure. Cross-validated correlations (CVR) ranged from 0.5 to 0.9 at shire scale. When aggregated to regional and national scales, 78-84% of the annual variation in sorghum yield was explained. The model was used to examine trends in sorghum productivity and the approach to using it in an operational forecasting system was outlined. © 2005 Elsevier B.V. All rights reserved.
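A simple bucket water balance driving a stress index, of the general kind this abstract describes, can be sketched as below. This is a toy illustration, not the fitted model from the study: the soil water capacity, the critical-period window, and the daily inputs are hypothetical stand-ins for the parameters the authors optimised.

```python
def water_stress_index(rain, evap, capacity=150.0, critical=slice(30, 60)):
    """Toy daily bucket water balance. SI is the mean relative soil-water
    deficit over a critical window of the crop cycle (all values illustrative).
    rain, evap: equal-length daily sequences in mm."""
    store = capacity          # assume the fallow starts with a full profile
    deficits = []
    for r, e in zip(rain, evap):
        store = min(capacity, store + r)   # infiltration; excess runs off
        store = max(0.0, store - e)        # crop/soil water use
        deficits.append(1.0 - store / capacity)
    window = deficits[critical]
    return sum(window) / len(window)

# A wet season yields a low stress index; no rain at all yields a high one
wet = water_stress_index([5.0] * 90, [4.0] * 90)
dry = water_stress_index([0.0] * 90, [4.0] * 90)
```

In the study itself the index was then related statistically to shire yield; a sketch like this only shows how climate-station rainfall and evaporative demand are reduced to a single stress predictor.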
Abstract:
Theoretical developments as well as field and laboratory data have shown the influence of the capillary fringe on water table fluctuations to increase with the fluctuation frequency. The numerical solution of a full, partially saturated flow equation can be computationally expensive. In this paper, the influence of the capillary fringe on water table fluctuations is simplified through its parameterisation into the storage coefficient of a fully-saturated groundwater flow model using the complex effective porosity concept [Nielsen, P., Perrochet, P., 2000. Water table dynamics under capillary fringes: experiments and modelling. Advances in Water Resources 23 (1), 503-515; Nielsen, P., Perrochet, P., 2000. ERRATA: water table dynamics under capillary fringes: experiments and modelling (Advances in Water Resources 23 (2000) 503-515). Advances in Water Resources 23, 907-908]. The model is applied to sand flume observations of periodic water table fluctuations induced by simple harmonic forcing across a sloping boundary, analogous to many beach groundwater systems. While not providing information on the moisture distribution within the aquifer, this approach can reasonably predict the water table fluctuations in response to periodic forcing across a sloping boundary. Furthermore, the coupled ground-surface water model accurately predicts the extent of the seepage face formed at the sloping boundary. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
The development of new methods of producing hypersonic wind-tunnel flows at increasing velocities during the last few decades is reviewed with attention to airbreathing propulsion, hypervelocity aerodynamics and superorbital aerodynamics. The role of chemical reactions in these flows leads to use of a binary scaling simulation parameter, which can be related to the Reynolds number, and which demands that smaller wind tunnels require higher reservoir pressure levels for simulation of flight phenomena. The use of combustion heated vitiated wind tunnels for propulsive research is discussed, as well as the use of reflected shock tunnels for the same purpose. A flight experiment validating shock-tunnel results is described, and relevant developments in shock tunnel instrumentation are outlined. The use of shock tunnels for hypervelocity testing is reviewed, noting the role of driver gas contamination in determining test time, and presenting examples of air dissociation effects on model flows. Extending the hypervelocity testing range into the superorbital regime with useful test times is seen to be possible by use of expansion tube/tunnels with a free piston driver.
Abstract:
Systemic lupus erythematosus (SLE) is characterised by the production of autoantibodies against ubiquitous antigens, especially nuclear components. Evidence makes it clear that the development of these autoantibodies is an antigen-driven process and that immune complexes involving DNA-containing antigens play a key role in the disease process. In rodents, DNase I is the major endonuclease present in saliva, urine and plasma, where it catalyses the hydrolysis of DNA, and impaired DNase function has been implicated in the pathogenesis of SLE. In this study we have evaluated the effects of transgenic overexpression of murine DNase I endonucleases in vivo in a mouse model of lupus. We generated transgenic mice having T-cells that express either wild-type DNase I (wt. DNase I) or a mutant DNase I (ash. DNase I), engineered for three new properties: resistance to inhibition by G-actin, resistance to inhibition by physiological saline, and hyperactivity compared to wild type. By crossing these transgenic mice with a murine strain that develops SLE we found that, compared to control nontransgenic littermates or wt. DNase I transgenic mice, the ash. DNase I mutant provided significant protection from the development of anti-single-stranded DNA and anti-histone antibodies, but not of renal disease. In summary, this is the first study in vivo to directly test the effects of long-term increased expression of DNase I on the development of SLE. Our results are in line with previous reports on the possible clinical benefits of recombinant DNase I treatment in SLE, and extend them further to the use of engineered DNase I variants with increased activity and resistance to physiological inhibitors.
Abstract:
Heat stroke is a life-threatening condition that can be fatal if not appropriately managed. Although heat stroke has been recognised as a medical condition for centuries, a universally accepted definition of heat stroke is lacking and the pathology of heat stroke is not fully understood. Information derived from autopsy reports and the clinical presentation of patients with heat stroke indicates that hyperthermia, septicaemia, central nervous system impairment and cardiovascular failure play important roles in the pathology of heat stroke. The current models of heat stroke advocate that heat stroke is triggered by hyperthermia but is driven by endotoxaemia. Endotoxaemia triggers the systemic inflammatory response, which can lead to systemic coagulation and haemorrhage, necrosis, cell death and multi-organ failure. However, the current heat stroke models cannot fully explain the discrepancies in high core temperature (Tc) as a trigger of heat stroke within and between individuals. Research on the concept of critical Tc as a limitation to endurance exercise implies that a high Tc may function as a signal to trigger the protective mechanisms against heat stroke. Athletes undergoing a period of intense training are subjected to a variety of immune and gastrointestinal (GI) disturbances. The immune disturbances include the suppression of immune cells and their functions, suppression of cell-mediated immunity, translocation of lipopolysaccharide (LPS), suppression of anti-LPS antibodies, increased macrophage activity due to muscle tissue damage, and increased concentration of circulating inflammatory and pyrogenic cytokines. Common symptoms of exercise-induced GI disturbances include diarrhoea, vomiting, gastrointestinal bleeding, and cramps, which may increase gut-related LPS translocation.
This article discusses the current evidence that supports the argument that these exercise-induced immune and GI disturbances may contribute to the development of endotoxaemia and heat stroke. When endotoxaemia can be tolerated or prevented, continuing exercise and heat exposure will elevate Tc to a higher level (> 42 degrees C), where heat stroke may occur through the direct thermal effects of heat on organ tissues and cells. We also discuss the evidence suggesting that heat stroke may occur through endotoxaemia (heat sepsis), the primary pathway of heat stroke, or hyperthermia, the secondary pathway of heat stroke. The existence of these two pathways of heat stroke and the contribution of exercise-induced immune and GI disturbances in the primary pathway of heat stroke are illustrated in the dual pathway model of heat stroke. This model of heat stroke suggests that prolonged intense exercise suppresses anti-LPS mechanisms, and promotes inflammatory and pyrogenic activities in the pathway of heat stroke.
Abstract:
Experiments to design physical activity programs that optimize their osteogenic potential are difficult to accomplish in humans. The aim of this article is to review the contributions that animal studies have made to knowledge of the loading conditions that are osteogenic to the skeleton during growth, as well as to consider to what extent animal studies fail to provide valid models of physical activity and skeletal maturation. Controlled loading studies demonstrate that static loads are ineffective, and that bone formation is threshold driven and dependent on strain rate, amplitude, and duration of loading. Only a few loading cycles per session are required, and distributed bouts are more osteogenic than sessions of long duration. Finally, animal models fail to inform us of the most appropriate ways to account for the variations in biological maturation that occur in our studies of children and adolescents, requiring the use of techniques for studying human growth and development.
Abstract:
The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.
Abstract:
This paper develops and applies an integrated multiple criteria decision making approach to optimize the facility location-allocation problem in the contemporary customer-driven supply chain. Unlike the traditional optimization techniques, the proposed approach, combining the analytic hierarchy process (AHP) and the goal programming (GP) model, considers both quantitative and qualitative factors, and also aims at maximizing the benefits of deliverer and customers. In the integrated approach, the AHP is used first to determine the relative importance weightings or priorities of alternative locations with respect to both deliverer oriented and customer oriented criteria. Then, the GP model, incorporating the constraints of system, resource, and AHP priority, is formulated to select the best locations for setting up the warehouses without exceeding the limited available resources. In this paper, a real case study is used to demonstrate how the integrated approach can be applied to deal with the facility location-allocation problem, and the integrated approach is shown to outperform the traditional cost-based approach.
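The first stage of such an approach, deriving AHP priority weights from pairwise comparisons, can be sketched as follows. The three criteria and the comparison matrix here are hypothetical, chosen only to illustrate the standard principal-eigenvector method; the paper's actual criteria, judgements, and GP formulation are not reproduced.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over three location criteria
# (cost, proximity to customers, infrastructure), on Saaty's 1-9 scale;
# entry A[i, j] states how strongly criterion i is preferred over j.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3,   1.0,   3.0],
    [1/5,   1/3,   1.0],
])

# Priority weights = normalised principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
v = np.abs(eigvecs[:, k].real)
weights = v / v.sum()                 # weights sum to 1

# Consistency check: CI from the principal eigenvalue, CR against the
# random index RI = 0.58 for n = 3; CR < 0.1 is conventionally acceptable
lam_max = eigvals[k].real
ci = (lam_max - 3.0) / (3.0 - 1.0)
cr = ci / 0.58
```

These weights would then enter the GP model as coefficients or priority constraints when choosing among candidate warehouse locations under resource limits.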
Abstract:
A review of the extant literature concludes that market-driven intangibles and innovations are increasingly considered to be the most critical firm-specific resources, but also finds a lack of elaboration of which types of these resources are most important. In this paper, we incorporate these observations into a conceptual model and link it to highly developed institutional settings for the model evaluation. From the point of view of firm revenue management, we anticipate that performance advantages created through the deployment of intellectual and relational capital in marketing and innovation are likely to be superior. In essence, they constitute the integration of organisational intangibles at both the cognitive and behavioural levels to create an idiosyncratic combination for each firm. Our research findings show feasible paths for sharpening the edge of market-driven intangibles and innovations. We discuss the key results for research and practice.
Abstract:
The inclusion of high-level scripting functionality in state-of-the-art rendering APIs indicates a movement toward data-driven methodologies for structuring next generation rendering pipelines. A similar theme can be seen in the use of composition languages to deploy component software using selection and configuration of collaborating component implementations. In this paper we introduce the Fluid framework, which places particular emphasis on the use of high-level data manipulations in order to develop component based software that is flexible, extensible, and expressive. We introduce a data-driven, object oriented programming methodology to component based software development, and demonstrate how a rendering system with a similar focus on abstract manipulations can be incorporated, in order to develop a visualization application for geospatial data. In particular we describe a novel SAS script integration layer that provides access to vertex and fragment programs, producing a very controllable, responsive rendering system. The proposed system is very similar to developments speculatively planned for DirectX 10, but uses open standards and has cross platform applicability. © The Eurographics Association 2007.
Abstract:
Team reflexivity, or the extent to which teams reflect upon and modify their functioning, has attracted much recent research attention. In the current paper, we identify several predictors as well as consequences of reflexivity by reviewing the last decade of literature on team reflexivity. It is observed that team characteristics such as trust and psychological safety among group members, a shared vision, and diversity as well as leadership style of the team’s supervisor influence the level of reflexivity. In addition, team reflexivity is related to a team’s output in terms of innovation, effectiveness, and creativity. Explanations for these effects are discussed and a model including all current findings is presented.