40 results for Grounded theory. GT4CCI. Crosscutting concerns identification. Software modularity


Relevance:

100.00%

Abstract:

Background: Attention deficit hyperactivity disorder (ADHD) can be treated with stimulant medication such as methylphenidate. Although effective, methylphenidate can cause serious side-effects, including suppressed appetite, growth retardation and sleep problems. A drug holiday is a deliberate interruption of pharmacotherapy for a defined period of time and for a specific clinical purpose, for example to alleviate side-effects. Whilst some international guidelines recommend introducing drug holidays in ADHD treatment, this is not practised routinely. Our aim was to examine the views and experiences of planned drug holidays from methylphenidate among adults who have responsibility for treatment decisions for children and adolescents with ADHD. Method: In-depth interviews were carried out. Child and Adolescent Mental Health Services (CAMHS) practitioners (n=8), General Practitioners (n=8), teachers (n=5), and mothers of children with ADHD (n=4) were interviewed in a UK setting. Interview transcripts were analysed using grounded theory. Results: Methylphenidate eases the experience of the child amid problems at home and at school and, once started, is mostly continued long-term. Some families do practise short-term drug holidays at weekends and longer-term ones during school holidays. The decision to introduce drug holidays is influenced by the child’s academic progress, the parents’ ability to cope with the child, as well as medication beliefs. Trialling a drug holiday is thought to allow older children to self-assess their ability to manage without medication when they show signs of wanting to discontinue treatment prematurely. Conclusions: Planned drug holidays could address premature treatment cessation by enabling adolescents to assess repercussions under medical supervision.

Relevance:

40.00%

Abstract:

Asynchronous Optical Sampling (ASOPS) [1,2] and frequency comb spectrometry [3] based on dual Ti:sapphire resonators operated in a master/slave mode have the potential to improve the signal-to-noise ratio in THz transient and IR spectrometry. The multimode Brownian oscillator time-domain response function, described by state-space models, provides a mathematically robust framework for describing dispersive phenomena governed by Lorentzian, Debye and Drude responses. In addition, the optical properties of an arbitrary medium can be expressed as a linear combination of simple multimode Brownian oscillator functions. The suitability of a range of signal processing schemes adopted from the Systems Identification and Control Theory community for further processing the recorded THz transients in the time or frequency domain will be outlined [4,5]. Since a femtosecond duration pulse is capable of persistent excitation of the medium within which it propagates, such an approach is well justified. Several de-noising routines based on system identification will be shown. Furthermore, specifically developed apodization structures will be discussed. These are necessary because, owing to dispersion, the time-domain background and sample interferograms are non-symmetrical [6-8]. These procedures can lead to a more precise estimation of the complex insertion loss function. The algorithms are applicable to femtosecond spectroscopies across the EM spectrum. Finally, a methodology for femtosecond pulse shaping using genetic algorithms, aiming to map and control molecular relaxation processes, will be mentioned.
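As a rough illustration of the kind of model involved (not taken from the paper), the Python sketch below builds a complex permittivity from Lorentzian, Debye and Drude terms over a THz band; all oscillator parameters are assumed, purely illustrative values.

    import numpy as np

    def lorentzian(w, d_eps, w0, gamma):
        # Lorentz oscillator contribution to the complex permittivity
        return d_eps * w0**2 / (w0**2 - w**2 - 1j * gamma * w)

    def debye(w, d_eps, tau):
        # Debye relaxation contribution
        return d_eps / (1.0 + 1j * w * tau)

    def drude(w, wp, gamma):
        # Drude free-carrier contribution
        return -wp**2 / (w**2 + 1j * gamma * w)

    # Hypothetical medium: one phonon line, one relaxation process, free carriers
    w = 2 * np.pi * np.linspace(0.1e12, 3e12, 500)           # angular frequency (rad/s)
    eps = (1.0
           + lorentzian(w, 2.0, 2 * np.pi * 1.5e12, 2 * np.pi * 0.1e12)
           + debye(w, 0.5, 1e-12)
           + drude(w, 2 * np.pi * 0.5e12, 2 * np.pi * 0.2e12))
    n_complex = np.sqrt(eps)     # complex refractive index n + i*kappa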

Relevance:

40.00%

Abstract:

More than thirty years ago, Wind's seminal review of research in market segmentation culminated with a research agenda for the subject area. In the intervening period, research has focused on the development of segmentation bases and models, segmentation research techniques and the identification of statistically sound solutions. Practical questions about implementation and the integration of segmentation into marketing strategy have received less attention, even though practitioners are known to struggle with the actual practice of segmentation. This special issue is motivated by this tension between theory and practice, which has shaped and continues to influence the research priorities for the field. Although many years have elapsed since Wind's original research agenda, pressing questions about effectiveness and productivity remain; namely: (i) concerns about the link between segmentation and performance, and its measurement; and (ii) the notion that productivity improvements arising from segmentation are only achievable if the segmentation process is effectively implemented. These concerns formed the central themes of the call for papers for this special issue, which aims to develop our understanding of segmentation value, productivity and strategies, and of managerial issues and implementation.

Relevance:

30.00%

Abstract:

Question: What are the key physiological and life-history trade-offs responsible for the evolution of different suites of plant traits (strategies) in different environments? Experimental methods: Common-garden experiments were performed on physiologically realistic model plants, evolved in contrasting environments, in computer simulations. This allowed the identification of the trade-offs that resulted in different suites of traits (strategies). The environments considered were: resource rich, low disturbance (competitive); resource poor, low disturbance (stressed); resource rich, high disturbance (disturbed); and stressed environments containing herbivores (grazed). Results: In disturbed environments, plants increased reproduction at the expense of ability to compete for light and nitrogen. In competitive environments, plants traded off reproductive output and leaf production for vertical growth. In stressed environments, plants traded off vertical growth and reproductive output for nitrogen acquisition, contradicting Grime's (2001) theory that slow-growing, competitively inferior strategies are selected in stressed environments. The contradiction is partly resolved by incorporating herbivores into the stressed environment, which selects for increased investment in defence, at the expense of competitive ability and reproduction. Conclusion: Our explicit modelling of trade-offs produces rigorous testable explanations of observed associations between suites of traits and environments.

Relevance:

30.00%

Abstract:

This paper extends the build-operate-transfer (BOT) concession model (BOTCcM) to a new method for identifying a concession period by using bargaining-game theory. The concession period is one of the most important decision variables in arranging a BOT-type contract, and there are few methodologies available for helping to determine the value of this variable. The BOTCcM presents an alternative method by which a set of feasible concession period solutions is produced. Nevertheless, a typical weakness of BOTCcM is that the model cannot recommend a specific time span for the concession. This paper introduces a new method called the BOT bargaining concession model (BOTBaC) to enable the identification of a specific concession period, which takes into account the bargaining behavior of the two parties to a BOT contract, namely the investor and the government concerned. The application of BOTBaC is demonstrated through an example case.
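As a hedged sketch of how a bargaining solution can pin down a single concession period within the interval produced by a BOTCcM-style analysis, the code below uses the Rubinstein alternating-offers split as a stand-in; the discount factors and the 15 to 25 year interval are illustrative assumptions, and this is not the actual BOTBaC formulation.

    def rubinstein_concession(t_min, t_max, delta_investor, delta_government):
        # The investor prefers a longer concession, the government a shorter one;
        # the investor's Rubinstein share of the negotiable range is
        # (1 - delta_government) / (1 - delta_investor * delta_government).
        share = (1 - delta_government) / (1 - delta_investor * delta_government)
        return t_min + share * (t_max - t_min)

    # Interval [15, 25] years from a BOTCcM-style analysis (illustrative values)
    print(rubinstein_concession(15.0, 25.0, delta_investor=0.90, delta_government=0.85))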

Relevance:

30.00%

Abstract:

This paper describes some of the preliminary outcomes of a UK project looking at control education. The focus is on two aspects: (i) the most important control concepts and theories for students taking just one or two control courses, and (ii) the effective use of software to improve student learning and engagement. There is also some discussion of the correct balance between teaching theory and practice. The paper gives examples from numerous UK universities and includes some industrial comment.

Relevance:

30.00%

Abstract:

Dynamic neural networks (DNNs), which are also known as recurrent neural networks, are often used for nonlinear system identification. The main contribution of this letter is the introduction of an efficient parameterization of a class of DNNs. Having fewer parameters to adjust simplifies the training problem and leads to more parsimonious models. The parameterization is based on approximation theory dealing with the ability of a class of DNNs to approximate finite trajectories of nonautonomous systems. The use of the proposed parameterization is illustrated through a numerical example, using data from a nonlinear model of a magnetic levitation system.
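As a minimal illustration of the model class (not the paper's specific parameterization), the sketch below simulates a Hopfield-type dynamic neural network dx/dt = -a*x + W*tanh(x) + B*u(t) along one input trajectory; a reduced parameterization would amount to restricting which entries of W and B are free to be trained. All sizes and values are illustrative.

    import numpy as np

    def simulate_dnn(x0, u, dt, a, W, B):
        # Euler simulation of dx/dt = -a*x + W*tanh(x) + B*u(t)
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for uk in u:
            x = x + dt * (-a * x + W @ np.tanh(x) + B @ np.atleast_1d(uk))
            traj.append(x.copy())
        return np.array(traj)

    # Hypothetical 3-state network with a single input channel
    rng = np.random.default_rng(0)
    W = 0.5 * rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 1))
    u = np.sin(0.1 * np.arange(200))          # excitation signal
    traj = simulate_dnn(np.zeros(3), u, dt=0.01, a=1.0, W=W, B=B)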

Relevance:

30.00%

Abstract:

An efficient model identification algorithm for a large class of linear-in-the-parameters models is introduced that simultaneously optimises the model approximation ability, sparsity and robustness. In each forward regression step the model parameters are initially estimated via orthogonal least squares (OLS) and then tuned with a new gradient-descent learning algorithm, based on basis pursuit, that minimises the l1 norm of the parameter estimate vector. The model subset selection cost function includes a D-optimality design criterion that maximises the determinant of the design matrix of the subset, to ensure model robustness and to enable the model selection procedure to terminate automatically at a sparse model. The proposed approach is based on the forward OLS algorithm using the modified Gram-Schmidt procedure. Both the parameter tuning procedure, based on basis pursuit, and the model selection criterion, based on D-optimality, are integrated with the forward regression, so the inherent computational efficiency associated with the conventional forward OLS approach is maintained in the proposed algorithm. Examples demonstrate the effectiveness of the new approach.
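A stripped-down sketch of the forward OLS backbone (greedy term selection by error reduction ratio with modified Gram-Schmidt orthogonalisation) is given below; the D-optimality weighting and the l1-based parameter tuning of the full algorithm are deliberately omitted, so this illustrates the underlying machinery rather than the proposed method itself. The regressor matrix P and target y are whatever candidate terms the modeller supplies.

    import numpy as np

    def forward_ols(P, y, n_terms):
        # Greedy forward selection of columns of P by error reduction ratio (ERR),
        # orthogonalising each candidate against already-selected terms
        # (modified Gram-Schmidt style).
        N, M = P.shape
        selected, Q = [], []
        yy = y @ y
        for _ in range(n_terms):
            best_err, best_j, best_q = -1.0, None, None
            for j in range(M):
                if j in selected:
                    continue
                q = P[:, j].astype(float).copy()
                for qk in Q:
                    q -= (qk @ q) / (qk @ qk) * qk
                if q @ q < 1e-12:
                    continue                      # candidate is (nearly) redundant
                err = (q @ y) ** 2 / ((q @ q) * yy)
                if err > best_err:
                    best_err, best_j, best_q = err, j, q
            if best_j is None:
                break
            selected.append(best_j)
            Q.append(best_q)
        theta, *_ = np.linalg.lstsq(P[:, selected], y, rcond=None)
        return selected, theta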

Relevance:

30.00%

Abstract:

A new identification algorithm is introduced for the Hammerstein model, which consists of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme combining the inverse of de Casteljau's algorithm, the least squares algorithm and a constrained Gauss-Newton algorithm. Related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples, including systems with some hard nonlinearities, are used to illustrate the efficacy of the proposed approach through comparisons with other approaches.
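The sketch below fits a simplified Hammerstein ARX structure in which a polynomial basis stands in for the Bezier-Bernstein parameterisation; because this surrogate is linear in the (over)parameters, plain least squares suffices and the constrained Gauss-Newton steps of the actual method are not needed. Data and model orders are illustrative.

    import numpy as np

    def fit_hammerstein_arx(u, y, na=2, nb=2, degree=3):
        # Least-squares fit of y(t) = sum_i a_i*y(t-i) + sum_j sum_k b_{jk}*u(t-j)^k,
        # i.e. a static polynomial nonlinearity followed by linear dynamics.
        n = max(na, nb)
        rows, targets = [], []
        for t in range(n, len(y)):
            row = [y[t - i] for i in range(1, na + 1)]
            for j in range(1, nb + 1):
                row += [u[t - j] ** k for k in range(1, degree + 1)]
            rows.append(row)
            targets.append(y[t])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta

    # Hypothetical data: saturation nonlinearity followed by a first-order filter
    rng = np.random.default_rng(1)
    u = rng.uniform(-2, 2, 500)
    y = np.zeros(500)
    for t in range(1, 500):
        y[t] = 0.7 * y[t - 1] + 0.3 * np.tanh(u[t - 1]) + 0.01 * rng.standard_normal()
    theta = fit_hammerstein_arx(u, y)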

Relevance:

30.00%

Abstract:

The paper proposes a method for identifying a linear system in the presence of bounded disturbances, which may be piecewise parabolic or periodic functions. The method is demonstrated effectively on two example systems with a range of disturbances.
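One hedged way to illustrate the idea (not necessarily the paper's method) is to model the structured disturbance with extra basis functions, here a quadratic trend plus a sinusoid of assumed frequency, so that ordinary least squares still recovers the plant parameters. All signals and coefficients are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 400
    u = rng.standard_normal(N)
    t = np.arange(N)
    d = 0.002 * (t - 200) ** 2 / 100 + 0.5 * np.sin(0.05 * t)   # parabolic + periodic disturbance
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + d[k]

    # Regressors: plant terms plus disturbance basis functions
    Phi = np.column_stack([y[:-1], u[:-1],
                           np.ones(N - 1), t[1:], t[1:] ** 2,
                           np.sin(0.05 * t[1:]), np.cos(0.05 * t[1:])])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print(theta[:2])   # plant parameter estimates, close to (0.8, 0.5)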

Relevance:

30.00%

Abstract:

A dynamic recurrent neural network (DRNN) that can be viewed as a generalisation of the Hopfield neural network is proposed to identify and control a class of control affine systems. In this approach, the identified network is used in the context of differential geometric control to synthesise a state feedback that cancels the nonlinear terms of the plant, yielding a linear plant which can then be controlled using a standard PID controller.
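A toy sketch of the control idea follows: cancel the nonlinear terms of a scalar control affine plant dx/dt = f(x) + g(x)u via u = (v - f(x))/g(x) and apply a PID loop to the resulting linear plant. Here the hypothetical f_hat and g_hat stand in for the functions supplied by the identified DRNN, and the plant is simulated with the same functions, so the cancellation is exact; with a real plant, model mismatch would leave residual nonlinearity.

    import numpy as np

    f_hat = lambda x: -x + 0.5 * np.sin(x)      # drift term estimate (illustrative)
    g_hat = lambda x: 1.0 + 0.2 * np.cos(x)     # input-gain estimate (illustrative)

    dt, x, integ, prev_e = 0.01, 0.0, 0.0, 0.0
    kp, ki, kd, ref = 2.0, 1.0, 0.05, 1.0
    for _ in range(2000):
        e = ref - x
        integ += e * dt
        v = kp * e + ki * integ + kd * (e - prev_e) / dt   # PID acting on the linearised plant
        prev_e = e
        u = (v - f_hat(x)) / g_hat(x)                      # cancel the nonlinear terms
        x += dt * (f_hat(x) + g_hat(x) * u)                # plant = identified model here
    print(round(x, 3))   # settles near the reference value of 1.0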

Relevance:

30.00%

Abstract:

A novel partitioned least squares (PLS) algorithm is presented, in which estimates from several simple system models are combined by means of a Bayesian methodology of pooling partial knowledge. The method has the added advantage that, when the simple models are of a similar structure, it lends itself directly to parallel processing, thereby speeding up the entire parameter estimation process severalfold.
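A minimal sketch of the pooling step, assuming each simple model yields a parameter estimate with an associated covariance: the estimates are combined by precision weighting, theta = (sum C_i^-1)^-1 * sum C_i^-1 theta_i. The data and partitioning below are illustrative, not the paper's formulation.

    import numpy as np

    def pool_estimates(thetas, covs):
        # Bayesian pooling: precision-weighted combination of partial estimates
        precisions = [np.linalg.inv(C) for C in covs]
        P = sum(precisions)
        b = sum(Pi @ th for Pi, th in zip(precisions, thetas))
        return np.linalg.solve(P, b)

    # Hypothetical: the same 2-parameter model estimated on three data partitions
    # (each partition could be handled by a separate processor in parallel)
    rng = np.random.default_rng(2)
    true_theta = np.array([1.5, -0.8])
    thetas, covs = [], []
    for _ in range(3):
        X = rng.standard_normal((100, 2))
        y = X @ true_theta + 0.1 * rng.standard_normal(100)
        th, *_ = np.linalg.lstsq(X, y, rcond=None)
        thetas.append(th)
        covs.append(0.1**2 * np.linalg.inv(X.T @ X))   # least-squares estimate covariance
    print(pool_estimates(thetas, covs))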

Relevance:

30.00%

Abstract:

A new state estimator algorithm based on a neurofuzzy network and the Kalman filter is introduced. The major contributions of the paper are the recognition of a bias problem in the parameter estimation of the state-space model and the introduction of a simple, effective prefiltering method that achieves unbiased parameter estimates; the resulting state-space model is then used for state estimation with the Kalman filter. Fundamental to this method is a simple prefiltering procedure using a nonlinear principal component analysis method based on the neurofuzzy basis set. This prefiltering can be performed without prior knowledge of the system structure. Numerical examples demonstrate the effectiveness of the new approach.
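For reference, a standard Kalman filter over an assumed state-space model is sketched below; in the paper the model parameters would come from the neurofuzzy identification after prefiltering, whereas here they are simply given illustrative values and the prefiltering step is not shown.

    import numpy as np

    def kalman_filter(y, A, C, Q, R, x0, P0):
        # Standard Kalman filter for x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k
        x, P, estimates = x0.copy(), P0.copy(), []
        for yk in y:
            x, P = A @ x, A @ P @ A.T + Q                      # predict
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)                     # Kalman gain
            x = x + K @ (np.atleast_1d(yk) - C @ x)            # update
            P = (np.eye(len(x)) - K @ C) @ P
            estimates.append(x.copy())
        return np.array(estimates)

    # Toy scalar-output, two-state example with illustrative matrices
    A = np.array([[1.0, 0.1], [0.0, 0.9]])
    C = np.array([[1.0, 0.0]])
    Q, R = 0.01 * np.eye(2), np.array([[0.1]])
    y = np.random.default_rng(5).standard_normal(50)           # stand-in measurements
    xhat = kalman_filter(y, A, C, Q, R, np.zeros(2), np.eye(2))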

Relevance:

30.00%

Abstract:

Real estate development appraisal is a quantification of future expectations. The appraisal model relies upon the valuer/developer having an understanding of the future in terms of the future marketability of the completed development and the future cost of development. In some cases the developer has some degree of control over the possible variation in the variables, as with the cost of construction through the choice of specification. However, other variables, such as the sale price of the final product, are totally dependent upon the vagaries of the market at the completion date. To address the risk of an outcome different from the one expected (modelled), the developer will often carry out a sensitivity analysis on the development. However, traditional sensitivity analysis has generally only looked at the best and worst scenarios and has focused on the anticipated or expected outcomes. This does not take into account uncertainty and the range of outcomes that can occur. A fuller analysis should include examination of the uncertainties in each of the components of the appraisal and account for the appropriate distributions of the variables. Similarly, as many of the variables in the model are not independent, the correlations between them need to be modelled. This requires a standardised approach, and we suggest that the use of a generic forecasting software package, in this case Crystal Ball, allows the analyst to work with an existing development appraisal model set up in Excel (or another spreadsheet) and with a predetermined set of probability distributions. Without a full knowledge of risk, developers are unable to determine the anticipated level of return that should be sought to compensate for that risk. This model allows the user a better understanding of the possible outcomes for the development. Ultimately the final decision will be made relative to current expectations and current business constraints, but by assessing the upside and downside risks more appropriately, the decision maker should be better placed to make a more informed and “better” decision.
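In the same spirit as the Crystal Ball analysis described above, a minimal Monte Carlo version of a residual development appraisal can be set up in a few lines of Python; every cash-flow figure, distribution and the 0.3 value/cost correlation below is an illustrative assumption, not data from the paper.

    import numpy as np

    # Monte Carlo residual development appraisal with correlated inputs
    rng = np.random.default_rng(3)
    n = 100_000
    corr = np.array([[1.0, 0.3], [0.3, 1.0]])
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, 2)) @ L.T        # correlated standard normals

    gdv   = 10.0e6 * np.exp(0.10 * z[:, 0])      # gross development value (lognormal)
    build = 5.0e6 * np.exp(0.08 * z[:, 1])       # construction cost (lognormal)
    land, fees, finance = 2.5e6, 0.5e6, 0.4e6    # treated as fixed here

    profit = gdv - build - land - fees - finance
    print(f"mean profit  {profit.mean()/1e6:5.2f} m")
    print(f"5% downside  {np.percentile(profit, 5)/1e6:5.2f} m")
    print(f"P(loss)      {(profit < 0).mean():.3f}")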