895 results for SEQUENTIAL CONVERGENCE
Abstract:
Research question/issue: This paper frames the debate on corporate governance convergence in terms of the morality underlying corporate governance models. The claims and arguments of moral relativism are presented to provide theoretical structure to the moral aspects of corporate governance convergence, and ultimately to the normative question of whether convergence should occur. Research findings/insights: The morality underlying different models of corporate governance has largely been ignored in the corporate governance convergence literature. A range of moral philosophies and principles that underlie the dominant corporate governance models are identified. This leads to a consideration of the claims and arguments of moral relativism relating to corporate governance. A research agenda around the claims of descriptive and metaethical moral relativism, which ultimately informs the associated normative argument, is then suggested. Theoretical/academic implications: The application of moral relativism to the debate on corporate governance convergence provides a theoretical structure for analysing its moral aspects. This structure lends itself to further research, both empirical and conceptual. Practitioner/policy implications: The claims and arguments of moral relativism provide a means of analysing calls for a culturally or nationally ‘appropriate’ model of corporate governance. This can help direct corporate governance reforms and is of particular relevance for developing countries that have inherited Western corporate governance models through colonialism.
Abstract:
A new transdimensional Sequential Monte Carlo (SMC) algorithm called SMCVB is proposed. In an SMC approach, a weighted sample of particles is generated from a sequence of probability distributions that ‘converge’ to the target distribution of interest, in this case a Bayesian posterior distribution. The approach uses variational Bayes to propose new particles at each iteration of the SMCVB algorithm in order to target the posterior more efficiently. The variational-Bayes-generated proposals are not limited to a fixed dimension, so the weighted particle sets that arise can have varying dimensions, which also allows an appropriate dimension for the model to be estimated. This novel algorithm is outlined within the context of finite mixture model estimation. It provides a less computationally demanding alternative to using reversible jump Markov chain Monte Carlo kernels within an SMC approach. We illustrate these ideas in a simulated data analysis and in applications.
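The SMC mechanism this abstract builds on can be sketched in a few lines: particles drawn from the prior are reweighted through a sequence of tempered distributions and resampled at each step. This toy sketch omits the variational-Bayes proposal moves and the transdimensional aspect that distinguish SMCVB; all names and settings are illustrative assumptions.

```python
import random, math

def smc_sampler(log_like, prior_sample, n=1000, steps=10):
    """Minimal tempered SMC: move particles through the sequence of
    distributions p_t proportional to prior * likelihood**(t/steps),
    reweighting and resampling at each step."""
    parts = [prior_sample() for _ in range(n)]
    d_gamma = 1.0 / steps                       # tempering increment per step
    for _ in range(steps):
        # incremental importance weights: likelihood**d_gamma
        logw = [d_gamma * log_like(x) for x in parts]
        m = max(logw)                           # stabilise the exponentials
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        probs = [wi / total for wi in w]
        # multinomial resampling to an equally weighted particle set
        parts = random.choices(parts, weights=probs, k=n)
        # (a full sampler would also apply an MCMC move here, or, as in
        #  SMCVB, a variational-Bayes proposal step)
    return parts
```

With a wide Gaussian prior and a standard normal log-likelihood, the particle cloud should end up centred near the posterior mode at zero.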
Abstract:
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviour of flocking birds or schooling fish. Although PSO has been applied successfully to many well-known numerical test problems, it suffers from premature convergence. A number of variants have been developed to address the premature convergence problem and to improve the quality of the solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors include a classification of the approaches and identify the main features of each proposal. The last part of the study lists some topics within this field that are considered promising areas for future research.
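As a reference point for the surveyed variants, a minimal global-best PSO uses the canonical velocity update with inertia, cognitive and social terms. The parameter values below are common defaults, not prescriptions from the survey.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over a box using a basic global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Premature convergence shows up in exactly this loop: once `gbest` traps the swarm in a poor basin, the social term pulls every particle toward it, which is what the surveyed variants try to counteract.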
Abstract:
Introduction: Clinically, the Cobb angle method measures the overall scoliotic curve in the coronal plane but does not measure individual vertebra and disc wedging. The contributions of the vertebrae and discs in the growing scoliotic spine were measured to investigate coronal plane deformity progression with growth. Methods: A 0.49 mm isotropic 3D MRI technique was developed to investigate the level-by-level changes that occur in the growing spine of a group of Adolescent Idiopathic Scoliosis (AIS) patients, who received two to four sequential scans (spaced 3-12 months apart). The coronal plane wedge angles of each vertebra and disc in the major curve were measured to capture any changes that occurred during their adolescent growth phase. Results: Seventeen patients had at least two scans. Mean patient age was 12.9 years (SD 1.5 years). Sixteen were classified as right-sided major thoracic Lenke Type 1 (one left-sided). Mean standing Cobb angle at initial presentation was 31° (SD 12°). Six patients received two scans, nine received three and two received four, with 65% showing a Cobb angle progression of 5° or more between scans. Overall, there was no clear pattern of deformity progression of individual vertebrae and discs, nor between patients who progressed and those who did not. There were measurable changes in the wedging of the vertebrae and discs in all patients. In sequential scans, changes in the direction of wedging were also seen. In several patients there was reverse wedging in the discs that counteracted increased wedging of the vertebrae, such that no change in overall Cobb angle was seen. Conclusion: Sequential MRI data showed complex patterns of deformity progression. Changes to the wedging of individual vertebrae and discs may occur in patients who have no increase in Cobb angle measure; the Cobb method alone may be insufficient to capture the complex mechanisms of deformity progression.
Abstract:
In order to progress beyond currently available medical devices and implants, the concept of tissue engineering has moved into the centre of biomedical research worldwide. The aim of this approach is not to replace damaged tissue with an implant or device but rather to prompt the patient's own tissue to enact a regenerative response by using a tissue-engineered construct to assemble new functional and healthy tissue. More recently, it has been suggested that the combination of Synthetic Biology and translational tissue-engineering techniques could enhance the field of personalized medicine, not only from a regenerative medicine perspective, but also to provide frontier technologies for building and transforming the research landscape in the field of in vitro and in vivo disease models.
Abstract:
Background: Haemodialysis nurses form long-term relationships with patients in a technologically complex work environment. Previous studies have highlighted that haemodialysis nurses face stressors related to the nature of their work and their work environments, leading to reported high levels of burnout. Using Kanter's (1997) Structural Empowerment Theory as a guiding framework, the aim of this study was to explore the factors contributing to satisfaction with the work environment, job satisfaction, job stress and burnout in haemodialysis nurses. Methods: Using a sequential mixed-methods design, the first phase involved an online survey comprising demographic and work characteristics, the Brisbane Practice Environment Measure (B-PEM), Index of Work Satisfaction (IWS), Nursing Stress Scale (NSS) and the Maslach Burnout Inventory (MBI). The second phase involved conducting eight semi-structured interviews, with the data analysed thematically. Results: Of the 417 nurses surveyed, the majority were female (90.9 %) and aged over 41 years (74.3 %), and 47.4 % had worked in haemodialysis for more than 10 years. Overall, the work environment was perceived positively and there was a moderate level of job satisfaction. However, levels of stress and emotional exhaustion (burnout) were high. Two themes, ability to care and feeling successful as a nurse, provided clarity to the level of job satisfaction found in the first phase, while two further themes, patients as quasi-family and intense working teams, explained why working as a haemodialysis nurse was both satisfying and stressful. Conclusions: Nurse managers can use these results to identify issues being experienced by haemodialysis nurses in the units they supervise.
Abstract:
A simple convergence technique is applied to obtain accurate estimates of critical temperatures and critical exponents of a few two- and three-dimensional Ising models. When applied to the virial series for hard spheres and hard discs, this method predicts a divergence of the equation of state at the density of closest packing.
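The abstract does not spell out its technique, but series-analysis estimates of this kind are often illustrated by the ratio method: for a series Σ aₙxⁿ with coefficients behaving like aₙ ~ C·x_c⁻ⁿ·n^g, the ratios aₙ/aₙ₋₁ approach 1/x_c, and a linear extrapolation in 1/n removes the leading correction. A sketch under those assumptions (not necessarily the paper's own method):

```python
def ratio_xc(coeffs):
    """Estimate the radius of convergence x_c of sum a_n * x**n from the
    last two coefficient ratios, linearly extrapolated in 1/n.
    Assumes a_n ~ C * x_c**(-n) * n**g, so r_n = a_n/a_(n-1) satisfies
    r_n ~ (1/x_c) * (1 + g/n); then n*r_n - (n-1)*r_(n-1) -> 1/x_c."""
    n = len(coeffs) - 1
    r_n = coeffs[n] / coeffs[n - 1]
    r_m = coeffs[n - 1] / coeffs[n - 2]
    inv_xc = n * r_n - (n - 1) * r_m   # removes the O(1/n) correction term
    return 1.0 / inv_xc
```

For a series with aₙ = (n+1)·2ⁿ the extrapolation is exact: the estimate is x_c = 1/2, the true radius of convergence.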
Abstract:
We propose an iterative estimating equations procedure for analysis of longitudinal data. We show that, under very mild conditions, the probability that the procedure converges at an exponential rate tends to one as the sample size increases to infinity. Furthermore, we show that the limiting estimator is consistent and asymptotically efficient, as expected. The method applies to semiparametric regression models with unspecified covariances among the observations. In the special case of linear models, the procedure reduces to iterative reweighted least squares. Finite sample performance of the procedure is studied by simulations, and compared with other methods. A numerical example from a medical study is considered to illustrate the application of the method.
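The abstract notes that for linear models the procedure reduces to iteratively reweighted least squares. A generic IRLS loop alternates a weighted solve with a weight update; the version below fits a robust straight line with Huber weights, which is an illustrative choice rather than the paper's estimating equations.

```python
def irls_line(xs, ys, delta=1.0, iters=20):
    """Fit y ~ a + b*x by iteratively reweighted least squares with
    Huber weights: w_i = 1 if |r_i| <= delta else delta/|r_i|."""
    n = len(xs)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # weighted least squares closed form for a straight line
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * sxx - sx * sx
        b = (sw * sxy - sx * sy) / denom
        a = (sy - b * sx) / sw
        # update the weights from the current residuals
        w = []
        for x, y in zip(xs, ys):
            r = abs(y - a - b * x)
            w.append(1.0 if r <= delta else delta / r)
    return a, b
```

A single gross outlier is progressively downweighted across iterations, so the fitted line stays close to the bulk of the data.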
Abstract:
Summary. Interim analysis is important in a large clinical trial for ethical and cost considerations. Sometimes, an interim analysis needs to be performed at an earlier than planned time point. In that case, methods using stochastic curtailment are useful in examining the data for early stopping while controlling the inflation of type I and type II errors. We consider a three-arm randomized study of treatments to reduce perioperative blood loss following major surgery. Owing to slow accrual, an unplanned interim analysis was required by the study team to determine whether the study should be continued. We distinguish two different cases: when all treatments are under direct comparison and when one of the treatments is a control. We used simulations to study the operating characteristics of five different stochastic curtailment methods. We also considered the influence of timing of the interim analyses on the type I error and power of the test. We found that the type I error and power between the different methods can be quite different. The analysis for the perioperative blood loss trial was carried out at approximately a quarter of the planned sample size. We found that there is little evidence that the active treatments are better than a placebo and recommended closure of the trial.
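Stochastic curtailment is usually operationalised through conditional power: the probability that the trial, if continued, would still reject at the end given the interim data. A minimal sketch in the B-value formulation under the "current trend" drift assumption (one of several possible assumptions; the five methods compared in the paper are not reproduced here):

```python
import math

def conditional_power(z_interim, t, z_alpha=1.96):
    """Conditional power at information fraction t, assuming the drift
    estimated from the interim data continues to the end of the trial.
    Uses the B-value B(t) = Z(t) * sqrt(t); under the current trend,
    B(1) ~ Normal(B(t) + theta*(1 - t), 1 - t) with theta = B(t)/t."""
    b = z_interim * math.sqrt(t)              # interim B-value
    theta = b / t                             # drift implied by current trend
    mean_b1 = b + theta * (1.0 - t)           # projected final B-value mean
    z = (z_alpha - mean_b1) / math.sqrt(1.0 - t)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # 1 - Phi(z)
```

For example, a trial already at the critical value halfway through the planned information has high conditional power, while a null interim result has very little, which is the kind of evidence used to recommend stopping.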
Abstract:
Suppose two treatments with binary responses are available for patients with some disease and that each patient will receive one of the two treatments. In this paper we consider the interests of patients both within and outside a trial using a Bayesian bandit approach, and conclude that equal allocation is not appropriate for either group of patients. It is suggested that Gittins indices be used if the disease is rare (via an approach called dynamic discounting, in which the discount rate is chosen based on the number of future patients in the trial), and the least failures rule if the disease is common. Some analytical and simulation results are provided.
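One simple reading of a "least failures" rule is to give each new patient the treatment with the fewest observed failures so far; the paper's exact rule may differ in detail, so the simulation below is purely illustrative.

```python
import random

def least_failures_trial(p, n_patients, seed=None):
    """Simulate two Bernoulli treatment arms with success probabilities p,
    allocating each new patient to the arm with the fewest observed
    failures so far (ties broken at random)."""
    rng = random.Random(seed)
    successes, failures = [0, 0], [0, 0]
    for _ in range(n_patients):
        if failures[0] != failures[1]:
            arm = 0 if failures[0] < failures[1] else 1
        else:
            arm = rng.choice([0, 1])
        if rng.random() < p[arm]:          # Bernoulli response
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures
```

Because the better arm accumulates failures more slowly, it keeps being selected until its failure count catches up, so over a long trial most patients end up on the superior treatment, which is the within-trial patient benefit the bandit framing targets.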
Abstract:
An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method for training a pattern classifier using unlabeled samples, is described. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision-making and estimation. It uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated, using the gradient, in a recursive fashion. The induced possibility of occurrence of each class is useful for estimation and is computed using (1) the membership of the new sample in that class and (2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure is defined in terms of the induced possibility distribution to measure the extent of learning. The method is illustrated with relevant examples.
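The fuzzy ISODATA objective mentioned here, J = Σᵢ Σₖ u_ik^m (xᵢ − vₖ)², is the one minimised by the standard fuzzy c-means iteration. A 1-D sketch of that clustering loop follows; it is background for the loss function only, not the paper's recursive classifier, which adds possibility estimation on top.

```python
def fuzzy_cmeans(xs, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means, minimising the fuzzy ISODATA objective
    J = sum_i sum_k u[i][k]**m * (x_i - v_k)**2."""
    vs = [min(xs), max(xs)] if c == 2 else xs[:c]  # crude initialisation
    u = []
    for _ in range(iters):
        # membership update: u_ik proportional to |x_i - v_k|**(-2/(m-1))
        u = []
        for x in xs:
            d = [max(abs(x - v), 1e-12) ** (-2.0 / (m - 1)) for v in vs]
            s = sum(d)
            u.append([di / s for di in d])
        # centre update: mean of the data weighted by u_ik**m
        vs = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
              sum(u[i][k] ** m for i in range(len(xs)))
              for k in range(c)]
    return vs, u
```

On two well-separated groups of points the centres settle near the group means, with memberships close to 0/1 for points far from the decision boundary.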
Abstract:
The paper deals with the basic problem of adjusting a matrix gain in a discrete-time linear multivariable system. The objective is to obtain a global convergence criterion, i.e. conditions under which a specified error signal asymptotically approaches zero and other signals in the system remain bounded for arbitrary initial conditions and for any bounded input to the system. It is shown that for a class of updating algorithms for the adjustable gain matrix, global convergence is crucially dependent on a transfer matrix G(z) which has a simple block diagram interpretation. When w(z)G(z) is strictly discrete positive real for a scalar w(z) such that w⁻¹(z) is strictly proper with poles and zeros within the unit circle, an augmented error scheme is suggested and is proved to result in global convergence. The solution avoids feeding back a quadratic term as recommended in other schemes for single-input single-output systems.
Abstract:
Systems of learning automata have been studied by various researchers to evolve useful strategies for decision making under uncertainty. Considered in this paper is a class of hierarchical systems of learning automata where the system gets responses from its environment at each level of the hierarchy. A classification of such sequential learning tasks based on the complexity of the learning problem is presented. It is shown that none of the existing algorithms can handle the most general type of hierarchical problem. An algorithm for learning the globally optimal path in this general setting is presented, and its convergence is established. This algorithm needs information transfer from the lower levels to the higher levels. Using the methodology of estimator algorithms, this model can be generalized to accommodate other kinds of hierarchical learning tasks.
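The abstract does not state its algorithm; as background, the building block used throughout the learning automata literature is the linear reward-inaction (L_R-I) update, sketched below for a single automaton (parameter values are illustrative).

```python
import random

def lri_automaton(reward_prob, steps=5000, lr=0.05, seed=0):
    """Linear reward-inaction (L_R-I) automaton: when the chosen action is
    rewarded, shift probability mass toward it; on a penalty, leave the
    action probabilities unchanged."""
    rng = random.Random(seed)
    k = len(reward_prob)
    p = [1.0 / k] * k                        # start from a uniform strategy
    for _ in range(steps):
        a = rng.choices(range(k), weights=p)[0]
        if rng.random() < reward_prob[a]:    # environment rewards action a
            p = [pj + lr * (1.0 - pj) if j == a else pj * (1.0 - lr)
                 for j, pj in enumerate(p)]
    return p
```

The update preserves Σp = 1 exactly, and with a small learning rate the strategy is absorbed, with high probability, into the action with the highest reward probability; hierarchical systems of the kind studied here stack such automata level by level.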