529 results for Stopping.
Abstract:
Rheumatic heart disease (RHD) is the largest cardiac cause of morbidity and mortality in the world's youth. Early detection of RHD through echocardiographic screening in asymptomatic children may identify an early stage of disease, when secondary prophylaxis has the greatest chance of stopping disease progression. Latent RHD signifies echocardiographic evidence of RHD with no known history of acute rheumatic fever and no clinical symptoms.
OBJECTIVE: Determine the prevalence of latent RHD among children ages 5-16 in Lilongwe, Malawi.
DESIGN: This is a cross-sectional study in which children ages 5 through 16 were screened for RHD using echocardiography.
SETTING: Screening was conducted in 3 schools and surrounding communities in the Lilongwe district of Malawi between February and April 2014.
OUTCOME MEASURES: Children were diagnosed as having no, borderline, or definite RHD as defined by World Heart Federation criteria. The primary reader completed offline reads of all studies. A second reader reviewed all of the studies diagnosed as RHD, plus a selection of normal studies. A third reader served as tiebreaker for discordant diagnoses. The distribution of results was compared between gender, location, and age categories using Fisher's exact test.
RESULTS: The prevalence of latent RHD was 3.4% (95% CI = 2.45, 4.31), with 0.7% definite RHD and 2.7% borderline RHD. There were no significant differences in prevalence by gender (P = .44), site (P = .6), urban vs. peri-urban (P = .75), or age (P = .79). Of those with definite RHD, all were diagnosed because of pathologic mitral regurgitation (MR) and 2 morphologic features of the mitral valve. Of those with borderline RHD, most (92.3%) met the criteria by having pathological MR.
CONCLUSION: Malawi has a high rate of latent RHD, which is consistent with other results from sub-Saharan Africa. This study strongly supports the need for an RHD prevention and control program in Malawi.
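For reference, the gender/site/age comparisons above reduce to small contingency-table tests. Below is a minimal sketch with SciPy of the 2x2 case, pooling borderline and definite RHD into one category; the counts are hypothetical, not the study's data (the study's three-category no/borderline/definite comparisons would need an r x c exact test).

```python
from scipy.stats import fisher_exact

# Hypothetical counts (NOT the study's data): screened children by
# gender, pooling borderline and definite cases into one RHD category.
#            RHD   no RHD
table = [[    9,     610],   # female
         [    8,     650]]   # male

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2f}")
```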
Abstract:
The absolute calibration of a microchannel plate (MCP) assembly using a Thomson spectrometer for laser-driven ion beams is described. In order to obtain the response of the whole detection system to the particles' impact, a slotted solid-state nuclear track detector (CR-39) was installed in front of the MCP to record the ions simultaneously on both detectors. The response of the MCP (counts/particle) was measured for 5–58 MeV carbon ions and for protons in the energy range 2–17.3 MeV. The response of the MCP detector is non-trivial when the stopping range of the particles becomes larger than the thickness of the detector: protons with energies E ≳ 10 MeV are energetic enough to pass through the MCP. Quantitative analysis of the pits formed in the CR-39 and of the signal generated in the MCP allowed the MCP response to particles in this energy range to be determined. Moreover, a theoretical model predicts the response of the MCP at even higher proton energies, suggesting that in this regime the MCP response is a slowly decreasing function of energy, consistent with the decrease of the deposited energy. These calibration data will enable particle spectra to be obtained in absolute terms over a broad energy range.
Abstract:
OBJECTIVES: The aim of this study was to describe the epidemiology of Ebstein's anomaly in Europe and its association with maternal health and medication exposure during pregnancy.
DESIGN: We carried out a descriptive epidemiological analysis of population-based data.
SETTING: We included data from 15 European Surveillance of Congenital Anomalies Congenital Anomaly Registries in 12 European countries, with a population of 5.6 million births during 1982-2011.
PARTICIPANTS: Cases included live births, fetal deaths from 20 weeks gestation, and terminations of pregnancy for fetal anomaly.
MAIN OUTCOME MEASURES: We estimated total prevalence per 10,000 births. Odds ratios for exposure to maternal illnesses/medications in the first trimester of pregnancy were calculated by comparing Ebstein's anomaly cases with cardiac and non-cardiac malformed controls, excluding cases with genetic syndromes and adjusting for time period and country.
RESULTS: In total, 264 Ebstein's anomaly cases were recorded; 81% were live births, 2% of which were diagnosed after the 1st year of life; 54% of cases with Ebstein's anomaly or a co-existing congenital anomaly were prenatally diagnosed. Total prevalence rose over time from 0.29 (95% confidence interval (CI) 0.20-0.41) to 0.48 (95% CI 0.40-0.57) (p<0.01). In all, nine cases were exposed to maternal mental health conditions/medications (adjusted odds ratio (adjOR) 2.64, 95% CI 1.33-5.21) compared with cardiac controls. Cases were more likely to be exposed to maternal β-thalassemia (adjOR 10.5, 95% CI 3.13-35.3, n=3) and haemorrhage in early pregnancy (adjOR 1.77, 95% CI 0.93-3.38, n=11) compared with cardiac controls.
CONCLUSIONS: The increasing prevalence of Ebstein's anomaly may be related to better and earlier diagnosis. Our data suggest that Ebstein's anomaly is associated with maternal mental health problems generally rather than lithium or benzodiazepines specifically; therefore, changing or stopping medications may not be preventative. We found new associations requiring confirmation.
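For reference, the crude (unadjusted) odds-ratio arithmetic behind figures like those above is straightforward; here is a minimal sketch with hypothetical counts (the study's estimates were additionally adjusted for time period and country, which this does not do):

```python
import math

# Hypothetical 2x2 case-control counts (NOT the study's data):
# rows = Ebstein's anomaly cases vs. cardiac controls,
# columns = exposed vs. unexposed in the first trimester.
exposed_cases, unexposed_cases = 9, 255
exposed_controls, unexposed_controls = 40, 3000

# Crude odds ratio with a 95% CI from the log-OR normal approximation.
or_crude = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                      1 / exposed_controls + 1 / unexposed_controls)
lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)
print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```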
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In recent papers, Wied and his coauthors have introduced change-point procedures to detect and estimate structural breaks in the correlation between time series. To prove the asymptotic distribution of the test statistic and stopping time as well as the change-point estimation rate, they use an extended functional Delta method and assume nearly constant expectations and variances of the time series. In this thesis, we allow asymptotically infinitely many structural breaks in the means and variances of the time series. For this setting, we present test statistics and stopping times which are used to determine whether the correlation between two time series is constant (a posteriori) and stays constant (sequentially), respectively. Additionally, we consider estimates for change-points in the correlations. The employed nonparametric statistics depend on the means and variances. These (nuisance) parameters are replaced by estimates in the course of this thesis. We avoid assuming a fixed form of these estimates; rather, we use "black-box" estimates, i.e. we derive results under assumptions that these estimates must fulfill. These results are supplemented with examples. This thesis is organized into seven sections. In Section 1, we motivate the issue and present the mathematical model. In Section 2, we consider a posteriori and sequential testing procedures, and investigate convergence rates for change-point estimation, always assuming that the means and the variances of the time series are known. In the following sections, the assumption of known means and variances is relaxed. In Section 3, we present the assumptions for the mean and variance estimates that we will use for the mean in Section 4, for the variance in Section 5, and for both parameters in Section 6. Finally, in Section 7, a simulation study illustrates the finite-sample behavior of some testing procedures and estimates.
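To make the flavour of such procedures concrete, below is a minimal sketch of an a posteriori test of the kind introduced by Wied et al.: it compares the empirical correlation of the first k observations with the full-sample correlation and takes the maximum over k. The normalisation by a long-run variance estimator, which the asymptotic theory requires, is deliberately omitted here, and all names are ours.

```python
import numpy as np

def changepoint_statistic(x, y, k_min=10):
    """Max-type statistic comparing the correlation of the first k
    observations with the full-sample correlation, in the spirit of
    Wied et al.  The long-run variance normalisation needed for
    asymptotic critical values is omitted for brevity."""
    n = len(x)
    rho_full = np.corrcoef(x, y)[0, 1]
    stats = [k / np.sqrt(n) * abs(np.corrcoef(x[:k], y[:k])[0, 1] - rho_full)
             for k in range(k_min, n + 1)]
    k_hat = int(np.argmax(stats)) + k_min    # candidate change-point location
    return max(stats), k_hat

# Toy example: the correlation switches from 0 to about 0.8 halfway in.
rng = np.random.default_rng(0)
n = 400
x = rng.standard_normal(n)
noise = rng.standard_normal(n)
y = np.where(np.arange(n) < n // 2, noise, 0.8 * x + 0.6 * noise)
stat, k_hat = changepoint_statistic(x, y)
print(f"max statistic = {stat:.2f}, estimated change-point near k = {k_hat}")
```

A sequential (monitoring) version would instead recompute the statistic as each new observation arrives and stop the first time it exceeds a critical boundary; these are the stopping times studied in the thesis.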
Abstract:
This paper provides an agent-based software exploration of the well-known free-market efficiency/equality trade-off. Our study simulates the interaction of agents producing, trading and consuming goods in the presence of different market structures, and looks at how efficient the producers/consumers mapping turns out to be, as well as the resulting distribution of welfare among agents at the end of an arbitrarily large number of iterations. Two market mechanisms are compared: the competitive market (a double auction market in which agents outbid each other in order to buy and sell products) and the random one (in which products are allocated randomly). Our results confirm that the superior efficiency of the competitive market (an effective, never-stalling mapping of producers to consumers and a superior aggregate welfare) comes at a very high price in terms of inequality (especially when severe budget constraints are in play).
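As a rough illustration of the comparison (not the paper's model, which is iterated and includes production and budget constraints), here is a one-shot toy version: buyers and sellers trade at the midpoint price, matched either assortatively, as a double auction tends to do, or at random, and we report total surplus (efficiency) and the Gini coefficient of payoffs (inequality). The competitive matching should realise more of the available surplus; the paper's inequality finding rests on the fuller iterated model.

```python
import random

def gini(xs):
    """Gini coefficient of a list of non-negative payoffs."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * v for i, v in enumerate(xs))
    return 2 * cum / (n * total) - (n + 1) / n

def run_market(matching, n=1000, seed=1):
    """One round of trade between n buyers and n sellers.
    'competitive' pairs the highest-value buyers with the lowest-cost
    sellers (the allocation a double auction converges towards);
    'random' pairs them at random.  Trades clear at the midpoint price."""
    rng = random.Random(seed)
    values = [rng.random() for _ in range(n)]   # buyers' valuations
    costs = [rng.random() for _ in range(n)]    # sellers' costs
    if matching == 'competitive':
        values.sort(reverse=True)
        costs.sort()
    else:
        rng.shuffle(values)
        rng.shuffle(costs)
    payoffs = []
    for v, c in zip(values, costs):
        if v > c:                               # mutually beneficial trade
            p = (v + c) / 2
            payoffs += [v - p, p - c]
        else:                                   # no trade, no surplus
            payoffs += [0.0, 0.0]
    return sum(payoffs), gini(payoffs)

for mode in ('competitive', 'random'):
    welfare, g = run_market(mode)
    print(f"{mode:12s} total surplus = {welfare:7.1f}  Gini = {g:.2f}")
```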
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
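A minimal sketch of the LCS-style strength update described above; the rule set, reward size, and fitness function are placeholders of ours, not the paper's, and reinforcing only on improvement is one simple choice of credit assignment.

```python
import random

def roulette(strengths):
    """Roulette-wheel selection: pick index i with probability
    proportional to strengths[i]."""
    r = random.uniform(0, sum(strengths))
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if acc >= r:
            return i
    return len(strengths) - 1

def lcs_schedule(n_stages, n_rules, evaluate, n_iters=100, reward=0.1):
    """Sketch of the LCS-style loop: every (stage, rule) pair keeps a
    strength; rules appearing in an improving solution are reinforced
    while unused rules are left unchanged.  `evaluate` maps a rule
    string to a fitness (higher is better) and stands in for the
    schedule builder."""
    strengths = [[1.0] * n_rules for _ in range(n_stages)]  # constant initial strengths
    best, best_fit = None, float('-inf')
    for _ in range(n_iters):
        rules = [roulette(strengths[s]) for s in range(n_stages)]
        fit = evaluate(rules)
        if fit > best_fit:
            best, best_fit = rules, fit
            for s, r in enumerate(rules):     # reinforce the rules just used
                strengths[s][r] += reward
    return best, best_fit

# Toy fitness: pretend rule 1 is the right choice at every stage.
print(lcs_schedule(10, 3, lambda rules: sum(r == 1 for r in rules)))
```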
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm, by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probability of choosing rule 1 or 2 for each nurse is equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus, learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:
1. Set t = 0, and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities. A set of new rule strings O(t) will be generated in this way;
5. Create a new population P(t+1) by replacing some rule strings from P(t) with O(t), and set t = t+1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see if there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References: [1] Aickelin, U., "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126, 2002.
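Since learning here reduces to counting, the core of steps 3 and 4 above is short. Below is a minimal sketch assuming, for illustration, a simple chain-structured network in which each nurse's rule choice is conditioned on the previous nurse's rule; the function names and the smoothing constant are ours, not the paper's.

```python
import random

def learn_chain(promising, n_rules, smooth=1.0):
    """'Learning amounts to counting': estimate P(rule at nurse i |
    rule at nurse i-1) from a set of promising rule strings, with
    Laplace smoothing.  A chain-structured network is assumed."""
    n = len(promising[0])
    counts = [[[smooth] * n_rules for _ in range(n_rules)] for _ in range(n)]
    first = [smooth] * n_rules            # marginal for the first nurse
    for s in promising:
        first[s[0]] += 1
        for i in range(1, n):
            counts[i][s[i - 1]][s[i]] += 1
    return first, counts

def sample_string(first, counts, n_rules):
    """Generate a new rule string node by node from the learned
    conditional probabilities (roulette-wheel sampling)."""
    def draw(weights):
        return random.choices(range(n_rules), weights=weights)[0]
    s = [draw(first)]
    for i in range(1, len(counts)):
        s.append(draw(counts[i][s[-1]]))
    return s

# Toy run: 3 rules, 5 nurses; the promising set prefers rule 1 after rule 0.
promising = [[0, 1, 1, 2, 1], [0, 1, 2, 1, 1], [0, 1, 1, 1, 2]]
first, counts = learn_chain(promising, n_rules=3)
print(sample_string(first, counts, n_rules=3))
```

The generational loop of steps 2-6 would wrap learn_chain and sample_string with roulette-wheel selection and fitness-based replacement.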
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
Abstract:
In the face of the global crisis (brought on by the end of the era of the nation-state and the beginning of the global village), it is a priority to rethink the categories with which we understand the world. Here, we invite reflection on social normativity, to show that it is not only legal but also ethical-moral, among other dimensions. Indeed, legal normativity is not limited to a set of principles and precepts; it also has a subjective dimension, relating to legal relations. These depend on the degree of intervention of the will, which in turn is influenced by ethical-moral norms. In short, this paper not only denounces the fallacies of formalist State positivism, but also seeks the political-legal re-engagement of the system (curbing incipient social entropy), and offers a normative paradigm suited to globalization: Ibero-American normativism based on humanist ethics.
Abstract:
The problem: Around 300 million people worldwide have asthma, and prevalence is increasing. Support for optimal self-management can be effective in improving a range of outcomes and is cost-effective, but it is underutilised as a treatment strategy. Supporting optimum self-management using digital technology shows promise, but how best to do this is not clear. Aim: The purpose of this project was to explore the potential role of a digital intervention in promoting optimum self-management in adults with asthma. Methods: Following the MRC Guidance on the Development and Evaluation of Complex Interventions, which advocates using theory, evidence, user testing and appropriate modelling and piloting, this project had 3 phases. Phase 1: Examination of the literature to inform phases 2 and 3, using systematic review methods and focussed literature searching. Phase 2: Developing the Living Well with Asthma website. A prototype (paper-based) version of the website was developed iteratively with input from a multidisciplinary expert panel, empirical evidence from the literature (from phase 1), and potential end users via focus groups (adults with asthma and practice nurses). Implementation and behaviour change theories informed this process. The paper-based designs were converted to the website through an iterative, user-centred process (think-aloud studies with adults with asthma). Participants considered contents, layout, and navigation. Development was agile, using feedback from the think-aloud sessions immediately to inform design and subsequent think-aloud sessions. Phase 3: A pilot randomised controlled trial over 12 weeks to evaluate the feasibility of a full-scale trial of Living Well with Asthma to support self-management. Primary outcomes were 1) recruitment & retention; 2) website use; 3) Asthma Control Questionnaire (ACQ) score change from baseline; 4) Mini Asthma Quality of Life Questionnaire (AQLQ) score change from baseline. Secondary outcomes were patient activation, adherence, lung function, fractional exhaled nitric oxide (FeNO), generic quality of life (EQ-5D), medication use, prescribing, and health services contacts. Results: Phase 1: Demonstrated that while digital interventions show promise, with some evidence of effectiveness in certain outcomes, participants were poorly characterised, telling us little about the reach of these interventions. The interventions themselves were poorly described, making it impossible to draw definitive conclusions about what worked and what did not. Phase 2: The literature indicated that important aspects to cover in any self-management intervention (digital or not) included: asthma action plans, regular health professional review, trigger avoidance, psychological functioning, self-monitoring, inhaler technique, and goal setting. The website asked users to aim to be symptom free. Key behaviours targeted to achieve this included: optimising medication use (including inhaler technique); attending primary care asthma reviews; using asthma action plans; increasing physical activity levels; and stopping smoking. The website had 11 sections, plus email reminders, which promoted these behaviours. Feedback during the think-aloud studies was mainly positive, with most changes focussing on clarification of language, order of pages, and usability issues mainly relating to navigation difficulties. Phase 3: To achieve our recruitment target, 5383 potential participants were invited, leading to 51 participants randomised (25 to the intervention group).
Age range was 16-78 years; 75% were female; 28% were from the most deprived quintile. Nineteen (76%) of the intervention group used the website, for an average of 23 minutes. Non-significant improvements in favour of the intervention group were observed in the ACQ score (-0.36; 95% confidence interval: -0.96, 0.23; p=0.225) and mini-AQLQ score (0.38; -0.13, 0.89; p=0.136). A significant improvement was observed in the activity limitation domain of the mini-AQLQ (0.60; 0.05 to 1.15; p=0.034). Secondary outcomes showed increased patient activation and reduced reliance on reliever medication. There was no significant difference in the remaining secondary outcomes. There were no adverse events. Conclusion: Living Well with Asthma has been shown to be acceptable to potential end users and has potential for effectiveness. This intervention merits further development and subsequent evaluation in a Phase III full-scale RCT.
Abstract:
Objectives: We report the unusual case of a patient with a thyrotropinoma, discovered after a hemithyroidectomy for a suspicious thyroid nodule, and its therapeutic challenges. Materials and methods: In a patient who underwent hemithyroidectomy for a cold thyroid nodule, hyperthyroid symptoms persisted despite stopping levothyroxine treatment. Further investigation was carried out through the following laboratory tests: thyroid-stimulating hormone (TSH); free thyroxine (fT4); and the thyrotropin-releasing hormone (TRH) test. A pituitary magnetic resonance imaging (MRI) scan and genetic analysis were also carried out. The test results confirmed the diagnosis of a thyrotropinoma. Results: Treatment with long-acting somatostatin analogues normalised thyroid hormones and symptoms of hyperthyroidism. Conclusion: The diagnostic approach to the thyroid nodule should include a detailed clinical and biochemical examination. Initial biochemical evaluation by TSH alone does not allow detection of inappropriate TSH secretion, which may increase the risk of thyroid malignancy. In the case of a thyrotropinoma, the ideal treatment consists of combined care of central and peripheral thyroid disease.
Abstract:
This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's (SWO) effectiveness and execution speed by incorporating two additional steps of Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of a current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically by using the fitness measure, and the Mutation step follows to further discard a small number of components at random. After the above steps, an input solution becomes partial and thus the resulting partial solution needs to be repaired. The repair is carried out by using the Prioritization step to first produce priorities that determine an order by which the following Construction step then schedules the remaining components. Therefore, the optimisation in the ISWO is achieved by solution disruption, iterative improvement, and an iterative constructive repair process. Encouraging experimental results are reported.
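The cycle can be stated compactly in code. Below is a minimal sketch of the Analysis-Selection-Mutation-Prioritization-Construction loop on an abstract problem; the component scoring, discard probabilities, and toy constructor are our own illustrative choices, not the paper's driver-scheduling implementation.

```python
import random

def iswo(components, build, fitness, n_iters=200, p_mutate=0.05, seed=0):
    """Sketch of the ISWO cycle: Analysis -> Selection -> Mutation ->
    Prioritization -> Construction, repeated until the iteration
    budget (the stopping condition here) runs out."""
    rng = random.Random(seed)
    solution = build({})                          # initial construction
    for _ in range(n_iters):
        # Analysis: score every component of the current solution.
        scores = {c: fitness(c, solution) for c in components}
        best_score = max(scores.values()) or 1.0
        partial = dict(solution)
        # Selection: discard troublesome components probabilistically,
        # keeping a component with probability proportional to its score.
        for c in components:
            if rng.random() > scores[c] / best_score:
                partial.pop(c, None)
        # Mutation: additionally discard a few components at random.
        for c in list(partial):
            if rng.random() < p_mutate:
                del partial[c]
        # Prioritization: repair the worst-scoring components first.
        order = sorted(components, key=lambda c: scores[c])
        # Construction: rebuild the now-partial solution in that order.
        solution = build(partial, order)
    return solution

# Toy instance: component c 'wants' the value c % 5; the naive
# constructor fills missing components at random.
comps = list(range(10))

def toy_build(partial, order=None):
    sol = dict(partial)
    for c in (order or comps):
        if c not in sol:
            sol[c] = random.randrange(5)
    return sol

def toy_fitness(c, sol):
    return 1.0 if sol[c] == c % 5 else 0.1

print(iswo(comps, toy_build, toy_fitness))
```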