942 results for Modal interval analysis
Abstract:
The authors evaluated the efficacy of cholinergic drugs in the treatment of neuroleptic-induced tardive dyskinesia (TD) by a systematic review of the literature on the following agents: choline, lecithin, physostigmine, tacrine, 7-methoxyacridine, ipidacrine, galantamine, donepezil, rivastigmine, eptastigmine, metrifonate, arecoline, RS 86, xanomeline, cevimeline, deanol, and meclofenoxate. All relevant randomized controlled trials, without any language or year limitations, were obtained from the Cochrane Schizophrenia Group's Register of Trials. Trials were classified according to their methodological quality. For binary and continuous data, relative risks (RR) and weighted or standardized mean differences (SMD) were calculated, respectively. Eleven trials with a total of 261 randomized patients were included in the meta-analysis. Cholinergic drugs showed a minor trend for improvement of tardive dyskinesia symptoms, but results were not statistically significant (RR 0.84, 95% confidence interval (CI) 0.68 to 1.04, p=0.11). Despite an extensive search of the literature, eligible data for the meta-analysis were few and no results reached statistical significance. In conclusion, we found no evidence to support administration of the old cholinergic agents lecithin, deanol, and meclofenoxate to patients with tardive dyskinesia. In addition, two trials were found on novel cholinergic Alzheimer drugs in tardive dyskinesia, one of which was ongoing. Further investigation of the clinical effects of novel cholinergic agents in tardive dyskinesia is warranted.
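The review's effect measure for binary outcomes is the relative risk with a 95% confidence interval. A minimal sketch of how such an RR and its CI are obtained via the standard log-RR normal approximation; the 2x2 counts below are hypothetical, not the review's data.

```python
import math

def relative_risk(events_t, n_t, events_c, n_c):
    """Relative risk and 95% CI via the standard log-RR normal approximation."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 30/70 patients with "no clinically important improvement"
# on the cholinergic drug versus 38/72 on placebo
print(relative_risk(30, 70, 38, 72))
```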
Abstract:
Study objective: To investigate the association between cold periods and coronary events, and the extent to which climate, sex, age, and previous cardiac history increase risk during cold weather. Design: A hierarchical analysis of populations from the World Health Organisation's MONICA project. Setting: Twenty-four populations from the WHO's MONICA project, a 21-country register covering 1980 to 1995. Patients: People aged 35-64 years who had a coronary event. Main results: Daily rates of coronary events were correlated with the average temperature over the current and previous three days. In cold periods, coronary event rates increased more in populations living in warm climates than in populations living in cold climates, where the increases were slight. The increase was greater in women than in men, especially in warm climates. On average, the odds of women having an event in the cold periods were 1.07 times those of men (95% posterior interval: 1.03 to 1.11). The effects of cold periods were similar in those with and without a history of a previous myocardial infarction. Conclusions: Rates of coronary events increased during comparatively cold periods, especially in warm climates. The smaller increases in colder climates suggest that some events in warmer climates are preventable. It is suggested that people living in warm climates, particularly women, should keep warm on cold days.
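The key exposure in this analysis is the mean temperature over the current and previous three days. The sketch below illustrates that construction for a single population with a simple Poisson regression (pandas and statsmodels); the data are simulated and the model is far simpler than the paper's hierarchical Bayesian analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical daily data for one population register (not MONICA data)
days = 1000
temp = 10 + 8 * np.sin(np.arange(days) * 2 * np.pi / 365) + rng.normal(0, 3, days)
events = rng.poisson(lam=np.exp(1.0 - 0.02 * temp))

df = pd.DataFrame({"temp": temp, "events": events})
# Exposure: mean temperature over the current and previous three days
df["temp4"] = df["temp"].rolling(window=4).mean()
df = df.dropna()

# Poisson regression of daily event counts on the 4-day mean temperature
X = sm.add_constant(df["temp4"])
fit = sm.GLM(df["events"], X, family=sm.families.Poisson()).fit()
print(fit.summary())
```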
Abstract:
The aim of this study was to apply multifailure survival methods to analyze time to multiple occurrences of basal cell carcinoma (BCC). Data from 4.5 years of follow-up in a randomized controlled skin cancer prevention trial, the Nambour Skin Cancer Prevention Trial (1992-1996), were used to assess the influence of sunscreen application on the time to first BCC and the time to subsequent BCCs. Three different approaches to modelling time to ordered multiple events were applied and compared: the Andersen-Gill, Wei-Lin-Weissfeld, and Prentice-Williams-Peterson models. Robust variance estimation approaches were used for all multifailure survival models. Sunscreen treatment was not associated with time to first occurrence of a BCC (hazard ratio = 1.04, 95% confidence interval: 0.79, 1.45). Time to subsequent BCC tumors using the Andersen-Gill model resulted in a lower estimated hazard among the daily sunscreen application group, although statistical significance was not reached (hazard ratio = 0.82, 95% confidence interval: 0.59, 1.15). Similarly, both the Wei-Lin-Weissfeld marginal-hazards and the Prentice-Williams-Peterson gap-time models revealed trends toward a lower risk of subsequent BCC tumors among the sunscreen intervention group. These results demonstrate the importance of conducting multiple-event analysis for recurring events, as risk factors for a single event may differ from those identified when repeated events are considered.
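The Andersen-Gill model treats recurrent events in a counting-process (start, stop] layout. A minimal sketch of that layout, with a Cox fit using lifelines' CoxTimeVaryingFitter (an assumption about tooling; the original analysis used robust-variance multifailure models, and the subject records below are hypothetical, not the Nambour trial data).

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical recurrent-event records: times (years) of successive BCCs per
# subject, with follow-up ending at 4.5 years
subjects = {
    1: {"sunscreen": 1, "events": [1.2, 3.0]},
    2: {"sunscreen": 0, "events": [0.8]},
    3: {"sunscreen": 1, "events": []},
    4: {"sunscreen": 0, "events": [0.5, 1.9, 3.7]},
}
end_of_followup = 4.5

# Andersen-Gill counting-process layout: one (start, stop] row per at-risk interval
rows = []
for sid, rec in subjects.items():
    start = 0.0
    for t in rec["events"]:
        rows.append({"id": sid, "start": start, "stop": t,
                     "event": 1, "sunscreen": rec["sunscreen"]})
        start = t
    rows.append({"id": sid, "start": start, "stop": end_of_followup,
                 "event": 0, "sunscreen": rec["sunscreen"]})
df = pd.DataFrame(rows)

# Cox model on the counting-process data (Andersen-Gill form); a PWP gap-time
# model would instead reset each interval's clock and stratify on event number
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```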
Abstract:
QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles, and a common within-sire error variance, is assumed. Compared with single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
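At the core of such an analysis, the EM algorithm alternates between computing the posterior probability that each progeny inherited a given sire QTL allele (E-step) and re-estimating the QTL effect and common error variance (M-step). A minimal single-trait, single-family sketch with simulated data; this is an illustrative toy, not the paper's multiple-trait, multiple-family formulae.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy biallelic QTL segregating in one half-sib family: each progeny inherits
# sire allele Q or q with probability 1/2, shifting the trait mean by +/- a.
a_true, sigma_true, n = 1.5, 1.0, 400
z = rng.integers(0, 2, n)                        # unobserved sire allele
y = np.where(z == 1, a_true, -a_true) + rng.normal(0, sigma_true, n)

# EM for a 50:50 two-component normal mixture with means +/- a and common variance
a, sigma2 = 0.5, np.var(y)
for _ in range(200):
    # E-step: posterior probability that each progeny carries the "Q" allele
    d1 = np.exp(-(y - a) ** 2 / (2 * sigma2))
    d0 = np.exp(-(y + a) ** 2 / (2 * sigma2))
    w = d1 / (d1 + d0)
    # M-step: update the QTL effect and the common error variance
    a = np.sum(w * y - (1 - w) * y) / n
    sigma2 = np.sum(w * (y - a) ** 2 + (1 - w) * (y + a) ** 2) / n

print(f"estimated QTL effect {a:.2f}, error s.d. {np.sqrt(sigma2):.2f}")
```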
Abstract:
We construct a set of functions, say psi_n^[r], composed of a cosine function and a sigmoidal transformation gamma_r of order r > 0. The present functions are orthonormal with respect to a proper weight function on the interval [-1, 1]. It is proven that if a function f is continuous and piecewise smooth on [-1, 1], then its series expansion based on psi_n^[r] converges uniformly to f so long as the order of the sigmoidal transformation employed is 0 < r
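The orthonormality claim can be checked numerically: if gamma_r maps [-1, 1] onto itself, then cos(n*pi*gamma_r(x)) for n >= 1 are orthonormal with weight gamma_r'(x). The sketch below uses one standard sigmoidal transformation of order r built from t^r / (t^r + (1 - t)^r); this particular choice of transformation and weight is an assumption for illustration, not necessarily the authors' exact construction.

```python
import numpy as np
from scipy.integrate import quad

def g(t, r):
    # Sigmoidal transformation of order r on [0, 1]
    return t**r / (t**r + (1 - t) ** r)

def gamma(x, r):
    # Same transformation mapped to [-1, 1]
    return 2.0 * g((x + 1) / 2.0, r) - 1.0

def weight(x, r):
    # Derivative of gamma with respect to x, used as the weight function
    t = (x + 1) / 2.0
    return r * t ** (r - 1) * (1 - t) ** (r - 1) / (t**r + (1 - t) ** r) ** 2

def psi(n, x, r):
    return np.cos(n * np.pi * gamma(x, r))

# Numerical check of orthonormality on [-1, 1] with respect to the weight
r = 2.0
for m in range(1, 4):
    for n in range(m, 4):
        val, _ = quad(lambda x: psi(m, x, r) * psi(n, x, r) * weight(x, r), -1, 1)
        print(m, n, round(val, 6))   # ~1 on the diagonal, ~0 off the diagonal
```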
Abstract:
Simultaneous analysis of handedness data from 35 samples of twins (with a combined sample size of 21,127 twin pairs) found a small but significant additive genetic effect accounting for 25.47% of the variance (95% confidence interval [CI] 15.69-29.51%). No common environmental influences were detected (C = 0.00; 95% CI 0.00-7.67%), with the majority of the variance, 74.53%, explained by factors unique to the individual (95% CI 70.49-78.67%). No significant heterogeneity was observed within studies that used similar methods to assess handedness, or across studies that used different methods. At an individual level, the majority of studies lacked the power to reject a purely unique environmental model because they could not detect familial aggregation. This lack of power is seldom mentioned within studies, and has contributed to the misconception that twin studies of handedness are not informative.
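The A/C/E percentages reported here come from a full twin structural-equation model, but the arithmetic behind such a variance decomposition can be illustrated with Falconer's formulas applied to MZ and DZ twin correlations; the correlations below are purely hypothetical, not values from the meta-analysis.

```python
# Toy Falconer-style ACE decomposition from twin correlations (hypothetical values)
r_mz, r_dz = 0.25, 0.13      # hypothetical MZ and DZ twin correlations

a2 = 2 * (r_mz - r_dz)       # additive genetic variance (A)
c2 = 2 * r_dz - r_mz         # common environment (C)
e2 = 1 - r_mz                # unique environment (E), includes measurement error

print(f"A = {a2:.0%}, C = {c2:.0%}, E = {e2:.0%}")
```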
Abstract:
We consider a problem of robust performance analysis of linear discrete time-varying systems on a bounded time interval. The system is represented in state-space form. It is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of difference Riccati and Lyapunov equations and an equation of special form.
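For a time-varying system on a bounded interval, the Lyapunov part of such a computation is a forward difference recursion for the state covariance. A minimal numpy sketch with randomly generated illustrative system matrices; the full anisotropic-norm computation also involves the coupled difference Riccati equations and the special-form scalar equation, which are not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # bounded time horizon
n, m = 3, 2                              # state and input dimensions

# Hypothetical time-varying state-space data (A_k, B_k), k = 0..N-1
A = [np.eye(n) * 0.9 + 0.05 * rng.standard_normal((n, n)) for _ in range(N)]
B = [rng.standard_normal((n, m)) for _ in range(N)]

# Difference Lyapunov recursion for the state covariance under unit-variance
# white-noise input: P_{k+1} = A_k P_k A_k' + B_k B_k'
P = np.zeros((n, n))
cov_traces = []
for k in range(N):
    P = A[k] @ P @ A[k].T + B[k] @ B[k].T
    cov_traces.append(np.trace(P))

print("terminal state covariance trace:", cov_traces[-1])
```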
Abstract:
Among the Solar System's bodies, the Moon, Mercury, and Mars are currently, or have recently been, the targets of space missions aimed, among other goals, at improving our knowledge of surface composition. Among the techniques for determining a planet's mineralogical composition, from both remote and close-range platforms, visible and near-infrared reflectance (VNIR) spectroscopy is a powerful tool, because crystal-field absorption bands are related to particular transition metals in well-defined crystal structures, e.g., Fe2+ in the M1 and M2 sites of olivine or pyroxene (Burns, 1993). Thanks to improvements in the spectrometers on board recent missions, a more detailed interpretation of planetary surfaces can now be delineated. However, quantitative interpretation of planetary surface mineralogy is not always a simple task. Several factors, such as mineral chemistry, the presence of different minerals absorbing in a narrow spectral range, regolith with a variable particle size range, space weathering, atmospheric composition, etc., act in unpredictable ways on the reflectance spectra of a planetary surface (Serventi et al., 2014). One method for interpreting the reflectance spectra of unknown materials is to study a number of spectra acquired in the laboratory under different conditions, such as different mineral abundances or particle sizes, in order to derive empirical trends. This is the methodology followed in this PhD thesis: the factors listed above have been analyzed by creating, in the laboratory, a set of terrestrial analogues with well-defined composition and particle size. The aim of this work is to provide new tools and criteria to improve knowledge of the composition of planetary surfaces. In particular, mixtures of plagioclase and mafic minerals with different contents and chemistries have been analyzed spectroscopically at different particle sizes and with different relative mineral percentages. The reflectance spectra of each mixture have been analyzed both qualitatively (using the ORIGIN® software) and quantitatively, applying the Modified Gaussian Model (MGM; Sunshine et al., 1990) algorithm. In particular, the variations of the spectral parameters of each absorption band have been evaluated against the volumetric FeO% content in the plagioclase (PL) phase and against the PL modal abundance. This delineates calibration curves of composition versus spectral parameters and allows spectral libraries to be implemented. Furthermore, the trends derived from the terrestrial analogues analyzed here, and from analogues in the literature, have been applied to the interpretation of hyperspectral images of both plagioclase-rich (Moon) and plagioclase-poor (Mars) bodies.
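The MGM represents a reflectance spectrum as a continuum plus superimposed Gaussian absorption bands whose centres, widths, and strengths are fitted. The sketch below is a simplified stand-in: a linear continuum in wavelength and two Gaussian bands fitted with scipy's curve_fit to a synthetic spectrum. The true MGM defines the continuum and Gaussians in energy (wavenumber) space, and the band parameters here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified MGM-style model: linear continuum minus two Gaussian absorption bands
# (e.g., roughly where the ~1 and ~2 micron pyroxene bands sit)
def model(wl, c0, c1, d1, mu1, s1, d2, mu2, s2):
    continuum = c0 + c1 * wl
    band1 = d1 * np.exp(-((wl - mu1) ** 2) / (2 * s1**2))
    band2 = d2 * np.exp(-((wl - mu2) ** 2) / (2 * s2**2))
    return continuum - band1 - band2

wl = np.linspace(0.5, 2.5, 300)                      # wavelength, micrometres
true = model(wl, 0.55, 0.02, 0.20, 0.95, 0.10, 0.12, 1.95, 0.25)
rng = np.random.default_rng(3)
refl = true + rng.normal(0, 0.005, wl.size)          # synthetic noisy spectrum

p0 = [0.5, 0.0, 0.1, 1.0, 0.1, 0.1, 2.0, 0.2]        # initial guesses
popt, _ = curve_fit(model, wl, refl, p0=p0)
print("fitted band centres (um):", popt[3], popt[6])
```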
Abstract:
OBJECTIVES: To assess whether blood pressure control in primary care could be improved with the use of patient held targets and self monitoring in a practice setting, and to assess the impact of these on health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences, and costs. DESIGN: Randomised controlled trial. SETTING: Eight general practices in south Birmingham. PARTICIPANTS: 441 people receiving treatment in primary care for hypertension but not controlled below the target of < 140/85 mm Hg. INTERVENTIONS: Patients in the intervention group received treatment targets along with facilities to measure their own blood pressure at their general practice; they were also asked to visit their general practitioner or practice nurse if their blood pressure was repeatedly above the target level. Patients in the control group received usual care (blood pressure monitoring by their practice). MAIN OUTCOME MEASURES: Primary outcome: change in systolic blood pressure at six months and one year in both intervention and control groups. Secondary outcomes: change in health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences of method of blood pressure monitoring, and costs. RESULTS: 400 (91%) patients attended follow up at one year. Systolic blood pressure in the intervention group had significantly reduced after six months (mean difference 4.3 mm Hg (95% confidence interval 0.8 mm Hg to 7.9 mm Hg)) but not after one year (mean difference 2.7 mm Hg (-1.2 mm Hg to 6.6 mm Hg)). No overall difference was found in diastolic blood pressure, anxiety, health behaviours, or number of prescribed drugs. Patients who self monitored lost more weight than controls (as evidenced by a drop in body mass index), rated self monitoring above monitoring by a doctor or nurse, and consulted less often. Overall, self monitoring did not cost significantly more than usual care (251 pounds sterling (437 dollars; 364 euros) (95% confidence interval 233 pounds sterling to 275 pounds sterling) versus 240 pounds sterling (217 pounds sterling to 263 pounds sterling)). CONCLUSIONS: Practice based self monitoring resulted in small but significant improvements in blood pressure at six months, which were not sustained after a year. Self monitoring was well received by patients, anxiety did not increase, and there was no appreciable additional cost. Practice based self monitoring is feasible and results in blood pressure control that is similar to that in usual care.
Abstract:
This article explains, first, the reasons why a knowledge of statistics is necessary and describes the role that statistics plays in an experimental investigation. Second, the normal distribution is introduced, which describes the natural variability shown by many measurements in optometry and vision sciences. Third, the application of the normal distribution to some common statistical problems, including how to determine whether an individual observation is a typical member of a population and how to determine the confidence interval for a sample mean, is described.
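Both tasks mentioned at the end can be shown in a few lines: a t-based 95% confidence interval for a sample mean, and a z-score to judge whether a new observation is a typical member of the population. The sample values below are hypothetical.

```python
import math
from scipy import stats

# Hypothetical sample of intraocular pressure readings (mm Hg), illustrative only
x = [14.2, 15.1, 16.3, 13.8, 15.9, 14.7, 16.8, 15.4, 14.9, 15.6]
n = len(x)
mean = sum(x) / n
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))

# 95% confidence interval for the sample mean (t distribution, n - 1 df)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * sd / math.sqrt(n)
print(f"mean {mean:.2f}, 95% CI ({mean - half_width:.2f}, {mean + half_width:.2f})")

# Is a new observation a typical member of this population? z-score against the
# sample mean and SD; |z| > 1.96 lies outside the central 95% of a normal curve
new_obs = 19.0
z = (new_obs - mean) / sd
print(f"z = {z:.2f}, two-sided p = {2 * (1 - stats.norm.cdf(abs(z))):.3f}")
```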
Abstract:
An intelligent agent, operating in an external world which cannot be fully described in its internal world model, must be able to monitor the success of a previously generated plan and to respond to any errors which may have occurred. The process of error analysis requires the ability to reason in an expert fashion about time and about processes occurring in the world. Reasoning about time is needed to deal with causality. Reasoning about processes is needed since the direct effects of a plan action can be completely specified when the plan is generated, but the indirect effects cannot. For example, the action 'open tap' leads with certainty to 'tap open', whereas whether there will be a fluid flow and how long it might last is more difficult to predict. The majority of existing planning systems cannot handle these kinds of reasoning, thus limiting their usefulness. This thesis argues that both kinds of reasoning require a complex internal representation of the world. The use of Qualitative Process Theory and an interval-based representation of time is proposed as a representation scheme for such a world model. The planning system which was constructed has been tested on a set of realistic planning scenarios. It is shown that even simple planning problems, such as making a cup of coffee, require extensive reasoning if they are to be carried out successfully. The final chapter concludes that the planning system described does allow the correct solution of planning problems involving complex side effects, which planners up to now have been unable to solve.
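The tap example can be sketched with an interval-based representation: the direct effect 'tap open' holds over an interval, and the derived fluid-flow process is active only where its preconditions overlap. This is a hypothetical toy model, not the thesis's Qualitative Process Theory implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interval:
    start: float
    end: float

    def overlaps(self, other: "Interval") -> bool:
        return self.start < other.end and other.start < self.end

def fluid_flow(tap_open: Interval, tank_nonempty: Interval) -> Optional[Interval]:
    """The derived (indirect) process holds on the intersection of its preconditions."""
    if not tap_open.overlaps(tank_nonempty):
        return None
    return Interval(max(tap_open.start, tank_nonempty.start),
                    min(tap_open.end, tank_nonempty.end))

# Direct effect of the plan action 'open tap' plus a quantity condition on the tank:
# the flow stops at t = 12 even though the tap stays open until t = 30
print(fluid_flow(Interval(0.0, 30.0), Interval(0.0, 12.0)))
```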
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward for productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models, two of them deterministic, called sensitivity analysis and deterministic appraisal, and the third one stochastic, called risk simulation, have been developed to cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecasted using polynomials of degree four. This model is tested by comparisons of its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurements and planning models to the British motor vehicle manufacturing companies are presented and discussed.
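A minimal sketch of the stochastic (risk simulation) idea: component variables are drawn as independent normal random variables, the added-value productivity index is simulated, and one component variable is forecast with a degree-four polynomial. All figures are hypothetical, not the thesis's British Leyland data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo risk simulation of an added-value productivity index, with
# independent, normally distributed component variables (hypothetical figures)
n_sim = 10_000
sales = rng.normal(120.0, 8.0, n_sim)          # £m
bought_in = rng.normal(70.0, 6.0, n_sim)       # materials, parts and services, £m
employee_cost = rng.normal(30.0, 2.0, n_sim)   # £m

added_value = sales - bought_in
productivity = added_value / employee_cost     # added value per £ of employee cost

lo, hi = np.percentile(productivity, [5, 95])
print(f"index mean {productivity.mean():.2f}, 90% interval ({lo:.2f}, {hi:.2f})")

# Degree-four polynomial forecast of a component variable from historical values
years = np.arange(1970, 1980)
added_value_hist = np.array([38, 40, 43, 41, 45, 47, 50, 49, 52, 55], dtype=float)
coeffs = np.polyfit(years - years[0], added_value_hist, deg=4)
forecast_1981 = np.polyval(coeffs, 1981 - years[0])
print(f"1981 forecast: {forecast_1981:.1f}")
```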
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainer and connector as providing simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies for the board that agree closely with experimental natural frequencies are obtained. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions and provide a percentage fixity 39.5% greater than the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively and are capable of representing the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When using only modal frequencies in model improvement, the analysis is very much simplified. Furthermore, the time taken to obtain the experimental data will be substantially reduced as the experimental mode shapes are not required. In addition, this thesis advocates a relatively simple method of determining the support locations for maximising the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm in the analysis. The key to the procedure is to position the necessary supports so as to eliminate the lower modes from the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode from the original configuration, so that all the other lower modes are eliminated by the introduction of the new or extra supports to the structure. It also proposes inspecting the average driving point residues along the nodal lines of vibrating plates to find the optimal locations of the supports. Numerical examples are provided to demonstrate its validity. When applied to the PCB supported on three sides by two wedge retainers and a connector, it is found that a single point constraint that would yield the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support has the effect of increasing the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, or 115% higher.
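The model-updating idea (tune the stiffnesses representing the boundary supports until the model's natural frequencies match the test frequencies) can be illustrated with a toy two-degree-of-freedom system in which a single boundary stiffness k_b stands in for the retainer/connector supports. The model and values are illustrative, not the PCB finite element model from the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

# Toy 2-DOF lumped model: k_b is the unknown boundary stiffness to ground,
# k_c couples the two masses (hypothetical values)
m1, m2, k_c = 0.05, 0.05, 2.0e4            # kg, kg, N/m
M = np.diag([m1, m2])

def first_frequency(k_b):
    K = np.array([[k_b + k_c, -k_c],
                  [-k_c, k_c]])
    lam = eigh(K, M, eigvals_only=True)     # generalized eigenvalues, ascending
    return np.sqrt(lam[0]) / (2 * np.pi)    # fundamental frequency in Hz

f_measured = first_frequency(5.0e4)         # pretend this came from a modal test

# Update k_b so the model's fundamental frequency matches the test frequency
k_updated = brentq(lambda k: first_frequency(k) - f_measured, 1.0e3, 1.0e6)
print(f"updated boundary stiffness: {k_updated:.3e} N/m")
```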
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
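The Rational Fraction Polynomial method fits a ratio of polynomials in s = i*omega to measured FRF data and extracts modal frequency and damping from the denominator poles. A minimal single-mode sketch of the linearised least-squares formulation, applied to a synthetic FRF; the values are illustrative and this ignores the multi-channel, parallel aspects treated in the thesis.

```python
import numpy as np

# Synthetic single-DOF FRF: H(w) = 1 / (k - m w^2 + i c w)
m, c, k = 1.0, 4.0, 1.0e4                  # gives f_n ~ 15.9 Hz, zeta ~ 2%
w = 2 * np.pi * np.linspace(5.0, 30.0, 400)
H = 1.0 / (k - m * w**2 + 1j * c * w)

# Linearised rational-fraction least squares (numerator order 0, denominator
# order 2 with leading coefficient fixed to 1):
#   a0 - H(w) * (b0 + b1 s) = H(w) * s^2,   s = i w
s = 1j * w
A = np.column_stack([np.ones_like(s), -H, -H * s])   # unknowns [a0, b0, b1]
rhs = H * s**2
# Stack real and imaginary parts so the solve stays real-valued
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
coef, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
a0, b0, b1 = coef

# Poles of the fitted denominator s^2 + b1 s + b0 give frequency and damping
poles = np.roots([1.0, b1, b0])
p = poles[np.argmax(poles.imag)]           # pole with positive imaginary part
wn = abs(p)
zeta = -p.real / wn
print(f"fitted f_n = {wn / (2 * np.pi):.2f} Hz, zeta = {zeta:.3f}")
```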