942 results for Variable Sampling Interval Control Charts
Abstract:
Self-control allows an individual to obtain a more preferred outcome by forgoing an immediate interest. Self-control is an advanced cognitive process because it involves the ability to weigh the costs and benefits of impulsive versus restrained behavior, determine the consequences of such behavior, and make decisions based on the most advantageous course of action. Self-control has been thoroughly explored in Old World primates, but less so in New World monkeys. There are many ways to test self-control abilities in non-human primates, including exchange tasks in which an animal must forgo an immediate, less preferred reward to receive a delayed, more preferred reward. I examined the self-control abilities of six capuchin monkeys using a task in which a monkey was given a less preferred food and was required to wait a delay interval to trade the fully intact less preferred food for a qualitatively better, more preferred food. Partially eaten pieces of the less preferred food were not rewarded, and delay intervals were increased for each individual based on performance. All six monkeys were successful in inhibiting impulsivity and trading a less preferred food for a more preferred food at the end of a delay interval. The maximum duration each subject postponed gratification instead of responding impulsively was considered its delay tolerance. This study was the first to show that monkeys could inhibit impulsivity in a delay of gratification food exchange task in which the immediate and delayed food options differed qualitatively and a partially eaten less preferred food was not rewarded with the more preferred food at the end of a delay interval. These results show that New World monkeys possess advanced cognitive abilities similar to those of Old World primates.
Abstract:
In biostatistical applications interest often focuses on the estimation of the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, then the data structure is called interval censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) by modeling parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by only modeling a univariate hazard of the pooled monitoring times, conditional on the follow-up time, T, and the covariate process, which can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times only depend on T through the observed covariate process. We introduce an inverse probability of censoring weighted (IPCW) estimator of the distribution of T and of smooth functionals thereof which are guaranteed to be consistent and asymptotically normal if we have available correctly specified semiparametric models for the two hazards of the censoring process.
Furthermore, given such correctly specified models for these hazards of the censoring process, we propose a one-step estimator that will improve on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and that remains consistent and asymptotically normal even if this working model is misspecified. It is shown that the one-step estimator is efficient if each subject is monitored at most once and the working model contains the truth. In general, it is shown that the one-step estimator optimally uses the surrogate information if the working model contains the truth. It is not optimal in using the interval information provided by the current status indicators at the monitoring times, but simulations in Peterson and van der Laan (1997) show that the efficiency loss is small.
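The inverse-probability-of-censoring weighting idea above can be illustrated with a toy sketch. This is not the authors' estimator: `ipcw_mean` and its arguments are hypothetical names, and the censoring-survival probabilities are assumed to be given rather than estimated from a semiparametric hazard model.

```python
import numpy as np

def ipcw_mean(times, observed, censor_surv):
    """Toy IPCW estimate of E[T] from right-censored data.

    times       : observed follow-up times (event or censoring)
    observed    : 1 if the event was observed, 0 if censored
    censor_surv : estimated P(censoring time > t) for each subject,
                  evaluated at that subject's event time
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=float)
    censor_surv = np.asarray(censor_surv, dtype=float)
    # Each uncensored observation is up-weighted by the inverse
    # probability of remaining uncensored until its event time;
    # censored observations get weight zero.
    weights = observed / censor_surv
    return np.sum(weights * times) / np.sum(weights)
```

With no censoring the weights are all one and the estimator reduces to the ordinary sample mean, which is the sanity check usually applied to IPCW constructions.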
Abstract:
Investigators interested in whether a disease aggregates in families often collect case-control family data, which consist of disease status and covariate information for families selected via case or control probands. Here, we focus on the use of case-control family data to investigate the relative contributions to the disease of additive genetic effects (A), shared family environment (C), and unique environment (E). To this end, we describe an ACE model for binary family data and then introduce an approach to fitting the model to case-control family data. The structural equation model, which has been described previously, combines a general-family extension of the classic ACE twin model with a (possibly covariate-specific) liability-threshold model for binary outcomes. Our likelihood-based approach to fitting involves conditioning on the proband's disease status, as well as setting prevalence equal to a pre-specified value that can be estimated from the data themselves if necessary. Simulation experiments suggest that our approach to fitting yields approximately unbiased estimates of the A, C, and E variance components, provided that certain commonly made assumptions hold. These assumptions include: the usual assumptions for the classic ACE and liability-threshold models; assumptions about shared family environment for relative pairs; and assumptions about the case-control family sampling, including single ascertainment. When our approach is used to fit the ACE model to Austrian case-control family data on depression, the resulting estimate of heritability is very similar to those from previous analyses of twin data.
Abstract:
BACKGROUND: Short-acting agents for neuromuscular block (NMB) require frequent dosing adjustments for individual patients' needs. In this study, we verified a new closed-loop controller for mivacurium dosing in clinical trials. METHODS: Fifteen patients were studied. T1% measured with electromyography was used as the input signal for the model-based controller. After induction of propofol/opiate anaesthesia, stabilization of the baseline electromyography signal was awaited and a bolus of 0.3 mg kg-1 mivacurium was then administered to facilitate endotracheal intubation. Closed-loop infusion was started thereafter, targeting a neuromuscular block of 90%. Setpoint deviation, the number of manual interventions and surgeons' complaints were recorded. Drug use and its variability between and within patients were evaluated. RESULTS: Median time of closed-loop control for the 11 patients included in the data processing was 135 [89-336] min (median [range]). Four patients had to be excluded because of sensor problems. Mean absolute deviation from setpoint was 1.8 +/- 0.9 T1%. Neither manual interventions nor complaints from the surgeons were recorded. Mean necessary mivacurium infusion rate was 7.0 +/- 2.2 microg kg-1 min-1. Intrapatient variability of mean infusion rates over 30-min intervals showed differences of up to a factor of 1.8 between the highest and lowest requirement in the same patient. CONCLUSIONS: Neuromuscular block can be precisely controlled with mivacurium using our model-based controller. The amount of mivacurium needed to maintain T1% at defined constant levels differed largely between and within patients. Closed-loop control therefore seems advantageous for automatically maintaining neuromuscular block at constant levels.
Abstract:
BACKGROUND: The objective of this study was to compare cycle control, cycle-related characteristics and bodyweight effects of NuvaRing with those of a combined oral contraceptive (COC) containing 30 microg of ethinyl estradiol and 3 mg of drospirenone. METHODS: A randomized, multicentre, open-label trial in which 983 women were treated (intent-to-treat population) with NuvaRing or the COC for 13 cycles. RESULTS: Breakthrough bleeding or spotting during cycles 2-13 was in general less frequent with NuvaRing than with the COC (4.7-10.4%) and showed a statistically significant odds ratio of 0.61 (95% confidence interval: 0.46, 0.80) with longitudinal analysis. Intended bleeding was significantly better for all cycles with NuvaRing (55.2-68.5%) than with the COC (35.6-56.6%) (P < 0.01). Changes from baseline in mean bodyweight and body composition parameters were relatively small for both groups with no notable between-group differences. CONCLUSION: NuvaRing was associated with better cycle control than the COC, and there was no clinically relevant difference between the two groups in bodyweight.
Abstract:
BACKGROUND: Periodontitis has been identified as a potential risk factor in cardiovascular diseases. It is possible that the stimulation of host responses to oral infections may result in vascular damage and the inducement of blood clotting. The aim of this study was to assess the role of periodontal infection and bacterial burden as an explanatory variable for the activation of the inflammatory process leading to acute coronary syndrome (ACS). METHODS: A total of 161 consecutive surviving cases admitted with a diagnosis of ACS and 161 control subjects, matched with cases according to their gender, socioeconomic level, and smoking status, were studied. Serum white blood cell (WBC) counts, high- and low-density lipoprotein (HDL/LDL) levels, high-sensitivity C-reactive protein (hs-CRP) levels, and clinical periodontal routine parameters were studied. The subgingival pathogens were assayed by the checkerboard DNA-DNA hybridization method. RESULTS: Total oral bacterial load was higher in the subjects with ACS (mean difference: 17.4x10^5; SD: 10.8; 95% confidence interval [CI]: 4.2 to 17.4; P<0.001), with significant differences for 26 of 40 species, including Porphyromonas gingivalis, Tannerella forsythensis, and Treponema denticola. Serum WBC counts, hs-CRP levels, Streptococcus intermedius, and Streptococcus sanguis were explanatory factors for acute coronary syndrome status (Nagelkerke R2=0.49). CONCLUSION: The oral bacterial load of S. intermedius, S. sanguis, Streptococcus anginosus, T. forsythensis, T. denticola, and P. gingivalis may be concomitant risk factors in the development of ACS.
Abstract:
Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov Chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
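The penalised-spline smoother mentioned above can be sketched in its simplest frequentist form: ridge regression on a truncated-linear basis, with the penalty weight playing the role that the degrees-of-freedom control plays in the Bayesian version. This is a minimal illustration, not the paper's latent-variable model; `pspline_fit` and its arguments are hypothetical names.

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """Penalised-spline fit via ridge regression on a truncated
    linear basis. lam controls the smoother's effective degrees of
    freedom: large lam -> nearly linear fit, small lam -> wiggly fit.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # design matrix: intercept, slope, and truncated lines (x - k)_+
    Z = np.column_stack([np.maximum(x - k, 0.0) for k in knots])
    X = np.column_stack([np.ones_like(x), x, Z])
    # penalise only the truncated-line (spline) coefficients
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta, beta
```

Because the intercept and slope are left unpenalised, an exactly linear signal is recovered exactly at any penalty weight, which is the standard check for this construction.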
Abstract:
In the anti-saccade paradigm, subjects are instructed not to make a reflexive saccade to an appearing lateral target but to make an intentional saccade to the opposite side instead. The inhibition of reflexive saccade triggering is under the control of the dorsolateral prefrontal cortex (DLPFC). The critical time interval at which this inhibition takes place during the paradigm, however, is not exactly known. In the present study, we used single-pulse transcranial magnetic stimulation (TMS) to interfere with DLPFC function in 15 healthy subjects. TMS was applied over the right DLPFC either 100 ms before the onset of the visual target (i.e. -100 ms), at target onset (i.e. 0 ms) or 100 ms after target onset (i.e. +100 ms). Stimulation 100 ms before target onset significantly increased the percentage of anti-saccade errors to both sides, while stimulation at, or after, target onset had no significant effect. All three stimulation conditions had no significant influence on saccade latency of correct or erroneous anti-saccades. These findings show that the critical time interval at which the DLPFC controls the suppression of a reflexive saccade in the anti-saccade paradigm is before target onset. In addition, the results support the view that the triggering of correct anti-saccades is not under direct control of the DLPFC.
Abstract:
BACKGROUND: Many epidemiological studies indicate a positive correlation between cataract surgery and the subsequent progression of age-related macular degeneration (AMD). Such a correlation would have far-reaching consequences. However, in epidemiological studies it is difficult to determine the significance of a single risk factor, such as cataract surgery. PATIENTS AND METHODS: We performed a retrospective case-control study of patients with new onset exudative age-related macular degeneration to determine if cataract surgery was a predisposing factor. A total of 1496 eyes were included in the study: 984 cases with new onset of exudative AMD and 512 control eyes with early signs of age-related maculopathy. Lens status (phakic or pseudophakic) was determined for each eye. RESULTS: There was no significant difference in lens status between the study and control groups (227/984 [23.1 %] vs. 112/512 [21.8 %] pseudophakic, p = 0.6487; OR = 1.071; 95 % CI = 0.8284-1.384). In cases with bilateral pseudophakia (n = 64) no statistically significant difference was found in the interval between cataract surgery in either eye and the onset of exudative AMD in the study eye (225.9 +/- 170.4 vs. 209.9 +/- 158.2 weeks, p = 0.27). CONCLUSIONS: Our results provide evidence that cataract surgery is not a major risk factor for the development of exudative AMD.
Abstract:
BACKGROUND: Duplications and deletions in the human genome can cause disease or predispose persons to disease. Advances in technologies to detect these changes allow for the routine identification of submicroscopic imbalances in large numbers of patients. METHODS: We tested for the presence of microdeletions and microduplications at a specific region of chromosome 1q21.1 in two groups of patients with unexplained mental retardation, autism, or congenital anomalies and in unaffected persons. RESULTS: We identified 25 persons with a recurrent 1.35-Mb deletion within 1q21.1 from screening 5218 patients. The microdeletions had arisen de novo in eight patients, were inherited from a mildly affected parent in three patients, were inherited from an apparently unaffected parent in six patients, and were of unknown inheritance in eight patients. The deletion was absent in a series of 4737 control persons (P=1.1x10(-7)). We found considerable variability in the level of phenotypic expression of the microdeletion; phenotypes included mild-to-moderate mental retardation, microcephaly, cardiac abnormalities, and cataracts. The reciprocal duplication was enriched in nine children with mental retardation or autism spectrum disorder and other variable features (P=0.02). We identified three deletions and three duplications of the 1q21.1 region in an independent sample of 788 patients with mental retardation and congenital anomalies. CONCLUSIONS: We have identified recurrent molecular lesions that elude syndromic classification and whose disease manifestations must be considered in a broader context of development as opposed to being assigned to a specific disease. Clinical diagnosis in patients with these lesions may be most readily achieved on the basis of genotype rather than phenotype.
Abstract:
In this dissertation, the problem of creating effective control algorithms for large scale Adaptive Optics (AO) systems for the new generation of giant optical telescopes is addressed. The effectiveness of AO control algorithms is evaluated in several respects, such as computational complexity, compensation error rejection and robustness, i.e. reasonable insensitivity to system imperfections. The results of this research are summarized as follows: 1. Robustness study of the Sparse Minimum Variance Pseudo Open Loop Controller (POLC) for multi-conjugate adaptive optics (MCAO). An AO system model that accounts for various system errors has been developed and applied to check the stability and performance of the POLC algorithm, which is one of the most promising approaches for future AO systems control. It has been shown through numerous simulations that, despite the initial assumption that exact system knowledge is necessary for the POLC algorithm to work, it is highly robust against various system errors. 2. Predictive Kalman Filter (KF) and Minimum Variance (MV) control algorithms for MCAO. The limiting performance of the non-dynamic Minimum Variance and dynamic KF-based phase estimation algorithms for MCAO has been evaluated via Monte Carlo simulations. The validity of a simple near-Markov autoregressive phase dynamics model has been tested and its ability to adequately predict the turbulence phase has been demonstrated for both single- and multi-conjugate AO. It has also been shown that there is no performance improvement gained from the use of the more complicated KF approach in comparison to the much simpler MV algorithm in the case of MCAO. 3. Sparse predictive Minimum Variance control algorithm for MCAO. The temporal prediction stage has been added to the non-dynamic MV control algorithm in such a way that no additional computational burden is introduced.
It has been confirmed through simulations that the use of phase prediction makes it possible to significantly reduce the system sampling rate, and thus the overall computational complexity, while keeping the system stable and effectively compensating for the measurement and control latencies.
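The two ingredients of the predictive MV scheme above can be sketched in miniature: a one-step autoregressive prediction of the phase (the near-Markov model) feeding a static minimum-variance reconstruction. This is a toy illustration under simplifying assumptions (a scalar AR(1) coefficient, known covariances), not the dissertation's sparse MCAO implementation; the function names are hypothetical.

```python
import numpy as np

def ar1_predict(phase, alpha):
    """One-step prediction of a turbulence phase vector under a
    near-Markov AR(1) model: phi[t+1] = alpha * phi[t] + noise.
    Predicting ahead compensates the measurement/control latency."""
    return alpha * np.asarray(phase, dtype=float)

def mv_estimate(H, y, C_phi, C_noise):
    """Static minimum-variance (Wiener) phase estimate from a
    wavefront-sensor measurement y = H @ phi + noise:

        phi_hat = C_phi H^T (H C_phi H^T + C_noise)^{-1} y
    """
    S = H @ C_phi @ H.T + C_noise
    return C_phi @ H.T @ np.linalg.solve(S, y)
```

With equal prior and noise covariances the estimator splits the difference between the measurement and the zero-mean prior, which is the expected minimum-variance behaviour.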
Abstract:
Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems with well-defined vegetation and soil characteristics. Development of an all-encompassing definition for riparian ecotones, because of their high variability, is challenging. However, there are two primary factors on which all riparian ecotones depend: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have utilized fixed-width buffers, but this methodology has proven inadequate because it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from Federal and State agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5, and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer on the boundary placement of the delineated variable-width riparian ecotone areas. The result of this study is a robust, automated GIS-based model attached to ESRI ArcMap software to delineate and classify variable-width riparian ecotones.
Abstract:
Electrical Power Assisted Steering (EPAS) will likely be used on future automotive power steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Motor characteristic variations, which can be indicated by variations of motor parameters such as the coil resistance and the torque constant, directly impart inaccuracies into a control scheme based on the nominal parameter values, and thus the whole system performance suffers. The motor controller must address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for brushless DC (BLDC) motors are explored. The first algorithm engages a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors with the q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm is proposed to use both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that improved parameter estimation performance can be achieved by using more history data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validated the superior estimation performance of the EKF. However, its computational complexity and stability may be barriers for practical implementation.
The fourth algorithm is a model reference adaptive control (MRAC) that utilizes the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov’s direct method. Simulation shows superior performance in terms of the convergence speed and current tracking. These algorithms are compared in closed loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as the initialization and rotor angle transducer resolution. This model can also be beneficial during first stage development in automotive BLDC motor applications.
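The model-reference idea behind the fourth algorithm can be illustrated with a toy first-order example. This is not the dissertation's Lyapunov-based BLDC controller: the plant, the MIT-rule adaptation law, and all parameter values below are hypothetical, chosen only so that a single adaptive gain can drive the plant to follow the reference model.

```python
def simulate_mrac(a=2.0, b=1.0, am=2.0, bm=2.0, gamma=1.0,
                  dt=0.001, steps=20000):
    """Toy MIT-rule model-reference adaptive control.

    Plant:            y'  = -a*y  + b*theta*r   (theta: adaptive gain)
    Reference model:  ym' = -am*ym + bm*r
    MIT rule:         theta' = -gamma * (y - ym) * ym

    The adaptation is gradient descent on e^2/2 with e = y - ym; a
    Lyapunov-based law (as in the dissertation) has the same structure
    with guaranteed stability.
    """
    y = ym = theta = 0.0
    r = 1.0  # constant command input
    for _ in range(steps):
        e = y - ym                       # model-following error
        theta -= gamma * e * ym * dt     # adapt the control gain
        y += (-a * y + b * theta * r) * dt   # forward-Euler plant step
        ym += (-am * ym + bm * r) * dt       # reference model step
    return y, ym, theta
```

With these (hypothetical) values the matched gain is theta = bm/b = 2, and after the transient the plant output settles on the reference output.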
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT-data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT-data has to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all the above mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm was tested by re-gridding a series of X-ray CT-images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
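The single-parameter control of Hermitian curve interpolation can be sketched as follows. This is a minimal illustration, not the paper's integral-conserving scheme: here a hypothetical `tension` parameter simply scales the endpoint slopes, so that reducing it suppresses overshoot at the cost of approaching piecewise-linear behaviour.

```python
import numpy as np

def hermite_interp(x, y, xq, tension=1.0):
    """Piecewise cubic Hermite interpolation with a single tension
    parameter scaling the endpoint slopes.

    tension = 1 gives ordinary Hermite interpolation with
    finite-difference slopes; tension = 0 removes the slope terms,
    eliminating overshoot (and reducing to near-linear behaviour).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xq = np.atleast_1d(np.asarray(xq, dtype=float))
    # finite-difference slopes (one-sided at the ends), scaled
    m = np.gradient(y, x) * tension
    out = np.empty_like(xq)
    for k, xv in enumerate(xq):
        i = int(np.clip(np.searchsorted(x, xv) - 1, 0, len(x) - 2))
        h = x[i + 1] - x[i]
        t = (xv - x[i]) / h
        # cubic Hermite basis functions
        h00 = 2 * t**3 - 3 * t**2 + 1
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out[k] = h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
    return out
```

By construction the curve passes through the knots for any tension value, and with tension = 0 the midpoint of a segment is the average of its endpoints.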
Abstract:
To evaluate strategies used to select cases and controls and how reported odds ratios are interpreted, the authors examined 150 case-control studies published in leading general medicine, epidemiology, and clinical specialist journals from 2001 to 2007. Most of the studies (125/150; 83%) were based on incident cases; among these, the source population was mostly dynamic (102/125; 82%). A minority (23/125; 18%) sampled from a fixed cohort. Among studies with incident cases, 105 (84%) could interpret the odds ratio as a rate ratio. Fifty-seven (46% of 125) required the source population to be stable for such interpretation, while the remaining 48 (38% of 125) did not need any assumptions because of matching on time or concurrent sampling. Another 17 (14% of 125) studies with incident cases could interpret the odds ratio as a risk ratio, with 16 of them requiring the rare disease assumption for this interpretation. The rare disease assumption was discussed in 4 studies but was not relevant to any of them. No investigators mentioned the need for a stable population. The authors conclude that in current case-control research, a stable exposure distribution is much more frequently needed to interpret odds ratios than the rare disease assumption. At present, investigators conducting case-control studies rarely discuss what their odds ratios estimate.
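The distinction between an odds ratio and a risk ratio, and the role of the rare disease assumption discussed above, can be shown with a worked 2x2 example using illustrative numbers (not data from the reviewed studies).

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:

                 exposed  unexposed
      cases         a         b
      non-cases     c         d
    """
    return (a * d) / (b * c)

def risk_ratio(a, b, c, d):
    """Risk ratio, valid when the table comes from a full cohort
    (not directly estimable under case-control sampling)."""
    return (a / (a + c)) / (b / (b + d))

# Hypothetical cohort: 30/1000 exposed and 10/1000 unexposed develop disease.
a, b, c, d = 30, 10, 970, 990
```

Here the risk ratio is (30/1000)/(10/1000) = 3.0 and the odds ratio is (30*990)/(10*970) ≈ 3.06: because the disease is rare (about 3%), the odds ratio closely approximates the risk ratio, which is precisely the rare disease assumption. In a case-control study from a dynamic source population, by contrast, the odds ratio estimates the rate ratio without that assumption, provided the exposure distribution is stable or controls are sampled concurrently.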