97 results for Event Procedure
Statistical evaluation of the fixed concentration procedure for acute inhalation toxicity assessment
Abstract:
The conventional method for the assessment of acute inhalation toxicity (OECD Test Guideline 403, 1981) uses death of animals as an endpoint to identify the median lethal concentration (LC50). A new OECD Test Guideline called the Fixed Concentration Procedure (FCP) is being prepared to provide an alternative to Test Guideline 403. Unlike Test Guideline 403, the FCP does not provide a point estimate of the LC50, but aims to identify an airborne exposure level that causes clear signs of nonlethal toxicity. This is then used to assign classification according to the new Globally Harmonized System of Classification and Labelling (GHS). The FCP has been validated using statistical simulation rather than by in vivo testing. The statistical simulation approach predicts the GHS classification outcome and the numbers of deaths and animals used in the test for imaginary substances with a range of LC50 values and dose-response curve slopes. This paper describes the FCP and reports the results of the statistical simulation study assessing its properties. It is shown that the procedure will be completed with considerably less death and suffering than Test Guideline 403, and will classify substances either in the same or a more stringent GHS class than that assigned on the basis of the LC50 value.
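The abstract does not spell out the simulation mechanics, but a minimal sketch of this kind of Monte Carlo evaluation can be written assuming a probit dose-response model on the log-concentration scale. The LC50, slope, group size, and fixed concentration levels below are all illustrative, and the FCP's actual decision rules (sighting study, evident-toxicity endpoint, classification logic) are not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def p_death(conc, lc50, slope):
    """Probit dose-response: probability that one animal dies at `conc`.
    `slope` is the probit slope on the log10-concentration scale."""
    return norm.cdf(slope * (np.log10(conc) - np.log10(lc50)))

def simulate_test(conc, lc50, slope, n_animals=5):
    """Expose a group of animals at one fixed concentration and count
    the deaths (a binomial draw under the probit model)."""
    return rng.binomial(n_animals, p_death(conc, lc50, slope))

# Evaluate an imaginary substance over many replicate tests.
lc50, slope = 1500.0, 2.0              # mg/m^3; illustrative values
fixed_concs = [100, 500, 2500, 20000]  # hypothetical fixed exposure levels
for c in fixed_concs:
    deaths = [simulate_test(c, lc50, slope) for _ in range(10_000)]
    print(f"conc={c:>6} mg/m^3  mean deaths per test: {np.mean(deaths):.2f}")
```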
Abstract:
The fixed-dose procedure (FDP) was introduced as OECD Test Guideline 420 in 1992, as an alternative to the conventional median lethal dose (LD50) test for the assessment of acute oral toxicity (OECD Test Guideline 401). The FDP uses fewer animals and causes less suffering than the conventional test, while providing information on acute toxicity that allows substances to be ranked according to the EU hazard classification system. Recently the FDP has been revised, with the aim of providing further reductions and refinements, and classification according to the criteria of the Globally Harmonized Hazard Classification and Labelling scheme (GHS). This paper describes the revised FDP and analyses its properties, as determined by a statistical modelling approach. The analysis shows that the revised FDP generally classifies substances for acute oral toxicity in the same, or a more stringent, hazard class as that based on the LD50 value, according to either the GHS or the EU classification scheme. The likelihood of achieving the same classification is greatest for substances with a steep dose-response curve and a median toxic dose (TD50) close to the LD50. The revised FDP usually requires five or six animals, with two or fewer dying as a result of treatment in most cases.
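Under the same hedged probit assumption as the sketch above, the snippet below illustrates the concordance claim by comparing the probabilities of evident toxicity and of death at each fixed dose, for a steep curve with TD50 close to the LD50 versus a shallow curve with TD50 well below it. The fixed dose levels (5, 50, 300, 2000 mg/kg) are the revised guideline's standard levels; every other number is invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def p_effect(dose, ed50, slope):
    """Probit model: probability of the effect at a given dose; the effect
    is death when ed50 is the LD50, evident toxicity when it is the TD50."""
    return norm.cdf(slope * (np.log10(dose) - np.log10(ed50)))

fixed_doses = [5, 50, 300, 2000]   # mg/kg, the revised FDP's fixed levels
ld50 = 400.0                       # illustrative

# Steep curve with TD50 near the LD50 vs shallow curve with TD50 far below it.
for slope, td50 in [(4.0, 300.0), (1.0, 100.0)]:
    print(f"slope={slope}, TD50={td50} mg/kg:")
    for d in fixed_doses:
        print(f"  dose={d:>4}  P(toxicity)={p_effect(d, td50, slope):.2f}"
              f"  P(death)={p_effect(d, ld50, slope):.2f}")
```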
Abstract:
Observation of adverse drug reactions during drug development can cause closure of the whole programme. However, if an association between genotype and the risk of an adverse event is discovered, then it might suffice to exclude patients of certain genotypes from future recruitment. Various sequential and non-sequential procedures are available to identify an association between the whole genome, or at least a portion of it, and the incidence of adverse events. In this paper we start with a suspected association between genotype and the risk of an adverse event and suppose that the genetic subgroups with elevated risk can be identified. Our focus is determination of whether the patients identified as being at risk should be excluded from further studies of the drug. We propose using a utility function to determine the appropriate action, taking into account the relative costs of suffering an adverse reaction and of failing to alleviate the patient's disease. Two illustrative examples are presented, one comparing patients who suffer from an adverse event with contemporary patients who do not, and the other making use of a reference control group. We also illustrate two classification methods, LASSO and CART, for identifying patients at risk, but we stress that any appropriate classification method could be used in conjunction with the proposed utility function. Our emphasis is on determining the action to take rather than on providing definitive evidence of an association.
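The abstract does not state the form of the utility function; the sketch below shows only the general shape such a decision rule might take, with hypothetical probabilities and costs standing in for quantities the paper would estimate from data.

```python
def expected_utility(include, p_ae, p_benefit, cost_ae, cost_fail):
    """Expected utility for one at-risk patient; all quantities hypothetical.
    cost_ae   -- cost of the patient suffering the adverse reaction
    cost_fail -- cost of failing to alleviate the patient's disease"""
    if include:
        # Treated: may suffer the adverse event, may fail to benefit.
        return -p_ae * cost_ae - (1.0 - p_benefit) * cost_fail
    # Excluded: the drug certainly does not alleviate the disease.
    return -cost_fail

# Hypothetical inputs: exclude the at-risk genotype iff exclusion has
# higher expected utility than inclusion.
p_ae, p_benefit = 0.30, 0.60     # risk and benefit in the at-risk subgroup
cost_ae, cost_fail = 10.0, 3.0   # relative costs
u_in = expected_utility(True, p_ae, p_benefit, cost_ae, cost_fail)
u_out = expected_utility(False, p_ae, p_benefit, cost_ae, cost_fail)
print("include" if u_in >= u_out else "exclude")
```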
Abstract:
Nested clade phylogeographic analysis (NCPA) is a popular method for reconstructing the demographic history of spatially distributed populations from genetic data. Although some parts of the analysis are automated, there is no unique and widely followed algorithm for doing this in its entirety, beginning with the data, and ending with the inferences drawn from the data. This article describes a method that automates NCPA, thereby providing a framework for replicating analyses in an objective way. To do so, a number of decisions need to be made so that the automated implementation is representative of previous analyses. We review how the NCPA procedure has evolved since its inception and conclude that there is scope for some variability in the manual application of NCPA. We apply the automated software to three published datasets previously analyzed manually and replicate many details of the manual analyses, suggesting that the current algorithm is representative of how a typical user will perform NCPA. We simulate a large number of replicate datasets for geographically distributed, but entirely random-mating, populations. These are then analyzed using the automated NCPA algorithm. Results indicate that NCPA tends to give a high frequency of false positives. In our simulations we observe that 14% of the clades give a conclusive inference that a demographic event has occurred, and that 75% of the datasets have at least one clade that gives such an inference. This is mainly due to the generation of multiple statistics per clade, of which only one is required to be significant to apply the inference key. We survey the inferences that have been made in recent publications and show that the most commonly inferred processes (restricted gene flow with isolation by distance and contiguous range expansion) are those that are commonly inferred in our simulations. However, published datasets typically yield a richer set of inferences with NCPA than obtained in our random-mating simulations, and further testing of NCPA with models of structured populations is necessary to examine its accuracy.
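The multiplicity effect described here is easy to quantify under a simplifying independence assumption: if each clade yields k statistics tested at level alpha, the chance that at least one is significant is 1 - (1 - alpha)^k. The real statistics are correlated, so the snippet below gives only a rough intuition, not the paper's simulation result.

```python
# Probability that at least one of k per-clade statistics is significant
# at level alpha, assuming (unrealistically) independent tests.
alpha = 0.05
for k in (1, 3, 5, 10):
    print(f"k={k:>2}: P(>=1 significant) = {1 - (1 - alpha) ** k:.3f}")
```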
Abstract:
Bayesian decision procedures have already been proposed for and implemented in Phase I dose-escalation studies in healthy volunteers. The procedures have been based on pharmacokinetic responses reflecting the concentration of the drug in blood plasma and are conducted to learn about the dose-response relationship while avoiding excessive concentrations. However, in many dose-escalation studies, pharmacodynamic endpoints such as heart rate or blood pressure are observed, and it is these that should be used to control dose-escalation. These endpoints introduce additional complexity into the modeling of the problem relative to pharmacokinetic responses. Firstly, there are responses available following placebo administrations. Secondly, the pharmacodynamic responses are related directly to measurable plasma concentrations, which in turn are related to dose. Motivated by experience of data from a real study conducted in a conventional manner, this paper presents and evaluates a Bayesian procedure devised for the simultaneous monitoring of pharmacodynamic and pharmacokinetic responses. Account is also taken of the incidence of adverse events. Following logarithmic transformations, a linear model is used to relate dose to the pharmacokinetic endpoint and a quadratic model to relate the latter to the pharmacodynamic endpoint. A logistic model is used to relate the pharmacokinetic endpoint to the risk of an adverse event.
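A minimal sketch of the stated model structure, with all coefficients invented for illustration and the Bayesian prior-to-posterior machinery omitted: a linear model on the log scale from dose to the pharmacokinetic (PK) endpoint, a quadratic model from log PK to the log pharmacodynamic (PD) endpoint, and a logistic model from log PK to the risk of an adverse event.

```python
import numpy as np

rng = np.random.default_rng(1)

def pk_response(dose, a=0.2, b=0.9, sd=0.15):
    """Linear model on the log scale: log(PK) = a + b*log(dose) + noise."""
    return np.exp(a + b * np.log(dose) + rng.normal(0, sd))

def pd_response(pk, c0=1.0, c1=0.8, c2=-0.1, sd=0.1):
    """Quadratic model relating log(PK) to the log PD endpoint."""
    x = np.log(pk)
    return np.exp(c0 + c1 * x + c2 * x**2 + rng.normal(0, sd))

def p_adverse_event(pk, d0=-4.0, d1=0.8):
    """Logistic model for the risk of an adverse event given the PK level."""
    eta = d0 + d1 * np.log(pk)
    return 1.0 / (1.0 + np.exp(-eta))

# One simulated subject at a 10 mg dose (units illustrative).
pk = pk_response(10.0)
print(f"PK={pk:.2f}  PD={pd_response(pk):.2f}  P(AE)={p_adverse_event(pk):.3f}")
```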
Abstract:
Recent rapid developments in biological analysis, medical diagnosis, the pharmaceutical industry, and environmental control fuel the urgent need for recognition of particular DNA sequences in samples. Current DNA detection techniques use radiochemical, enzymatic, fluorescent, or electrochemiluminescent methods; however, these techniques require costly labeled DNA and cumbersome procedures demanding highly skilled personnel, which prohibits in-situ monitoring. Here, we report that hybridization of a surface-immobilized single-stranded oligonucleotide on praseodymium oxide (evaluated as a biosensor surface for the first time) with complementary strands in solution provokes a significant shift of the electrical impedance curve. This shift is attributed to a change in electrical characteristics through modification of the surface charge of the underlying modified praseodymium oxide upon hybridization with the complementary oligonucleotide strand. In contrast, a noncomplementary single strand in solution does not produce an equivalent change in the impedance value. These results reveal a new and simple electrochemical technique that recognizes and transduces a biological event, without the use of labeled species, through the change in electrical properties of the modified praseodymium oxide semiconductor surface.
Abstract:
Competitive Dialogue (CD) is a new contract award procedure of the European Community (EC), set out in Article 29 of the 'Public Sector Directive' 2004/18/EC. Over recent decades, projects have become increasingly complex, and the existing EC procedures were no longer suitable for procuring them. The call for a new procedure resulted in CD. This paper describes how the Directive has been implemented into the laws of two member states: the UK and the Netherlands. In order to implement the Directive, both lawmakers have set up a new and distinct piece of legislation. In each case, large parts of the Directive's content have been repeated 'word for word'; only minor parts have been reworded and/or restructured. In the next part of the paper, the CD procedure is examined in different respects. First, an overview is given of the different EC contract award procedures (open, restricted, negotiated, CD) and awarding methods (lowest price and Most Economically Advantageous Tender, MEAT). Second, the applicability of CD is described: among other limitations, CD can only be applied to public contracts for works, supplies, and services, and this scope of application is further restricted by the exclusion of certain contract types. One such exclusion concerns services concessions, which means that PPP contracts set up as services concessions cannot be awarded by CD. The last two parts of the paper pertain to the main features of the CD procedure – from 'contract notice' to 'contract award' – and the advantages and disadvantages of the procedure. One advantage is that the dialogue allows the complexity of the project to be disentangled and clarified; others are the stimulation of innovation and creativity. These advantages are set against the procedure's disadvantages, which include high transaction costs and a perceived hindrance of innovation (due to an ambiguity between transparency and fair competition). It is concluded that all advantages and disadvantages relate to one of three elements: communication, competition, and/or the structure of the procedure. Further research is needed to find out how these elements are related.
Abstract:
Event-related functional magnetic resonance imaging (efMRI) has emerged as a powerful technique for detecting the brain's responses to presented stimuli. A primary goal in efMRI data analysis is to estimate the hemodynamic response function (HRF) and to locate activated regions in the human brain when specific tasks are performed. This paper develops new methodologies that are important improvements to both parametric and nonparametric estimation and hypothesis testing of the HRF. First, an effective and computationally fast scheme for estimating the error covariance matrix for efMRI is proposed. Second, methodologies for estimation and hypothesis testing of the HRF are developed. Simulations support the effectiveness of our proposed methods. When applied to an efMRI dataset from an emotional control study, our method reveals more meaningful findings than the popular methods offered by AFNI and FSL.
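The paper's own estimators are not given in the abstract; as a point of reference, the sketch below shows the standard nonparametric (FIR) general-linear-model approach to HRF estimation that such work builds on, fitted here by ordinary least squares on synthetic data and ignoring the error covariance modelling that the paper's first contribution addresses.

```python
import numpy as np

rng = np.random.default_rng(2)

n_scans, hrf_len = 200, 12          # time points and FIR window (in TRs)
onsets = np.zeros(n_scans)
onsets[rng.choice(n_scans - hrf_len, 15, replace=False)] = 1.0

# FIR design matrix: column j is the stimulus train shifted by j TRs, so
# the fitted coefficients form a nonparametric estimate of the HRF.
X = np.column_stack([np.roll(onsets, j) for j in range(hrf_len)])
for j in range(hrf_len):            # zero the wrap-around from np.roll
    X[:j, j] = 0.0

true_hrf = np.exp(-0.5 * ((np.arange(hrf_len) - 5) / 1.5) ** 2)
y = X @ true_hrf + rng.normal(0, 0.3, n_scans)   # synthetic voxel signal

hrf_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimate of the HRF
print(np.round(hrf_hat, 2))
```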
Abstract:
Identifying 2 target stimuli in a rapid stream of visual symbols is much easier if the 2nd target appears immediately after the 1st target (i.e., at Lag 1) than if distractor stimuli intervene. As this phenomenon comes with a strong tendency to confuse the order of the targets, it seems to be due to the integration of both targets into the same attentional episode or object file. The authors investigated the degree to which people can control the temporal extension of their (episodic) integration windows by manipulating the expectations participants had with regard to the time available for target processing. As predicted, expecting more time to process increased the number of order confusions at Lag 1. This was true for between-subjects and within-subjects (trial-to-trial) manipulations, suggesting that integration windows can be adapted actively and rather quickly.
Abstract:
We investigated whether it is possible to control the temporal window of attention used to rapidly integrate visual information. To study the underlying neural mechanisms, we recorded ERPs in an attentional blink task, known to elicit Lag-1 sparing. Lag-1 sparing fosters joint integration of the two targets, evidenced by increased order errors. Short versus long integration windows were induced by showing participants mostly fast or slow stimuli. Participants expecting slow speed used a longer integration window, increasing joint integration. Difference waves showed an early (200 ms post-T2) negative and a late positive modulation (390 ms) in the fast group, but not in the slow group. The modulations suggest the creation of a separate event for T2, which is not needed in the slow group, where targets were often jointly integrated. This suggests that attention can be guided by global expectations of presentation speed within tens of milliseconds.
Abstract:
Different types of mental activity are utilised as input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, so averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. Independent Component Analysis (ICA) can be used to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity, and other components of non-cerebral origin. Choosing specific components and using them to reconstruct "denoised" single-trial data could improve the signal quality, allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed from a selection of estimated components obtained by applying ICA to the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
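A minimal sketch of the pipeline described here, using scikit-learn's FastICA on a synthetic single trial; the component-selection rule (correlation with a known template) is a stand-in for the ERP-characteristic criteria a real analysis would use, and all signal parameters are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

n_channels, n_samples = 8, 500
# Synthetic single trial: an ERP-like bump mixed into background noise.
t = np.arange(n_samples)
erp = np.exp(-0.5 * ((t - 150) / 20.0) ** 2)
mixing = rng.normal(size=(n_channels, 1))
eeg = mixing @ erp[None, :] + 0.5 * rng.normal(size=(n_channels, n_samples))

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(eeg.T)          # shape: (samples, components)

# Keep only the component most correlated with the ERP template; zero the
# rest (the "background EEG" and non-cerebral components).
scores = [abs(np.corrcoef(sources[:, k], erp)[0, 1]) for k in range(n_channels)]
keep = int(np.argmax(scores))
denoised_sources = np.zeros_like(sources)
denoised_sources[:, keep] = sources[:, keep]

denoised = ica.inverse_transform(denoised_sources).T   # back to channel space
print(denoised.shape)
```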