30 results for Experimental Methods.
Abstract:
In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel with parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy, or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: combined A-optimality with regularized orthogonal least squares, combined D-optimality with regularized orthogonal least squares, and combined PRESS statistic with regularized orthogonal least squares. A common characteristic of these algorithms is that the inherent computational efficiency of the orthogonalization scheme in (regularized) orthogonal least squares is retained, so that the new algorithms remain computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
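As a toy illustration of the PRESS criterion mentioned above, the sketch below computes leave-one-out prediction errors for a one-parameter linear-in-the-parameters model with ridge-style regularization. It is a brute-force version for clarity; the paper's contribution is precisely an efficient orthogonalization-based computation, which this sketch does not reproduce. The data and model form are hypothetical.

```python
# Hedged sketch of the PRESS statistic for a one-parameter
# linear-in-the-parameters model y ~ theta * x with ridge-style
# regularization. Brute-force leave-one-out for clarity only.

def fit_theta(xs, ys, lam):
    """Regularized least squares: theta = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def press(xs, ys, lam):
    """PRESS: refit with sample i left out, accumulate squared prediction errors."""
    total = 0.0
    for i in range(len(xs)):
        theta_i = fit_theta(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
        total += (ys[i] - theta_i * xs[i]) ** 2
    return total

# Pick the regularization level that generalizes best (hypothetical data).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.9]
best_lam = min([0.0, 0.01, 0.1, 1.0], key=lambda lam: press(xs, ys, lam))
print(best_lam)
```

The same quantity can be computed without explicit refitting via the hat-matrix shortcut; the brute-force loop is used here only because it makes the definition transparent.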
Experimental comparison of the comprehensibility of a Z specification and its implementation in Java
Abstract:
Comprehensibility is often raised as a problem with formal notations, yet formal methods practitioners dispute this. In a survey, one interviewee said 'formal specifications are no more difficult to understand than code'. Measurement of comprehension is necessarily comparative and a useful comparison for a specification is against its implementation. Practitioners have an intuitive feel for the comprehension of code. A quantified comparison will transfer this feeling to formal specifications. We performed an experiment to compare the comprehension of a Z specification with that of its implementation in Java. The results indicate there is little difference in comprehensibility between the two. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Objectives. Theoretical modelling and experimental studies suggest that functional electrical stimulation (FES) can improve trunk balance in spinal cord injured subjects. This can have a positive impact on daily life, increasing the volume of bimanual workspace and improving sitting posture and wheelchair propulsion. A closed-loop controller for the stimulation is desirable, as it can potentially decrease muscle fatigue and offer better rejection of disturbances. This paper proposes a biomechanical model of the human trunk, and a procedure for its identification, to be used for the future development of FES controllers. The advantage over previous models resides in the simplicity of the proposed solution, which makes it possible to identify the model just before a stimulation session (taking into account the variability of the muscle response to FES). Materials and Methods. The structure of the model is based on previous research on FES and muscle physiology. Some details could not be inferred from previous studies and were determined from experimental data. Experiments with a paraplegic volunteer were conducted in order to measure the moments exerted by the trunk's passive tissues and artificially stimulated muscles. Data for model identification and validation were also collected. Results. Using the proposed structure and identification procedure, the model could adequately reproduce the moments exerted during the experiments. The study reveals that the stimulated trunk extensors can exert maximal moment when the trunk is in the upright position. In contrast, previous studies show that able-bodied subjects can exert maximal trunk extension when flexed forward. Conclusions. The proposed model and identification procedure are a successful first step toward the development of a model-based controller for trunk FES. The model also gives information on the trunk in unique conditions, normally not observable in able-bodied subjects (i.e., subject only to extensor muscle contraction).
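The identification step can be pictured with a deliberately simplified sketch: fitting a linear passive-moment model to angle/moment measurements by ordinary least squares. The model form, variable names and data below are hypothetical; the paper's actual model structure is richer than this.

```python
# Illustrative sketch (not the paper's model): identify a simple passive
# trunk moment model M(theta) = k * theta + c from measured angle/moment
# pairs by closed-form simple linear regression.

def fit_linear(angles, moments):
    """Return slope k and offset c of the least-squares line."""
    n = len(angles)
    mean_a = sum(angles) / n
    mean_m = sum(moments) / n
    k = sum((a - mean_a) * (m - mean_m) for a, m in zip(angles, moments)) / \
        sum((a - mean_a) ** 2 for a in angles)
    c = mean_m - k * mean_a
    return k, c

# Hypothetical measurements: trunk angle (rad) vs. passive moment (N*m).
angles  = [0.0, 0.1, 0.2, 0.3, 0.4]
moments = [0.2, 2.1, 4.0, 6.1, 7.9]   # roughly M ~ 20 * theta
k, c = fit_linear(angles, moments)
print(round(k, 1))   # -> 19.4
```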
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels, related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data; this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
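A minimal sketch of the greedy selection idea behind D-optimality forward regression: each step orthogonalizes the remaining candidate columns against those already chosen and picks the one with the largest residual energy, which maximizes the growth of the determinant of the selected design matrix. This is an illustrative pure-Python version, not the paper's algorithm; it omits the kernel construction and the weight-estimation stage, and the candidate columns below are hypothetical.

```python
# Greedy D-optimality column selection via Gram-Schmidt orthogonalization:
# at each step, keep the candidate whose component orthogonal to the
# already-selected columns has the largest squared norm.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def select_columns(columns, k):
    """Greedily select k column indices maximizing the determinant criterion."""
    chosen, basis = [], []                      # selected indices; orthogonal basis
    remaining = {i: list(c) for i, c in enumerate(columns)}
    for _ in range(k):
        best_i, best_vec, best_energy = None, None, -1.0
        for i, col in remaining.items():
            r = list(col)
            for q in basis:                     # orthogonalize against basis so far
                coeff = dot(r, q) / dot(q, q)
                r = [a - coeff * b for a, b in zip(r, q)]
            energy = dot(r, r)                  # residual (orthogonal) energy
            if energy > best_energy:
                best_i, best_vec, best_energy = i, r, energy
        chosen.append(best_i)
        basis.append(best_vec)
        del remaining[best_i]
    return chosen

cols = [[1.0, 0.0, 0.0],
        [1.0, 0.1, 0.0],
        [0.0, 3.0, 0.0],
        [0.0, 0.0, 2.0]]
print(select_columns(cols, 2))   # picks the high-energy, mutually independent columns
```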
Abstract:
Can human social cognitive processes and social motives be grasped by the methods of experimental economics? Experimental studies of strategic cognition and social preferences contribute to our understanding of the social aspects of economic decision making. Yet papers in this issue argue that the social aspects of decision making introduce several difficulties for interpreting the results of economic experiments. In particular, the laboratory is itself a social context, and in many respects a rather distinctive one, which raises questions of external validity.
Abstract:
Modelling the interaction of terahertz (THz) radiation with biological tissue poses many interesting problems. THz radiation is not obviously described by either an electric field distribution or an ensemble of photons, and biological tissue is an inhomogeneous medium with an electronic permittivity that is both spatially and frequency dependent, making it a complex system to model. A three-layer system of parallel-sided slabs has been used as the system through which the passage of THz radiation has been simulated. Two modelling approaches have been developed: a thin film matrix model and a Monte Carlo model. The source data for each of these methods, taken at the same time as the data recorded to experimentally verify them, was a THz spectrum that had passed through air only. Experimental verification of these two models was carried out using a three-layered in vitro phantom. Simulated transmission spectrum data were compared to experimental transmission spectrum data, first to determine and then to compare the accuracy of the two methods. Good agreement was found, with typical results having a correlation coefficient of 0.90 for the thin film matrix model and 0.78 for the Monte Carlo model over the full THz spectrum. Further work is underway to improve the models above 1 THz.
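The thin film matrix model referred to above is, in essence, the standard characteristic-matrix method for layered media. The sketch below computes normal-incidence power transmittance through a stack of lossless parallel-sided slabs; the layer indices and thicknesses are hypothetical, and real tissue would need complex, frequency-dependent permittivities to capture absorption.

```python
import cmath
import math

# Characteristic (transfer) matrix method for a stack of parallel-sided
# slabs at normal incidence. Lossless real indices only; a hedged sketch,
# not the paper's model or its parameter values.

def layer_matrix(n, d, wavelength):
    """2x2 characteristic matrix of one slab of index n and thickness d."""
    delta = 2 * math.pi * n * d / wavelength     # phase thickness
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
            [1j * n * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def transmittance(layers, wavelength, n_in=1.0, n_out=1.0):
    """Power transmittance of a stack of (index, thickness) layers."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, wavelength))
    t = 2 * n_in / (n_in * (m[0][0] + m[0][1] * n_out) + m[1][0] + m[1][1] * n_out)
    return abs(t) ** 2 * n_out / n_in

# 1 THz corresponds to a free-space wavelength of ~300 micrometres;
# three hypothetical tissue-like layers.
wl = 300e-6
stack = [(2.1, 100e-6), (1.5, 200e-6), (2.1, 100e-6)]
print(round(transmittance(stack, wl), 3))
```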
Abstract:
Summary 1. In recent decades there have been population declines of many UK bird species, which have become the focus of intense research and debate. Recently, as the populations of potential predators have increased, there is concern that increased rates of predation may be contributing to the declines. In this review, we assess the methodologies behind the current published science on the impacts of predators on avian prey in the UK. 2. We identified suitable studies, classified these according to study design (experimental/observational) and assessed the quantity and quality of the data upon which any variation in predation rates was inferred. We then explored whether the underlying study methodology had implications for study outcome. 3. We reviewed 32 published studies and found that typically observational studies comprehensively monitored significantly fewer predator species than experimental studies. Data for a difference in predator abundance from targeted (i.e. bespoke) census techniques were available for less than half of the 32 predator species studied. 4. The probability of a study detecting an impact on prey abundance was strongly, positively related to the quality and quantity of data upon which the gradient in predation rates was inferred. 5. The findings suggest that if a study is based on good quality abundance data for a range of predator species then it is more likely to detect an effect than if it relies on opportunistic data for a smaller number of predators. 6. We recommend that the findings from studies which use opportunistic data, for a limited number of predator species, should be treated with caution and that future studies employ bespoke census techniques to monitor predator abundance for an appropriate suite of predators.
Abstract:
The physical and empirical relationships used by microphysics schemes to control the rate at which vapor is transferred to ice crystals growing in supercooled clouds are compared with laboratory data to evaluate the realism of various model formulations. Ice crystal growth rates predicted from capacitance theory are compared with measurements from three independent laboratory studies. When the growth is diffusion-limited, the predicted growth rates are consistent with the measured values to within about 20% in 14 of the experiments analyzed, over the temperature range −2.5° to −22°C. Only two experiments showed significant disagreement with theory (growth rate overestimated by about 30%–40% at −3.7° and −10.6°C). Growth predictions using various ventilation factor parameterizations were also calculated and compared with supercooled wind tunnel data. It was found that neither of the standard parameterizations used for ventilation adequately described both needle and dendrite growth; however, by choosing habit-specific ventilation factors from previous numerical work it was possible to match the experimental data in both regimes. The relationships between crystal mass, capacitance, and fall velocity were investigated based on the laboratory data. It was found that for a given crystal size the capacitance was significantly overestimated by two of the microphysics schemes considered here, yet for a given crystal mass the growth rate was underestimated by those same schemes because of unrealistic mass/size assumptions. The fall speed for a given capacitance (controlling the residence time of a crystal in the supercooled layer relative to its effectiveness as a vapor sink, and the relative importance of ventilation effects) was found to be overpredicted by all the schemes in which fallout is permitted, implying that the modeled crystals reside for too short a time within the cloud layer and that the parameterized ventilation effect is too strong.
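For reference, the capacitance theory discussed above predicts a diffusion-limited mass growth rate of the form dm/dt = 4πCD(ρ∞ − ρsurf), optionally scaled by a ventilation factor. The sketch below evaluates this expression for assumed, order-of-magnitude inputs; it ignores latent-heat feedback and is not any of the schemes' full parameterizations.

```python
import math

# Capacitance model for diffusion-limited ice growth:
#   dm/dt = 4 * pi * C * D * fv * (rho_ambient - rho_surface)
# C is the crystal's electrostatic capacitance (units of length), D the
# vapour diffusivity, fv a ventilation factor (1 for a stationary crystal).
# All numerical values below are order-of-magnitude assumptions.

def growth_rate(capacitance_m, diffusivity, rho_ambient, rho_surface, ventilation=1.0):
    """Mass growth rate in kg/s under the capacitance model."""
    return (4 * math.pi * capacitance_m * diffusivity
            * ventilation * (rho_ambient - rho_surface))

# A sphere of radius r has capacitance C = r; try a 50-micrometre crystal.
r = 50e-6        # capacitance of a spherical crystal, m
D = 2.2e-5       # vapour diffusivity in air, m^2/s (approximate)
excess = 1.0e-4  # assumed vapour density excess over the surface, kg/m^3
print(growth_rate(r, D, excess, 0.0))
```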
Abstract:
Objectives: To determine the efficacy of enrofloxacin (Baytril) in chickens in eradicating three different resistance phenotypes of Salmonella enterica, and to examine the resistance mechanisms of the resulting mutants. Methods: In two separate replicate experiments (I and II), three strains of Salmonella enterica serovar Typhimurium DT104 [strain A, fully antibiotic-sensitive strain; strain B, isogenic multiple antibiotic-resistant (MAR) derivative of A; strain C, veterinary penta-resistant phenotype strain containing GyrA Phe-83] were inoculated into day-old chicks at ~10³ CFU/bird. At day 10, groups of chicks (n = 10) were given either enrofloxacin at 50 ppm in their drinking water for 5 days or water alone (control). Caecal contents were monitored for the presence of Salmonella, and colonies were replica plated to media containing antibiotics or overlaid with cyclohexane to determine the proportion of isolates with reduced susceptibility. The MICs of antibiotics and cyclohexane tolerance were determined for selected isolates from the chicks. Mutations in topoisomerase genes were examined by DHPLC, and expression of marA, soxS, acrB, acrD and acrF by RT-PCR. Results: In experiment I, but not II, enrofloxacin significantly reduced the numbers of strain A compared with the untreated control group. In experiment II, but not I, enrofloxacin significantly reduced the numbers of strain B. Shedding of strain C was unaffected by enrofloxacin treatment. Birds infected with strains A and B gave rise to isolates with decreased fluoroquinolone susceptibility. Isolates derived from strain A or B requiring > 128 mg/L nalidixic acid for inhibition contained GyrA Asn-82 or Phe-83. Isolates inhibited by 16 mg/L nalidixic acid were also less susceptible to antibiotics of other chemical classes and became cyclohexane-tolerant (i.e. MAR).
Conclusions: These studies demonstrate that recommended enrofloxacin treatment of chicks rapidly selects for strains with reduced fluoroquinolone susceptibility from fully sensitive and MAR strains. It can also select for MAR isolates.
Abstract:
Crystallization must occur in honey in order to produce set or creamed honey; however, the process must occur in a controlled manner to obtain an acceptable product. As a consequence, reliable methods are needed to measure the crystal content of honey (φ, expressed as kg crystal per kg honey) that can also be implemented with relative ease in industrial production facilities. Unfortunately, suitable methods do not currently exist. This article reports on the development of two independent offline methods to measure the crystal content in honey, based on differential scanning calorimetry and high-performance liquid chromatography. The two methods gave highly consistent results on the basis of a paired t-test involving 143 experimental points (P > 0.05, r² = 0.99). The crystal content also correlated with the relative viscosity, defined as the ratio of the viscosity of crystal-containing honey to that of the same honey when all crystals are dissolved, giving the correlation μr = 1 + 1398.8φ^2.318. This correlation can be used to estimate the crystal content of honey in industrial production facilities. The crystal growth rate at 14 °C (the normal crystallization temperature used in practice) was linear, and the growth rate also increased with the total glucose content of the honey.
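The reported correlation μr = 1 + 1398.8φ^2.318 can be inverted to estimate crystal content from a viscosity measurement, which is presumably how it would be applied on a production line. A minimal sketch, valid only within the fitted range (μr ≥ 1):

```python
# Relative-viscosity correlation for honey crystal content, from the
# abstract above: mu_r = 1 + 1398.8 * phi**2.318, and its inversion.

def relative_viscosity(phi):
    """Relative viscosity of honey with crystal mass fraction phi (kg/kg)."""
    return 1 + 1398.8 * phi ** 2.318

def crystal_content(mu_r):
    """Invert the correlation: phi = ((mu_r - 1) / 1398.8) ** (1 / 2.318).

    Only meaningful for mu_r >= 1 (no crystals gives mu_r = 1 exactly).
    """
    return ((mu_r - 1) / 1398.8) ** (1 / 2.318)

# Round trip: a 15% crystal fraction is recovered from its viscosity ratio.
print(round(crystal_content(relative_viscosity(0.15)), 3))   # -> 0.15
```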
Abstract:
Background: In mammals, early-life environmental variation appears to affect microbial colonization and therefore competent immune development, and exposure to farm environments in infants has been inversely correlated with allergy development. Modelling these effects by manipulating neonatal rodents is difficult because of their dependency on the mother, but the relatively independent piglet is increasingly recognized as a valuable translational model for humans. This study was designed to correlate immune regulation in piglets with early-life environment. Methods: Piglets were nursed by their mother on a commercial farm, while isolator-reared siblings were formula fed. Fluorescence immunohistology was used to quantify T-reg and effector T-cell populations in the intestinal lamina propria, and the systemic response to food proteins was quantified by capture ELISA. Results: There was more CD4+ and CD4+CD25+ effector T-cell staining in the intestinal mucosa of the isolator-reared piglets compared with their farm-reared counterparts. In contrast, these isolator-reared piglets had a significantly reduced CD4+CD25+Foxp3+ regulatory T-cell population compared to farm-reared littermates, resulting in a significantly higher T-reg-to-effector ratio in the farm animals. Consistent with these findings, isolator-reared piglets had an increased serum IgG anti-soya response to novel dietary soya protein relative to farm-reared piglets. Conclusion: Here, we provide the first direct evidence, derived from intervention, that components of the early-life environment present on farms profoundly affect both local development of regulatory components of the mucosal immune system and immune responses to food proteins at weaning. We propose that neonatal piglets provide a tractable model which allows maternal and treatment effects to be statistically separated.
Abstract:
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
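The selection problem described above can be illustrated with a small simulation (not from the paper): when the apparently best of several identical experimental arms is carried forward, its stage-1 estimate is biased upward, which is why naive confidence intervals need adjustment after a phase II/III trial.

```python
import random
import statistics

# Illustrative simulation of selection bias in an adaptive seamless design:
# selecting the apparently best of several arms at stage 1 inflates its
# estimated effect, even when every arm has the same true effect of zero.
# Arm counts, sample sizes and the Gaussian outcome model are assumptions.

def selected_arm_estimate(rng, n_arms=4, n_per_arm=30, true_effect=0.0):
    """Stage-1 sample mean of whichever arm happens to look best."""
    arm_means = [statistics.fmean([rng.gauss(true_effect, 1.0)
                                   for _ in range(n_per_arm)])
                 for _ in range(n_arms)]
    return max(arm_means)

rng = random.Random(0)
estimates = [selected_arm_estimate(rng) for _ in range(2000)]
print(round(statistics.fmean(estimates), 2))   # clearly above the true effect of 0
```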
Abstract:
The hippocampus plays a pivotal role in the formation and consolidation of episodic memories, and in spatial orientation. Historically, the adult hippocampus has been viewed as a very static anatomical region of the mammalian brain. However, recent findings have demonstrated that the dentate gyrus of the hippocampus is an area of tremendous plasticity in adults, involving not only modifications of existing neuronal circuits, but also adult neurogenesis. This plasticity is regulated by complex transcriptional networks, in which the transcription factor NF-κB plays a prominent role. To study and manipulate adult neurogenesis, a transgenic mouse model for forebrain-specific neuronal inhibition of NF-κB activity can be used. In this study, methods are described for the analysis of NF-κB-dependent neurogenesis, including its structural aspects, neuronal apoptosis and progenitor proliferation, and cognitive significance, which was specifically assessed via a dentate gyrus (DG)-dependent behavioral test, the spatial pattern separation-Barnes maze (SPS-BM). The SPS-BM protocol could be simply adapted for use with other transgenic animal models designed to assess the influence of particular genes on adult hippocampal neurogenesis. Furthermore, SPS-BM could be used in other experimental settings aimed at investigating and manipulating DG-dependent learning, for example, using pharmacological agents.
Abstract:
Intracellular reactive oxygen species (ROS) production is essential to normal cell function. However, excessive ROS production causes oxidative damage and cell death. Many pharmacological compounds exert their effects on cell cycle progression by changing intracellular redox state and in many cases cause oxidative damage leading to drug cytotoxicity. Appropriate measurement of intracellular ROS levels during cell cycle progression is therefore crucial in understanding redox-regulation of cell function and drug toxicity and for the development of new drugs. However, due to the extremely short half-life of ROS, measuring the changes in intracellular ROS levels during a particular phase of cell cycle for drug intervention can be challenging. In this article, we have provided updated information on the rationale, the applications, the advantages and limitations of common methods for screening drug effects on intracellular ROS production linked to cell cycle study. Our aim is to facilitate biomedical scientists and researchers in the pharmaceutical industry in choosing or developing specific experimental regimens to suit their research needs.
Abstract:
Experimental philosophy brings empirical methods to philosophy. These methods are used to probe how people think about philosophically interesting things such as knowledge, morality, freedom, etc. This paper explores the contribution that qualitative methods have to make in this enterprise. I argue that qualitative methods have the potential to make a much greater contribution than they have so far. Along the way, I acknowledge a few types of resistance that proponents of qualitative methods in experimental philosophy might encounter, and provide reasons to think they are ill-founded.