948 results for Mathematical ability Testing


Relevance: 30.00%

Publisher:

Abstract:

In this action research study of my sixth-grade mathematics class, I investigated the influence that a change in my questioning tactics would have on students' ability to judge the reasonableness of answers to mathematics problems. During the course of my research, students were asked to explain their problem solving and solutions. Students discussed among themselves the solutions given by their peers and the reasonableness of those solutions. They also completed daily questionnaires about my questioning practices, and 10 students were randomly chosen to be interviewed about their problem-solving strategies. I discovered that by placing more emphasis on the process rather than the product, students became accustomed to questioning problem-solving strategies and explaining their reasoning. I plan to maintain this practice in the future while incorporating more visual and textual explanations to support verbal explanations.

Relevance: 30.00%

Publisher:

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMRs) collected across many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual-level epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the observed disease counts and the expected counts computed from reference-population disease rates, in each area an SMR is derived as the maximum likelihood estimate under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We control the false discovery rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i corresponding to areas declared high-risk (where the null hypothesis is rejected) by averaging the b_i themselves. The estimated FDR can be used to provide a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in the sets formed by all b_i below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the degree of over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the small-area, low-risk-level and spatially correlated risk scenarios, which are our primary targets. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of estimated-FDR-based decision rules is generally low but the specificity is high, so selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be recommended because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is attractive for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
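To make the selection rule above concrete, here is a minimal sketch (not the authors' code) assuming the MCMC-estimated posterior null probabilities b_i are already available for each area: areas are sorted by b_i and as many as possible are declared high-risk while the running mean of the selected b_i, i.e. the estimated FDR, stays at or below a target value. Function and variable names are hypothetical.

```python
import numpy as np

def select_high_risk_areas(posterior_null, target_fdr=0.05):
    """Estimated-FDR-based selection rule (illustrative sketch).

    posterior_null: posterior probabilities b_i that each area carries no
    excess risk (the null hypothesis), e.g. averaged over MCMC draws.
    Selects as many areas as possible while the mean of the selected b_i
    (the estimated FDR) stays at or below target_fdr.
    Returns (indices of areas declared high-risk, estimated FDR).
    """
    b = np.asarray(posterior_null, dtype=float)
    order = np.argsort(b)                                 # most convincing areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    admissible = np.nonzero(running_fdr <= target_fdr)[0]
    if admissible.size == 0:
        return np.array([], dtype=int), float("nan")      # no area can be declared high-risk
    k = admissible[-1] + 1                                 # largest admissible prefix
    return order[:k], float(running_fdr[k - 1])

# Hypothetical posterior null probabilities for eight areas
b = [0.01, 0.02, 0.04, 0.08, 0.30, 0.55, 0.80, 0.95]
areas, fdr_hat = select_high_risk_areas(b, target_fdr=0.05)
print(areas, fdr_hat)   # first four areas selected, estimated FDR = 0.0375
```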

Relevance: 30.00%

Publisher:

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processing in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt Rules” (the similarity and previous-knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of the neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES) but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons under very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in the synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
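As a minimal, self-contained sketch of the kind of oscillatory unit used in Part 1 (not the thesis code, and with illustrative parameter values), the snippet below integrates a single Wilson-Cowan excitatory/inhibitory pair with a forward Euler scheme; reproducing the binding and segmentation behaviour described above would additionally require coupling many such units through the similarity and previous-knowledge rules.

```python
import numpy as np

def sigmoid(x, gain=4.0, thresh=1.0):
    """Sigmoidal activation commonly used in Wilson-Cowan units."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def wilson_cowan_pair(steps=5000, dt=0.001, drive_e=1.2, drive_i=0.0,
                      w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                      tau_e=0.01, tau_i=0.02):
    """Forward-Euler integration of one excitatory/inhibitory (E/I) pair.
    Parameter values are illustrative, not taken from the thesis."""
    E = np.zeros(steps)
    I = np.zeros(steps)
    for t in range(1, steps):
        dE = (-E[t-1] + sigmoid(w_ee * E[t-1] - w_ei * I[t-1] + drive_e)) / tau_e
        dI = (-I[t-1] + sigmoid(w_ie * E[t-1] - w_ii * I[t-1] + drive_i)) / tau_i
        E[t] = E[t-1] + dt * dE
        I[t] = I[t-1] + dt * dI
    return E, I

E, I = wilson_cowan_pair()
print("E range:", E.min(), E.max())   # inspect the trace; with suitable drive the pair oscillates
```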

Relevance: 30.00%

Publisher:

Abstract:

Sub-grid scale (SGS) models are required in large-eddy simulation (LES) in order to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the Explicit Algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (Chevalier et al., 2007) are presented. The performance of the two aforementioned models is investigated in the following chapters by means of LES of a channel flow, with friction Reynolds numbers from Re_τ = 590 up to Re_τ = 5200, at relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. Results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying that it has clear potential for industrial CFD usage.
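For readers unfamiliar with eddy-viscosity closures, here is a minimal sketch of the basic static Smagorinsky model, ν_t = (C_s Δ)² |S|, which both the DEVM and the EASSM go beyond (the DEVM computes the coefficient dynamically, while the EASSM models the SGS stress anisotropy explicitly); it illustrates the general idea only, with placeholder constants.

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Static Smagorinsky eddy viscosity (illustrative baseline, not the
    dynamic or explicit-algebraic models discussed above).

    grad_u : array of shape (..., 3, 3), resolved velocity gradients du_i/dx_j
    delta  : filter width (e.g. the grid spacing)
    c_s    : Smagorinsky constant (placeholder value)
    """
    # Resolved strain-rate tensor S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)
    s_ij = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    # |S| = sqrt(2 S_ij S_ij)
    s_mag = np.sqrt(2.0 * np.sum(s_ij * s_ij, axis=(-1, -2)))
    return (c_s * delta) ** 2 * s_mag

# Hypothetical usage at a single grid point
grad_u = np.random.default_rng(0).normal(size=(3, 3))
print(smagorinsky_nu_t(grad_u, delta=0.01))
```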

Relevance: 30.00%

Publisher:

Abstract:

Rationale Mannitol dry powder (MDP) challenge is an indirect bronchial provocation test, which is well studied in adults but not established for children. Objective We compared the feasibility, validity, and clinical significance of the MDP challenge with exercise testing in children in a clinical setting. Methods Children aged 6–16 years, referred to two respiratory outpatient clinics for possible asthma diagnosis, underwent standardized exercise testing followed within a week by an MDP challenge (Aridol™, Pharmaxis, Australia). Agreement between the two challenge tests was assessed using Cohen's kappa and receiver operating characteristic (ROC) curves. Results One hundred eleven children performed both challenge tests. Twelve children were excluded due to exhaustion or insufficient cooperation (11 at the exercise test, 1 at the MDP challenge), leaving 99 children (mean ± SD age 11.5 ± 2.7 years) for analysis. MDP tests were well accepted, with minor side effects and a shorter duration than exercise tests. The MDP challenge was positive in 29 children (29%), the exercise test in 21 (21%). The two tests were concordant in 83 children (84%), with moderate agreement (κ = 0.58, 95% CI 0.39–0.76). Positive and negative predictive values of the MDP challenge for exercise-induced bronchoconstriction were 68% and 89%, respectively. The overall ability of the MDP challenge to separate children with and without positive exercise tests was good (area under the ROC curve 0.83). Conclusions The MDP challenge test is feasible in children and is a suitable alternative for bronchial challenge testing in childhood. Pediatr. Pulmonol. 2011; 46:842–848. © 2011 Wiley-Liss, Inc.
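As an illustration of the agreement statistic quoted above, the sketch below computes Cohen's kappa from a 2×2 concordance table. The cell counts are reconstructed from the totals reported in the abstract (99 children, 29 MDP-positive, 21 exercise-positive, 83 concordant), so they are an inference rather than data taken from the paper.

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table [[a, b], [c, d]],
    where rows are MDP challenge (+/-) and columns are exercise test (+/-)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement from the marginal totals of the two tests
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Cell counts inferred from the reported totals (hypothetical reconstruction):
# 17 positive on both tests, 12 MDP+ only, 4 exercise+ only, 66 negative on both.
print(round(cohens_kappa([[17, 12], [4, 66]]), 2))   # ~0.58, matching the reported kappa
```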

Relevance: 30.00%

Publisher:

Abstract:

In the past few decades, integrated circuits have become a major part of everyday life. Every circuit that is created needs to be tested for faults so that faulty circuits are not sent to end-users. The creation of these tests is time consuming, costly and difficult to perform on larger circuits. This research presents a novel method for fault detection and test pattern reduction in integrated circuitry under test. By leveraging the FPGA's reconfigurability and parallel processing capabilities, a speed-up in fault detection can be achieved over previous computer simulation techniques. This work presents the following contributions to the field of stuck-at fault detection. First, we present a new method for inserting faults into a circuit netlist: given any circuit netlist, our tool can insert multiplexers at the correct internal nodes to aid in fault emulation on reconfigurable hardware. Second, we present a parallel method of fault emulation: the benefit of the FPGA is not only its ability to implement any circuit, but also its ability to process data in parallel, and this research utilizes that to create a more efficient emulation method that implements numerous copies of the same circuit in the FPGA. Third, we present a new method for identifying the most efficient faults: most methods for determining the minimum number of inputs that cover the most faults require sophisticated software programs that use heuristics, whereas by utilizing hardware this research is able to process data faster and use a simpler method for efficiently minimizing inputs.
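As a hedged sketch of the fault-insertion idea described above (not the actual tool, which would operate on real netlist formats such as structural Verilog), the snippet below inserts a 2:1 multiplexer on an internal net of a toy netlist so that a stuck-at value can be forced when a fault-enable signal is asserted; the netlist representation and signal names are hypothetical.

```python
def insert_fault_mux(netlist, target_net, fault_value):
    """Insert a 2:1 multiplexer on target_net so that, when the fault-enable
    signal is asserted, the net is forced to a constant (stuck-at) value.

    netlist: dict mapping gate name -> {"type": ..., "inputs": [...], "output": ...}
    The netlist format, signal names and return value are hypothetical.
    """
    mux_out = f"{target_net}_faulty"
    enable = f"fault_en_{target_net}"
    const = f"const_{fault_value}"

    # Re-route every gate that consumed the original net to the mux output.
    for gate in netlist.values():
        gate["inputs"] = [mux_out if i == target_net else i for i in gate["inputs"]]

    # Add the mux itself: output follows target_net normally,
    # or the stuck-at constant when enable is high.
    netlist[f"mux_{target_net}"] = {
        "type": "MUX2",
        "inputs": [target_net, const, enable],   # sel=0 -> normal, sel=1 -> stuck value
        "output": mux_out,
    }
    return netlist

# Hypothetical two-gate circuit: force net "n1" to stuck-at-0
circuit = {
    "g1": {"type": "AND2", "inputs": ["a", "b"], "output": "n1"},
    "g2": {"type": "OR2",  "inputs": ["n1", "c"], "output": "y"},
}
print(insert_fault_mux(circuit, "n1", 0)["g2"]["inputs"])   # ['n1_faulty', 'c']
```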

Relevance: 30.00%

Publisher:

Abstract:

The seasonal appearance of a deep chlorophyll maximum (DCM) in Lake Superior is a striking phenomenon that is widely observed; however, its mechanisms of formation and maintenance are not well understood. As this phenomenon may be the reflection of an ecological driver, or a driver itself, a lack of understanding of its driving forces limits the ability to accurately predict and manage changes in this ecosystem. Key mechanisms generally associated with DCM dynamics (i.e. ecological, physiological and physical phenomena) are examined individually and in concert to establish their roles. First, the prevailing paradigm, “the DCM is a great place to live”, is analyzed through an integration of the results of laboratory experiments and field measurements. The analysis indicates that growth at this depth is severely restricted and thus not able to explain the full magnitude of this phenomenon. Additional contributing mechanisms, such as photoadaptation, settling and grazing, are examined with a one-dimensional mathematical model of chlorophyll and particulate organic carbon. Settling has the strongest impact on the formation and maintenance of the DCM, transporting biomass to the metalimnion and resulting in the accumulation of algae, i.e. a peak in the particulate organic carbon profile. Subsequently, shade adaptation becomes manifest as a chlorophyll maximum deeper in the water column, where light conditions particularly favor the process. Shade adaptation mediates the magnitude, shape and vertical position of the chlorophyll peak. Growth at DCM depth makes only a marginal contribution, while grazing has an adverse effect on the extent of the DCM. The observed separation of the carbon biomass and chlorophyll maxima should caution scientists against equating the DCM with a large nutrient pool available to higher trophic levels. The ecological significance of the DCM should not be separated from the underlying carbon dynamics. When evaluated in its entirety, the DCM becomes the projected image of a structure that remains elusive to measure but represents the foundation of all higher trophic levels. These results also offer guidance in examining ecosystem perturbations such as climate change. For example, warming would be expected to prolong the period of thermal stratification, extending the late-summer period of suboptimal (phosphorus-limited) growth and the attendant transport of phytoplankton to the metalimnion. This reduction in epilimnetic algal production would decrease the supply of algae to the metalimnion, possibly reducing the supply of prey to the grazer community. This work demonstrates the value of modeling to challenge and advance our understanding of ecosystem dynamics, steps vital to the reliable testing of management alternatives.
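As a rough, hedged illustration of the kind of one-dimensional settling-plus-growth balance described above (not the thesis model; all parameter values are placeholders), the sketch below sinks a particulate biomass profile with an upwind scheme while growth decays exponentially with depth, and reports where the biomass maximum ends up.

```python
import numpy as np

def settle_and_grow(c0, dz=1.0, dt=0.1, days=30.0, w_s=0.5, mu_surface=0.3, k_light=0.1):
    """Toy 1-D column model (illustrative only, not the thesis model):
    particulate biomass sinks at speed w_s (m/day, upwind scheme) while its
    growth rate decays exponentially with depth (light limitation).

    c0: initial concentration profile, index 0 = surface, spacing dz (m).
    """
    c = np.array(c0, dtype=float)
    z = np.arange(c.size) * dz
    growth = mu_surface * np.exp(-k_light * z)      # 1/day, fades with depth
    for _ in range(int(days / dt)):
        flux = w_s * c                              # downward settling flux
        dc = np.zeros_like(c)
        dc[1:] += (flux[:-1] - flux[1:]) / dz       # interior/bottom cells: gain from above, lose below
        dc[0] += -flux[0] / dz                      # surface cell only loses
        c += dt * (dc + growth * c)
    return c

profile = settle_and_grow(c0=[1.0] * 50)            # 50 m column at 1 m resolution
print("depth of biomass maximum:", int(np.argmax(profile)), "m")
```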

Relevance: 30.00%

Publisher:

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when introducing testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.

Relevance: 30.00%

Publisher:

Abstract:

One of the biggest challenges facing researchers trying to empirically test structural or institutional anomie theories is the operationalization of the key concept of anomie. This challenge is heightened by the data constraints involved in cross-national research. As a result, researchers have been forced to rely on surrogate or proxy measures of anomie and on indirect tests of the theories. The purpose of this study is to examine an innovative and more theoretically sound measure of anomie and to test its ability to make cross-national predictions of serious crime. Our results support the efficacy of this construct in explaining cross-national variations in crime rates. Nations with the highest rates of structural anomie also have the highest predicted rates of homicide.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND Partner notification is essential to the comprehensive case management of sexually transmitted infections. Systematic reviews and mathematical modelling can be used to synthesise information about the effects of new interventions to enhance the outcomes of partner notification. OBJECTIVE To study the effectiveness and cost-effectiveness of traditional and new partner notification technologies for curable sexually transmitted infections (STIs). DESIGN Secondary data analysis of clinical audit data; systematic reviews of randomised controlled trials (MEDLINE, EMBASE and Cochrane Central Register of Controlled Trials) published from 1 January 1966 to 31 August 2012 and of studies of health-related quality of life (HRQL) [MEDLINE, EMBASE, ISI Web of Knowledge, NHS Economic Evaluation Database (NHS EED), Database of Abstracts of Reviews of Effects (DARE) and Health Technology Assessment (HTA)] published from 1 January 1980 to 31 December 2011; static models of clinical effectiveness and cost-effectiveness; and dynamic modelling studies to improve parameter estimation and examine effectiveness. SETTING General population and genitourinary medicine clinic attenders. PARTICIPANTS Heterosexual women and men. INTERVENTIONS Traditional partner notification by patient or provider referral, and new partner notification by expedited partner therapy (EPT) or its UK equivalent, accelerated partner therapy (APT). MAIN OUTCOME MEASURES Population prevalence; index case reinfection; and partners treated per index case. RESULTS Expedited partner therapy reduced reinfection in index cases with curable STIs more than simple patient referral [risk ratio (RR) 0.71; 95% confidence interval (CI) 0.56 to 0.89]. There are no randomised trials of APT. The median number of partners treated for chlamydia per index case in UK clinics was 0.60. The number of partners needed to treat to interrupt transmission of chlamydia was lower for casual than for regular partners. In dynamic model simulations, > 10% of partners are chlamydia positive with look-back periods of up to 18 months. In the presence of a chlamydia screening programme that reduces population prevalence, treatment of current partners achieves most of the additional reduction in prevalence attributable to partner notification. Dynamic model simulations show that cotesting and treatment for chlamydia and gonorrhoea reduce the prevalence of both STIs. APT has a limited additional effect on prevalence but reduces the rate of index case reinfection. Published quality-adjusted life-year (QALY) weights were of insufficient quality to be used in a cost-effectiveness study of partner notification in this project. Using an intermediate outcome of cost per infection diagnosed, doubling the efficacy of partner notification from 0.4 to 0.8 partners treated per index case was more cost-effective than increasing chlamydia screening coverage. CONCLUSIONS There is evidence to support the improved clinical effectiveness of EPT in reducing index case reinfection. In a general heterosexual population, partner notification identifies new infected cases but its impact on chlamydia prevalence is limited. Partner notification directed at casual partners might have a greater impact than notification of regular partners in genitourinary clinic populations.
Recommendations for future research are: (1) randomised controlled trials, using biological outcomes, of the effectiveness of APT and of methods to increase testing for human immunodeficiency virus (HIV) and STIs after APT; (2) collection of HRQL data to determine the QALYs associated with the sequelae of curable STIs; and (3) development of standardised parameter sets for curable STIs for use in mathematical models of STI transmission that inform policy-making. FUNDING The National Institute for Health Research Health Technology Assessment programme.
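As a deliberately crude, hedged sketch of how partner notification can be represented in a dynamic transmission model (this is a well-mixed SIS toy, not the models used in the project; all parameter values are placeholders), the snippet below adds to the treatment term a contribution from partners treated per index case, each assumed infected with probability equal to the current prevalence.

```python
def sis_with_notification(beta=0.12, tau=0.08, partners_treated=0.6,
                          n=100_000, i0=3_000, dt=0.5, days=3650):
    """Crude well-mixed SIS sketch (not the project's models): infected people
    are treated at rate tau; each treated index case additionally triggers
    treatment of `partners_treated` partners, each assumed infected with
    probability I/N. All parameter values are placeholders."""
    i = float(i0)
    for _ in range(int(days / dt)):
        new_infections = beta * i * (n - i) / n
        cured = tau * i * (1.0 + partners_treated * i / n)   # index cases + notified partners
        i += dt * (new_infections - cured)
    return i / n                                             # approximate endemic prevalence

print("prevalence without partner notification:", round(sis_with_notification(partners_treated=0.0), 3))
print("prevalence with partner notification   :", round(sis_with_notification(partners_treated=0.6), 3))
```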

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND AND OBJECTIVES Quantitative sensory testing (QST) is widely used to investigate peripheral and central sensitization. However, the comparative performance of different QST for diagnostic or prognostic purposes is unclear. We explored the discriminative ability of different quantitative sensory tests in distinguishing between patients with chronic neck pain and pain-free control subjects and ranked these tests according to the extent of their association with pain hypersensitivity. METHODS We performed a case-control study in 40 patients and 300 control subjects. Twenty-six tests, including different modalities of pressure, heat, cold, and electrical stimulation, were used. As measures of discrimination, we estimated receiver operating characteristic curves and likelihood ratios. RESULTS The following quantitative sensory tests displayed the best discriminative value: (1) pressure pain threshold at the site of the most severe neck pain (fitted area under the receiver operating characteristic curve, 0.92), (2) reflex threshold to single electrical stimulation (0.90), (3) pain threshold to single electrical stimulation (0.89), (4) pain threshold to repeated electrical stimulation (0.87), and (5) pressure pain tolerance threshold at the site of the most severe neck pain (0.86). Only the first 3 could be used for both ruling in and out pain hypersensitivity. CONCLUSIONS Pressure stimulation at the site of the most severe pain and parameters of electrical stimulation were the most appropriate QST to distinguish between patients with chronic neck pain and asymptomatic control subjects. These findings may be used to select the tests in future diagnostic and longitudinal prognostic studies on patients with neck pain and to optimize the assessment of localized and spreading sensitization in chronic pain patients.
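As an illustration of the discrimination measures used above, the sketch below computes sensitivity, specificity and the positive and negative likelihood ratios at a chosen cut-off for a test where lower values (e.g. a lowered pain threshold) count as positive; the data are synthetic, not the study's measurements.

```python
import numpy as np

def diagnostic_measures(cases, controls, cutoff):
    """Sensitivity, specificity, LR+ and LR- for a test where values BELOW
    the cut-off count as positive (e.g. a lowered pressure pain threshold).
    Inputs are measurements for patients (cases) and pain-free controls."""
    cases, controls = np.asarray(cases), np.asarray(controls)
    sens = np.mean(cases < cutoff)          # true positives among patients
    spec = np.mean(controls >= cutoff)      # true negatives among controls
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec if spec > 0 else float("inf")
    return sens, spec, lr_pos, lr_neg

# Hypothetical pressure-pain-threshold data (arbitrary units), not study data
rng = np.random.default_rng(1)
patients = rng.normal(150, 40, size=40)
controls = rng.normal(250, 60, size=300)
print(diagnostic_measures(patients, controls, cutoff=190))
```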

Relevance: 30.00%

Publisher:

Abstract:

The cerebellum is the major brain structure that contributes to our ability to improve movements through learning and experience. We have combined computer simulations with behavioral and lesion studies to investigate how modification of synaptic strength at two different sites within the cerebellum contributes to a simple form of motor learning: Pavlovian conditioning of the eyelid response. These studies are based on the wealth of knowledge about the intrinsic circuitry and physiology of the cerebellum and the straightforward manner in which this circuitry is engaged during eyelid conditioning. Thus, our simulations are constrained by the well-characterized synaptic organization of the cerebellum; further, the activity of cerebellar inputs during simulated eyelid conditioning is based on existing recording data. These simulations have allowed us to make two important predictions regarding the mechanisms underlying cerebellar function, which we have tested and confirmed with behavioral studies. The first prediction describes the mechanisms by which one of the sites of synaptic modification, the granule to Purkinje cell synapses (gr → Pkj) of the cerebellar cortex, could generate two time-dependent properties of eyelid conditioning: response timing and the ISI function. An empirical test of this prediction using small, electrolytic lesions of the cerebellar cortex revealed the pattern of results predicted by the simulations. The second prediction made by the simulations is that modification of synaptic strength at the other site of plasticity, the mossy fiber to deep nuclei synapses (mf → nuc), is under the control of Purkinje cell activity. The analysis predicts that this property should confer resistance to extinction on the mf → nuc synapses. Thus, while extinction processes erase plasticity at the first site, residual plasticity at the mf → nuc synapses remains. This residual plasticity confers on the cerebellum the capability for rapid relearning long after the learned behavior has been extinguished. We confirmed this prediction using a lesion technique that reversibly disconnected the cerebellar cortex at various stages during extinction and reacquisition of eyelid responses. The results of these studies represent significant progress toward a complete understanding of how the cerebellum contributes to motor learning.

Relevance: 30.00%

Publisher:

Abstract:

In July 2002, the Sarbanes-Oxley Act was passed by Congress, including Section 404, which requires auditors to test and opine on a company's internal controls. Since that time there has been much debate about whether the intended benefits of increased investor confidence and financial statement transparency outweigh the unexpectedly high compliance costs, especially for public companies with market capitalizations of less than $75 million. Before these companies begin complying in the upcoming year, interest groups are calling for the requirements to be 'scaled' to better fit the needs of these companies. While auditors are already expected to scale their audit approach to each individual client, more must be done to significantly decrease the costs in order to reverse the trend of small companies forgoing listing on U.S. capital markets. Increased guidance from the PCAOB, SEC, and other related parties could help small-cap companies and their auditors become aware of best practices. Also, exempting industries that already follow similar guidelines or are significantly harmed by the compliance requirements could help. Lastly, the controversial proposal of rotational audits could be put in place if the affected parties cooperate to remove the undue burden on these small-cap companies. Without some form of significant action, investors could soon lose the ability to buy small-cap companies in U.S. markets.

Relevance: 30.00%

Publisher:

Abstract:

Up to 10% of all breast and ovarian cancers are attributable to mutations in cancer susceptibility genes. Clinical genetic testing for deleterious gene mutations that predispose to hereditary breast and ovarian cancer (HBOC) syndrome is available. Mutation carriers may benefit from following high-risk guidelines for cancer prevention and early detection; however, few studies have reported the uptake of clinical genetic testing for HBOC. This study identified predictors of HBOC genetic testing uptake among a case series of 268 women who underwent genetic counseling at The University of Texas M. D. Anderson Cancer Center from October 1996 through July 2000. Women completed a baseline questionnaire that measured psychosocial and demographic variables. Additional medical characteristics were obtained from the medical charts. Logistic regression modeling identified predictors of participation in HBOC genetic testing. Psychological variables were hypothesized to be the strongest predictors of testing uptake, in particular one's readiness (intention) to have testing. Testing uptake among all women in this study was 37% (n = 99). Contrary to the hypotheses, one's actual risk of carrying a BRCA1 or BRCA2 gene mutation was the strongest predictor of testing participation (OR = 15.37, 95% CI 5.15–45.86). Other predictors included religious background, greater readiness to have testing, knowledge about HBOC and genetic testing, not having female children, and adherence to breast self-examination. Among the subgroup of women who were at ≥10% risk of carrying a mutation, 51% (n = 90) had genetic testing. Consistent with the hypotheses, predictors of testing participation in the high-risk subgroup included greater readiness to have testing, knowledge, and greater self-efficacy regarding one's ability to cope with test results. Women with CES-D scores ≥16, indicating the presence of depressive symptoms, were less likely to have genetic testing. Results indicate that among women with a wide range of risk for HBOC, actual risk of carrying an HBOC-predisposing mutation may be the strongest predictor of the decision to have genetic testing. Psychological variables (e.g., distress and self-efficacy) may influence testing participation only among women at highest risk of carrying a mutation, for whom genetic testing is most likely to be informative.
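As a hedged sketch of how odds ratios such as the one reported above are obtained from logistic regression (synthetic data and hypothetical predictor names; this is not the study's analysis), the snippet below fits a logistic model with statsmodels and exponentiates the coefficients and their confidence limits.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration: 1 = had genetic testing, predictors are hypothetical
rng = np.random.default_rng(7)
n = 268
mutation_risk = rng.uniform(0.0, 0.6, n)          # prior probability of carrying a mutation
readiness = rng.integers(1, 6, n)                 # 1-5 readiness-to-test scale
latent = -2.0 + 6.0 * mutation_risk + 0.3 * readiness
tested = rng.binomial(1, 1 / (1 + np.exp(-latent)))

X = sm.add_constant(np.column_stack([mutation_risk, readiness]))
fit = sm.Logit(tested, X).fit(disp=False)

# Exponentiated coefficients are odds ratios; exponentiated CIs bound them
odds_ratios = np.exp(fit.params)
ci = np.exp(fit.conf_int())
for name, or_, (lo, hi) in zip(["const", "mutation_risk", "readiness"], odds_ratios, ci):
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```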