118 results for Test Case Generator
Abstract:
This case study presents four and a half years of audiological observations, testing and aural habilitation of a female child with a partial agenesis of the corpus callosum (ACC). The ACC was diagnosed by an MRI scan performed at 6 months of age to eliminate neurological causes for the developmental delay. This child was also born with a cleft palate and was diagnosed with Robinow Syndrome at 3 years and 3 months of age. The audiological results showed an improvement in hearing thresholds over the 4-year period. The child’s ophthalmologist also reported an improvement in visual skills over time. The most interesting aspect of the child’s hearing was the discrepancy between the monaural and the binaural results. That is, when assessed binaurally she often presented with a mild to moderate mixed loss and, when assessed monaurally, she showed a moderate to severe mixed loss for the right ear and a severe mixed loss for the left ear. Over time, the discrepancy between the monaural and binaural results changed. When assessed binaurally, the loss decreased to normal low frequency hearing sloping to a mild high frequency loss. When assessed monaurally, the most recent results showed a mild loss for the right ear and a moderate loss for the left ear. This discrepancy between binaural and monaural results was evident for both aided and unaided tests. For the most recent thresholds, the binaural results were consistent with the right monaural thresholds for the first time over the four and a half years. Parental reports of the child’s hearing were consistent with the binaural clinical results. This case indicates the need for audiologists to (1) carefully monitor the hearing of children with ACC, (2) obtain monaural and binaural hearing and aided threshold results, and (3) compare these children’s functional abilities with the objective test results obtained. This case raises the question of whether hearing aids are appropriate for children with ACC. If hearing aids are deemed to be appropriate, then hearing aids with compression characteristics should be considered.
Abstract:
This paper tests the explanatory capacities of different versions of new institutionalism by examining the Australian case of a general transition in central banking practice and monetary politics: namely, the increased emphasis on low inflation and central bank independence. Standard versions of rational choice institutionalism largely dominate the literature on the politics of central banking, but this approach (here termed RC1) fails to account for Australian empirics. RC1 has a tendency to establish actor preferences exogenously to the analysis; actors' motives are also assumed a priori; actors' preferences are depicted in relatively static, ahistorical terms. And there is the tendency, even a methodological requirement, to assume relatively simple motives and preference sets among actors, in part because of the game-theoretic nature of RC1 reasoning. It is possible to build a more accurate rational choice model by re-specifying and essentially updating the context, incentives and choice sets that have driven rational choice in this case. Enter RC2. However, this move subtly introduces methodological shifts and new theoretical challenges. By contrast, historical institutionalism uses an inductive methodology. Compared with deduction, it is arguably better able to deal with complexity and nuance. It also utilises a dynamic, historical approach, and specifies (dynamically) endogenous preference formation by interpretive actors. Historical institutionalism is also able to more easily incorporate a wider set of key explanatory variables and wider social aggregates. Hence, it is argued that historical institutionalism is the preferred explanatory theory and methodology in this case.
Abstract:
Chlorophyll fluorescence measurements have a wide range of applications from basic understanding of photosynthesis functioning to plant environmental stress responses and direct assessments of plant health. The measured signal is the fluorescence intensity (expressed in relative units) and the most meaningful data are derived from the time-dependent increase in fluorescence intensity achieved upon application of continuous bright light to a previously dark-adapted sample. The fluorescence response changes over time and is termed the Kautsky curve or chlorophyll fluorescence transient. Recently, Strasser and Strasser (1995) formulated a group of fluorescence parameters, called the JIP-test, that quantify the stepwise flow of energy through Photosystem II (PS II), using input data from the fluorescence transient. The purpose of this study was to establish relationships between the biochemical reactions occurring in PS II and specific JIP-test parameters. This was approached using isolated systems that facilitated the addition of modifying agents, a PS II electron transport inhibitor, an electron acceptor and an uncoupler, whose effects on PS II activity are well documented in the literature. The alteration to PS II activity caused by each of these compounds could then be monitored through the JIP-test parameters and compared and contrasted with the literature. The known alteration in PS II activity of atrazine-resistant and atrazine-sensitive Chenopodium album biotypes was also used to gauge the effectiveness and sensitivity of the JIP-test. The information gained from the in vitro study was successfully applied to an in situ study. This is the first in a series of four papers. It shows that the trapping parameters of the JIP-test were most affected by illumination and that the reduction in trapping had a run-on effect to inhibit electron transport. When irradiance exposure proceeded to photoinhibition, the electron transport probability parameter was greatly reduced and dissipation significantly increased. These results illustrate the advantage of monitoring a number of fluorescence parameters over the use of just one, which is often the case when the F_V/F_M ratio is used.
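The basic JIP-test quantities referred to above can be derived from three points on the fast fluorescence transient. As a rough illustration only (not the authors' code; the formulas follow the standard Strasser formulation and the example values are invented), the sketch below computes the maximum quantum yield of primary photochemistry (F_V/F_M), the relative variable fluorescence at the J step (V_J), the electron-transport probability (psi_0) and the quantum yield of electron transport (phi_Eo).

```python
# Illustrative sketch of basic JIP-test parameters (standard Strasser formulation);
# the input fluorescence values are hypothetical, not data from this study.

def jip_parameters(f0, fj, fm):
    """Derive basic JIP-test parameters from three points of the fast
    chlorophyll fluorescence transient (OJIP curve).
    f0: minimal fluorescence (O step, all PS II centres open)
    fj: fluorescence at the J step (~2 ms)
    fm: maximal fluorescence (P step, all centres closed)
    """
    fv = fm - f0                 # variable fluorescence
    phi_po = fv / fm             # max. quantum yield of primary photochemistry (TR0/ABS)
    v_j = (fj - f0) / fv         # relative variable fluorescence at the J step
    psi_0 = 1.0 - v_j            # probability a trapped exciton moves an electron beyond QA- (ET0/TR0)
    phi_eo = phi_po * psi_0      # quantum yield of electron transport (ET0/ABS)
    return {"Fv/Fm": phi_po, "V_J": v_j, "psi_0": psi_0, "phi_Eo": phi_eo}

# Example with made-up fluorescence intensities (relative units):
print(jip_parameters(f0=500.0, fj=1500.0, fm=3000.0))
# -> Fv/Fm ~ 0.833, V_J = 0.4, psi_0 = 0.6, phi_Eo = 0.5
```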
Abstract:
The present study aimed to 1) examine the relationship between laboratory-based measures and high-intensity ultraendurance (HIU) performance during an intermittent 24-h relay ultraendurance mountain bike race (~20 min cycling, ~60 min recovery), and 2) examine physiological and performance-based changes throughout the HIU event. Prior to the HIU event, four highly trained male cyclists (age = 24.0 ± 2.1 yr; mass = 75.0 ± 2.7 kg; VO2peak = 70 ± 3 ml·kg⁻¹·min⁻¹) performed 1) a progressive exercise test to determine peak oxygen uptake (VO2peak), peak power output (PPO), and ventilatory threshold (Tvent), 2) time-to-fatigue tests at 100% (TF100) and 150% of PPO (TF150), and 3) a laboratory-simulated 40-km time trial (TT40). Blood lactate (Lac⁻), haematocrit and haemoglobin were measured at 6-h intervals throughout the HIU event, while heart rate (HR) was recorded continuously. Intermittent HIU performance, performance HR, recovery HR, and Lac⁻ declined (P < 0.05), while plasma volume expanded (P < 0.05) during the HIU event. TF100 was related to the decline in lap time (r = -0.96; P < 0.05), and a trend (P = 0.081) was found between TF150 and average intermittent HIU speed (r = 0.92). However, other measures (VO2peak, PPO, Tvent, and TT40) were not related to HIU performance. Measures of high-intensity endurance performance (TF100, TF150) were better predictors of intermittent HIU performance than traditional laboratory-based measures of aerobic capacity.
Abstract:
Sustainable forest restoration and management practices require a thorough understanding of the influence that habitat fragmentation has on the processes shaping genetic variation and its distribution in tree populations. We quantified genetic variation at isozyme markers and chloroplast DNA (cpDNA), analysed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), in severely fragmented populations of Sorbus aucuparia (Rosaceae) in a single catchment (Moffat) in southern Scotland. Remnants maintain surprisingly high levels of gene diversity (H_E) for isozymes (H_E = 0.195) and cpDNA markers (H_E = 0.490). Estimates are very similar to those from non-fragmented populations in continental Europe, even though the latter were sampled over a much larger spatial scale. Overall, no genetic bottleneck or departures from random mating were detected in the Moffat fragments. However, genetic differentiation among remnants was detected for both types of marker (isozymes Θ_n = 0.043, cpDNA Θ_c = 0.131; G-test, P-value < 0.001). In this self-incompatible, insect-pollinated, bird-dispersed tree species, the estimated ratio of pollen flow to seed flow between fragments is close to 1 (r = 1.36). Reduced pollen-mediated gene flow is a likely consequence of habitat fragmentation, but effective seed dispersal by birds is probably helping to maintain high levels of genetic diversity within remnants and reduce genetic differentiation between them.
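The reported pollen-to-seed flow ratio can be reproduced from the nuclear and chloroplast differentiation estimates with a widely used estimator (Ennos 1994) for maternally inherited cpDNA markers; the abstract does not state which estimator was applied, so the sketch below is an illustration of that calculation rather than code from the study.

```python
# Pollen-to-seed gene flow ratio from nuclear vs. chloroplast differentiation,
# using the Ennos (1994) estimator for maternally inherited cpDNA.
# Illustrative calculation only, not code from the study itself.

def pollen_to_seed_ratio(theta_nuclear, theta_cpdna):
    """theta_nuclear: differentiation at biparentally inherited (isozyme) loci
    theta_cpdna:      differentiation at maternally inherited cpDNA markers"""
    a = 1.0 / theta_nuclear - 1.0
    b = 1.0 / theta_cpdna - 1.0
    return (a - 2.0 * b) / b

# Values reported in the abstract reproduce the quoted ratio:
print(round(pollen_to_seed_ratio(0.043, 0.131), 2))   # ~1.36
```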
Abstract:
Assessments for assigning the conservation status of threatened species that are based purely on subjective judgements become problematic because assessments can be influenced by hidden assumptions, personal biases and perceptions of risks, making the assessment process difficult to repeat. This can result in inconsistent assessments and misclassifications, which can lead to a lack of confidence in species assessments. It is almost impossible to understand an expert's logic or visualise the underlying reasoning behind the many hidden assumptions used throughout the assessment process. In this paper, we formalise the decision-making process of experts by capturing their logical ordering of information, their assumptions and reasoning, and transferring them into a set of decision rules. We illustrate this through the process used to evaluate the conservation status of species under the NatureServe system (Master, 1991). NatureServe status assessments have been used for over two decades to set conservation priorities for threatened species throughout North America. We develop a conditional point-scoring method to reflect the current subjective process. In two test comparisons, 77% of species' assessments using the explicit NatureServe method matched the qualitative assessments done subjectively by NatureServe staff. Of those that differed, no rank varied by more than one rank level under the two methods. In general, the explicit NatureServe method tended to be more precautionary than the subjective assessments. The rank differences that emerged from the comparisons may be due, at least in part, to the flexibility of the qualitative system, which allows different factors to be weighted on a species-by-species basis according to expert judgement. The method outlined in this study is the first documented attempt to explicitly define a transparent process for weighting and combining factors under the NatureServe system. The process of eliciting expert knowledge identifies how information is combined and highlights any inconsistent logic that may not be obvious in subjective decisions. The method provides a repeatable, transparent, and explicit benchmark for feedback, further development, and improvement. (C) 2004 Elsevier SAS. All rights reserved.
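To make the idea of a "conditional point-scoring method" concrete, the toy sketch below shows one way such rules can be encoded: factor scores contribute points, the weight given to one factor is conditioned on the value of another, and the total maps to a rank. The factors, weights, thresholds and rank labels here are invented for illustration and are not the actual NatureServe decision rules described in the paper.

```python
# Toy illustration of a conditional point-scoring rule set for assigning a
# conservation rank. Factors, weights and thresholds are hypothetical;
# they are NOT the actual NatureServe rules.

def score_species(pop_size_score, trend_score, threat_score):
    """Each input is a factor score on a 0 (worst) to 3 (best) scale."""
    points = pop_size_score + trend_score

    # Conditional weighting: threats count double when the population
    # is already small (pop_size_score <= 1), otherwise they count once.
    points += threat_score * (2 if pop_size_score <= 1 else 1)

    # Map total points to a rank (thresholds invented for illustration).
    if points <= 2:
        return "critically imperiled"
    elif points <= 4:
        return "imperiled"
    elif points <= 6:
        return "vulnerable"
    return "apparently secure"

print(score_species(pop_size_score=1, trend_score=1, threat_score=0))  # critically imperiled
print(score_species(pop_size_score=3, trend_score=2, threat_score=3))  # apparently secure
```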
Abstract:
Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programming interface (API). In this paper, we use an industrially-based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, makes it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.
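As a rough sketch of the testgraph idea (the names below are hypothetical and are not those of the authors' tool), a testgraph can be represented as a small directed graph whose paths are turned into operation sequences; the same sequences are then run against both the specification animator and the implementation through a common adapter interface and their observable results compared.

```python
# Minimal sketch of testgraph-driven testing: paths through a directed graph
# become test sequences, executed against both a specification (via an
# animator) and an implementation through a shared adapter interface.
# All names here are illustrative, not the authors' tool.

from typing import Protocol

class SystemUnderTest(Protocol):
    def apply(self, operation: str) -> object: ...

# Testgraph: node -> list of (operation, next_node) edges (a partial model of the SUT).
TESTGRAPH = {
    "empty":    [("push", "nonempty")],
    "nonempty": [("pop", "empty"), ("push", "nonempty")],
}

def test_sequences(start, length):
    """Enumerate all operation sequences of the given length from `start`."""
    if length == 0:
        yield []
        return
    for op, nxt in TESTGRAPH.get(start, []):
        for tail in test_sequences(nxt, length - 1):
            yield [op] + tail

def run(sequence, spec: SystemUnderTest, impl: SystemUnderTest) -> bool:
    """Execute the same sequence on the animated specification and on the
    implementation, checking that observable results agree at each step."""
    return all(spec.apply(op) == impl.apply(op) for op in sequence)

for seq in test_sequences("empty", 3):
    print(seq)   # e.g. ['push', 'pop', 'push'], ['push', 'push', 'pop'], ...
```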
Abstract:
Fuzzy signal detection analysis can be a useful complementary technique to traditional signal detection theory analysis methods, particularly in applied settings. For example, traffic situations are better conceived as being on a continuum from no potential for hazard to high potential, rather than either having potential or not having potential. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novices and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and half the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants to the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
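For readers unfamiliar with the technique, fuzzy signal detection analysis replaces the binary signal/noise and yes/no categories with degrees of membership in [0, 1] and computes hit and false-alarm rates from fuzzy set operations, following the mapping of Parasuraman, Masalonis and Hancock (2000). The sketch below illustrates that calculation; it is not the authors' analysis code and the example ratings are invented.

```python
# Illustrative fuzzy signal detection theory (SDT) calculation following
# Parasuraman, Masalonis & Hancock (2000). Example data are invented.
from scipy.stats import norm

def fuzzy_sdt(signal, response):
    """signal:   degree to which each event is a 'signal' (e.g. hazard potential), in [0, 1]
    response:    degree of 'yes' response for each event, in [0, 1]"""
    hits = [min(s, r) for s, r in zip(signal, response)]
    fas  = [max(r - s, 0.0) for s, r in zip(signal, response)]
    hit_rate = sum(hits) / sum(signal)                          # fuzzy hit rate
    fa_rate  = sum(fas) / sum(1.0 - s for s in signal)          # fuzzy false-alarm rate
    d_prime  = norm.ppf(hit_rate) - norm.ppf(fa_rate)           # sensitivity
    c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))         # response bias (criterion)
    return hit_rate, fa_rate, d_prime, c

# Hypothetical hazard-potential ratings and driver responses for five scenes:
signal   = [0.9, 0.7, 0.2, 0.1, 0.5]
response = [0.8, 0.9, 0.4, 0.0, 0.6]
print(fuzzy_sdt(signal, response))
```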
Abstract:
Parkinson’s disease (PD) is a progressive, degenerative neurological disease. The progressive disability associated with PD results in substantial burdens for those with the condition, their families and society in terms of increased health resource use, earnings loss of affected individuals and family caregivers, poorer quality of life, caregiver burden, disrupted family relationships, decreased social and leisure activities, and deteriorating emotional well-being. Currently, no cure is available and the efficacy of available treatments, such as medication and surgical interventions, decreases with longer duration of the disease. Whilst the cause of PD is unknown, genetic and environmental factors are believed to contribute to its aetiology. Descriptive and analytical epidemiological studies have been conducted in a number of countries in an effort to elucidate the cause, or causes, of PD. Rural residency, farming, well water consumption, pesticide exposure, metals and solvents have been implicated as potential risk factors for PD in some previous epidemiological studies. However, there is substantial disagreement between the results of existing studies. Therefore, the role of environmental exposures in the aetiology of PD remains unclear. The main component of this thesis consists of a case-control study that assessed the contribution of environmental exposures to the risk of developing PD. An existing, previously unanalysed, dataset from a local case-control study was analysed to inform the design of the new case-control study. The analysis results suggested that regular exposure to pesticides and head injury were important risk factors for PD. However, due to the substantial limitations of this existing study, further confirmation of these results was desirable with a more robustly designed epidemiological study. A new exposure measurement instrument (a structured interviewer-delivered questionnaire) was developed for the new case-control study to obtain data on demographic, lifestyle, environmental and medical factors. Prior to its use in the case-control study, the questionnaire was assessed for test-retest repeatability in a series of 32 PD cases and 29 healthy sex-, age- and residential suburb-matched electoral roll controls. High repeatability was demonstrated for lifestyle exposures, such as smoking and coffee/tea consumption (kappas 0.70-1.00). The majority of environmental exposures, including use of pesticides, solvents and exposure to metal dusts and fumes, also showed high repeatability (kappas >0.78). A consecutive series of 163 PD case participants was recruited from a neurology clinic in Brisbane. One hundred and fifty-one (151) control participants were randomly selected from the Australian Commonwealth Electoral Roll and individually matched to the PD cases on age (± 2 years), sex and current residential suburb. Participants ranged in age from 40-89 years (mean age 67 years). Exposure data were collected in face-to-face interviews. Odds ratios and 95% confidence intervals were calculated using conditional logistic regression for matched sets in SAS version 9.1. Consistent with previous studies, ever having been a regular smoker or coffee drinker was inversely associated with PD, with dose-response relationships evident for pack-years smoked and number of cups of coffee drunk per day. Passive smoking from ever having lived with a smoker or worked in a smoky workplace was also inversely related to PD.
Ever having been a regular tea drinker was associated with decreased odds of PD. Hobby gardening was inversely associated with PD. However, use of fungicides in the home garden or occupationally was associated with increased odds of PD. Exposure to welding fumes, cleaning solvents, or thinners occupationally was associated with increased odds of PD. Ever having resided in a rural or remote area was inversely associated with PD. Ever having resided on a farm was associated with only moderately increased odds of PD. Whilst the current study’s results suggest that environmental exposures on their own are only modest contributors to overall PD risk, the possibility that interaction with genetic factors may additively or synergistically increase risk should be considered. The results of this research support the theory that PD has a multifactorial aetiology and that environmental exposures are among a number of factors that contribute to PD risk. There was also evidence that some factors (e.g. smoking and welding) interact to moderate PD risk.
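As a simple illustration of how matched case-control odds ratios behave (the study itself used conditional logistic regression for matched sets in SAS, which handles multiple covariates; the sketch below covers only the special case of 1:1 matched pairs with a single binary exposure, and the counts are invented):

```python
# Conditional (matched-pair) odds ratio with a 95% CI for a 1:1 matched
# case-control design and a single binary exposure. Simplified illustration;
# the study's conditional logistic regression (SAS 9.1) generalises this to
# multiple covariates. Counts below are invented.
import math

def matched_pair_or(case_exposed_only, control_exposed_only):
    """case_exposed_only:    discordant pairs where the case is exposed, the control is not
    control_exposed_only:    discordant pairs where the control is exposed, the case is not
    (Concordant pairs carry no information about the odds ratio.)"""
    or_ = case_exposed_only / control_exposed_only
    se_log_or = math.sqrt(1.0 / case_exposed_only + 1.0 / control_exposed_only)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

print(matched_pair_or(case_exposed_only=30, control_exposed_only=15))
# -> OR = 2.0 with a 95% CI of roughly (1.08, 3.72)
```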
Abstract:
The calculation of quantum dynamics is currently a central issue in theoretical physics, with diverse applications ranging from ultracold atomic Bose-Einstein condensates to condensed matter, biology, and even astrophysics. Here we demonstrate a conceptually simple method of determining the regime of validity of stochastic simulations of unitary quantum dynamics by employing a time-reversal test. We apply this test to a simulation of the evolution of a quantum anharmonic oscillator with up to 6.022 × 10²³ (Avogadro's number) particles. This system is realizable as a Bose-Einstein condensate in an optical lattice, for which the time-reversal procedure could be implemented experimentally.
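The logic of a time-reversal test can be illustrated on a toy scale: evolve the simulated trajectories forward under an anharmonic (Kerr-type) nonlinearity, invert the sign of the evolution and integrate back, and check how well the initial statistics are recovered; growth of the mismatch flags the regime where the simulation can no longer be trusted. The sketch below is a heavily simplified, semiclassical single-mode illustration of that idea with invented parameters, not the phase-space stochastic method used in the paper.

```python
# Toy illustration of a time-reversal test for a simulated anharmonic (Kerr)
# oscillator. A cloud of semiclassical trajectories, sampled around a coherent
# state, is evolved forward with dalpha/dt = -i*chi*|alpha|^2*alpha and then
# backward; the discrepancy between initial and recovered statistics bounds the
# trustworthiness of the forward simulation. Parameters are invented and this
# is NOT the phase-space method used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_traj, chi, dt, steps = 2000, 0.1, 1e-3, 1000

# Sample trajectories around a coherent state alpha0 with symmetric Gaussian noise.
alpha0 = 2.0 + 0.0j
alpha = alpha0 + (rng.normal(size=n_traj) + 1j * rng.normal(size=n_traj)) / 2.0
initial_mean = alpha.mean()

def evolve(a, chi, dt, steps):
    for _ in range(steps):                     # simple Euler integration
        a = a - 1j * chi * np.abs(a) ** 2 * a * dt
    return a

forward = evolve(alpha, chi, dt, steps)        # forward evolution
recovered = evolve(forward, -chi, dt, steps)   # time-reversed evolution

# A large mismatch here would flag the regime where the simulation is unreliable.
print("reversal error in <alpha>:", abs(recovered.mean() - initial_mean))
```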
Abstract:
In the first of two articles presenting the case for emotional intelligence in a point/counterpoint exchange, we present a brief summary of research in the field, and rebut arguments against the construct presented in this issue. We identify three streams of research: (1) a four-branch abilities test based on the model of emotional intelligence defined in Mayer and Salovey (1997); (2) self-report instruments based on the Mayer–Salovey model; and (3) commercially available tests that go beyond the Mayer–Salovey definition. In response to the criticisms of the construct, we argue that the protagonists have not distinguished adequately between the streams, and have inappropriately characterized emotional intelligence as a variant of social intelligence. More significantly, two of the critical authors assert incorrectly that emotional intelligence research is driven by a utopian political agenda, rather than scientific interest. We argue, on the contrary, that emotional intelligence research is grounded in recent scientific advances in the study of emotion, specifically regarding the role emotion plays in organizational behavior. We conclude that emotional intelligence is attracting deserved continuing research interest as an individual difference variable in organizational behavior related to the way members perceive, understand, and manage their emotions.
Abstract:
In this second counterpoint article, we refute the claims of Landy, Locke, and Conte, and make the more specific case for our perspective, which is that ability-based models of emotional intelligence have value to add in the domain of organizational psychology. In this article, we address remaining issues, such as general concerns about the tenor and tone of the debates on this topic, a tendency for detractors to collapse across emotional intelligence models when reviewing the evidence and making judgments, and a subsequent penchant to thereby discount all models, including the ability-based one, as lacking validity. We specifically refute the following three claims from our critics with the most recent empirically based evidence: (1) emotional intelligence is dominated by opportunistic academics-turned-consultants who have amassed much fame and fortune based on a concept that is shabby science at best; (2) the measurement of emotional intelligence is grounded in unstable, psychometrically flawed instruments, which have not demonstrated appropriate discriminant and predictive validity to warrant their use; and (3) there is weak empirical evidence that emotional intelligence is related to anything of importance in organizations. We thus end with an overview of the empirical evidence supporting the role of emotional intelligence in organizational and social behavior.
Abstract:
A diagnostic PCR assay was designed based on conserved regions of previously sequenced densovirus genomic DNA isolated from mosquitoes. Application of this assay to different insect cell lines resulted in a number of cases of consistent positive amplification of the predicted size fragment. Positive PCR results were subsequently confirmed to correlate with densovirus infection by both electron microscopy and an indirect fluorescent antibody test. In each case the nucleotide sequence of the amplified PCR fragments showed high identity to previously reported densoviruses isolated from mosquitoes. Phylogenetic analysis based on these sequences showed that two of these isolates were examples of new densoviruses. These viruses could infect and replicate in mosquitoes when administered orally or parenterally, and these infections were largely avirulent. In one virus/mosquito combination, vertical transmission to progeny was observed. The frequency with which these viruses were detected suggests that they may be quite common in insect cell lines.
Abstract:
Multiple sclerosis and idiopathic dilated cardiomyopathy are two conditions in which an autoimmune process is implicated in the pathogenesis. There is evidence to support clustering of autoimmune diseases in patients with multiple sclerosis and their families. To our knowledge, this is the first report of idiopathic dilated cardiomyopathy occurring in a patient with multiple sclerosis.