989 results for Module level tests


Relevance: 30.00%
Publisher:
Abstract:

The objectives of this study were to develop and validate a tool for assessing pain in population-based observational studies and to develop three subscales for back/neck, upper extremity and lower extremity pain. Based on a literature review, items were extracted from validated questionnaires and reviewed by an expert panel. The initial questionnaire consisted of a pain manikin and 34 items relating to (i) intensity of pain in different body regions (7 items), (ii) pain during activities of daily living (18 items) and (iii) various pain modalities (9 items). Psychometric validation of the initial questionnaire was performed in a random sample of the German-speaking Swiss population. Analyses included tests for reliability, correlation analysis, principal components factor analysis, tests for internal consistency and validity. Overall, 16,634 of 23,763 eligible individuals participated (70%). Test-retest reliability coefficients ranged from 0.32 to 0.97, but only three coefficients were below 0.60. Subscales were constructed combining four items for each of the subscales. Item-total coefficients ranged from 0.76 to 0.86 and Cronbach's alpha was 0.75 or higher for all subscales. Correlation coefficients between subscales and three validated instruments (WOMAC, SPADI and Oswestry) ranged from 0.62 to 0.79. The final Pain Standard Evaluation Questionnaire (SEQ Pain) included 28 items and the pain manikin and accounted for the multidimensionality of pain by assessing pain location and intensity, pain during activity, triggers and time of onset of pain and frequency of pain medication. It was found to be reliable and valid for the assessment of pain in population-based observational studies.
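As an aside on the internal-consistency statistic quoted above: Cronbach's alpha for a k-item subscale can be computed directly from a respondent-by-item score matrix. A self-contained sketch with made-up data (not the study's):

```python
# Illustrative sketch (hypothetical data, not the study's): Cronbach's alpha
# for a 4-item subscale from a respondent-by-item score matrix.

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # variance of each item across respondents
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    # variance of each respondent's total score
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# made-up data: 5 respondents x 4 items
data = [[3, 3, 2, 3], [1, 1, 1, 2], [4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5]]
alpha = cronbach_alpha(data)  # high, since the items move together
```

With these made-up scores alpha is about 0.96; the study's criterion of 0.75 or higher would be applied to values computed this way for each subscale.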

Relevance: 30.00%
Publisher:
Abstract:

PURPOSE: To evaluate the function of the parotid glands before and during gustatory stimulation, using an intrinsic susceptibility-weighted MRI method (blood oxygenation level dependent, BOLD-MRI) at 1.5T and 3T. MATERIALS AND METHODS: A total of 10 and 13 volunteers were investigated at 1.5T and 3T, respectively. Measurements were performed before and during gustatory stimulation using ascorbate. Circular regions of interest (ROIs) were delineated in the left and right parotid glands, and in the masseter muscle for comparison. The effects of stimulation were evaluated by calculating the difference between the relaxation rates, DeltaR(2)*. Baseline and stimulation were statistically compared (Student's t-tests), merging both parotid glands. RESULTS: The averaged prestimulation DeltaR(2)* values obtained in all parotid glands were stable (-0.61 to 0.38 x 10(-3) seconds(-1)). At 3T, stimulation produced an initial drop in DeltaR(2)* (to -2.7 x 10(-3) seconds(-1)) followed by a progressive increase toward the baseline. No significant difference was observed between baseline and parotid gland stimulation at 1.5T, nor for the masseter muscle at either field strength. Considerable interindividual variability (over 76%) was noticed at both magnetic fields. CONCLUSION: BOLD-MRI at 3T was able to detect DeltaR(2)* changes in the parotid glands during gustatory stimulation, consistent with an increase in oxygen consumption during saliva production.

Relevance: 30.00%
Publisher:
Abstract:

PURPOSE: To prospectively determine if changes in intrarenal oxygenation during acute unilateral ureteral obstruction can be depicted with blood oxygen level-dependent (BOLD) magnetic resonance (MR) imaging. MATERIALS AND METHODS: The study was approved by the local ethics committee, and written informed consent was obtained from all patients. BOLD MR imaging was performed in 10 male patients (mean age, 45 years +/- 17 [standard deviation]; range, 20-73 years) with a distal unilateral ureteral calculus and in 10 healthy age-matched male volunteers to estimate R2*, which is inversely related to tissue Po(2). R2* values were determined in the cortex and medulla of the obstructed and the contralateral nonobstructed kidneys. To reduce external effects on R2*, the R2* ratio between the medulla and cortex was also analyzed. Statistical analysis was performed with nonparametric rank tests. P < .05 was considered to indicate a significant difference. RESULTS: All patients had significantly lower medullary and cortical R2* values in the obstructed kidney (median R2* in medulla, 10.9 sec(-1) [range, 9.1-14.3 sec(-1)]; median R2* in cortex, 10.4 sec(-1) [range, 9.7-11.3 sec(-1)]) than in the nonobstructed kidney (median R2* in medulla, 17.2 sec(-1) [range, 14.6-23.2 sec(-1)], P = .005; median R2* in cortex, 11.7 sec(-1) [range, 11.0-14.0 sec(-1)], P = .005); values in the obstructed kidneys were also significantly lower than values in the kidneys of healthy control subjects (median R2* in medulla, 16.1 sec(-1) [range, 13.9-18.1 sec(-1)], P < .001; median R2* in cortex, 11.6 sec(-1) [range, 10.5-12.9 sec(-1)], P < .001). R2* ratios in the obstructed kidneys (median, 1.06; range, 0.85-1.27) were significantly lower than those in the nonobstructed kidneys (median, 1.49; range, 1.26-1.71; P = .005) and those in the kidneys of healthy control subjects (median, 1.38; range, 1.23-1.47; P < .001). 
In contrast, R2* ratios in the nonobstructed kidneys of patients were significantly higher than those in kidneys of healthy control subjects (P = .01). CONCLUSION: Increased oxygen content in the renal cortex and medulla occurs with acute unilateral ureteral obstruction, suggesting reduced function of the affected kidney.

Relevance: 30.00%
Publisher:
Abstract:

This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities using a Monte Carlo method, drawing from a lognormal distribution whose parameters were predetermined from engine tests and depend on spark timing, engine speed, and load. Previous studies have shown the lognormal distribution to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock and reference levels respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock.
The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
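The Monte Carlo idea behind the KSS can be sketched in a few lines; the mu and sigma values below are hypothetical stand-ins for the lognormal parameters that would, per the abstract, be fitted from engine tests at a given spark timing, speed, and load:

```python
# Hedged sketch (illustrative parameters, not the report's): per-cycle knock
# intensities drawn from a lognormal distribution, as the KSS is described
# as doing.
import random

def simulate_knock_intensities(mu, sigma, n_cycles, seed=0):
    rng = random.Random(seed)
    # lognormvariate returns exp(N(mu, sigma)): strictly positive,
    # right-skewed, matching the shape of measured knock intensities
    return [rng.lognormvariate(mu, sigma) for _ in range(n_cycles)]

# hypothetical operating point: median intensity exp(mu) = 1.0
cycles = simulate_knock_intensities(mu=0.0, sigma=0.5, n_cycles=1000)
```

A knock-detection algorithm under test would then consume such a stream cycle by cycle, exactly as the KDM consumes the KSS output.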

Relevance: 30.00%
Publisher:
Abstract:

The U.S. Renewable Fuel Standard mandates that by 2022, 36 billion gallons of renewable fuels must be produced yearly. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from other alternative fuel sources. A viable alternative for meeting the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase-separate when mixed with water, so it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on the effects of low-level alcohol fuels on marine engine emissions, to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland, over two rounds of weeklong testing in May and September. The engines were tested using a standard test cycle, and emissions were sampled using constant-volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to baseline indolene tests. Because of the nature of the field testing, limited engine parameters were recorded; aside from emissions, the parameters analyzed were the operating relative air-to-fuel ratio and engine speed. Within a single round of testing, emissions trends from the baseline to each alcohol fuel were consistent for the four-stroke engines. The same trends were not consistent between rounds because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback during full-load conditions.
Emissions trends from the baseline to each alcohol fuel for the two-stroke engine were consistent across all rounds of testing, because that engine operates open-loop and does not compensate its fueling when fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with the changes for ethanol. It was determined that iso-butanol would be a viable replacement for ethanol.

Relevance: 30.00%
Publisher:
Abstract:

Many schools do not introduce college students to software engineering until they have had at least one semester of programming. Since software engineering is a large, complex, and abstract subject, it is difficult to construct active-learning exercises that build on the students' elementary knowledge of programming and still teach basic software engineering principles. Beginning students also typically know how to construct small programs, but they have little experience with the techniques necessary to produce reliable, long-term maintainable modules. I have addressed these two concerns by defining a local standard (the Montana Tech Method (MTM) Software Development Standard for Small Modules Template) that directs students step by step toward the construction of highly reliable small modules using well-known, best-practice software engineering techniques. A "small module" is here defined as a coherent development task that can be unit tested and carried out by a single software engineer (or a pair) in at most a few weeks. The standard describes the process to be used and also provides a template for the top-level documentation. The sequence of mini-lectures and exercises associated with the use of this (and other) local standards is used throughout the course, which perforce covers more abstract software engineering material using traditional reading and writing assignments. The sequence of mini-lectures and hands-on assignments (many of which are done in small groups) constitutes an instructional module that can be used in any similar software engineering course.

Relevance: 30.00%
Publisher:
Abstract:

Discussing new or recently reformed citizenship tests in the USA, Australia, and Canada, this article asks whether they amount to a restrictive turn of new world citizenship, similar to recent developments in Europe. I argue that elements of a restrictive turn are noticeable in Australia and Canada, but only at the level of political rhetoric, not of law and policy, which remain liberal and inclusive. Much like in Europe, the restrictive turn is tantamount to Muslims and Islam moving to the center of the integration debate.

Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND: While the assessment of analytical precision within medical laboratories has received much attention in scientific enquiry, the degree of variation between laboratories, as well as its sources, remains incompletely understood. In this study, we quantified the variance components when performing coagulation tests with identical analytical platforms in different laboratories and computed intraclass correlation coefficients (ICC) for each coagulation test. METHODS: Data from eight laboratories measuring fibrinogen twice in twenty healthy subjects with one of three different platforms, and single measurements of prothrombin time (PT) and coagulation factors II, V, VII, VIII, IX, X, XI and XIII, were analysed. By platform, the variance components of (i) the subjects, (ii) the laboratory and the technician and (iii) the total variance were obtained for fibrinogen, and (i) and (iii) for the remaining factors, using ANOVA. RESULTS: The variability of fibrinogen measurements within a laboratory ranged from 0.02 to 0.04; the variability between laboratories ranged from 0.006 to 0.097. The ICC for fibrinogen ranged from 0.37 to 0.66 between the platforms, and from 0.19 to 0.80 for PT. For the remaining factors the ICCs ranged from 0.04 (FII) to 0.93 (FVIII). CONCLUSIONS: Variance components attributable to technicians or laboratory procedures were substantial, led to disappointingly low intraclass correlation coefficients for several factors, and were pronounced for some of the platforms. Our findings call for sustained efforts to raise the level of standardization of the structures and procedures involved in the quantification of coagulation factors.
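The ICC reported above is the share of total variance attributable to the subjects themselves, with the remainder coming from laboratories, technicians, and residual noise. A toy sketch with hypothetical variance components (not the study's data):

```python
# Hypothetical illustration (made-up numbers, not the study's data): the
# intraclass correlation coefficient as the subject share of total variance,
# given variance components from a one-way ANOVA decomposition.

def icc(var_subjects, var_lab_technician, var_residual):
    total = var_subjects + var_lab_technician + var_residual
    return var_subjects / total

# made-up fibrinogen-style variance components
value = icc(var_subjects=0.10, var_lab_technician=0.04, var_residual=0.02)
```

With these made-up numbers the subjects account for 0.10 of a total 0.16, so the ICC is 0.625; the study's disappointingly low ICCs correspond to laboratory/technician components that are large relative to the subject component.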

Relevance: 30.00%
Publisher:
Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from unit-level random assignment may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates; however, such estimators are often inefficient compared to methods that incorporate the clustered nature of the data into the estimation procedure (Neuhaus, 1993). Multilevel models, also known as random-effects or random-components models, can account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires determining the sample sizes at each level. This study investigates the effect of the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution. Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If the PQL estimation algorithm does not converge and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; this criterion is no longer important when sample sizes are large.

Neuhaus, J. (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data". Biometrics, 49, 989-996.
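To make the 3-level data structure concrete, here is a hedged sketch (all parameter values hypothetical) of simulating counts from a 3-level Poisson model, with normally distributed random effects entering the log mean at levels 2 and 3:

```python
# Illustrative sketch (hypothetical parameters): counts from a 3-level
# Poisson model. Each level-3 group and each level-2 cluster within it
# contributes a Gaussian random effect on the log scale.
import math
import random

def simulate_3level_poisson(n_l3, n_l2, n_l1, beta0, sd_l3, sd_l2, seed=0):
    rng = random.Random(seed)
    counts = []
    for _ in range(n_l3):
        u3 = rng.gauss(0.0, sd_l3)            # level-3 random effect
        for _ in range(n_l2):
            u2 = rng.gauss(0.0, sd_l2)        # level-2 random effect
            lam = math.exp(beta0 + u3 + u2)   # Poisson mean for this cell
            for _ in range(n_l1):
                # Poisson draw by inversion (Knuth); stdlib random has
                # no Poisson generator
                threshold, k, p = math.exp(-lam), 0, 1.0
                while True:
                    p *= rng.random()
                    if p <= threshold:
                        break
                    k += 1
                counts.append(k)
    return counts

# 5 level-3 groups x 4 level-2 clusters x 10 level-1 observations
counts = simulate_3level_poisson(5, 4, 10, beta0=1.0, sd_l3=0.3, sd_l2=0.3)
```

Simulated data of exactly this shape is what a comparison of PQL and MQL estimators, as described above, would be run against.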

Relevance: 30.00%
Publisher:
Abstract:

Impairment of cognitive performance during and after high-altitude climbing has been described in numerous studies and has mostly been attributed to cerebral hypoxia and the resulting functional and structural cerebral alterations. To investigate the hypothesis that high-altitude climbing leads to cognitive impairment, we used neuropsychological tests and measurements of eye movement (EM) performance under different stimulus conditions. The study was conducted in 32 mountaineers participating in an expedition to Muztagh Ata (7,546 m). Neuropsychological tests comprised figural fluency, line bisection, letter and number cancellation, and a modified pegboard task. Saccadic performance was evaluated under three stimulus conditions with varying degrees of cortical involvement: visually guided pro- and anti-saccades, and visuo-visual interaction. Typical saccade parameters (latency, mean sequence, post-saccadic stability, and error rate) were computed off-line. Measurements were taken at a baseline altitude of 440 m and at altitudes of 4,497 m, 5,533 m, and 6,265 m, and again at 440 m. All subjects reached 5,533 m, and 28 reached 6,265 m. The neuropsychological test results did not reveal any cognitive impairment. Complete eye movement recordings for all stimulus conditions were obtained in 24 subjects at baseline and at least two altitudes, and in 10 subjects at baseline and all altitudes. Measurements of saccade performance showed no dependence on any altitude-related parameter and were well within normal limits. Our data indicate that acclimatized climbers do not seem to suffer significant cognitive deficits during or after climbs to altitudes above 7,500 m. We also demonstrated that the investigation of EMs is feasible during high-altitude expeditions.

Relevance: 30.00%
Publisher:
Abstract:

-pshare- computes and graphs percentile shares from individual level data. Percentile shares are often used in inequality research to study the distribution of income or wealth. They are defined as differences between Lorenz ordinates of the outcome variable. Technically, the observations are sorted in increasing order of the outcome variable and the specified percentiles are computed from the running sum of the outcomes. Percentile shares are then computed as differences between percentiles, divided by total outcome. pshare requires moremata to be installed on the system; see ssc describe moremata.
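-pshare- itself is a Stata program, but the computation it describes (sort the outcomes, take Lorenz ordinates from the running sum at the requested percentiles, then difference them) can be sketched in a few lines of Python; the function name and data below are hypothetical:

```python
# Hedged sketch of the percentile-share computation described above
# (not -pshare-'s actual code): Lorenz ordinates from the running sum of
# sorted outcomes, differenced and divided through by the total.

def percentile_shares(outcomes, cutpoints):
    """cutpoints: increasing percentiles in (0, 100], e.g. [50, 90, 100]."""
    xs = sorted(outcomes)          # increasing order of the outcome
    total = sum(xs)
    cum, running = [], 0.0
    for x in xs:
        running += x               # running sum of the outcomes
        cum.append(running)

    def lorenz(p):
        # Lorenz ordinate: cumulative share held by the bottom p percent
        k = int(round(p / 100 * len(xs)))
        return cum[k - 1] / total if k > 0 else 0.0

    shares, prev = [], 0.0
    for p in cutpoints:
        ordinate = lorenz(p)
        shares.append(ordinate - prev)   # difference of Lorenz ordinates
        prev = ordinate
    return shares

# hypothetical incomes: bottom 80% hold half the total, top 20% the rest
incomes = [10, 20, 30, 40, 100]
shares = percentile_shares(incomes, [80, 100])
```

With the toy incomes above, the bottom 80% (the first four observations, summing to 100 of 200) hold a 0.5 share, and the top 20% hold the remaining 0.5.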

Relevance: 30.00%
Publisher:
Abstract:

This paper explores whether a significant long-run relationship exists between money and nominal GDP, and between money and the price level, in the Venezuelan economy. We apply time-series econometric techniques to annual data for the Venezuelan economy from 1950 to 1996. An important feature of our analysis is the use of tests for unit roots and cointegration with structural breaks. Certain characteristics of the Venezuelan experience suggest that structural breaks may be important. Since the economy depends heavily on oil revenue, oil price shocks have had important influences on most macroeconomic variables. Also, since the economy carries a large foreign debt, the world debt crisis that erupted in 1982 had pervasive effects on the Venezuelan economy. Radical changes in economic policy and political instability may also have significantly affected the movement of the macroeconomy. We find that a long-run relationship exists between narrow money (M1) and nominal GDP, the GDP deflator, and the CPI when one allows for one or two structural breaks. We do not find such long-run relationships when broad money (M2) is used.

Relevance: 30.00%
Publisher:
Abstract:

We apply the efficient unit-root tests of Elliott, Rothenberg, and Stock (1996) and Elliott (1998) to twenty-one real exchange rates, using monthly data for the G-7 countries from the post-Bretton Woods floating exchange rate period. Our results indicate that, for eighteen of the twenty-one real exchange rates, the null hypothesis of a unit root can be rejected at the 10% significance level or better using the Elliott et al. (1996) DF-GLS test. The unit-root null hypothesis is also rejected for one additional real exchange rate when we allow for one endogenously determined break in the time series of the real exchange rate, as in Perron (1997). In all, we find favorable evidence for long-run purchasing power parity in nineteen of the twenty-one real exchange rates. Finally, we find no strong evidence to suggest that non-U.S. dollar-based real exchange rates tend to produce more favorable results for long-run PPP than U.S. dollar-based real exchange rates, as Lothian (1998) concluded.
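As a hedged illustration of the simplest member of this family of tests (the paper uses the more powerful GLS-detrended DF-GLS variant, with break dummies, which is not reproduced here): a Dickey-Fuller style regression of the first difference on the lagged level, whose t-statistic on the lag coefficient is compared against non-standard critical values. The series below is a toy mean-reverting process, not exchange rate data:

```python
# Toy sketch of the Dickey-Fuller idea (not DF-GLS, no detrending, no
# structural breaks): regress dy_t on y_{t-1} with an intercept and
# inspect the t-statistic on the lag coefficient. A strongly negative
# t-statistic is evidence against a unit root.
import random

def df_t_statistic(y):
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    n = len(dy)
    mx, my = sum(ylag) / n, sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in ylag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(ylag, dy))
    beta = sxy / sxx                       # coefficient on y_{t-1}
    alpha = my - beta * mx                 # intercept
    resid = [d - alpha - beta * x for d, x in zip(dy, ylag)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return beta / (s2 / sxx) ** 0.5        # t-statistic on beta

# toy stationary AR(1) series with strong mean reversion (rho = 0.2)
rng = random.Random(1)
y = [0.0]
for _ in range(200):
    y.append(0.2 * y[-1] + rng.gauss(0, 1))
t_stat = df_t_statistic(y)  # strongly negative: reject the unit root
```

For a true random walk the same statistic hovers near zero, which is why the null of a unit root cannot be rejected for such series.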

Relevance: 30.00%
Publisher:
Abstract:

Linkage and association studies are major analytical tools for searching for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs) and the rapid progress of high-throughput genotyping technologies, together with the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. To avoid inflating the type I error rate in genome-wide linkage and association studies, the significance level must be adjusted for each independent linkage and/or association test, and this has led to suggested genome-wide significance cut-offs as low as 5 x 10^-7. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods, so new statistics with high power are urgently needed. This dissertation proposes and explores a class of novel test statistics that can be used with both population-based and family-based genetic data by employing a completely new strategy: nonlinear transformations of the sample means are used to construct test statistics for linkage and association studies. Extensive simulation studies illustrate the properties of the nonlinear test statistics. Power calculations are performed using both analytical and empirical methods. Finally, real data sets are analyzed with the nonlinear test statistics. Results show that the nonlinear test statistics have correct type I error rates, and most of the studied nonlinear test statistics have higher power than the standard chi-square test. This dissertation introduces a new idea for designing novel test statistics with high power and might open new ways of mapping susceptibility genes for complex diseases.

Relevance: 30.00%
Publisher:
Abstract:

This study is a retrospective longitudinal study at Texas Children's Hospital (TCH), a 350-bed tertiary-level pediatric teaching hospital in Houston, Texas, covering the period 1990 to 2006. It measured the incidence and trends of positive pre-employment drug tests among new job applicants at TCH. Over the study period, 16,219 job applicants underwent pre-employment drug screening at TCH. Of these, 330 applicants (2%) tested positive on both the EMIT and GC/MS. After review by the medical review officer, the number of true positive applicants decreased to 126 (0.78%). The annual overall positive drug test incidence rate was highest in 2002 (14.71 per 1,000 tests) and lowest in 2004 (3.17 per 1,000 tests). Despite the marked increase in 2002, the overall incidence tended to decrease over the 15-year study period. Incidence rates and trends for other illegal drugs are further discussed in the study; in general, these incidence rates also declined over the study period. In addition, we found that positive drug tests were more common in females than in males (55.5% versus 44.4%).