Abstract:
Long-span bridges are flexible and therefore sensitive to wind-induced effects. One way to improve the stability of long-span bridges against flutter is to use cross-sections with twin side-by-side decks. However, this can amplify responses due to vortex-induced oscillations. Wind tunnel testing is a well-established practice for evaluating the stability of bridges against wind loads. To study the response of the prototype in the laboratory, dynamic similarity requirements should be satisfied. One parameter that is normally violated in wind tunnel testing is the Reynolds number. In this dissertation, the effects of Reynolds number on the aerodynamics of a double-deck bridge were evaluated by measuring fluctuating forces on a motionless sectional model of the bridge at different wind speeds representing different Reynolds-number regimes. The efficacy of vortex-mitigation devices was also evaluated across these regimes. Another parameter that is frequently ignored in wind tunnel studies is the correct simulation of turbulence characteristics. Because of the difficulty of simulating flow with a large turbulence length scale on a sectional model, wind tunnel tests are often performed in smooth flow as a conservative approach. The validity of the simplifying assumptions in the calculation of buffeting loads, such as the direct impact of turbulence, needs to be verified for twin-deck bridges. The effects of turbulence characteristics were investigated by testing sectional models of a twin-deck bridge under two different turbulent flow conditions. Not only do the flow properties play an important role in the aerodynamic response of the bridge; the geometry of the cross-section is also expected to have significant effects.
In this dissertation, the effects of deck details, such as the width of the gap between the twin decks and the traffic barriers, on the aerodynamic characteristics of a twin-deck bridge were investigated, particularly the vortex-shedding forces, with the aim of clarifying how these shape details can alter the wind-induced responses. Finally, a summary of the issues involved in designing a dynamic test rig for high-Reynolds-number tests is given, using the studied cross-section as an example.
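The Reynolds-number violation discussed above follows directly from the definition Re = UL/ν: at geometric scale, the model Reynolds number drops by the scale factor unless speed or viscosity is changed. A minimal sketch with illustrative dimensions (not the dissertation's actual values):

```python
# Reynolds number Re = U * L / nu. The values below are illustrative only,
# not taken from the dissertation.
NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s

def reynolds(wind_speed, length, nu=NU_AIR):
    """Reynolds number for a characteristic length (e.g. deck width)."""
    return wind_speed * length / nu

re_prototype = reynolds(20.0, 30.0)     # assumed 30 m deck width, 20 m/s wind
re_model = reynolds(20.0, 30.0 / 50.0)  # 1:50 sectional model, same speed

print(f"prototype Re = {re_prototype:.1e}")  # 4.0e+07
print(f"model Re     = {re_model:.1e}")      # 8.0e+05: a factor of 50 lower
```

The factor-of-50 gap is exactly the geometric scale, which is why sectional-model tests can sit in a different Reynolds regime than the prototype.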
Abstract:
Background: Obstructive airway diseases (OADs) are among the leading causes of morbidity and mortality worldwide. Shortness of breath (SOB) is the main symptom associated with OADs. International guidelines from the Global Initiative for Chronic Obstructive Lung Disease (GOLD) and the Global Initiative for Asthma (GINA) have recommended spirometry as an indispensable tool for the diagnosis of asthma and chronic obstructive pulmonary disease (COPD), but spirometry is rarely used in family practice. Simple and reliable diagnostic tools are necessary for screening community patients with onset of OADs for timely management. Purpose: This thesis examined the screening utility of the PiKo-6 forced expiratory volume in one second (pFEV₁), in six seconds (pFEV₆), and the pRatio (pFEV₁/pFEV₆) in SOB patients for OADs in community pharmacy settings. FEV₆ has recently been suggested as an excellent surrogate for forced vital capacity (FVC), which requires maximal exhalation of the lungs. Methods: Patients with SOB symptoms who had been prescribed pulmonary inhalers by their family physicians were recruited via community pharmacies. Trained pharmacists collected two PiKo-6 tests to assess the repeatability of the PiKo-6 device. All patients performed laboratory spirometry (FEV₁, FVC and FEV₁/FVC) to obtain a physician diagnosis of their OADs. The results of the PiKo-6 spirometer and the laboratory spirometer were compared. In addition, the PiKo-6 pRatio and the laboratory FEV₁/FVC were assessed against physician-diagnosed COPD. Results: Sixty-three patients volunteered to perform PiKo-6 spirometry. Of these, 52.4% were men (age 53.9 ± 15.3 years; BMI 31.9 ± 7.40 kg/m²). Repeated testing with pFEV₁, pFEV₆ and pRatio correlated significantly (within-test correlations r = 0.835, 0.872 and 0.664; all p ≤ 0.05).
In addition, pFEV₁, pFEV₆ and pRatio correlated significantly with FEV₁, FVC and FEV₁/FVC, respectively (between-test correlations r = 0.630, 0.660 and 0.580; all p ≤ 0.05). The cut-off value of pRatio yielding the greatest sum of sensitivity and specificity for physician-diagnosed COPD was <0.80; the corresponding sensitivity and specificity were 84% and 50%, respectively. Conclusions: The portable PiKo-6 correlates moderately well with standard spirometry when delivered by community pharmacists to patients with OADs. The PiKo-6 spirometer may play a role in community pharmacies in screening patients suspected of having an OAD who may benefit from early physician diagnosis and appropriate management.
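The quoted cut-off is the value maximising sensitivity + specificity, i.e. Youden's J statistic. A minimal sketch of that scan over candidate cut-offs, using made-up (pRatio, diagnosis) pairs rather than the study's data:

```python
# Hypothetical (pRatio, physician-diagnosed COPD) pairs; not the study data.
data = [
    (0.62, True), (0.71, True), (0.78, True), (0.83, True), (0.58, True),
    (0.85, False), (0.91, False), (0.76, False), (0.88, False), (0.94, False),
]

def sens_spec(cutoff, pairs):
    """Sensitivity and specificity when pRatio < cutoff flags COPD."""
    tp = sum(1 for r, d in pairs if d and r < cutoff)
    fn = sum(1 for r, d in pairs if d and r >= cutoff)
    tn = sum(1 for r, d in pairs if not d and r >= cutoff)
    fp = sum(1 for r, d in pairs if not d and r < cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Scan the observed values as candidate cut-offs; keep the one with the
# greatest sensitivity + specificity (Youden's J).
best = max((r for r, _ in data), key=lambda c: sum(sens_spec(c, data)))
print(best, sens_spec(best, data))  # 0.85 (1.0, 0.8)
```

In practice the candidate grid and the trade-off between sensitivity and specificity would be chosen with the screening purpose in mind (for screening, higher sensitivity is usually favoured).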
Abstract:
Omnibus tests of significance in contingency tables use statistics of the chi-square type. When the null hypothesis is rejected, residual analyses are conducted to identify cells in which observed frequencies differ significantly from expected frequencies. Residual analyses are thus conditioned on a significant omnibus test. Conditional approaches have been shown to substantially alter type I error rates in cases involving t tests conditional on the results of a test of equality of variances, or tests of regression coefficients conditional on the results of tests of heteroscedasticity. We show that residual analyses conditional on a significant omnibus test are also affected by this problem, yielding type I error rates that can be up to 6 times larger than nominal rates, depending on the size of the table and the form of the marginal distributions. We explored several unconditional approaches in search of a method that maintains the nominal type I error rate and found that a bootstrap correction for multiple testing achieves this goal. The validity of this approach is documented for two-way contingency tables in the contexts of tests of independence, tests of homogeneity, and fitting psychometric functions. Computer code in MATLAB and R to conduct these analyses is provided as Supplementary Material.
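An unconditional approach of the kind explored here can be sketched as a parametric bootstrap of the maximum absolute standardized residual, whose reference distribution accounts for all cells simultaneously and thereby corrects for multiple testing. The table and implementation details below are illustrative assumptions, not the paper's own code (which is supplied in MATLAB and R):

```python
import random

random.seed(1)

observed = [[30, 10], [20, 40]]  # hypothetical 2x2 contingency table

def expected(table):
    """Expected counts under independence of rows and columns."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    return [[ri * cj / n for cj in cols] for ri in rows]

def max_abs_residual(table):
    """Largest absolute Pearson standardized residual, (O - E) / sqrt(E)."""
    exp = expected(table)
    return max(abs(table[i][j] - exp[i][j]) / exp[i][j] ** 0.5
               for i in range(len(table)) for j in range(len(table[0])))

def sample_table(probs, n, nrows, ncols):
    """One multinomial table of n counts drawn from cell probabilities."""
    flat = [0] * (nrows * ncols)
    for cell in random.choices(range(nrows * ncols), weights=probs, k=n):
        flat[cell] += 1
    return [flat[i * ncols:(i + 1) * ncols] for i in range(nrows)]

n = sum(map(sum, observed))
probs = [e / n for row in expected(observed) for e in row]
stat = max_abs_residual(observed)

# Reference distribution of the maximum residual under independence:
# comparing the observed maximum against it controls the error rate
# across all cells at once, without conditioning on the omnibus test.
boot = [max_abs_residual(sample_table(probs, n, 2, 2)) for _ in range(2000)]
p_value = sum(b >= stat for b in boot) / len(boot)
print(stat, p_value)
```

A small bootstrap p-value indicates that at least one cell deviates more than independence alone would produce.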
Abstract:
Knowledge-based radiation treatment is an emerging concept in radiotherapy. It mainly refers to techniques that can guide or automate treatment planning in the clinic by learning from prior knowledge. Different models have been developed to realize it, one of which was proposed by Yuan et al. at Duke for lung IMRT planning. This model can automatically determine both the beam configuration and the optimization objectives with non-coplanar beams, based on patient-specific anatomical information. Although plans automatically generated by this model demonstrate equivalent or better dosimetric quality compared to clinically approved plans, its validity and generality are limited by the empirical assignment of a coefficient called the angle spread constraint, defined in the beam efficiency index used for beam ranking. To eliminate these limitations, a systematic study of this coefficient is needed to acquire evidence for its optimal value.
To achieve this purpose, eleven lung cancer patients with complex tumor shapes whose clinically approved plans used non-coplanar beams were retrospectively studied within the framework of the automatic lung IMRT treatment algorithm. The primary and boost plans used in three patients were treated as different cases because of their different target sizes and shapes. A total of 14 lung cases were thus re-planned using the knowledge-based automatic lung IMRT planning algorithm, varying the angle spread constraint from 0 to 1 in increments of 0.2. A modified beam angle efficiency index was adopted to navigate the beam selection. Great effort was made to ensure that the quality of the plans associated with every angle spread constraint was as good as possible. Important dosimetric parameters for the PTV and OARs, quantitatively reflecting plan quality, were extracted from the DVHs and analyzed as a function of the angle spread constraint for each case. Comparisons of these parameters between clinical plans and model-based plans were evaluated by two-sample Student's t-tests, and regression analysis was performed on a composite index, built on the percentage errors between dosimetric parameters in the model-based plans and those in the clinical plans, as a function of the angle spread constraint.
Results show that model-based plans generally have equivalent or better quality than clinically approved plans, both qualitatively and quantitatively. All dosimetric parameters except those for the lungs in the automatically generated plans are statistically better than or comparable to those in the clinical plans. On average, a reduction of more than 15% in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart is observed, alongside increases of 8% and 3% in V5 and V20 for the lungs, respectively. The intra-plan comparison among model-based plans demonstrates that plan quality does not change much for angle spread constraints larger than 0.4. Further examination of the variation of the composite index as a function of the angle spread constraint shows that 0.6 is the optimal value, resulting in statistically the best achievable plans.
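The conformity and homogeneity indices compared above are standard DVH-derived quantities. A minimal sketch of one common definition of each (the ICRU 83 homogeneity index and an RTOG-style conformity index), computed on made-up voxel doses rather than any data from this study:

```python
# Hypothetical PTV voxel doses in Gy; not data from this study.
ptv = [58.1, 59.0, 59.4, 59.8, 60.0, 60.1, 60.3, 60.5, 60.8, 61.6]
body = ptv + [55.0, 57.0, 60.2]  # PTV voxels plus some surrounding tissue

def d_percent(doses, pct):
    """D_pct: dose received by at least pct% of the volume
    (the (100 - pct)th percentile of the voxel-dose distribution)."""
    s = sorted(doses)
    return s[round((100 - pct) / 100 * (len(s) - 1))]

def homogeneity_index(doses):
    """ICRU 83 homogeneity index (D2 - D98) / D50; lower is better."""
    return (d_percent(doses, 2) - d_percent(doses, 98)) / d_percent(doses, 50)

def conformity_index(target, body_doses, prescription):
    """RTOG-style conformity index: prescription isodose volume over target
    volume, approximated here by voxel counts (ideal value is 1)."""
    return sum(1 for d in body_doses if d >= prescription) / len(target)

print(round(homogeneity_index(ptv), 3))   # 0.058
print(conformity_index(ptv, body, 60.0))  # 0.7
```

With only ten voxels the percentile lookup is crude; a clinical DVH would use the full voxel grid, but the definitions are the same.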
Abstract:
Background: It is important to assess the clinical competence of nursing students to gauge their educational needs. Competence can be measured by self-assessment tools; however, Anema and McCoy (2010) contend that currently available measures should be further psychometrically tested.
Aim: To test the psychometric properties of Nursing Competencies Questionnaire (NCQ) and Self-Efficacy in Clinical Performance (SECP) clinical competence scales.
Method: A non-randomly selected sample of n=248 2nd year nursing students completed NCQ, SECP and demographic questionnaires (June and September 2013). Mokken Scaling Analysis (MSA) was used to investigate structural validity and scale properties; convergent and discriminant validity and reliability were also tested for each scale.
Results: MSA identified that the NCQ is a unidimensional scale with a strong scalability coefficient (Hs = 0.581) but limited item rankability (HT = 0.367). For the SECP, MSA suggested that the scale could potentially be split into two unidimensional scales (SECP28 and SECP7), each with good/reasonable scalability properties as summed scales but negligible/very limited scale rankability (SECP28: Hs = 0.55, HT = 0.211; SECP7: Hs = 0.61, HT = 0.049). Analysis of between-cohort differences and NCQ/SECP scores produced evidence of discriminant and convergent validity; good internal reliability was also found: NCQ α = 0.93, SECP28 α = 0.96 and SECP7 α = 0.89.
Discussion: In line with previous research, further evidence of the NCQ's reliability and validity was demonstrated. However, as the SECP findings are new and the sample is small with reference to Straat and colleagues (2014), the SECP results should be interpreted with caution and verified in a second sample.
Conclusions: Measurement of perceived self-competence could start early in a nursing programme to support students’ development of clinical competence. Further testing of the SECP scale with larger nursing student samples from different programme years is indicated.
References:
Anema, MG and McCoy, JK (2010) Competency-Based Nursing Education: Guide to Achieving Outstanding Learner Outcomes. New York: Springer.
Straat, JH, van der Ark, LA and Sijtsma, K (2014) Minimum Sample Size Requirements for Mokken Scale Analysis. Educational and Psychological Measurement 74(5), 809-822.
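The internal-reliability figures reported above are Cronbach's α, computed as α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch on made-up item responses (not the study's data):

```python
# Hypothetical item responses (rows = students, columns = scale items);
# illustrative only, not data from this study.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))
    totals = [sum(r) for r in rows]
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

print(round(cronbach_alpha(responses), 3))  # 0.914
```

Values around 0.9 and above, as reported for the NCQ and SECP28, indicate strong internal consistency (and can even suggest item redundancy).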
Abstract:
Bid opening in e-auctions is efficient when a homomorphic secret sharing function is employed to seal the bids and homomorphic secret reconstruction is employed to open them. However, this high efficiency rests on an assumption: that the bids are valid (e.g., within a specified range). An undetected invalid bid can compromise the correctness and fairness of the auction. Unfortunately, validity verification of the bids is ignored in auction schemes employing homomorphic secret sharing (called homomorphic auctions in this paper). In this paper, an attack against homomorphic auctions in the absence of a bid validity check is presented and a necessary bid validity check mechanism is proposed. A batch cryptographic technique is then introduced and applied to improve the efficiency of the bid validity check.
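The attack surface described here can be seen in a toy version of additive homomorphic secret sharing: opening reconstructs only an aggregate, so nothing in the arithmetic itself flags an out-of-range bid. This sketch sums raw bids for simplicity (real homomorphic auction schemes encode bids per price level); the modulus, the validity bound, and all names are illustrative assumptions:

```python
import random

random.seed(0)
P = 2_147_483_647  # public prime modulus (illustrative)
BID_LIMIT = 1000   # bids are only valid below this bound (illustrative rule)

def share(bid, holders=3, p=P):
    """Split a bid into additive shares mod p; a single share reveals nothing."""
    parts = [random.randrange(p) for _ in range(holders - 1)]
    parts.append((bid - sum(parts)) % p)
    return parts

def open_total(all_shares, p=P):
    """Homomorphic opening: each holder sums its shares across bidders, then
    the per-holder sums are combined. Individual bids are never reconstructed."""
    return sum(sum(col) % p for col in zip(*all_shares)) % p

valid = [120, 340]
assert open_total([share(b) for b in valid]) == sum(valid)

# The attack: an invalid bid far above BID_LIMIT passes through the
# homomorphic reconstruction undetected, corrupting the opened total.
tampered = valid + [10 ** 9]
print(open_total([share(b) for b in tampered]))  # 1000000460
```

Because the homomorphism is oblivious to each bid's value, a separate validity proof (e.g. a range proof checked before opening, as the paper proposes) is what prevents this corruption.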
Abstract:
Process modeling can be regarded as the currently most popular form of conceptual modeling. Research evidence illustrates how process modeling is applied across the different information system life cycle phases for a range of applications, such as configuration of Enterprise Systems, workflow management, or software development. However, a detailed discussion of the critical factors of process model quality is still missing. This paper proposes a framework consisting of six quality factors, derived from a comprehensive literature review. It then presents a case study of a utility provider that designed various business process models for the selection of an Enterprise System. The paper summarizes potential means of conducting a successful process modeling initiative and evaluates the described modeling approach within the Guidelines of Modeling (GoM) framework. An outlook presents the potential lessons learned and concludes with insights into the next phases of this study.