999 results for Chi Square.


Relevance: 60.00%

Abstract:

Migraine with aura (MA) is a subtype of typical migraine; it also encompasses a rare, severe subtype, Familial Hemiplegic Migraine (FHM), with several known genetic loci. The type 2 FHM (FHM-2) susceptibility locus maps to chromosome 1q23, and mutations in the ATP1A2 gene at this site have recently been implicated. We have previously provided evidence of linkage of typical migraine (predominantly MA) to microsatellite markers on chromosome 1, in the 1q31 and 1q23 regions. In this study, we have undertaken a large genomic investigation of candidate genes that lie within the chromosome 1q23 and 1q31 regions using an association analysis approach. Methods: We genotyped a large case-control population (243 unrelated Caucasian migraineurs versus 243 controls), examining a set of 5 single nucleotide polymorphisms (SNPs) and the Fas ligand dinucleotide repeat marker located within the chromosome 1q23 and 1q31 regions. Results: Several genes were studied, including membrane protein (ATP1A4 and FasL), cytoplasmic glycoprotein (CASQ1), potassium channel (KCNJ9 and KCNJ10) and calcium channel (CACNA1E) genes, in 243 migraineurs (85% with MA and 15% with migraine without aura (MO)) and 243 matched controls. After correction for multiple testing, chi-square results showed non-significant P values (P > 0.008) across all SNPs (and a CA repeat) tested in these genes; however, the KCNJ10 marker gave a suggestive result (P = 0.02) that may be worth exploring further in other populations. Conclusion: These results do not show a significant role for the tested candidate gene variants and do not support the hypothesis that a common chromosome 1 defective gene influences both FHM and the more common forms of migraine.
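As an illustration of the kind of chi-square case-control comparison and multiple-testing correction described above, the following is a minimal sketch in Python; the allele counts are hypothetical rather than the study's data, and the corrected threshold simply reflects six markers (0.05 / 6 ≈ 0.008).

```python
# Minimal sketch: chi-square test of allele counts for one marker in a
# case-control sample, with a Bonferroni threshold for six markers.
# The counts below are hypothetical, not the study's data.
from scipy.stats import chi2_contingency

allele_counts = [
    [210, 276],  # migraineurs: counts of allele A and allele B (hypothetical)
    [190, 296],  # controls:    counts of allele A and allele B (hypothetical)
]

chi2, p, dof, expected = chi2_contingency(allele_counts)

n_markers = 6                       # 5 SNPs + 1 dinucleotide repeat
alpha_corrected = 0.05 / n_markers  # ~0.008, matching the corrected cut-off above

print(f"chi2 = {chi2:.2f}, p = {p:.3f}, "
      f"significant after correction: {p < alpha_corrected}")
```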

Relevance: 60.00%

Abstract:

OBJECTIVE: To investigate the role of the dopamine receptor genes, DRD1, DRD3, and DRD5 in the pathogenesis of migraine. BACKGROUND: Migraine is a chronic debilitating disorder affecting approximately 12% of the white population. The disease shows strong familial aggregation and presumably has a genetic basis, but at present, the type and number of genes involved is unclear. The study of candidate genes can prove useful in the identification of genes involved in complex diseases such as migraine, especially if the contribution of the gene to phenotypic expression is minor. Genes coding for proteins involved in dopamine metabolism have been implicated in a number of neurologic conditions and may play a contributory role in migraine. Hence, genes that code for enzymes and receptors modulating dopaminergic activity are good candidates for investigation of the molecular genetic basis of migraine. METHODS: We tested 275 migraineurs and 275 age- and sex-matched individuals free of migraine. Genotypic results were determined by restriction endonuclease digestion of polymerase chain reaction products to detect DRD1 and DRD3 alleles and by Genescan analysis after polymerase chain reaction using fluorescently labelled oligonucleotide primers for the DRD5 marker. RESULTS: Results of chi-square statistical analyses indicated that the allele distribution for migraine cases compared to controls was not significantly different for any of the three tested gene markers (chi2 = 0.1, P =.74 for DRD1; chi2 = 1.8, P =.18 for DRD3; and chi2 = 20.3, P =.08 for DRD5). CONCLUSIONS: These findings offer no evidence for allelic association between the tested dopamine receptor gene polymorphisms and the more prevalent forms of migraine and, therefore, do not support a role for these genes in the pathogenesis of the disorder.

Relevance: 60.00%

Abstract:

Background: Certain genes from the glutathione S-transferase superfamily have been associated with several cancer types. The objective of this study was to determine whether alleles of the glutathione S-transferase zeta 1 (GSTZ1) gene are associated with the development of sporadic breast cancer. Methods: DNA samples obtained from a Caucasian population affected by breast cancer and a control population, matched for age and ethnicity, were genotyped for a polymorphism of the GSTZ1 gene. After PCR, alleles were identified by restriction enzyme digestion and the results analysed by chi-square and CLUMP analysis. Results: Chi-square analysis gave a χ2 value of 4.77 (three degrees of freedom) with P = 0.19, and CLUMP analysis gave a T1 value of 9.02 with P = 0.45 for genotype frequencies and a T1 value of 4.77 with P = 0.19 for allele frequencies. Conclusion: Statistical analysis indicates that there is no association with the GSTZ1 variant and hence that the gene does not appear to play a significant role in the development of sporadic breast cancer.

Relevance: 60.00%

Abstract:

1. The low density lipoprotein receptor is an important regulator of serum cholesterol, which may have implications for the development of both hypertension and obesity. In this study, genotypes for a low density lipoprotein receptor gene (LDLR) dinucleotide polymorphism were determined in both lean and obese normotensive populations. 2. In previous cross-sectional association studies, an ApaLI and a HincII polymorphism for LDLR were shown to be associated with obesity in essential hypertensives. However, these polymorphisms did not show an association with obesity in normotensives. 3. In contrast, this study reports that preliminary results for an LDLR microsatellite marker, located more towards the 3' end of the gene, show a significant association with obesity in the normotensive population studied. These results indicate that LDLR could play an important role in the development of obesity, which might be independent of hypertension.

Relevance: 60.00%

Abstract:

Background and significance: Nurses' job dissatisfaction is associated with negative nursing and patient outcomes. One of the most powerful reasons for nurses to stay in an organisation is satisfaction with leadership. However, nurses are frequently promoted to leadership positions without appropriate preparation for the role. Although a number of leadership programs have been described, none have been tested for effectiveness using a randomised controlled trial methodology. Aims: The aims of this research were to develop an evidence-based leadership program and to test its effectiveness on nurse unit managers' (NUMs') and nursing staff's (NS's) job satisfaction, and on the leader behaviour scores of nurse unit managers. Methods: First, the study used a comprehensive literature review to examine the evidence on job satisfaction, leadership and front-line manager competencies. From this evidence a summary of leadership practices was developed to construct a two-component leadership model. The components of this model were then combined with the evidence distilled from previous leadership development programs to develop a Leadership Development Program (LDP). This evidence informed the program's design, its contents, teaching strategies and learning environment. Central to the LDP were the evidence-based leadership practices associated with increasing nurses' job satisfaction. A randomised controlled trial (RCT) design was employed to test the effectiveness of the LDP. An RCT is one of the most powerful tools of research, and its use makes this study unique, as an RCT had never previously been used to evaluate a leadership program for front-line nurse managers. Thirty-nine consenting nurse unit managers from a large tertiary hospital were randomly allocated to receive either the leadership program or only the program's written information about leadership. Demographic baseline data were collected from participants in the NUM groups and the nursing staff who reported to them. Validated questionnaires measuring job satisfaction and leader behaviours were administered to the nurse unit managers and to the NS at baseline, at three months after the commencement of the intervention and at six months after the commencement of the intervention. Independent and paired t-tests were used to analyse continuous outcome variables and chi-square tests were used for categorical data. Results: The study found that the nurse unit managers' overall job satisfaction score was higher in the intervention group than in the control group at 3 months (p = 0.016) and at 6 months (p = 0.027) post commencement of the intervention. Similarly, at 3-month testing, mean scores in the intervention group were higher in five of the six "positive" sub-categories of the leader behaviour scale when compared to the control group, with a significant difference in one sub-category, effectiveness (p = 0.015). No differences were observed in leadership behaviour scores between groups by 6 months post commencement of the intervention. Over time, at 3-month and 6-month testing, there were significant increases in four transformational leader behaviour scores and in one positive transactional leader behaviour score in the intervention group. Over time at 3-month testing there were significant increases in the three leader behaviour outcome scores; however, at 6-month testing only one of these leader behaviour outcome scores remained significantly increased.
Job satisfaction scores were not significantly different between the NS groups at three months and at six months post commencement of the intervention. However, over time within the intervention group, at 6-month testing there was a significant increase in the job satisfaction scores of NS. There were no significant increases in NUM leader behaviour scores in the intervention group as rated by the nursing staff who reported to them. Over time, at 3-month testing, NS rated nurse unit managers' leader behaviour scores significantly lower in two leader behaviours and two leader behaviour outcome scores. At 6-month testing, over time, one leader behaviour score was rated significantly lower and the non-transactional leader behaviour was rated significantly higher. Discussion: The study represents the first attempt to test the effectiveness of a leadership development program (LDP) for nurse unit managers using an RCT. The program's design, contents, teaching strategies and learning environment were based on a summary of the literature. The overall improvement in role satisfaction was sustained for at least 6 months post intervention. The study's results may reflect the program's evidence-based approach to developing the LDP, which increased the nurse unit managers' confidence in their role and thereby their job satisfaction. Two other factors possibly contributed to nurse unit managers' increased job satisfaction scores: the program's teaching strategies, which included the involvement of the executive nursing team of the hospital, and the fact that the LDP provided recognition of the importance of the NUM role within the hospital. Consequently, participating in the program may have led to nurse unit managers feeling valued and rewarded for their service, and hence more satisfied. Leadership behaviours remaining unchanged between groups at the 6-month data collection point may indicate that the LDP needs to be conducted over a longer period. This is suggested because, within the intervention group, there were significant increases in self-reported leader behaviours over time at 3 and 6 months. The lack of significant changes in leader behaviour scores between groups may equally signify that leader behaviours require different interventions to achieve change. Nursing staff results suggest that the LDP's design needs to consider involving NS in the program's aims and progress from the outset. It is also possible that including regular feedback from NS to the nurse unit managers during the LDP may alter NS's job satisfaction and their perception of nurse unit managers' leader behaviours. Conclusion/Implications: This study highlights the value of providing an evidence-based leadership program to nurse unit managers to increase their job satisfaction. The evidence-based leadership program increased job satisfaction, but its effect on leadership behaviour was only seen over time. Further research is required to test interventions that attempt to change leader behaviours. Further research on NS's job satisfaction is also required to test the indirect effects of LDPs on NS whose nurse unit managers participate in such programs.
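As a minimal, hypothetical sketch of the analyses named above (independent and paired t-tests for continuous outcomes, chi-square for categorical data), the following uses simulated scores rather than the trial's data; the group sizes and variable names are illustrative only.

```python
# Minimal sketch of the trial's analysis approach, on simulated data.
import numpy as np
from scipy.stats import ttest_ind, ttest_rel, chi2_contingency

rng = np.random.default_rng(0)
intervention_3m = rng.normal(5.2, 0.8, 20)        # hypothetical 3-month satisfaction scores
control_3m = rng.normal(4.6, 0.8, 19)
intervention_baseline = rng.normal(4.5, 0.8, 20)  # hypothetical baseline scores, same group

# Between-group comparison at one time point (independent t-test).
t_bg, p_bg = ttest_ind(intervention_3m, control_3m)

# Within-group change over time (paired t-test).
t_wg, p_wg = ttest_rel(intervention_3m, intervention_baseline)

# Chi-square test for a categorical baseline variable (hypothetical counts).
counts = [[12, 8],   # intervention: full-time, part-time
          [10, 9]]   # control:      full-time, part-time
chi2, p_cat, dof, _ = chi2_contingency(counts)

print(p_bg, p_wg, p_cat)
```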

Relevance: 60.00%

Abstract:

Background & Aims: Access to sufficient amounts of safe and culturally-acceptable foods is a fundamental human right. Food security exists when all people, at all times, have physical, social, and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Food insecurity therefore occurs when the availability of, or access to, sufficient amounts of nutritionally-adequate, culturally-appropriate and safe foods, or the ability to acquire such foods in socially-acceptable ways, is limited. Food insecurity may result in significant adverse effects for the individual, and these outcomes may vary between adults and children. Among adults, food insecurity may be associated with overweight or obesity, poorer self-rated general health, depression, increased health-care utilisation and dietary intakes less consistent with national recommendations. Among children, food insecurity may result in poorer self- or parent-reported general health, behavioural problems, lower levels of academic achievement and poor social outcomes. The majority of research investigating the potential correlates of food insecurity has been undertaken in the United States (US), where regular national screening for food insecurity is undertaken using a comprehensive multi-item measure. In Australia, screening for food insecurity takes place on a three-yearly basis via a crude, single-item measure included in the National Health Survey (NHS). This measure has been shown to underestimate the prevalence of food insecurity by 5%. From 1995 to 2004, the prevalence of food insecurity among the Australian population remained stable at 5%. Due to the perceived low prevalence of this issue, screening for food insecurity was not undertaken in the most recent NHS. Furthermore, there are few Australian studies investigating the potential determinants of food insecurity and none investigating potential outcomes among adults and children. This study aimed to examine these issues by a) investigating the prevalence of food insecurity among households residing in disadvantaged urban areas and comparing prevalence rates estimated by the more comprehensive 18-item and 6-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM) to those estimated by the current single-item measure used for surveillance in Australia, and b) investigating the potential determinants and outcomes of food insecurity. Methods: A comprehensive literature review was undertaken to investigate the potential determinants and consequences of food insecurity among developed countries. This was followed by a cross-sectional study in which 1000 households from the most disadvantaged 5% of Brisbane areas were sampled and data collected via a mail-based survey (final response rate = 53%, n = 505). Data were collected for food security status, sociodemographic characteristics (household income, education, age, gender, employment status, housing tenure and living arrangements), fruit and vegetable intakes, meat and take-away consumption, presence of depressive symptoms, presence of chronic disease and body mass index (BMI) among adults. Among children, data pertaining to BMI, parent-reported general health, days away from school and activities, and behavioural problems were collected.
Rasch analysis was used to investigate the psychometric properties of the 18-, 10- and 6-item adaptations of the USDA-FSSM, and McNemar's test was used to investigate the difference in the prevalence of food insecurity as measured by these three adaptations compared to the current single-item measure used in Australia. Chi-square tests and logistic regression were used to investigate the differences in dietary and health outcomes among adults and health and behavioural outcomes among children. Results were adjusted for equivalised household income and, where necessary, for indigenous status, education and family type. Results: Overall, 25% of households in these urbanised, disadvantaged areas reported experiencing food insecurity; this increased to 34% when only households with children were analysed. The current reliance on a single-item measure to screen for food insecurity may underestimate the true burden among the Australian population, as this measure was shown to significantly underestimate the prevalence of food insecurity by five percentage points. Internationally, major potential determinants of food insecurity included poverty and indicators of poverty, such as low income, unemployment and lower levels of education. Ethnicity, age, transportation and cooking and financial skills were also found to be potential determinants of food insecurity. Among Australian adults in disadvantaged urban areas, food insecurity was associated with a three-fold increase in experiencing poorer self-rated general health and a two-to-five-fold increase in the risk of depression. Furthermore, adults from food-insecure households were two to three times more likely to have seen a general practitioner and/or been admitted to hospital within the previous six months, compared to their food-secure counterparts. Weight status and intakes of fruits, vegetables and meat were not associated with food insecurity. Among Australian households with children, those in the lowest tertile for income were over 16 times more likely to experience food insecurity compared to those in the highest tertile. After adjustment for equivalised household income, children from food-insecure households were three times more likely to have missed days of school or other activities. Furthermore, children from food-insecure households displayed a two-fold increase in atypical emotions and behavioural difficulties. Conclusions: Food insecurity is an important public health issue and may contribute to the burden on the health care system through its associations with depression and increased health care utilisation among adults and behavioural and emotional problems among children. Current efforts to monitor food insecurity in Australia do not occur frequently and use a tool that may underestimate the prevalence of food insecurity. Efforts should be made to improve the regularity of screening for food insecurity via the use of a more accurate screening measure. Most of the current strategies that aim to alleviate food insecurity do not sufficiently address the issue of insufficient financial resources for acquiring food, a factor which is an important determinant of food insecurity. Programs to address this issue should be developed in collaboration with groups at higher risk of developing food insecurity and should incorporate strategies to address the issue of low income as a barrier to food acquisition.
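As a minimal sketch of the McNemar comparison mentioned above (the same households classified as food insecure or not by the single-item measure and by an FSSM adaptation), the following uses hypothetical paired counts; the study's actual classification table is not reproduced.

```python
# Minimal sketch: McNemar's test on hypothetical paired classifications of the
# same households by two food-insecurity measures.
from statsmodels.stats.contingency_tables import mcnemar

#                          FSSM: insecure   FSSM: secure
paired_counts = [[100,               2],   # single-item: insecure
                 [ 26,             377]]   # single-item: secure

result = mcnemar(paired_counts, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.4f}")
```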

Relevance: 60.00%

Abstract:

Objective: Modern series from high-volume esophageal centers report an approximate 40% 5-year survival in patients treated with curative intent and postoperative mortality rates of less than 4%. An objective analysis of factors that underpin current benchmarks within high-volume centers has not been performed. Methods: Three time periods were studied, 1990 to 1998 (period 1), 1999 to 2003 (period 2), and 2004 to 2008 (period 3), in which 471, 254, and 342 patients, respectively, with esophageal cancer were treated with curative intent. All data were prospectively recorded, and staging, pathology, treatment, operative, and oncologic outcomes were compared. Results: Five-year disease-specific survival was 28%, 35%, and 44%, and in-hospital postoperative mortality was 6.7%, 4.4%, and 1.7% for periods 1 to 3, respectively (P < .001). Period 3, compared with periods 1 and 2, respectively, was associated with significantly (P < .001) more early tumors (17% vs 4% and 6%), higher nodal yields (median 22 vs 11 and 18), and a higher R0 rate in surgically treated patients (81% vs 73% and 75%). The use of multimodal therapy increased (P < .05) across time periods. By multivariate analysis, age, T stage, N stage, vascular invasion, R status, and time period were significantly (P < .0001) associated with outcome. Conclusions: Improved survival with localized esophageal cancer in the modern era may reflect an increase of early tumors and optimized staging. Important surgical and pathologic standards, including a higher R0 resection rate and nodal yields, and lower postoperative mortality, were also observed. Copyright © 2012 by The American Association for Thoracic Surgery.

Relevance: 60.00%

Abstract:

Purpose: The role played by the innate immune system in determining survival from non-small-cell lung cancer (NSCLC) is unclear. The aim of this study was to investigate the prognostic significance of macrophage and mast-cell infiltration in NSCLC. Methods: We used immunohistochemistry to identify tryptase+ mast cells and CD68+ macrophages in the tumor stroma and tumor islets in 175 patients with surgically resected NSCLC. Results: Macrophages were detected in both the tumor stroma and islets in all patients. Mast cells were detected in the stroma and islets in 99.4% and 68.5% of patients, respectively. Using multivariate Cox proportional hazards analysis, increasing tumor islet macrophage density (P < .001) and tumor islet/stromal macrophage ratio (P < .001) emerged as favorable independent prognostic indicators. In contrast, increasing stromal macrophage density was an independent predictor of reduced survival (P = .001). The presence of tumor islet mast cells (P = .018) and an increasing islet/stromal mast-cell ratio (P = .032) were also favorable independent prognostic indicators. Macrophage islet density showed the strongest effect: 5-year survival was 52.9% in patients with an islet macrophage density greater than the median versus 7.7% when less than the median (P < .0001). In the same groups, respectively, median survival was 2,244 versus 334 days (P < .0001). Patients with a high islet macrophage density but incomplete resection survived markedly longer than patients with a low islet macrophage density but complete resection. Conclusion: The tumor islet CD68+ macrophage density is a powerful independent predictor of survival from surgically resected NSCLC. The biologic explanation for this and its implications for the use of adjunctive treatment require further study. © 2005 by American Society of Clinical Oncology.

Relevance: 60.00%

Abstract:

This study describes the first aid used and clinical outcomes of all patients who presented to the Royal Children's Hospital, Brisbane, Australia in 2005 with an acute burn injury. A retrospective audit was performed on the charts of 459 patients, and information concerning the burn injury, first-aid treatment and clinical outcomes was collected. First aid was used on 86.1% of patients, with 8.7% receiving no first aid and treatment unknown in 5.2% of cases. A majority of patients had cold water as first aid (80.2%); however, only 12.1% applied the cold water for the recommended 20 minutes or longer. Recommended first aid (cold water for ≥20 minutes) was associated with significantly reduced reepithelialization time for children with contact injuries (P = .011). Superficial depth burns were significantly more likely to be associated with the use of recommended first aid (P = .03). Suboptimal treatment was more common for children younger than 3.5 years (P < .001) and for children with friction burns. This report is one of the few publications to relate first-aid treatment to clinical outcomes. Some positive clinical outcomes were associated with recommended first-aid use; however, wound outcomes were more strongly associated with burn depth and mechanism of injury. There is also a need for more public awareness of recommended first-aid treatment.

Relevance: 60.00%

Abstract:

The aim of this study was to estimate the ratio of male to female participants in Sports and Exercise Medicine research. Original research articles published in three major Sports and Exercise Medicine journals (Medicine and Science in Sports and Exercise, British Journal of Sports Medicine and American Journal of Sports Medicine) over a three-year period were examined. Each article was screened to determine the total number of participants, the number of female participants and the number of male participants. The percentage of female and male participants per article in each of the journals was also calculated. Cross tabulations and chi-square analysis were used to compare the gender representation of participants within each of the journals. Data were extracted from 1,382 articles involving a total of 6,076,705 participants, of whom 2,366,998 (39%) were female and 3,709,707 (61%) male. The average percentage of female participants per article across the journals ranged from 35-37%. Females were significantly under-represented across all of the journals (χ² = 23,566, df = 2, p < 0.00001). There were no significant differences between the three journals. In conclusion, Sports and Exercise Medicine practitioners should be cognisant of sexual dimorphism and gender disparity in the current literature.
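As a minimal sketch of the cross-tabulation and chi-square analysis described above, the following compares female/male participant counts across the three journals; the per-journal split is hypothetical (only the pooled totals above are given), so the resulting statistic will not reproduce the reported value exactly.

```python
# Minimal sketch: chi-square test of gender representation across three journals.
# Per-journal counts are hypothetical; only the row totals match the abstract.
from scipy.stats import chi2_contingency

#              MSSE        BJSM        AJSM
counts = [[  900_000,    800_000,    666_998],   # female (sums to 2,366,998)
          [1_300_000,  1_250_000,  1_159_707]]   # male   (sums to 3,709,707)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.0f}, df = {dof}, p = {p:.2e}")
```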

Relevance: 60.00%

Abstract:

Objective: The present study aimed to develop accelerometer cut points to classify physical activity (PA) by intensity in preschoolers and to investigate discrepancies in PA levels when applying various accelerometer cut points. Methods: To calibrate the accelerometer, 18 preschoolers (5.8 ± 0.4 years) performed eleven structured activities and one free play session while wearing a GT1M ActiGraph accelerometer using 15 s epochs. The structured activities were chosen based on the direct observation system Children's Activity Rating Scale (CARS), while the criterion measure of PA intensity during free play was provided by a second-by-second observation protocol (modified CARS). Receiver Operating Characteristic (ROC) curve analyses were used to determine the accelerometer cut points. To examine the classification differences, accelerometer data from four consecutive days for 114 preschoolers (5.5 ± 0.3 years) were classified by intensity according to previously published cut points and the newly developed accelerometer cut points. Differences in predicted PA levels were evaluated using repeated measures ANOVA and chi-square tests. Results: Cut points were identified at 373 counts/15 s for light (sensitivity: 86%; specificity: 91%; area under the ROC curve: 0.95), 585 counts/15 s for moderate (87%; 82%; 0.91) and 881 counts/15 s for vigorous PA (88%; 91%; 0.94). Further, applying various accelerometer cut points to the same data resulted in statistically and biologically significant differences in PA. Conclusions: Accelerometer cut points were developed with good discriminatory power for differentiating between PA levels in preschoolers, and the choice of accelerometer cut points can result in large discrepancies.
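As a minimal sketch of deriving one accelerometer cut point from a ROC analysis, the following uses simulated 15 s epoch counts labelled by direct observation and selects the threshold that maximises the Youden index; the calibration data and the study's exact threshold-selection criterion are not reproduced here.

```python
# Minimal sketch: ROC-based cut point for one intensity boundary, on simulated data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
below = rng.normal(300, 150, 200).clip(min=0)        # epochs observed below the boundary
at_or_above = rng.normal(900, 300, 200).clip(min=0)  # epochs observed at/above the boundary

y_true = np.concatenate([np.zeros(200), np.ones(200)])
y_score = np.concatenate([below, at_or_above])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)   # Youden index J = sensitivity + specificity - 1
print(f"cut point ~ {thresholds[best]:.0f} counts/15 s, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```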

Relevance: 60.00%

Abstract:

We propose the use of optical flow information as a method for detecting and describing changes in the environment, from the perspective of a mobile camera. We analyze the characteristics of the optical flow signal and demonstrate how robust flow vectors can be generated and used for the detection of depth discontinuities and appearance changes at key locations. To successfully achieve this task, a full discussion on camera positioning, distortion compensation, noise filtering, and parameter estimation is presented. We then extract statistical attributes from the flow signal to describe the location of the scene changes. We also employ clustering and dominant shape of vectors to increase the descriptiveness. Once a database of nodes (where a node is a detected scene change) and their corresponding flow features is created, matching can be performed whenever nodes are encountered, such that topological localization can be achieved. We retrieve the most likely node according to the Mahalanobis and Chi-square distances between the current frame and the database. The results illustrate the applicability of the technique for detecting and describing scene changes in diverse lighting conditions, considering indoor and outdoor environments and different robot platforms.
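As a minimal sketch of the node-retrieval step described above, the following compares a query frame's flow descriptors against stored nodes using a Mahalanobis distance for the statistical attributes and a chi-square distance for histogram-like descriptors; the feature definitions, database layout and weighting are hypothetical.

```python
# Minimal sketch: retrieve the most likely node by combining Mahalanobis and
# chi-square distances between flow-based descriptors. Structures are hypothetical.
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def chi_square_distance(h1, h2, eps=1e-10):
    # Symmetric chi-square distance between two normalised histograms.
    return float(0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def best_node(stat_vec, hist_vec, database, weight=0.5):
    """database: list of dicts holding 'mean', 'cov_inv' and 'hist' for each node."""
    scores = []
    for node in database:
        d_m = mahalanobis(stat_vec, node["mean"], node["cov_inv"])
        d_c = chi_square_distance(hist_vec, node["hist"])
        scores.append(weight * d_m + (1 - weight) * d_c)
    return int(np.argmin(scores))
```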

Relevance: 60.00%

Abstract:

Background: Some patients visit a hospital's emergency department (ED) for reasons other than an urgent medical condition. There is evidence that this practice may differ among patients from different backgrounds. The objective of this study was to examine the reasons why patients from a non-English speaking background (NESB) and patients from an English speaking background but not born in Australia (ESB-NBA) visit the ED, as compared to patients from an English speaking background born in Australia (ESB-BA). Methods: A cross-sectional survey was conducted at the ED of a tertiary hospital in metropolitan Brisbane, Queensland, Australia. Over a four-month period, patients who were assigned an Australasian Triage Scale score of 3, 4 or 5 were surveyed. Pearson chi-square tests and multivariate logistic regression analyses were performed to examine the differences between the ESB and NESB patients' reported reasons for attending the ED. Results: A total of 828 patients participated in this study. Compared to ESB-BA patients, NESB patients were less likely to consider contacting a general practitioner (GP) before attending the ED (odds ratio (OR) 0.6, 95% confidence interval (CI) 0.4–0.8, p < .05), while ESB-NBA patients were more likely to consider contacting a GP (OR 1.7, 95% CI 1.1–2.5, p < .05). Both the NESB patients and the ESB-NBA patients were far more likely than ESB-BA patients to report that they had visited the ED because they did not have a GP (OR 7.9, 95% CI 4.7–13.4, p < .001 and OR 2.2, 95% CI 1.1–4.4, p < .05, respectively) and less likely to think that the ED could deal with their problem better than a GP (OR 0.5, 95% CI 0.3–0.8, p < .05 and OR 0.7, 95% CI 0.3–0.9, p < .05, respectively). The NESB patients also thought it would take too long to make an appointment to consult a GP (OR 6.2, 95% CI 3.7–10.4, p < 0.001). Conclusions: NESB patients were the least likely to consider contacting a GP before attending hospital EDs. Educational interventions may help direct NESB people to the appropriate health services and therefore reduce the burden on tertiary hospital EDs.
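As a minimal sketch of how adjusted odds ratios such as those reported above can be obtained from a multivariate logistic regression, the following uses a simulated data frame; the variable names, covariates and reference category are illustrative stand-ins for the survey data.

```python
# Minimal sketch: adjusted odds ratios from a logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "no_gp": rng.binomial(1, 0.3, 300),  # 1 = visited ED because they had no GP (hypothetical)
    "background": rng.choice(["ESB-BA", "ESB-NBA", "NESB"], 300),
    "age": rng.integers(18, 90, 300),
})

model = smf.logit("no_gp ~ C(background, Treatment(reference='ESB-BA')) + age",
                  data=df).fit(disp=0)
odds_ratios = np.exp(model.params)    # exponentiated coefficients are odds ratios
conf_int = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```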

Relevance: 60.00%

Abstract:

Business processes are prone to continuous and unexpected changes. Process workers may start executing a process differently in order to adjust to changes in workload, season, guidelines or regulations, for example. Early detection of business process changes based on their event logs – also known as business process drift detection – enables analysts to identify and act upon changes that may otherwise affect process performance. Previous methods for business process drift detection are based on an exploration of a potentially large feature space, and in some cases they require users to manually identify the specific features that characterize the drift. Depending on the explored feature set, these methods may miss certain types of changes. This paper proposes a fully automated and statistically grounded method for detecting process drift. The core idea is to perform statistical tests over the distributions of runs observed in two consecutive time windows. By adaptively sizing the window, the method strikes a trade-off between classification accuracy and drift detection delay. A validation on synthetic and real-life logs shows that the method accurately detects typical change patterns and scales up well enough to be applicable for online drift detection.
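As a minimal sketch of the core idea described above, the following compares the distribution of runs (trace variants) in two consecutive fixed-size windows with a chi-square test and reports positions where the test rejects; the adaptive window sizing and the exact statistical test of the published method are not reproduced.

```python
# Minimal sketch: flag candidate drift points by testing whether the run-frequency
# distributions of two consecutive windows differ. Window size, test and threshold
# are illustrative; the published method adapts the window size.
from collections import Counter
from scipy.stats import chi2_contingency

def detect_drift(runs, window=100, alpha=0.05):
    """runs: run identifiers (e.g. trace variant hashes), ordered by completion time."""
    drift_points = []
    for i in range(window, len(runs) - window + 1):
        reference = Counter(runs[i - window:i])
        detection = Counter(runs[i:i + window])
        variants = sorted(set(reference) | set(detection))
        table = [[reference.get(v, 0) for v in variants],
                 [detection.get(v, 0) for v in variants]]
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            drift_points.append(i)   # index where the detection window starts
    return drift_points
```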

Relevance: 60.00%

Abstract:

For design-build (DB) projects, owners normally use lump sum and Guaranteed Maximum Price (GMP) as the major contract payment provisions. However, there has been a lack of empirical studies comparing project performance across different contract types and investigating how different project characteristics affect owners' selection of contract arrangement. Project information from the Design-Build Institute of America (DBIA) database was collected to reveal the statistical relationship between different project characteristics and contract types and to compare project performance between lump sum and GMP contracts. The results show that lump sum is still the most frequently used contract method for DB projects, especially in the public sector. However, projects using GMP contracts are more likely to have less schedule delay and cost overrun than those with lump sum contracts. The chi-square tests of cross tabulations reveal that project type, owner type, and procurement method significantly affect the selection of contract type. Civil infrastructure projects tend to use lump sum more frequently than industrial engineering projects, and a qualification-oriented contractor selection process resorts to GMP more often than a cost-oriented process. The findings of this research contribute to the current body of knowledge concerning the effect of associated project characteristics on contract type selection. Overall, the results of this study provide empirical evidence from real DB projects that can be used by owners to select appropriate contract types and eventually improve future project performance.