884 results for test data generation
Abstract:
Moffitt’s dual typology of ‘life-course persistent’ and ‘adolescence limited’ offending has received extensive empirical attention, but the extent to which the antisocial behaviour of adolescence limited offenders is constrained to adolescence is relatively under-examined. Using data from the Australian Mater University Study of Pregnancy and its Outcomes, we explore Moffitt’s concept of snares, that is, factors such as drug addiction, educational failure, and contact with the justice system that may lead to an adolescent persisting in antisocial behaviour. The Mater University Study of Pregnancy and its Outcomes is a longitudinal study of mother–child dyads from the pre-natal stage to 21 years of age. Findings show that one-third of individuals identified as having an adolescent onset of antisocial behaviour persisted with this antisocial behaviour as young adults. This continuity can, in part, be explained by snares, and the research suggests that reducing exposure to snares may lead to less antisocial behaviour in adulthood.
Abstract:
Although human papillomavirus (HPV) is a common sexually transmitted infection, knowledge of HPV remains limited, with ethnic/racial minorities experiencing the greatest disparities. This cross-sectional study used the most recent available data from the California Health Interview Survey to assess disparities in awareness and knowledge of HPV among ethnically/racially diverse women varying in generation status (N = 19,928). Generation status emerged as a significant predictor of HPV awareness across ethnic/racial groups, with 1st generation Asian-Americans and 1st and 2nd generation Latinas reporting the least awareness when compared to same-generation White counterparts. Generation status was also a significant predictor of HPV knowledge, but only for Asian-Americans. Regardless of ethnicity/race, 1st generation women reported the lowest HPV knowledge when compared to 2nd and 3rd generation women. These findings underscore the importance of looking at differences within and across ethnic/racial groups to identify subgroups at greatest risk for poor health outcomes. In particular, we found generation status to be an important yet often overlooked factor in the identification of health disparities.
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery, and such failures can be catastrophic and cause major downtime. Hence, providing advance failure warning and precise fault detection in such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. This method employs classification techniques to discriminate between normal and defective examples. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a test-to-failure bearing experiment. Finally, the anomaly detection approach is compared with a popular method, the Support Vector Machine (SVM), to investigate its sensitivity and accuracy and its ability to detect anomalies at early stages.
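As a rough illustration of the kind of approach this abstract describes (not the authors' implementation), the sketch below extracts kurtosis and a non-Gaussianity score from windowed vibration data and flags windows that deviate from a healthy baseline. The specific NGS definition used here (a Kolmogorov–Smirnov distance from a fitted Gaussian), the window size, and the threshold are all assumptions made for the example.

```python
# Illustrative sketch only (not the authors' implementation): extract kurtosis and a
# non-Gaussianity score from windowed vibration data, then flag anomalies by
# thresholding against statistics learned from healthy-baseline windows.
import numpy as np
from scipy import stats

def extract_features(signal, window_size=2048):
    """Split a 1-D vibration signal into windows and compute two features per window."""
    n_windows = len(signal) // window_size
    features = []
    for i in range(n_windows):
        w = signal[i * window_size:(i + 1) * window_size]
        kurt = stats.kurtosis(w, fisher=True)  # excess kurtosis (0 for a Gaussian)
        # "Non-Gaussianity score" is *assumed* here to be the Kolmogorov-Smirnov
        # distance between the window and a Gaussian with the window's mean/std.
        ngs, _ = stats.kstest(w, 'norm', args=(w.mean(), w.std()))
        features.append((kurt, ngs))
    return np.asarray(features)

def fit_baseline(healthy_features):
    """Learn per-feature mean and standard deviation from healthy (defect-free) windows."""
    return healthy_features.mean(axis=0), healthy_features.std(axis=0)

def detect_anomalies(features, baseline_mean, baseline_std, z_threshold=3.0):
    """Flag windows whose features deviate from the baseline by more than z_threshold sigmas."""
    z = np.abs((features - baseline_mean) / baseline_std)
    return np.any(z > z_threshold, axis=1)

# Example usage with synthetic data standing in for healthy / faulty vibration records.
rng = np.random.default_rng(0)
healthy = rng.normal(size=200_000)  # roughly Gaussian vibration
impulses = (rng.random(200_000) < 0.01) * rng.normal(8, 2, 200_000)  # sparse impulsive spikes
faulty = rng.normal(size=200_000) + impulses
baseline_mean, baseline_std = fit_baseline(extract_features(healthy))
flags = detect_anomalies(extract_features(faulty), baseline_mean, baseline_std)
print(f"{flags.mean():.0%} of faulty-record windows flagged as anomalous")
```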
Abstract:
Rapid advances in sequencing technologies (Next Generation Sequencing or NGS) have led to a vast increase in the quantity of bioinformatics data available, with this increasing scale presenting enormous challenges to researchers seeking to identify complex interactions. This paper is concerned with the domain of transcriptional regulation, and the use of visualisation to identify relationships between specific regulatory proteins (the transcription factors or TFs) and their associated target genes (TGs). We present preliminary work from an ongoing study which aims to determine the effectiveness of different visual representations and large scale displays in supporting discovery. Following an iterative process of implementation and evaluation, representations were tested by potential users in the bioinformatics domain to determine their efficacy, and to understand better the range of ad hoc practices among bioinformatics literate users. Results from two rounds of small scale user studies are considered with initial findings suggesting that bioinformaticians require richly detailed views of TF data, features to compare TF layouts between organisms quickly, and ways to keep track of interesting data points.
Abstract:
The MFG test is a family-based association test that detects genetic effects contributing to disease in offspring, including offspring allelic effects, maternal allelic effects and MFG incompatibility effects. Like many other family-based association tests, it assumes that offspring survival and offspring-parent genotypes are conditionally independent given that the offspring is affected. However, when the putative disease-increasing locus can affect another competing phenotype, for example offspring viability, the conditional independence assumption fails and these tests could lead to incorrect conclusions regarding the role of the gene in disease. We propose the v-MFG test to adjust for the genetic effects on one phenotype, e.g., viability, when testing the effects of that locus on another phenotype, e.g., disease. Using genotype data from nuclear families containing parents and at least one affected offspring, the v-MFG test models the distribution of family genotypes conditional on offspring phenotypes. It simultaneously estimates genetic effects on two phenotypes, viability and disease. Simulations show that the v-MFG test produces accurate estimates of genetic effects on disease as well as on viability under several different scenarios. It maintains accurate type-I error rates and provides adequate power with moderate sample sizes to detect genetic effects on disease risk when viability is reduced. We demonstrate the v-MFG test with HLA-DRB1 data from study participants with rheumatoid arthritis (RA) and their parents, showing that the v-MFG test successfully detects an MFG incompatibility effect on RA while simultaneously adjusting for a possible viability loss.
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and final conclusions. The Authors claim to have presented a new method of determining c_v. However, the note does not contain a new method. In fact, the proposed method is an attempt to generate settlement versus time (δ–t) data using only two (t, δ) values. The Authors have used the rectangular hyperbola method to determine c_v from the predicted δ–t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted c_v values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
A novel test of recent theories of the origin of optical activity has been designed based on the inclusion of certain alkyl 2-methylhexanoates into urea channels.
Abstract:
Selection criteria and misspecification tests for the intra-cluster correlation structure (ICS) in longitudinal data analysis are considered. In particular, the asymptotic distribution of the correlation information criterion (CIC) is derived, and a new method for selecting a working ICS is proposed by standardizing the selection criterion as a p-value. The CIC test is found to be powerful in detecting misspecification of the working ICS, while, with respect to working ICS selection, the standardized CIC test is also shown to have satisfactory performance. Simulation studies and applications to two real longitudinal datasets illustrate how these criteria and tests might be useful.
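For context on the criterion itself, one commonly cited form of the CIC for a candidate working correlation structure R is sketched below; this is background on how the criterion is usually defined in the generalized estimating equation literature (stated as my own assumption), not a reproduction of the standardized p-value version proposed in this abstract.

```latex
% Commonly cited form of the correlation information criterion (an assumption about
% the criterion the abstract refers to, not a quotation from the paper):
\[
  \mathrm{CIC}(R) \;=\; \operatorname{tr}\!\bigl( \hat{\Omega}_I \, \hat{V}_R \bigr),
\]
% where \hat{\Omega}_I is the inverse of the model-based covariance of the regression
% estimates computed under working independence, \hat{V}_R is the robust (sandwich)
% covariance estimate obtained under the candidate structure R, and smaller CIC values
% indicate a better-fitting working ICS.
```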
Abstract:
We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and that the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way of obtaining asymptotic covariance matrices for between- and within-cluster estimators and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
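To make the induced smoothing idea concrete, the following is a generic single-level sketch for a Gehan-type rank estimating function (presented as background under assumptions of my own, not the paper's clustered-data estimating functions): the non-smooth indicator is replaced by a normal distribution function whose bandwidth shrinks at the root-n rate, so the smoothed estimating function becomes differentiable while retaining the same limiting behaviour.

```latex
% Non-smooth Gehan-type rank estimating function with residuals e_i(\beta) = y_i - x_i^{\top}\beta
% (a generic illustration, not the paper's clustered-data version):
\[
  S_n(\beta) \;=\; \frac{1}{n^{2}} \sum_{i \ne j} (x_i - x_j)\, I\{\, e_j(\beta) \ge e_i(\beta) \,\}.
\]
% Induced smoothing perturbs \beta by a N(0, \Gamma/n) variate and averages over the
% perturbation, which replaces the indicator with a normal distribution function:
\[
  \tilde{S}_n(\beta) \;=\; \frac{1}{n^{2}} \sum_{i \ne j} (x_i - x_j)\,
    \Phi\!\left( \frac{e_j(\beta) - e_i(\beta)}{r_{ij}} \right),
  \qquad
  r_{ij}^{2} \;=\; \frac{1}{n}\,(x_i - x_j)^{\top}\,\Gamma\,(x_i - x_j),
\]
% so \tilde{S}_n is differentiable in \beta, which is what makes plug-in (sandwich-type)
% asymptotic covariance estimation straightforward.
```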
Abstract:
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below the detection limits or censored, because strong distributional assumptions have to be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that required by the traditional t-test, illustrating the merit of our method.
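As a simplified illustration of why a quantile formulation avoids imputing censored values (a sketch under assumptions of my own, not the authors' exact procedure): testing whether the p-th quantile exceeds a guideline threshold reduces to a one-sample proportion (sign) test on exceedance indicators, so an observation below the detection limit only needs to be classified as below the threshold, provided the threshold lies above the detection limit. The standard normal-approximation sample-size formula for proportions then applies, as in the sketch below; the design values are hypothetical.

```python
# Illustrative sketch only: sample size for a quantile (sign/proportion) test where
# below-detection-limit values are simply treated as "below the threshold".
from math import ceil, sqrt
from scipy.stats import norm

def quantile_test_sample_size(p0, p1, alpha=0.05, power=0.80):
    """One-sided sample size for testing an exceedance proportion p0 against p1
    (normal approximation to the binomial; hypothetical design values)."""
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((numerator / (p1 - p0)) ** 2)

# H0: the 80th percentile sits at the guideline value (20% of samples exceed it);
# H1: the exceedance probability has risen to 35%.
print(quantile_test_sample_size(p0=0.20, p1=0.35))  # required number of monitoring samples
```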
Abstract:
This paper considers the one-sample sign test for data obtained from general ranked set sampling, in which the number of observations for each rank is not necessarily the same, and proposes a weighted sign test because observations with different ranks are not identically distributed. The optimal weight for each observation is distribution free and depends only on its associated rank. It is shown analytically that (1) the weighted version always improves the Pitman efficiency for all distributions; and (2) the optimal design is to select the median from each ranked set.
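A minimal sketch of a rank-weighted sign statistic for ranked-set-sample data follows; the weights are left as user inputs (the optimal rank-dependent weights derived in the paper are not reproduced here), and the null mean and variance rely only on the exceedance probabilities of order statistics at the true median, which are binomial sums and therefore distribution free.

```python
# Illustrative sketch of a rank-weighted sign test for ranked set sampling (RSS);
# the paper's optimal weights are not reproduced, so weights are supplied by the user.
from math import comb, sqrt
from scipy.stats import norm

def null_exceedance_prob(r, k):
    """P(r-th order statistic of k iid observations exceeds the true median) under H0;
    a binomial sum, hence distribution free."""
    return sum(comb(k, j) for j in range(k - r + 1, k + 1)) / 2 ** k

def weighted_sign_test(data, theta0, k, weights):
    """data: iterable of (value, judgment_rank) pairs from RSS with set size k;
    weights: dict mapping rank -> weight. Returns (z statistic, two-sided p-value)."""
    stat = mean0 = var0 = 0.0
    for x, r in data:
        w = weights[r]
        p = null_exceedance_prob(r, k)
        stat += w * (x > theta0)      # weighted count of exceedances
        mean0 += w * p                # exact null mean contribution
        var0 += w ** 2 * p * (1 - p)  # exact null variance contribution
    z = (stat - mean0) / sqrt(var0)
    return z, 2 * norm.sf(abs(z))     # normal approximation

# Example: set size k = 3, equal weights, testing H0: median = 10 (toy data).
sample = [(9.1, 1), (10.4, 2), (12.3, 3), (8.7, 1), (11.0, 2), (13.5, 3)]
z, p = weighted_sign_test(sample, theta0=10.0, k=3, weights={1: 1.0, 2: 1.0, 3: 1.0})
print(f"z = {z:.2f}, p = {p:.3f}")
```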
Abstract:
The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid, and; (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids, and; (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated control and pancreatic insufficient subjects, due to a higher peak plasma p-aminobenzoic acid level in controls, but poor sensitivity and specificity remained. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects probably caused by retarded gastric emptying. However, in the pancreatic insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but a higher dose administered with a liquid meal is recommended.
Abstract:
The concept of the American Dream was subject to a strong re-evaluation process in the 1960s, as counterculture became a prominent force in American society. A massive generation of young people, moved by the Vietnam War, the hippie movement, and psychedelic experimentation, created substantial social turbulence in their efforts to break out of conventional patterns and to create a new kind of society. This thesis outlines and analyses the concept of the American Dream in popular imagination through three works of new journalism. My primary data consists of Tom Wolfe’s The Electric Kool-Aid Acid Test (1967), Hunter S. Thompson’s Fear and Loathing in Las Vegas: A Savage Journey to the Heart of the American Dream (1971), and Norman Mailer’s Armies of the Night: History as a Novel, the Novel as History (1968). In defining the American Dream, I discuss the history of the concept as well as its manifestations in popular culture. Because of its elusive and amorphous nature, the concept of the American Dream can only be examined in cultural texts that portray the values, sentiments, and customs of a certain era. I have divided the analytical section of my thesis into three parts. In the first part I examine how the authors discuss the American society of their time in relation to ideology, capitalism, and the media. In the second part I focus on the Vietnam War and the controversy it creates in relation to the notions of freedom and patriotism. In the third part I discuss how the authors portray the countercultural visions of a better America that challenged the traditional interpretations of the American Dream. I also discuss the dark side of the new dream: the problems and disillusions that came with the effort to change the world. This thesis is an effort to trace the relocation of the American Dream in the context of the 1960s counterculture and new journalism. It hopes to provide a valuable addition to the cultural history of the sixties and to the effort of conceptualizing the American Dream.
Abstract:
Non-parametric difference tests such as the triangle and duo-trio tests are traditionally used to establish differences or similarities between products. However, they supply the researcher with only partial answers, and further testing is often required to establish the nature, size and direction of differences. This paper looks at the advantages of the difference from control (DFC) test (also known as the degree of difference test) and discusses appropriate applications of the test. The scope and principle of the test, panel composition and analysis of results are presented with the aid of suitable examples. Two of the major uses of the DFC test are in quality control and shelf-life testing. The role the DFC test takes in these areas and the use of other tests to complement it are discussed. Controls or standards are important in both these areas, and the use of standard products, mental and written standards and blind controls is highlighted. The DFC test has applications in products where the duo-trio and triangle tests cannot be used because of the normal heterogeneity of the product. While the DFC test is a simple difference test, it can be structured to give the researcher more valuable data and greater scope to make informed decisions about their product.
Abstract:
This report summarises the findings of a case study on Queensland’s New Generation Rollingstock (NGR) Project carried out as part of SBEnrc Project 2.34, Driving Whole-of-life Efficiencies through BIM and Procurement. This case study is one of three exemplar projects studied in order to leverage academic research in defining indicators for measuring tangible and intangible benefits of Building Information Modelling (BIM) across a project’s life-cycle in infrastructure and buildings. The NGR is an AUD 4.4 billion project carried out under an Availability Payment Public-Private Partnership (PPP) between the Queensland Government and the Bombardier-led QTECTIC consortium, comprising Bombardier Transportation, John Laing, ITOCHU Corporation and Aberdeen Infrastructure Investments. BIM has been deployed on the project from the conceptual stages to drive both the design and the currently ongoing construction at the Wulkuraka Project Site. This case study sourced information from a series of semi-structured interviews covering a cross-section of key stakeholders on the project. The research identified 25 benefits gained from implementing BIM processes and tools. Some of the most prominent benefits were those leading to improved outcomes and higher customer satisfaction, such as improved communications, data and information management, and coordination. There were also a number of expected benefits for future phases, such as:
• Improved decision making through the use of BIM for managing assets
• Improved models through BIM maturity
• Better utilisation of BIM for procurement on similar future projects
• New capacity to specify the content of BIM models within contracts
There were also three benefits that were expected to have been achieved but were not realised on the NGR project: higher construction information quality levels, better alignment within design teams as well as project teams, and capability improvements in measuring the impact of BIM on construction safety. This report includes individual profiles describing each benefit as well as the tools and processes that enabled them. Four key BIM metrics were found to be currently in use, and six more were identified as potential metrics for the future. This case study also provides insights into challenges associated with implementing BIM on a project of the size and complexity of the NGR. Procurement aspects and lessons learned for managers are also highlighted, including a list of recommendations for developing a framework to assess the benefits of BIM across the project life-cycle.