849 results for large sample distributions


Relevance:

30.00%

Publisher:

Abstract:

Background: While there has been substantial research examining the correlates of comorbid substance abuse in psychotic disorders, it has been difficult to tease apart the relative importance of individual variables. Multivariate analyses are required, in which the relative contributions of risk factors to specific forms of substance misuse are examined while taking into account the effects of other important correlates. Methods: This study examined multivariate correlates of several forms of comorbid substance misuse in a large epidemiological sample of 852 Australians with DSM-III-R-diagnosed psychoses. Results: Multiple substance use was common and equally prevalent in nonaffective and affective psychoses. The most consistent correlate across the substance use disorders was male sex. Younger age groups were more likely to report the use of illegal drugs, while alcohol misuse was not associated with age. Side effects secondary to medication were associated with the misuse of cannabis and multiple substances, but not alcohol. Lower educational attainment was associated with cannabis misuse but not with other forms of substance abuse. Conclusion: The profile of substance misuse in psychosis shows clinical and demographic gradients that can inform treatment and preventive research.
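
As a hedged illustration of the kind of multivariate analysis described above, the sketch below fits a logistic regression of one substance-misuse outcome on several correlates at once, so that each coefficient is adjusted for the others. The variable names and data are invented placeholders, not the study's variables or data.

```python
# Hypothetical sketch of a multivariate (adjusted) analysis of correlates;
# the data are simulated and serve only to illustrate the idea.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 852                                         # sample size quoted in the abstract
male = rng.integers(0, 2, n)
age = rng.integers(18, 65, n)
low_education = rng.integers(0, 2, n)

# Simulated outcome: probability of cannabis misuse depends on all three correlates
logit = -2.0 + 0.8 * male - 0.05 * (age - 18) + 0.5 * low_education
cannabis_misuse = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Multivariate model: each odds ratio is adjusted for the other correlates
X = sm.add_constant(np.column_stack([male, age, low_education]))
fit = sm.Logit(cannabis_misuse, X).fit(disp=False)
print(np.exp(fit.params[1:]))                   # adjusted odds ratios
```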

Relevance:

30.00%

Publisher:

Abstract:

Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operation, and this is especially so for ultrafine particles. This limited knowledge is due to the scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques that must take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. To address this problem, a novel mobile emission measurement method, the Plume Capture and Analysis System (PCAS), was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in developing the technique include complexities associated with assessing the various particle loss and deposition mechanisms that are active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and plume concentration was conducted in the airfield with taxiing aircraft. The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of aircraft plumes during the various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP).
Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real-world situations were crucial for developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number. In summary, this PhD thesis presents for the first time a comprehensive study of the particle and NOx emission factors and rates, along with the particle size distributions, from aircraft operations, and provides a basis for estimating such emissions at other airports. This is a significant addition to scientific knowledge of particle emissions from aircraft operations, since standard particle number emission rates are not currently available for aircraft activities.
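
The emission-factor step described above can be illustrated with a hedged sketch: excess particle number in the captured plume is referenced to excess CO2 and scaled by the CO2 emitted per kilogram of fuel. The function, its constants and the example concentrations are assumptions for illustration, not the PCAS study's actual values or code.

```python
# Hypothetical sketch: particle number emission factor from a captured plume,
# referenced to excess CO2. All numbers are illustrative assumptions.

EI_CO2_G_PER_KG_FUEL = 3160.0   # approx. grams of CO2 emitted per kg of jet fuel burned

def particle_number_emission_factor(pn_plume, pn_background,
                                    co2_plume_ppm, co2_background_ppm,
                                    air_temp_k=293.15, pressure_kpa=101.325):
    """Return particles emitted per kg of fuel burned.

    pn_* are particle number concentrations in particles/cm^3;
    co2_* are CO2 mole fractions in ppm.
    """
    # Excess (above-background) concentrations attributable to the aircraft plume
    delta_pn = pn_plume - pn_background                  # particles/cm^3
    delta_co2_ppm = co2_plume_ppm - co2_background_ppm   # ppm

    # Convert excess CO2 from ppm to a mass concentration (g/cm^3) via the ideal gas law
    molar_volume_cm3 = 8.314 * air_temp_k / (pressure_kpa * 1000.0) * 1e6  # cm^3/mol
    delta_co2_g_per_cm3 = delta_co2_ppm * 1e-6 * 44.01 / molar_volume_cm3

    # Particles per gram of excess CO2, scaled to particles per kg of fuel
    return delta_pn / delta_co2_g_per_cm3 * EI_CO2_G_PER_KG_FUEL

# Example: a taxiing-aircraft plume sample versus local background (invented values)
print(f"{particle_number_emission_factor(2.5e6, 8.0e3, 450.0, 400.0):.2e} particles/kg fuel")
```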

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the suitability of existing performance measures under the assumption of a clearly defined benchmark. A range of measures is examined, including the Sortino Ratio, the Sharpe Selection Ratio (SSR), the Student's t-test and a decay rate measure. A simulation study is used to assess the power and bias of these measures based on variations in sample size and in the mean performance of two simulated funds. The Sortino Ratio is found to be the superior performance measure, exhibiting more power and less bias than the SSR when the distribution of excess returns is skewed.
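
A minimal sketch of the two headline measures, assuming standard textbook definitions (mean excess return over the benchmark divided by total versus downside deviation); the simulated returns below are illustrative only and do not reproduce the paper's simulation design.

```python
import numpy as np

def sharpe_selection_ratio(fund, benchmark):
    """Mean excess return over the benchmark divided by its standard deviation."""
    excess = fund - benchmark
    return excess.mean() / excess.std(ddof=1)

def sortino_ratio(fund, benchmark):
    """Mean excess return divided by downside deviation (below-benchmark returns only)."""
    excess = fund - benchmark
    downside = np.minimum(excess, 0.0)
    return excess.mean() / np.sqrt(np.mean(downside ** 2))

rng = np.random.default_rng(0)
benchmark = rng.normal(0.005, 0.02, size=60)                 # 60 months of benchmark returns
fund = benchmark + 0.002 + rng.gamma(2.0, 0.01, 60) - 0.02   # positively skewed excess returns

print("SSR:    ", round(sharpe_selection_ratio(fund, benchmark), 3))
print("Sortino:", round(sortino_ratio(fund, benchmark), 3))
```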

Relevance:

30.00%

Publisher:

Abstract:

Safety culture in the construction industry is a growing research area. The unique nature of construction industry work – being project-based, varying in size and focus, and relying on a highly transient subcontractor workforce – means that safety culture initiatives cannot be easily translated from other industries. This paper reports on the first study in a three-year collaborative industry and university research project focusing on safety culture practices and development in one of Australia's largest global construction organisations. The first round of a modified Delphi method is reported, describing the insights gained from 41 safety leaders' perceptions and understandings of safety culture within the organisation. In-depth, semi-structured interviews were conducted, and will be followed by a quantitative perception survey with the same sample. Participants included Senior Executives, Corporate Managers, Project Managers, Safety Managers and Site Supervisors. Leaders' definitions and descriptions of safety culture were primarily action-oriented, and some confusion was evident due to the sometimes implicit nature of culture in organisations. Leadership was identified as a key factor for positive safety culture in the organisation, and there was an emphasis on leaders demonstrating commitment to safety and being visible to the project-based workforce. Barriers to safety culture improvement were also identified, with managers raising diverse issues such as the transient subcontractor workforce and the challenge of maintaining safety as a priority in the absence of safety incidents and under high production pressures. This research is unique in that it derived safety culture descriptions from key stakeholders within the organisation, rather than imposing traditional conceptualisations of safety culture that are not customised for the organisation or the construction industry more broadly. This study forms the foundation for integrating safety culture theory and practice in the construction industry, and will be extended in future studies within the research program.

Relevance:

30.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
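
As a rough illustration of the size-of-weights quantity in the bound, the sketch below computes the per-unit weight-magnitude bound A for a hypothetical two-layer network and the resulting A³√((log n)/m) term, with constants and the log A, log m corrections omitted; the network and numbers are invented, not taken from the paper.

```python
# Illustrative sketch of the complexity term A^3 * sqrt(log(n) / m);
# weights and sample sizes below are made-up values.
import numpy as np

def per_unit_weight_bound(weight_matrices):
    """Largest sum of weight magnitudes feeding into any single unit (the bound's A)."""
    return max(np.abs(W).sum(axis=0).max() for W in weight_matrices)

def complexity_term(weight_matrices, n_inputs, n_train):
    A = per_unit_weight_bound(weight_matrices)
    return A ** 3 * np.sqrt(np.log(n_inputs) / n_train)

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.05, size=(100, 50))   # input -> hidden weights (n = 100 inputs)
W2 = rng.normal(scale=0.05, size=(50, 1))     # hidden -> output weights
print(complexity_term([W1, W2], n_inputs=100, n_train=10_000))
```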

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
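
A minimal sketch of the style of update involved: an exponentiated gradient step keeps the dual variables positive and renormalises them onto the probability simplex. The quadratic objective here is a toy stand-in, not the paper's structured large-margin QP.

```python
# Exponentiated gradient (EG) descent on a toy quadratic over the probability simplex.
import numpy as np

def eg_step(alpha, gradient, eta):
    """Multiplicative EG update followed by renormalisation onto the simplex."""
    updated = alpha * np.exp(-eta * gradient)
    return updated / updated.sum()

def toy_quadratic_gradient(alpha, Q, b):
    """Gradient of 0.5 * alpha' Q alpha - b' alpha."""
    return Q @ alpha - b

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
Q = M @ M.T                       # positive semidefinite quadratic term
b = rng.normal(size=5)

alpha = np.full(5, 1.0 / 5)       # start from the uniform distribution
for _ in range(200):
    alpha = eg_step(alpha, toy_quadratic_gradient(alpha, Q, b), eta=0.05)
print(alpha)
```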

Relevance:

30.00%

Publisher:

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous-time Markov chain. When the solution is needed over a long time horizon, or when convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
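
A minimal sketch of the basic uniformization recursion, using exact matrix-vector products (the paper's contribution is precisely to relax these to inexact products); the small generator matrix is illustrative only.

```python
# Uniformization: pi(t) = sum_k Poisson(lam*t; k) * pi0 * P^k, with P = I + Q/lam.
import numpy as np

def uniformization(Q, pi0, t, tol=1e-10):
    """Approximate pi(t) = pi0 * exp(Q t) by the truncated uniformized series."""
    lam = np.max(-np.diag(Q))                 # uniformization rate Lambda
    P = np.eye(Q.shape[0]) + Q / lam          # DTMC transition matrix
    weight = np.exp(-lam * t)                 # Poisson(lam*t) probability of k = 0 jumps
    cumulative = weight
    term = pi0.copy()
    result = weight * term
    k = 0
    while cumulative < 1.0 - tol:             # stop once the Poisson tail is negligible
        k += 1
        term = term @ P                       # the matrix-vector product dominating the cost
        weight *= lam * t / k
        cumulative += weight
        result += weight * term
    return result

Q = np.array([[-2.0, 2.0, 0.0],
              [1.0, -3.0, 2.0],
              [0.0, 4.0, -4.0]])
pi0 = np.array([1.0, 0.0, 0.0])
print(uniformization(Q, pi0, t=1.5))
```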

Relevance:

30.00%

Publisher:

Abstract:

This paper reports safety leaders' perceptions of safety culture in one of Australasia's largest construction organisations. A modified Delphi method was used, comprising two rounds of data collection. The first round involved 41 semi-structured interviews with safety leaders within the organisation. The second round involved an online quantitative perception survey, with the same sample, aimed at confirming the key themes identified in the interviews. Participants included Senior Executives, Corporate Managers, Project Managers, Safety Managers and Site Supervisors. Interview data were analysed using qualitative thematic analysis, and the survey data were analysed using descriptive statistics. Leaders' definitions and descriptions of safety culture were primarily action-oriented, and some confusion was evident due to the sometimes implicit nature of culture in organisations. Leadership was identified as a key factor for positive safety culture in the organisation, and there was an emphasis on leaders demonstrating commitment to safety and being visible to the project-based workforce. Barriers to safety culture improvement were also identified, including subcontractor management issues, the pace of change, and reporting requirements. The survey data provided quantitative confirmation of the interview themes, with some minor discrepancies. The findings highlight that safety culture is a complex construct that is difficult to define, even for experts in the organisation. Findings on the key factors were consistent with the current literature; however, the perceptions of barriers to safety culture offer a new understanding of how safety culture operates in practice.

Relevance:

30.00%

Publisher:

Abstract:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency condition monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and a low rotating speed generally leads to unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data in the study. One set of data was extracted from the condition monitoring signal of a practical industry application. Another set was acquired from a low-speed machine test rig in the laboratory. The other two sets were computer-simulated bearing defect signals containing either a single bearing defect or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even when a large down-sample ratio was used (i.e., 500 times down-sampled). In contrast, down-sampling with an existing conventional signal-processing technique eliminates useful and critical information, such as bearing defect frequencies, when the same down-sample ratio is employed. Noise and artificial frequency components were also induced by the conventional down-sampling technique, thus limiting its usefulness for machine condition monitoring applications.
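
The sketch below shows one plausible reading of a peak-hold down-sampling step: the signal is split into non-overlapping windows of length equal to the down-sample ratio and only the largest-magnitude sample in each window is kept. This is an illustrative interpretation, not the authors' published implementation, and the simulated defect signal is invented.

```python
# Peak-hold style down-sampling (illustrative interpretation of PHDS).
import numpy as np

def peak_hold_downsample(signal, ratio):
    """Down-sample by `ratio`, keeping the peak-magnitude sample of each window."""
    signal = np.asarray(signal)
    n_windows = signal.size // ratio
    windows = signal[:n_windows * ratio].reshape(n_windows, ratio)
    peak_idx = np.abs(windows).argmax(axis=1)            # location of the peak in each window
    return windows[np.arange(n_windows), peak_idx]

# Example: a simulated bearing-defect impulse train in noise, 500x down-sampled
fs, duration, defect_freq = 100_000, 2.0, 87.0           # Hz, s, Hz (illustrative values)
t = np.arange(0, duration, 1.0 / fs)
impulses = (np.sin(2 * np.pi * defect_freq * t) > 0.999).astype(float)
signal = impulses + 0.05 * np.random.default_rng(0).normal(size=t.size)
reduced = peak_hold_downsample(signal, ratio=500)
print(signal.size, "->", reduced.size)
```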

Relevance:

30.00%

Publisher:

Abstract:

Context: Anti-Müllerian hormone (AMH) concentration reflects ovarian aging and is argued to be a useful predictor of age at menopause (AMP). It is hypothesized that AMH falling below a critical threshold corresponds to follicle depletion, which results in menopause. With this threshold, theoretical predictions of AMP can be made. Comparisons of such predictions with observed AMP from population studies support the role for AMH as a forecaster of menopause. Objective: The objective of the study was to investigate whether previous relationships between AMH and AMP are valid using a much larger data set. Setting: AMH was measured in 27,563 women attending fertility clinics. Study Design: From these data a model of age-related AMH change was constructed using a robust regression analysis. Data on AMP from subfertile women were obtained from the population-based Prospect-European Prospective Investigation into Cancer and Nutrition (Prospect-EPIC) cohort (n = 2249). By constructing a probability distribution of the age at which AMH falls below a critical threshold and fitting this to Prospect-EPIC menopausal age data using maximum likelihood, such a threshold was estimated. Main Outcome: The main outcome was conformity between observed and predicted AMP. Results: To get a distribution of AMH-predicted AMP that fit the Prospect-EPIC data, we found the critical AMH threshold should vary among women in such a way that women with low age-specific AMH would have lower thresholds, whereas women with high age-specific AMH would have higher thresholds (mean 0.075 ng/mL; interquartile range 0.038–0.15 ng/mL). Such a varying AMH threshold for menopause is a novel and biologically plausible finding. AMH became undetectable (<0.2 ng/mL) approximately 5 years before the occurrence of menopause, in line with a previous report. Conclusions: The conformity of the observed and predicted distributions of AMP supports the hypothesis that declining population averages of AMH are associated with menopause, making AMH an excellent candidate biomarker for AMP prediction. Further research will help establish the accuracy of AMH levels to predict AMP within individuals.
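
A hedged sketch of the underlying idea: under an assumed log-linear AMH decline, the predicted age at menopause is simply the age at which the decline curve falls below the critical threshold (here the abstract's mean threshold of 0.075 ng/mL). The decline parameters are invented placeholders, not the paper's fitted model.

```python
# Illustrative only: an assumed log-linear AMH decline and the age at which it
# crosses a critical threshold. Parameter values are invented placeholders.
import numpy as np

def amh_at_age(age, amh_at_30=2.5, decline_rate=0.17):
    """Assumed AMH level (ng/mL) at a given age, declining log-linearly from age 30."""
    return amh_at_30 * np.exp(-decline_rate * (age - 30.0))

def predicted_age_at_menopause(amh_at_30=2.5, decline_rate=0.17, threshold=0.075):
    """Age at which the assumed decline curve reaches the critical AMH threshold."""
    return 30.0 + np.log(amh_at_30 / threshold) / decline_rate

print(round(predicted_age_at_menopause(), 1))   # about 50.6 with these illustrative numbers
```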

Relevance:

30.00%

Publisher:

Abstract:

Background: Medication remains the cornerstone treatment for mental illness. Cognition is one of the strongest predictors of non-adherence. The aim of this preliminary investigation was to examine the association between the Large Allen Cognitive Level Screen (LACLS) and medication adherence among a small sample of mental health service users, to determine whether the LACLS has potential as a screening tool for capacity to manage medication regimens. Method: Demographic and clinical information was collected from a small sample of people who had recently accessed community mental health services. Participants then completed the LACLS and the Medication Adherence Rating Scale (MARS) at a single time point. The strength of association between the LACLS and MARS was examined using Spearman rank-order correlation. Results: A strong positive correlation between the LACLS and medication adherence (r = 0.71, p = 0.01) was evident. No participants reported the use of medication aids despite evidence of impaired cognitive functioning. Conclusion: This investigation provides the first empirical evidence that the LACLS may have utility as a screening instrument for capacity to manage medication adherence in this population. While promising, this finding should be interpreted with caution given its preliminary nature.
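
A minimal sketch of the reported analysis step, a Spearman rank-order correlation between LACLS and MARS scores; the score values below are invented placeholders, not study data.

```python
# Spearman rank-order correlation between two score lists (invented values).
from scipy.stats import spearmanr

lacls_scores = [3.0, 3.4, 4.2, 4.6, 5.0, 5.2, 5.4, 5.6, 5.8, 6.0]
mars_scores  = [4,   5,   5,   6,   7,   7,   8,   8,   9,   10]

rho, p_value = spearmanr(lacls_scores, mars_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```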

Relevance:

30.00%

Publisher:

Abstract:

We conducted a large-scale association study to identify genes that influence nonfamilial breast cancer risk using a collection of German cases and matched controls and >25,000 single nucleotide polymorphisms located within 16,000 genes. One of the candidate loci identified was located on chromosome 19p13.2 [odds ratio (OR) = 1.5, P = 0.001]. The effect was substantially stronger in the subset of cases with reported family history of breast cancer (OR = 3.4, P = 0.001). The finding was subsequently replicated in two independent collections (combined OR = 1.4, P < 0.001) and was also associated with predisposition to prostate cancer in an independent sample set of prostate cancer cases and matched controls (OR = 1.4, P = 0.002). High-density single nucleotide polymorphism mapping showed that the extent of association spans 20 kb and includes the intercellular adhesion molecule genes ICAM1, ICAM4, and ICAM5. Although genetic variants in ICAM5 showed the strongest association with disease status, ICAM1 is expressed at highest levels in normal and tumor breast tissue. A variant in ICAM5 was also associated with disease progression and prognosis. Because ICAMs are suitable targets for antibodies and small molecules, these findings may not only provide diagnostic and prognostic markers but also new therapeutic opportunities in breast and prostate cancer.
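
A minimal sketch of the basic case-control statistic behind the reported associations: an odds ratio from a 2×2 table of carrier counts in cases and controls. The counts are invented for illustration and do not reproduce the study's data.

```python
# Odds ratio and exact p-value from a 2x2 case-control table (invented counts).
import numpy as np
from scipy.stats import fisher_exact

#                 carriers  non-carriers
table = np.array([[180,      820],        # breast cancer cases
                  [130,      870]])       # matched controls

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```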

Relevance:

30.00%

Publisher:

Abstract:

As of June 2009, 361 genome-wide association studies (GWAS) had been referenced by the HuGE database. GWAS require DNA from many thousands of individuals, relying on suitable DNA collections. We recently performed a multiple sclerosis (MS) GWAS in which a substantial component of the cases (24%) had DNA derived from saliva. Genotyping was done on the Illumina genotyping platform using the Infinium Hap370CNV DUO microarray. Additionally, we genotyped 10 individuals in duplicate using both saliva- and blood-derived DNA. The performance of blood- versus saliva-derived DNA was compared using the genotyping call rate, which reflects both the quantity and quality of genotyping per sample, and the “GCScore,” an Illumina genotyping quality score that is a measure of DNA quality. We also compared genotype calls and GCScores for the 10 sample pairs. Call rates were assessed for each sample individually. For the GWAS samples, we compared data according to source of DNA and center of origin. We observed high concordance in genotyping quality and quantity between the paired samples, and minimal loss of quality and quantity of DNA in the saliva samples in the large GWAS sample, with the blood samples showing greater variation between centers of origin. This large data set highlights the usefulness of saliva DNA for genotyping, especially in high-density single-nucleotide polymorphism microarray studies such as GWAS.
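
A tiny sketch of the per-sample call rate used in the blood-versus-saliva comparison: the fraction of assayed SNPs that received a genotype call. The genotype lists below are invented placeholders, not GWAS data.

```python
# Per-sample genotyping call rate (missing calls represented as None; invented data).
def call_rate(genotype_calls):
    called = sum(1 for g in genotype_calls if g is not None)
    return called / len(genotype_calls)

blood_calls  = ["AA", "AG", "GG", "AG", None, "AA", "GG", "AG", "AA", "AG"]
saliva_calls = ["AA", "AG", "GG", None, None, "AA", "GG", "AG", "AA", "AG"]

print(f"blood:  {call_rate(blood_calls):.2%}")
print(f"saliva: {call_rate(saliva_calls):.2%}")
```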

Relevance:

30.00%

Publisher:

Abstract:

Background: Nontuberculous mycobacteria (NTM) are normal inhabitants of a variety of environmental reservoirs, including natural and municipal water. The aim of this study was to document the variety of NTM species in potable water in Brisbane, QLD, with a specific interest in the main pathogens responsible for disease in this region, and to explore factors associated with the isolation of NTM. One-litre water samples were collected from 189 routine collection sites in summer and 195 sites in winter. Samples were split, with half decontaminated with 0.005% CPC, then concentrated by filtration and cultured on 7H11 plates and in MGIT tubes (winter only). Results: Mycobacteria were grown from 40.21% of sites in summer (76/189) and 82.05% of sites in winter (160/195). The winter samples yielded the greatest number and variety of mycobacteria, as there was a high degree of subculture overgrowth and contamination in summer. Of those samples that did yield mycobacteria in summer, the variety of species differed from those isolated in winter. The inclusion of liquid media increased the yield for some species of NTM. Species that have been documented to cause disease in humans residing in Brisbane and that were also found in water include M. gordonae, M. kansasii, M. abscessus, M. chelonae, M. fortuitum complex, M. intracellulare, M. avium complex, M. flavescens, M. interjectum, M. lentiflavum, M. mucogenicum, M. simiae, M. szulgai and M. terrae. M. kansasii was frequently isolated, but M. avium and M. intracellulare (the main pathogens responsible for disease in QLD) were isolated infrequently. Distance of the sampling site from the treatment plant in summer was associated with isolation of NTM. Pathogenic NTM (defined as those known to cause disease in QLD) were more likely to be identified from sites with narrower-diameter pipes, predominantly distribution sample points, and from sites with asbestos cement or modified PVC pipes. Conclusions: NTM responsible for human disease can be found in large urban water distribution systems in Australia. Based on our findings, additional point chlorination, maintenance of more constant pressure gradients in the system, and the use of particular pipe materials should be considered.

Relevance:

30.00%

Publisher:

Abstract:

An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are 'doubly stochastic', i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches to the problem either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model on a large-scale discretized grid and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
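
A minimal sketch of the discretized log-Gaussian Cox process structure the paper works with: a Gaussian random field supplies the log-intensity on a grid, and cell counts are Poisson given that intensity ("doubly stochastic"). The kernel, grid and parameter values are illustrative assumptions, and the paper's inference and 3D visualization are not reproduced.

```python
# Illustrative simulation of a discretized log-Gaussian Cox process on a grid;
# parameters and the squared-exponential kernel are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
grid = np.arange(0.0, 1.0, 1.0 / 20)                      # 20 x 20 grid over the unit square
xx, yy = np.meshgrid(grid, grid)
cells = np.column_stack([xx.ravel(), yy.ravel()])

# Latent Gaussian field with a squared-exponential covariance (lengthscale 0.1)
sq_dists = ((cells[:, None, :] - cells[None, :, :]) ** 2).sum(axis=-1)
cov = np.exp(-sq_dists / (2 * 0.1 ** 2)) + 1e-8 * np.eye(len(cells))

log_intensity = rng.multivariate_normal(np.full(len(cells), np.log(5.0)), cov)
intensity = np.exp(log_intensity)                          # expected incidents per cell
counts = rng.poisson(intensity)                            # doubly stochastic: Poisson given the field

print("total simulated incidents:", counts.sum())
```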