78 results for large sample distributions


Relevance:

80.00%

Abstract:

Integrated exposure to polycyclic aromatic hydrocarbons (PAHs) can be assessed through monitoring of urinary mono-hydroxylated PAHs (OH-PAHs). The aim of this study was to provide the first assessment of exposure to PAHs in a large sample of the population in Queensland, Australia, including exposure in infants (0-4 years). De-identified urine specimens, obtained from a pathology laboratory, were stratified by age and sex and pooled (n = 24 pools of 100), and OH-PAHs were measured by gas chromatography-isotope dilution-tandem mass spectrometry. Geometric mean (GM) concentrations ranged from 30 ng/L (4-hydroxyphenanthrene) to 9221 ng/L (1-naphthol). The GM of 1-hydroxypyrene, the most commonly used PAH exposure biomarker, was 142 ng/L. The concentrations of OH-PAHs found in this study are consistent with those in developed countries and lower than those in developing countries. We observed no association between sex and OH-PAH concentrations. However, we observed lower urinary concentrations of all OH-PAHs in samples from infants (0-4 years), children (5-14 years) and the elderly (>60 years old) compared with samples from other age groups (15-29, 30-44 and 45-59 years), which may be attributed to age-dependent, behaviour-specific exposure sources.

Relevance:

80.00%

Abstract:

This study investigates the relationship between per capita carbon dioxide (CO2) emissions and per capita GDP in Australia, while controlling for technological state as measured by multifactor productivity and exports of black coal. Although technological progress seems to play a critical role in achieving the long-term goals of CO2 reduction and economic growth, empirical studies have often used a time trend to proxy technological change. However, as the discovery and diffusion of new technologies may not progress smoothly with time, the assumption of deterministic technological progress may be incorrect in the long run. The use of multifactor productivity as a measure of technological state therefore overcomes this limitation and provides practical policy directions. This study uses the recently developed bounds-testing approach, complemented by the Johansen-Juselius maximum likelihood approach and a reasonably large sample size, to investigate the cointegration relationship. Both techniques suggest that a cointegration relationship exists among the variables. The long-run and short-run coefficients of the CO2 emissions function are estimated using the ARDL approach. The empirical findings show evidence of an Environmental Kuznets Curve-type relationship for per capita CO2 emissions in the Australian context. Technology as measured by multifactor productivity, however, is not found to be an influencing variable in the emissions-income trajectory.

Relevance:

40.00%

Abstract:

The exothermicity associated with the construction of large-volume methacrylate monolithic columns has obstructed the realisation of large-scale rapid biomolecule purification, especially for plasmid-based products, which herald future trends in biotechnology. A novel synthesis technique based on a heat-expulsion mechanism was employed to prepare a 40 mL methacrylate monolith with a homogeneous radial pore structure along its thickness. The radial temperature gradient was recorded to be only 1.8 °C. The maximum radial temperature, recorded at the centre of the monolith, was 62.3 °C, only 2.3 °C higher than the actual polymerisation temperature. Pore characterisation of the monolithic polymer showed unimodal pore size distributions at different radial positions with an identical modal pore size of 400 nm. Chromatographic characterisation of the polymer after functionalisation with amino groups displayed a consistent dynamic binding capacity of 15.5 mg of plasmid DNA/mL. The maximum pressure drop recorded was only 0.12 MPa at a flow rate of 10 mL/min. The polymer demonstrated rapid separation ability by fractionating Escherichia coli DH5α-pUC19 clarified lysate in only 3 min after loading. The plasmid sample collected after this fast purification process was confirmed to be homogeneous supercoiled plasmid by DNA electrophoresis and restriction analysis.

Relevance:

30.00%

Abstract:

Phase-type distributions represent the time to absorption for a finite state Markov chain in continuous time, generalising the exponential distribution and providing a flexible and useful modelling tool. We present a new reversible jump Markov chain Monte Carlo scheme for performing a fully Bayesian analysis of the popular Coxian subclass of phase-type models; the convenient Coxian representation involves fewer parameters than a more general phase-type model. The key novelty of our approach is that we model covariate dependence in the mean whilst using the Coxian phase-type model as a very general residual distribution. Such incorporation of covariates into the model has not previously been attempted in the Bayesian literature. A further novelty is that we also propose a reversible jump scheme for investigating structural changes to the model brought about by the introduction of Erlang phases. Our approach addresses more questions of inference than previous Bayesian treatments of this model and is automatic in nature. We analyse an example dataset comprising lengths of hospital stays of a sample of patients collected from two Australian hospitals to produce a model for a patient's expected length of stay which incorporates the effects of several covariates. This leads to interesting conclusions about what contributes to length of hospital stay with implications for hospital planning. We compare our results with an alternative classical analysis of these data.
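
A Coxian phase-type model like the one described above can be simulated directly: the chain holds in each phase for an exponential sojourn, then either absorbs or advances to the next phase. The following is a minimal illustrative sketch only; the rates, function name and parameter values are invented, not taken from the paper:

```python
import random

def sample_coxian(lambdas, mus, rng=random.Random(1)):
    """Sample the time to absorption of a Coxian phase-type distribution.

    Phase i is left at total rate lambdas[i] + mus[i]; with probability
    mus[i] / (lambdas[i] + mus[i]) the chain absorbs, otherwise it moves
    on to phase i + 1.  lambdas[-1] must be 0 (the last phase can only
    absorb).
    """
    t = 0.0
    for lam, mu in zip(lambdas, mus):
        rate = lam + mu
        t += rng.expovariate(rate)   # exponential sojourn in this phase
        if rng.random() < mu / rate: # absorb here, or continue onward
            break
    return t

# A 3-phase Coxian with illustrative rates; the analytical mean of this
# configuration is roughly 1.29.
times = [sample_coxian([2.0, 1.0, 0.0], [0.5, 0.5, 1.5])
         for _ in range(10000)]
mean = sum(times) / len(times)
```

Because the Coxian representation only allows forward moves and absorption, it needs far fewer free parameters than a general phase-type generator, which is what makes the reversible jump scheme over the number of phases tractable.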

Relevance:

30.00%

Abstract:

Background: While there has been substantial research examining the correlates of comorbid substance abuse in psychotic disorders, it has been difficult to tease apart the relative importance of individual variables. Multivariate analyses are required, in which the relative contributions of risk factors to specific forms of substance misuse are examined while taking into account the effects of other important correlates. Methods: This study examined multivariate correlates of several forms of comorbid substance misuse in a large epidemiological sample of 852 Australians with DSM-III-R-diagnosed psychoses. Results: Multiple substance use was common and equally prevalent in nonaffective and affective psychoses. The most consistent correlate across the substance use disorders was male sex. Younger age groups were more likely to report the use of illegal drugs, while alcohol misuse was not associated with age. Side effects secondary to medication were associated with the misuse of cannabis and multiple substances, but not alcohol. Lower educational attainment was associated with cannabis misuse but not other forms of substance abuse. Conclusion: The profile of substance misuse in psychosis shows clinical and demographic gradients that can inform treatment and preventive research.

Relevance:

30.00%

Abstract:

Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operations, and this is especially so for ultrafine particles. This limited knowledge is due to the scientific complexities of measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques which must take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. To address this problem, a novel mobile emission measurement method called the Plume Capture and Analysis System (PCAS) was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in developing the technique included the assessment of the various particle loss and deposition mechanisms active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can be used to accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and plume concentration was conducted in the airfield with taxiing aircraft.
The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of the aircraft plumes during the various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP). Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real-world situations were crucial for developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number.
In summary, this PhD thesis presents for the first time a comprehensive study of particle and NOx emission factors and rates, along with particle size distributions, from aircraft operations, and provides a basis for estimating such emissions at other airports. This is a significant addition to scientific knowledge of particle emissions from aircraft operations, since standard particle number emission rates are not currently available for aircraft activities.

Relevance:

30.00%

Abstract:

This paper investigates the suitability of existing performance measures under the assumption of a clearly defined benchmark. A range of measures is examined, including the Sortino Ratio, the Sharpe Selection Ratio (SSR), Student's t-test and a decay rate measure. A simulation study is used to assess the power and bias of these measures based on variations in sample size and mean performance of two simulated funds. The Sortino Ratio is found to be the superior performance measure, exhibiting more power and less bias than the SSR when the distribution of excess returns is skewed.
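
For reference, the Sortino Ratio divides mean excess return over a target by the downside deviation, penalising only returns below the target, unlike the Sharpe ratio's symmetric standard deviation; this is why it copes better with skewed excess returns. A minimal sketch (the sample returns below are invented for illustration):

```python
def sortino_ratio(returns, target=0.0):
    """Sortino ratio: mean excess return over `target`, divided by the
    downside deviation (root-mean-square of shortfalls below target)."""
    excess = [r - target for r in returns]
    mean_excess = sum(excess) / len(excess)
    # Only returns below the target contribute to the risk term.
    downside_sq = [min(e, 0.0) ** 2 for e in excess]
    downside_dev = (sum(downside_sq) / len(excess)) ** 0.5
    return mean_excess / downside_dev

# Illustrative monthly returns for a hypothetical fund.
rets = [0.05, 0.02, -0.01, 0.03, -0.02, 0.04]
ratio = sortino_ratio(rets, target=0.0)
```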

Relevance:

30.00%

Abstract:

Safety culture in the construction industry is a growing research area. The unique nature of construction industry work, being project-based, varying in size and focus, and relying on a highly transient subcontractor workforce, means that safety culture initiatives cannot be easily translated from other industries. This paper reports on the first study in a three-year collaborative industry and university research project focusing on safety culture practices and development in one of Australia's largest global construction organisations. The first round of a modified Delphi method is reported, describing the insights gained from 41 safety leaders' perceptions and understandings of safety culture within the organisation. In-depth, semi-structured interviews were conducted, and will be followed by a quantitative perception survey with the same sample. Participants included Senior Executives, Corporate Managers, Project Managers, Safety Managers and Site Supervisors. Leaders' definitions and descriptions of safety culture were primarily action-oriented, and some confusion was evident due to the sometimes implicit nature of culture in organisations. Leadership was identified as a key factor for positive safety culture in the organisation, with an emphasis on leaders demonstrating commitment to safety and being visible to the project-based workforce. Barriers to safety culture improvement were also identified, with managers raising diverse issues such as the transient subcontractor workforce and the challenge of maintaining safety as a priority in the absence of safety incidents, under high production pressures. This research is unique in that it derived safety culture descriptions from key stakeholders within the organisation, as opposed to imposing traditional conceptualisations of safety culture that are not customised for the organisation or the construction industry more broadly.
This study forms the foundation for integrating safety culture theory and practice in the construction industry, and will be extended upon in future studies within the research program.

Relevance:

30.00%

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
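
The key message of the bound, that the complexity penalty scales with the weight bound A and the number of training patterns m rather than with the number of weights, can be seen numerically. A small sketch (the function name is ours, and the log A and log m factors are ignored, as in the abstract):

```python
import math

def weight_bound_term(A, n, m):
    """Complexity term A^3 * sqrt(log(n) / m) from the size-of-weights
    generalization bound; A bounds the per-unit weight magnitudes,
    n is the input dimension, m the number of training patterns."""
    return A ** 3 * math.sqrt(math.log(n) / m)

# The penalty depends on A and m, not on the parameter count:
# doubling the training set shrinks it by a factor of sqrt(2),
# regardless of how many weights the network has.
t1 = weight_bound_term(A=2.0, n=100, m=1000)
t2 = weight_bound_term(A=2.0, n=100, m=2000)
```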

Relevance:

30.00%

Abstract:

We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
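
The exponentiated-gradient update underlying the method multiplies each dual weight by an exponential of its gradient component and renormalises, so the iterate always stays on the probability simplex. A minimal, generic sketch of one such step (not the paper's structured-label algorithm; the weights, gradient and learning rate are illustrative):

```python
import math

def eg_update(w, grad, eta=0.1):
    """One exponentiated-gradient step on the probability simplex:
    scale each weight by exp(-eta * gradient) and renormalise."""
    scaled = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(scaled)                      # normaliser keeps sum(w) == 1
    return [s / z for s in scaled]

# Coordinates with negative gradient gain mass; positive ones lose it.
w = [0.25, 0.25, 0.25, 0.25]
w = eg_update(w, grad=[1.0, 0.0, 0.0, -1.0], eta=0.5)
```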

Relevance:

30.00%

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
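
For context, the basic uniformization scheme, with exact matrix-vector products rather than the inexact ones this paper introduces, can be sketched as follows. The generator matrix, truncation level and function name below are illustrative assumptions:

```python
import math

def transient_dist(Q, p0, t, K=60):
    """Uniformization: p(t) = sum_k Poisson(k; Lambda*t) * p0 * P^k,
    where P = I + Q/Lambda and Lambda >= max_i |Q[i][i]|.  Q is the
    CTMC generator (row sums zero), p0 the initial distribution."""
    n = len(Q)
    Lam = max(-Q[i][i] for i in range(n)) or 1.0
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
         for i in range(n)]
    v = p0[:]                       # v holds p0 * P^k
    out = [0.0] * n
    w = math.exp(-Lam * t)          # Poisson weight for k = 0
    for k in range(K + 1):
        for i in range(n):
            out[i] += w * v[i]
        # The matrix-vector product: the dominant cost the paper targets.
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
        w *= Lam * t / (k + 1)      # next Poisson weight
    return out

# Two-state chain: leave state 0 at rate 1, state 1 at rate 2; the
# stationary distribution is (2/3, 1/3).
Q = [[-1.0, 1.0], [2.0, -2.0]]
p = transient_dist(Q, [1.0, 0.0], t=5.0)
```

At large t or large Λt the Poisson weights force many iterations of the matrix-vector product, which is exactly the regime where cheapening each product pays off.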

Relevance:

30.00%

Abstract:

This paper reports safety leaders' perceptions of safety culture in one of Australasia's largest construction organisations. A modified Delphi method was used, including two rounds of data collection. The first round involved 41 semi-structured interviews with safety leaders within the organisation. The second round involved an online quantitative perception survey, with the same sample, aimed at confirming the key themes identified in the interviews. Participants included Senior Executives, Corporate Managers, Project Managers, Safety Managers and Site Supervisors. Interview data were analysed using qualitative thematic analysis, and the survey data were analysed using descriptive statistics. Leaders' definitions and descriptions of safety culture were primarily action-oriented, and some confusion was evident due to the sometimes implicit nature of culture in organisations. Leadership was identified as a key factor for positive safety culture in the organisation, with an emphasis on leaders demonstrating commitment to safety and being visible to the project-based workforce. Barriers to safety culture improvement were also identified, including subcontractor management issues, pace of change, and reporting requirements. The survey data provided a quantitative confirmation of the interview themes, with some minor discrepancies. The findings highlight that safety culture is a complex construct which is difficult to define, even for experts in the organisation. Findings on the key factors were consistent with the current literature; however, the perceptions of barriers to safety culture offer a new understanding of how safety culture operates in practice.

Relevance:

30.00%

Abstract:

A simple and effective down-sampling algorithm, the Peak-Hold-Down-Sample (PHDS) algorithm, is developed in this paper to enable rapid and efficient data transfer in remote condition monitoring applications. The algorithm is particularly useful for high-frequency Condition Monitoring (CM) techniques and for low-speed machine applications, since the combination of a high sampling frequency and low rotating speed generally leads to large, unwieldy data sizes. The effectiveness of the algorithm was evaluated and tested on four sets of data. One set was extracted from the condition monitoring signal of a practical industry application. Another was acquired from a low-speed machine test rig in the laboratory. The other two sets were computer-simulated bearing defect signals with either a single defect or multiple bearing defects. The results show that the PHDS algorithm can substantially reduce the size of the data while preserving the critical bearing defect information for all the data sets used in this work, even at a large down-sample ratio (i.e., 500 times down-sampled). In contrast, down-sampling with an existing conventional down-sampling technique from signal processing eliminates useful and critical information, such as bearing defect frequencies, when the same down-sample ratio is employed. Noise and artificial frequency components were also induced by the conventional technique, thus limiting its usefulness for machine condition monitoring applications.
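
A plain peak-hold down-sampler in this spirit keeps the largest-magnitude sample from each block, so short-lived impulses such as bearing-defect spikes survive the reduction, whereas naive decimation (keeping every Nth sample) can miss them entirely. This is a generic sketch under that interpretation, not the authors' exact PHDS implementation:

```python
def peak_hold_downsample(signal, ratio):
    """Down-sample by keeping, from each block of `ratio` samples, the
    sample with the largest absolute value (peak hold), preserving
    impulsive features that plain decimation would discard."""
    out = []
    for start in range(0, len(signal), ratio):
        block = signal[start:start + ratio]
        out.append(max(block, key=abs))   # signed peak of the block
    return out

# A 9.0 spike at index 3 survives a 5:1 down-sample; taking every 5th
# sample instead would return [0.1, 0.2] and lose it.
x = [0.1, -0.2, 0.0, 9.0, 0.1, 0.2, -0.3, 0.1, 0.0, 0.2]
y = peak_hold_downsample(x, 5)
```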

Relevance:

30.00%

Abstract:

Context: Anti-Müllerian hormone (AMH) concentration reflects ovarian aging and is argued to be a useful predictor of age at menopause (AMP). It is hypothesized that AMH falling below a critical threshold corresponds to follicle depletion, which results in menopause. With this threshold, theoretical predictions of AMP can be made. Comparisons of such predictions with observed AMP from population studies support the role for AMH as a forecaster of menopause. Objective: The objective of the study was to investigate whether previous relationships between AMH and AMP are valid using a much larger data set. Setting: AMH was measured in 27,563 women attending fertility clinics. Study Design: From these data a model of age-related AMH change was constructed using a robust regression analysis. Data on AMP from subfertile women were obtained from the population-based Prospect-European Prospective Investigation into Cancer and Nutrition (Prospect-EPIC) cohort (n = 2249). By constructing a probability distribution of age at which AMH falls below a critical threshold and fitting this to Prospect-EPIC menopausal age data using maximum likelihood, such a threshold was estimated. Main Outcome: The main outcome was conformity between observed and predicted AMP. Results: To get a distribution of AMH-predicted AMP that fit the Prospect-EPIC data, we found the critical AMH threshold should vary among women in such a way that women with low age-specific AMH would have lower thresholds, whereas women with high age-specific AMH would have higher thresholds (mean 0.075 ng/mL; interquartile range 0.038-0.15 ng/mL). Such a varying AMH threshold for menopause is a novel and biologically plausible finding. AMH became undetectable (<0.2 ng/mL) approximately 5 years before the occurrence of menopause, in line with a previous report.
Conclusions: The conformity of the observed and predicted distributions of AMP supports the hypothesis that declining population averages of AMH are associated with menopause, making AMH an excellent candidate biomarker for AMP prediction. Further research will help establish the accuracy of AMH levels to predict AMP within individuals.

Relevance:

30.00%

Abstract:

Background: Medication remains the cornerstone treatment for mental illness, and cognition is one of the strongest predictors of non-adherence. The aim of this preliminary investigation was to examine the association between the Large Allen Cognitive Level Screen (LACLS) and medication adherence among a small sample of mental health service users, to determine whether the LACLS has potential as a screening tool for capacity to manage medication regimens. Method: Demographic and clinical information was collected from a small sample of people who had recently accessed community mental health services. Participants then completed the LACLS and the Medication Adherence Rating Scale (MARS) at a single time point. The strength of association between the LACLS and MARS was examined using Spearman rank-order correlation. Results: A strong positive correlation between the LACLS and medication adherence (r = 0.71, p = 0.01) was evident. No participants reported the use of medication aids, despite evidence of impaired cognitive functioning. Conclusion: This investigation provides the first empirical evidence that the LACLS may have utility as a screening instrument for capacity to manage medication among this population. While promising, this finding should be interpreted with caution given its preliminary nature.