247 results for Perfusion-weighted electroencephalography
Abstract:
Quantitative studies of nascent entrepreneurs, such as GEM and PSED, must generate their samples by screening the adult population, usually by phone in developed economies. Phone survey research has recently been challenged by shifting patterns of ownership and response rates for landline versus mobile (cell) phones, particularly among younger respondents. This challenge is especially acute for entrepreneurship, which is a strongly age-dependent phenomenon. Although shifting ownership rates have received some attention, shifting response rates have remained largely unexplored. For the Australian GEM 2010 adult population study we adopted a dual-frame approach that allows comparison between mobile and landline phone samples. We find a substantial response bias towards younger, male and metropolitan respondents for mobile phones, far greater than can be explained by ownership rates. We also find that these response rate differences significantly bias the estimates of the prevalence of early-stage entrepreneurship in both samples, even when each sample is weighted to match the Australian population.
Abstract:
Purpose: The purpose of this review was to present an in-depth analysis of literature identifying the extent of dropout from Internet-based treatment programmes for psychological disorders, and literature exploring the variables associated with dropout from such programmes. Methods: A comprehensive literature search was conducted on PSYCHINFO and PUBMED with the keywords: dropouts, drop out, dropout, dropping out, attrition, premature termination, termination, non-compliance, treatment, intervention, and program, each in combination with the keywords Internet and web. A total of 19 studies published between 1990 and April 2009 and focusing on dropout from Internet-based treatment programmes involving minimal therapist contact were identified and included in the review. Results: Dropout ranged from 2 to 83%, and a weighted average of 31% of participants dropped out of treatment. A range of variables has been examined for association with dropout from Internet-based treatment programmes for psychological disorders. Despite the numerous variables explored, evidence on any specific variable that makes an individual more likely to drop out of Internet-based treatment is currently limited. Conclusions: This review highlights the need for more rigorous and theoretically guided research exploring the variables associated with dropping out of Internet-based treatment for psychological disorders.
Abstract:
This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of the cross-correlation and zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The scores output by the two systems are then merged using weighted sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods, and it was shown to outperform the best-performing system with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
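The weighted sum fusion step described above can be sketched minimally as follows; the weight, threshold, and per-frame scores are illustrative assumptions, not the paper's actual parameters.

```python
def weighted_sum_fusion(scores_a, scores_b, weight=0.5):
    """Fuse two per-frame VAD scores with a convex weighted sum; `weight`
    is the contribution of the first system (an illustrative value)."""
    return [weight * a + (1 - weight) * b for a, b in zip(scores_a, scores_b)]

def decide(fused, threshold=0.5):
    """Threshold fused scores into speech (1) / non-speech (0) labels."""
    return [1 if s >= threshold else 0 for s in fused]

# Hypothetical per-frame scores from the two subsystems (MaxPeak, CrossCorr).
max_peak = [0.9, 0.2, 0.6, 0.1]
cross_corr = [0.7, 0.4, 0.8, 0.0]
labels = decide(weighted_sum_fusion(max_peak, cross_corr, weight=0.6))
```

In practice the fusion weight would be tuned on held-out data; the sketch only shows how two score streams combine into a single frame-level decision.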
Abstract:
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use structured questionnaires in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests while screening out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The value of our approach is illustrated in a case study where several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that the multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
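The simulation-then-scoring principle can be sketched as below. The generative model here is a stylised stand-in (pests are "invasive" only when every component probability is high), not the paper's actual probabilistic introduction model; it merely shows how sensitivity and specificity can be estimated for sum-based versus multiplication-based score combinations.

```python
import math
import random

def simulate(n=10_000, n_items=3, scale=5, seed=0):
    """Generate virtual pests under a stylised model (not the paper's actual
    one): a pest is invasive only if every component probability is high, so
    multiplying component scores should discriminate better than summing."""
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        probs = [rng.random() for _ in range(n_items)]
        is_invasive = all(p > 0.5 for p in probs)             # stylised ground truth
        scores = [1 + round(p * (scale - 1)) for p in probs]  # ordinal 1..scale
        records.append((scores, is_invasive))
    return records

def sens_spec(records, combine, threshold):
    """Sensitivity and specificity of decisions `combine(scores) >= threshold`."""
    tp = fp = tn = fn = 0
    for scores, positive in records:
        predicted = combine(scores) >= threshold
        if positive and predicted:
            tp += 1
        elif positive:
            fn += 1
        elif predicted:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

records = simulate()
sum_result = sens_spec(records, combine=sum, threshold=9)
product_result = sens_spec(records, combine=math.prod, threshold=27)
```

The thresholds (9 for sums, 27 for products) are illustrative; a full comparison would sweep thresholds and scale sizes as the article does.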
Abstract:
While recent research has provided valuable information as to the composition of laser printer particles and their formation mechanisms, and explained why some printers are emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remain unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day? (ii) what is the airborne particle exposure of office workers in the vicinity of laser printers? (iii) what influence does the office ventilation have upon the transport and concentration of particles? (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment? (v) what instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on temporal and spatial PNC during the operation of 107 laser printers within open-plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (< 100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer-derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that such exposure be controlled along with exposure to printer-derived particles.
We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10⁴ particles cm⁻³ has been proposed.
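The two exposure metrics used above, a time-weighted average over the working day and an excursion check against a peak reference value, can be sketched as follows; the sample data are hypothetical, while the 2.0e4 reference follows the value proposed in the text.

```python
def time_weighted_average(samples):
    """samples: (duration_hours, concentration) pairs; returns the
    time-weighted average concentration over the total sampled time."""
    total_time = sum(d for d, _ in samples)
    return sum(d * c for d, c in samples) / total_time

def exceeds_peak_reference(concentrations, reference=2.0e4):
    """Flag instantaneous readings above a peak particle reference value
    (particles per cubic centimetre); 2.0e4 follows the value proposed above."""
    return [c > reference for c in concentrations]

# Hypothetical working day: seven quiet hours plus one hour of heavy printing.
day = [(1, 3.0e3)] * 7 + [(1, 5.0e4)]
twa = time_weighted_average(day)  # (7 * 3.0e3 + 5.0e4) / 8 = 8875.0
```

This mirrors the paper's observation: a brief printing burst can dominate peak exposure while barely moving the eight-hour time-weighted average.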
Abstract:
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
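The exponentiated gradient update underlying the method can be sketched generically as follows; the toy gradient and step size are illustrative, not the dual variables of the paper's large-margin quadratic program.

```python
import math

def exponentiated_gradient_step(weights, gradient, eta=0.1):
    """One exponentiated gradient update: scale each weight by
    exp(-eta * gradient_i), then renormalise onto the probability simplex."""
    updated = [w * math.exp(-eta * g) for w, g in zip(weights, gradient)]
    total = sum(updated)
    return [u / total for u in updated]

# Toy usage: probability mass moves away from coordinates with positive gradient.
w = exponentiated_gradient_step([1/3, 1/3, 1/3], [1.0, 0.0, -1.0], eta=0.5)
```

The multiplicative form keeps the iterate a valid distribution at every step, which is what lets the structured-label algorithm work with Gibbs-distribution representations rather than explicit, exponentially large weight vectors.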
Abstract:
Background: Birth weight and length show seasonal fluctuations. Previous analyses of birth weight by latitude identified seemingly contradictory results, showing both 6- and 12-monthly periodicities in weight. The aims of this paper are twofold: (a) to explore seasonal patterns in a large, Danish Medical Birth Register, and (b) to explore models based on seasonal exposures and a non-linear exposure-risk relationship. Methods: Birth weights and birth lengths of over 1.5 million Danish singleton live births were examined for seasonality. We modelled seasonal patterns based on linear, U- and J-shaped exposure-risk relationships. We then added an extra layer of complexity by modelling weighted population-based exposure patterns. Results: The Danish data showed clear seasonal fluctuations for both birth weight and birth length. A bimodal model best fits the data; however, the amplitudes of the 6- and 12-month peaks changed over time. In the modelling exercises, U- and J-shaped exposure-risk relationships generate time series with both 6- and 12-month periodicities. Changing the weightings of the population exposure risks results in unexpected properties. A J-shaped exposure-risk relationship with a diminishing population exposure over time fitted the observed seasonal pattern in the Danish birth weight data. Conclusion: In keeping with many other studies, Danish birth anthropometric data show complex and shifting seasonal patterns. We speculate that annual periodicities with non-linear exposure-risk models may underlie these findings. Understanding the nature of seasonal fluctuations can help generate candidate exposures.
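The claim that a non-linear exposure-risk relationship turns a purely annual exposure cycle into both 12- and 6-month periodicities can be illustrated with a minimal sketch; the cosine exposure and quadratic (U-shaped) risk function are illustrative assumptions, not the paper's fitted models.

```python
import math

def annual_exposure(month):
    """A purely 12-month periodic exposure cycle."""
    return math.cos(2 * math.pi * month / 12)

def u_shaped_risk(exposure):
    """U-shaped exposure-risk relationship: both extremes raise risk."""
    return exposure ** 2

# Since cos(x)^2 = (1 + cos(2x)) / 2, squaring a 12-month cycle yields a
# 6-month harmonic: months 0 and 6 have opposite exposures but equal risk.
risk = [u_shaped_risk(annual_exposure(m)) for m in range(12)]
```

The worked identity is the whole point: a single annual driver plus a U-shaped response is enough to produce the bimodal (6- and 12-month) pattern seen in the data.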
Abstract:
As global warming entails new conditions for the built environment, the thermal and energy performance of existing buildings, which were designed based on current weather data, may become uncertain and remains a great concern. Through building computer simulation and qualitative analysis of the weighted factor for the outdoor temperature impact on building energy and thermal performance, this paper investigates the sensitivity of different office building zones to potential global warming. A standard office building type is examined for all eight capital cities in Australia. Results show that, comparing the ground, middle and top floors, except in a cool climate (i.e. Hobart), the ground floor appears to be the most sensitive to the effect of global warming and has the highest tendency towards an overheating problem. From the analysis of the responses of different zone orientations to the outdoor air temperature increase, it is also found that responses vary widely between zone orientations, with the south zone (in the southern hemisphere) being the most sensitive. With an increased external air temperature, the variation between different floors or zone orientations becomes more significant, with up to a 53 percent increase in overheating hours in Darwin and a 47 percent increase in cooling load in Hobart.
Abstract:
In previous research (Chung et al., 2009), the potential of the continuous risk profile (CRP) to proactively detect the systematic deterioration of freeway safety levels was presented. In this paper, this potential is investigated further, and an algorithm is proposed for proactively detecting sites where the collision rate is not sufficiently high to be classified as a high collision concentration location but where a systematic deterioration of safety level is observed. The approach proposed compares the weighted CRP across different years and uses the cumulative sum (CUSUM) algorithm to detect the sites where changes in collision rate are observed. The CRPs of the detected sites are then compared for reproducibility. When high reproducibility is observed, a growth factor is used for sequential hypothesis testing to determine if the collision profiles are increasing over time. Findings from applying the proposed method using empirical data are documented in the paper together with a detailed description of the method.
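The CUSUM detection step referred to above can be sketched as a one-sided change detector; the target mean, slack, and threshold values here are illustrative, not the calibrated parameters of the paper.

```python
def cusum_detect(values, target, slack=0.5, threshold=4.0):
    """One-sided CUSUM for upward shifts: accumulate excesses over
    (target + slack); return the first index where the cumulative sum
    crosses the threshold, or None if no change is detected."""
    s = 0.0
    for i, x in enumerate(values):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return i
    return None

# Hypothetical yearly collision rates: stable at first, then a sustained rise.
rates = [10, 11, 9, 10, 14, 15, 16, 15]
change_at = cusum_detect(rates, target=10)  # flags the sustained increase
```

Because the statistic resets to zero whenever observations fall back near the target, isolated high years do not trigger an alarm; only a sustained deterioration accumulates past the threshold, which is the behaviour the safety-monitoring application needs.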
Abstract:
Objectives: This article reports on a culturally appropriate process of development of a smoke-free workplace policy within the peak Aboriginal Controlled Community Health Organisation in Victoria, Australia. Smoking is acknowledged as being responsible for at least 20% of all deaths in Aboriginal communities in Australia, and many Aboriginal health workers smoke. Methods: The smoke-free workplace policy was developed using the iterative, discursive and experience-based methodology of Participatory Action Research, combined with the culturally embedded concept of ‘having a yarn’. Results: Staff members initially identified smoking as a topic to be avoided within workplace discussions. This was due, in part, to grief (everyone had suffered a smoking-related bereavement). Further, there was anxiety that discussing smoking would result in culturally difficult conflict. The use of yarning opened up a safe space for discussion and debate, enabling development of a policy that was accepted across the organisation. Conclusions: Within Aboriginal organisations, it is not sufficient to focus on the outcomes of policy development. Rather, due attention must be paid to the process employed in development of policy, particularly when that policy is directly related to an emotionally and communally weighted topic such as smoking.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distances between nodes are larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions of both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This result will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a kind of real network, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
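One variant of random sequential box covering can be sketched as below; published variants differ in details such as how box centres are chosen and whether covered nodes block traversal, so this is an illustrative sketch rather than the thesis's exact algorithm. The ring-graph test case is a hypothetical sanity check.

```python
import random
from collections import deque

def ball(adj, center, radius):
    """All nodes within `radius` hops of `center` (breadth-first search)."""
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

def box_count(adj, radius, seed=0):
    """Random sequential box covering: repeatedly pick a random uncovered
    node and mark everything within `radius` hops of it as covered."""
    rng = random.Random(seed)
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        center = rng.choice(sorted(uncovered))
        uncovered -= ball(adj, center, radius)
        boxes += 1
    return boxes

# Sanity check on a 64-node ring: the number of boxes should shrink roughly
# as N / (2 * radius + 1), consistent with fractal dimension 1 for a ring.
n = 64
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
counts = {r: box_count(ring, r) for r in (1, 2, 4)}
```

The fractal dimension is then estimated from the slope of log(box count) versus log(box size), averaged over many random orderings.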
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool for analysing time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and weight the edge between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those of binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
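The horizontal visibility graph construction mentioned above has a standard definition that can be sketched directly; the toy series is an illustrative example, and this brute-force version is quadratic rather than the linear-time algorithms used in practice.

```python
def horizontal_visibility_graph(series):
    """Return HVG edges (i, j): two points are connected when every value
    strictly between them is lower than both endpoints."""
    edges = []
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Toy series: consecutive points always see each other; higher points see further.
edges = horizontal_visibility_graph([3, 1, 2, 4])
```

Degree distributions of such graphs are what distinguish the series classes discussed in the text, e.g. the exponential tails reported for HVG networks of fractional Brownian motion.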
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts, while for HVG networks of fractional Brownian motions the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
Abstract:
Many academic researchers have conducted studies on the selection of the design-build (DB) delivery method; however, there are few studies on the selection of DB operational variations, which poses challenges to many clients. The selection of a DB operational variation is a multi-criteria decision-making process that requires clients to objectively evaluate the performance of each DB operational variation with reference to the selection criteria. This evaluation process is often characterized by subjectivity and uncertainty. In order to resolve this deficiency, the current investigation aimed to establish a fuzzy multi-criteria decision-making (FMCDM) model for selecting the most suitable DB operational variation. A three-round Delphi questionnaire survey was conducted to identify the selection criteria and their relative importance. A fuzzy set theory approach, namely the modified horizontal approach with the bisector error method, was applied to establish the fuzzy membership functions, which enable clients to perform quantitative calculations on the performance of each DB operational variation. The FMCDM model was developed using the weighted mean method to aggregate the overall performance of DB operational variations with regard to the selection criteria. The proposed FMCDM model enables clients to perform quantitative calculations in a fuzzy decision-making environment and provides a useful tool for coping with different project attributes.
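The weighted mean aggregation step can be sketched in its crisp (non-fuzzy) form as follows; the criterion weights, scores, and variation names are hypothetical, and the full model would aggregate fuzzy membership values rather than single numbers.

```python
def weighted_mean_score(scores, weights):
    """Aggregate one alternative's criterion scores by the weighted mean."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical criterion weights and performance scores for three variations.
weights = [0.5, 0.3, 0.2]
alternatives = {
    "variation A": [7, 8, 6],
    "variation B": [8, 6, 7],
    "variation C": [6, 9, 8],
}
ranked = sorted(alternatives,
                key=lambda a: weighted_mean_score(alternatives[a], weights),
                reverse=True)  # best first
```

In the fuzzy setting each score would be a membership function derived from the Delphi survey, but the aggregation logic, weighting each criterion by its relative importance and averaging, is the same.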
Abstract:
Battery-powered bed movers are becoming increasingly common within the hospital setting. The use of powered bed movers is believed to reduce the physical effort required of health care workers, which may be associated with a decreased risk of occupation-related injuries. However, little work has been conducted assessing how powered bed movers affect levels of physiological strain and muscle activation for the user. The muscular efforts associated with moving hospital beds using three different methods (manual pushing, the StaminaLift Bed Mover (SBM) and the Gzunda Bed Mover (GBM)) were measured on six male subjects. Fourteen muscles were assessed while moving a weighted hospital bed along a standardised route in an Australian hospital environment. Trunk inclination and upper spine acceleration were also quantified. Powered bed movers exhibited significantly lower muscle activation levels than manual pushing for the majority of muscles. When using the SBM, users adopted a more upright posture which was maintained while performing different tasks (e.g. turning a corner, entering a lift), while trunk inclination varied considerably for manual pushing and the GBM. The reduction in lower back muscle activation levels and the load-reducing effect of a more upright posture may result in a lower incidence of lower back injury.
Abstract:
Introduction: The suitability of video conferencing (VC) technology for clinical purposes relevant to geriatric medicine is still being established. This project aimed to determine the validity of the diagnosis of dementia via VC. Methods: This was a multisite, noninferiority, prospective cohort study. Patients aged 50 years and older, referred by their primary care physician for cognitive assessment, were assessed at 4 memory disorder clinics. All patients were assessed independently by 2 specialist physicians. They were allocated one face-to-face (FTF) assessment (the reference standard, i.e. usual clinical practice) and an additional assessment (either a usual FTF assessment or a VC assessment) on the same day. Each specialist physician had access to the patient chart and the results of a battery of standardized cognitive assessments administered FTF by the clinic nurse. Percentage agreement (P0) and the weighted kappa statistic with linear weights (Kw) were used to assess inter-rater reliability across the 2 study groups on the diagnosis of dementia (cognition normal, impaired, or demented). Results: The 205 patients were allocated to groups: Videoconference (n = 100) or Standard Practice (n = 105); 106 were men. The average age was 76 (SD 9, range 51–95) and the average Standardized Mini-Mental State Examination score was 23.9 (SD 4.7, range 9–30). Agreement for the Videoconference group (P0 = 0.71; Kw = 0.52; P < .0001) and agreement for the Standard Practice group (P0 = 0.70; Kw = 0.50; P < .0001) were both statistically significant (P < .05). The summary kappa statistic of 0.51 (P = .84) indicated that VC was not inferior to FTF assessment. Conclusions: Previous studies have shown that preliminary standardized assessment tools can be reliably administered and scored via VC.
This study focused on the geriatric assessment component of the interview (interpretation of standardized assessments, taking a history and formulating a diagnosis by a medical specialist) and identified high levels of agreement for diagnosing dementia. A model of service incorporating either locally or remotely administered standardized assessments, together with remote specialist assessment, is a reliable process for enabling the diagnosis of dementia for isolated older adults.
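The linearly weighted kappa statistic (Kw) used above can be computed from an ordinal confusion matrix as sketched below; the 3-category table is hypothetical example data, not the study's actual counts.

```python
def linear_weighted_kappa(confusion):
    """Weighted kappa with linear weights for a k x k ordinal confusion
    matrix (rows: rater 1 categories, columns: rater 2 categories)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    observed = expected = 0.0
    for i in range(k):
        for j in range(k):
            w = 1 - abs(i - j) / (k - 1)  # linear agreement weight
            observed += w * confusion[i][j] / n
            expected += w * row_tot[i] * col_tot[j] / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 3-category table (normal / impaired / demented) for two raters.
table = [[20, 5, 0],
         [4, 30, 6],
         [1, 4, 30]]
kappa = linear_weighted_kappa(table)
```

Linear weighting gives partial credit to near-miss disagreements (e.g. "normal" versus "impaired"), which is appropriate for the study's ordered normal/impaired/demented outcome.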