Abstract:
A diagnosis of cancer represents a significant crisis for the child and their family. As the treatment for childhood cancer has improved dramatically over the past three decades, most children diagnosed with cancer today survive this illness. However, it is still an illness which severely disrupts the lifestyle and typical functioning of the family unit. Most treatments for cancer involve lengthy hospital stays, the endurance of painful procedures and harsh side effects. Research has confirmed that to manage and adapt to such a crisis, families must undertake measures which assist their adjustment. Variables such as level of family support, quality of parents’ marital relationship, coping of other family members, lack of other concurrent stresses and open communication within the family have been identified as influences on how well families adjust to a diagnosis of childhood cancer. Theoretical frameworks such as the Resiliency Model of Family Adjustment and Adaptation (McCubbin and McCubbin, 1993, 1996) and the Stress and Coping Model of Lazarus and Folkman (1984) have been used to explain how families and individuals adapt to crises or adverse circumstances. Developmental theories have also been posed to account for how children come to understand and learn about the concept of illness. However, more descriptive information about how families, and children in particular, experience and manage a diagnosis of cancer is still needed. There are still many unanswered questions surrounding how a child adapts to, understands and makes meaning from having a life-threatening illness. As a result, developing an understanding of the impact that such a serious illness has on the child and their family is crucial. A new approach to examining childhood illness such as cancer is currently underway which allows a greater understanding of the experience of childhood cancer to be achieved.
This new approach invites a phenomenological method to investigate the perspectives of those affected by childhood cancer. In the current study, nine families in which there was a diagnosis of childhood cancer were interviewed twice over a 12-month period. Using the qualitative methodology of Interpretative Phenomenological Analysis (IPA), a semi-structured interview was used to explicate the experience of childhood cancer from both the parent’s and child’s perspectives. A number of quantitative measures were also administered to gather specific information on the demographics of the sample population. The results of this study revealed a number of pertinent areas which need to be considered when treating such families. More importantly, experiences were explicated which revealed vital phenomena that need to be incorporated to extend current theoretical frameworks. Parents identified the time of the diagnosis as the hardest part of their entire experience. Parents experienced an internal struggle when they were forced to come to the realization that they were not able to help their child get well. Families demonstrated an enormous ability to develop a new lifestyle which accommodated the needs of the sick child, as the sick child became the focus of their lives. Regarding the children, many of them accepted their diagnosis without complaint or question, and they were able to recognise and appreciate the support they received. Physical pain was definitely a component of the children’s experience; however, the emotional strain of loss of peer contact seemed just as severe. Changes over time were also noted, as both parental and child experiences were often pertinent to the stage of treatment the child had reached. The approach used in this study allowed rich and intimate detail about a sensitive issue to be revealed. Such an approach also allowed the experience of childhood cancer on parents and children to be more fully realised.
Only now can a comprehensive and sensitive medical and psychosocial approach to the child and family be developed. For example, families may benefit from extra support at the time of diagnosis, as this was identified as one of the most difficult periods. Parents may also require counselling support in coming to terms with their inability to help their child heal. Given the ease with which children accepted their diagnosis, we need to question whether children are more receptive to adversity. Yet the emotional struggle children battled as a result of their illness also needs to be addressed.
Abstract:
Twenty-six tinnitus patients received either electromyogram (EMG) biofeedback with counterdemand instructions, EMG biofeedback with neutral demand instructions, or no treatment. Assessment was conducted on self-report measures of the distress associated with tinnitus; the loudness, annoyance and awareness of tinnitus; sleep-onset difficulties; depression; and anxiety. Audiological assessment of tinnitus was also conducted and EMG levels were measured (the latter only in the two treatment groups). No significant treatment effects were found on any of the measures. There was a significant decrease in the ratings of tinnitus awareness over the assessment occasions, but the degree of change was equivalent for treated and untreated groups. Results do not support the assertion that EMG biofeedback is an effective treatment for tinnitus.
Abstract:
This research investigates wireless intrusion detection techniques for detecting attacks on IEEE 802.11i Robust Secure Networks (RSNs). Despite using a variety of comprehensive preventative security measures, RSNs remain vulnerable to a number of attacks. The failure of preventative measures to address all RSN vulnerabilities dictates the need for a comprehensive monitoring capability to detect all attacks on RSNs and also to proactively address potential security vulnerabilities by detecting security policy violations in the WLAN. This research proposes novel wireless intrusion detection techniques to address these monitoring requirements and also studies correlation of the generated alarms across wireless intrusion detection system (WIDS) sensors and the detection techniques themselves for greater reliability and robustness. The specific outcomes of this research are:
* A comprehensive review of the outstanding vulnerabilities and attacks in IEEE 802.11i RSNs.
* A comprehensive review of the wireless intrusion detection techniques currently available for detecting attacks on RSNs.
* Identification of the drawbacks and limitations of the currently available wireless intrusion detection techniques in detecting attacks on RSNs.
* Development of three novel wireless intrusion detection techniques for detecting RSN attacks and security policy violations in RSNs.
* Development of algorithms for each novel intrusion detection technique to correlate alarms across distributed sensors of a WIDS.
* Development of an algorithm for automatic attack scenario detection using cross-detection-technique correlation.
* Development of an algorithm to automatically assign priority to the detected attack scenario using cross-detection-technique correlation.
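As a rough illustration of what cross-sensor alarm correlation involves, the sketch below groups alarms of the same attack type that arrive within a short time window across distributed sensors. This is a minimal, hypothetical example; the thesis’s actual correlation algorithms, alarm formats and thresholds are not specified here.

```python
from collections import defaultdict

def correlate_alarms(alarms, window=5.0):
    """Correlate WIDS alarms across sensors.

    alarms: list of (timestamp, sensor_id, attack_type) tuples.
    Alarms of the same attack type within `window` seconds of each
    other are merged into a single incident, so reports of one attack
    seen by several sensors are not counted as separate events.
    Returns a list of (attack_type, [(timestamp, sensor_id), ...]).
    """
    by_type = defaultdict(list)
    for ts, sensor, attack in sorted(alarms):
        by_type[attack].append((ts, sensor))

    incidents = []
    for attack, events in by_type.items():
        current = [events[0]]
        for ts, sensor in events[1:]:
            if ts - current[-1][0] <= window:
                current.append((ts, sensor))   # same burst -> same incident
            else:
                incidents.append((attack, current))
                current = [(ts, sensor)]
        incidents.append((attack, current))
    return incidents
```

For example, deauthentication alarms from two sensors two seconds apart would be reported as one incident, while a third alarm twenty seconds later becomes a separate incident.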
Abstract:
Association rule mining is one technique that is widely used when querying databases, especially transactional ones, in order to obtain useful associations or correlations among sets of items. Much work has been done focusing on efficiency, effectiveness and redundancy. There has also been a focus on the quality of rules from single-level datasets, with many interestingness measures proposed. However, with multi-level datasets now being common, there is a lack of interestingness measures developed for multi-level and cross-level rules. Single-level measures do not take into account the hierarchy found in a multi-level dataset. This leaves the Support-Confidence approach, which does not consider the hierarchy anyway and has other drawbacks, as one of the few measures available. In this paper we propose two approaches which measure multi-level association rules to help evaluate their interestingness. These measures of diversity and peculiarity can be used to help identify those rules from multi-level datasets that are potentially useful.
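For context, the standard Support-Confidence approach that the paper identifies as limited can be sketched as follows. The transaction data and item names are purely illustrative, and the example deliberately ignores any item hierarchy, which is exactly the drawback the proposed diversity and peculiarity measures are meant to address.

```python
# Illustrative transaction database (item names are hypothetical).
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread"},
    {"milk", "butter"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """confidence(A -> B) = support(A ∪ B) / support(A)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

# support({milk, bread}) = 2/4 = 0.5
# confidence(milk -> bread) = 0.5 / 0.75 = 2/3
```

Note that these formulas treat every item as flat: a rule on "milk" and a rule on its ancestor category "dairy" would be scored with no regard to their position in the hierarchy, which is why hierarchy-aware measures are needed for multi-level and cross-level rules.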
Abstract:
The overall research aims to develop a standardised instrument to measure the impacts resulting from contemporary Information Systems (IS). The research adopts the IS-Impact measurement model, introduced by Gable et al. (2008), as its theoretical foundation, and applies the extension strategy described by Berthon et al. (2002), extending both the theory and the context, where the new context is the Human Resource (HR) system. The research will be conducted in two phases, the exploratory phase and the specification phase. The purpose of this paper is to present the findings of the exploratory phase. A total of 134 respondents from a major Australian university were involved in this phase. The findings have supported most of the existing IS-Impact model’s credibility. However, some textual data may suggest new measures for the IS-Impact model, while the low response rate for, or avoidance of, some items may suggest the elimination of some measures from the model.
Abstract:
The Longitudinal Study of Australian Children (LSAC) is a major national study examining the lives of Australian children, using a cross-sequential cohort design and data from parents, children, and teachers for 5,107 infants (3–19 months) and 4,983 children (4–5 years). Its data are publicly accessible and are used by researchers from many disciplinary backgrounds. It contains multiple measures of children’s developmental outcomes as well as a broad range of information on the contexts of their lives. This paper reports on the development of summary outcome indices of child development using the LSAC data. The indices were developed to fill the need for indicators suitable for use by diverse data users in order to guide government policy and interventions which support young children’s optimal development. The concepts underpinning the indices and the methods of their development are presented. Two outcome indices (infant and child) were developed, each consisting of three domains—health and physical development, social and emotional functioning, and learning competency. A total of 16 measures are used to make up these three domains in the Outcome Index for the Child Cohort and six measures for the Infant Cohort. These measures are described and evidence supporting the structure of the domains and their underlying latent constructs is provided for both cohorts. The factorial structure of the Outcome Index is adequate for both cohorts, but was stronger for the child than infant cohort. It is concluded that the LSAC Outcome Index is a parsimonious measure representing the major components of development which is suitable for non-specialist data users. A companion paper (Sanson et al. 2010) presents evidence of the validity of the Index.
Abstract:
This naturalistic study investigated the mechanisms of change in measures of negative thinking and in 24-h urinary metabolites of noradrenaline (norepinephrine), dopamine and serotonin in a sample of 43 depressed hospital patients attending an eight-session group cognitive behavior therapy program. Most participants (91%) were taking antidepressant medication throughout the therapy period according to their treating psychiatrists’ prescriptions. The sample was divided into outcome categories (19 Responders and 24 Non-responders) on the basis of a clinically reliable change index [Jacobson, N.S., & Truax, P., 1991. Clinical significance: a statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.] applied to the Beck Depression Inventory scores at the end of the therapy. Results of repeated-measures analyses of variance (ANOVA) indicated that all measures of negative thinking improved significantly during therapy, and significantly more so in the Responders, as expected. The treatment had a significant impact on urinary adrenaline and metadrenaline excretion; however, these changes occurred in both Responders and Non-responders. Acute treatment did not significantly influence the six other monoamine metabolites. In summary, changes in urinary monoamine levels during combined treatment for depression were not associated with self-reported changes in mood symptoms.
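The reliable change criterion cited above (Jacobson & Truax, 1991) can be sketched as follows; the Beck Depression Inventory standard deviation and test-retest reliability values used in the example are illustrative placeholders, not the values used in the study.

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) reliable change index.

    RCI = (post - pre) / s_diff, where s_diff is the standard error
    of the difference between two test scores:
        SE  = sd_pre * sqrt(1 - reliability)
        s_diff = sqrt(2) * SE
    |RCI| > 1.96 indicates change unlikely to be measurement error.
    """
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * se_measurement
    return (post - pre) / s_diff

# Hypothetical example: a BDI drop from 30 to 12, assuming sd = 9.0
# and reliability = 0.86, yields an RCI well below -1.96, i.e. a
# reliable improvement by this criterion.
```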
Abstract:
Habitat models are widely used in ecology; however, there are relatively few studies of rare species, primarily because of a paucity of survey records and a lack of robust means of assessing the accuracy of modelled spatial predictions. We investigated the potential of compiled ecological data in developing habitat models for Macadamia integrifolia, a vulnerable mid-stratum tree endemic to lowland subtropical rainforests of southeast Queensland, Australia. We compared the performance of two binomial models—Classification and Regression Trees (CART) and Generalised Additive Models (GAM)—with Maximum Entropy (MAXENT) models, developed from (i) presence records and available absence data and (ii) presence records and background data. The GAM model was the best performer across the range of evaluation measures employed; however, all models were assessed as potentially useful for informing in situ conservation of M. integrifolia. A significant loss in the amount of M. integrifolia habitat has occurred (p < 0.05), with only 37% of former (pre-clearing) habitat remaining in 2003. Remnant patches are significantly smaller, have larger edge-to-area ratios and are more isolated from each other compared to pre-clearing configurations (p < 0.05). Whilst the network of suitable habitat patches is still largely intact, there are numerous smaller patches that are more isolated in the contemporary landscape compared with their connectedness before clearing. These results suggest that in situ conservation of M. integrifolia may be best achieved through a landscape approach that considers the relative contribution of small remnant habitat fragments to the species as a whole, as well as facilitating connectivity among the entire network of habitat patches.
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet to be developed port expansion area. This is in order to predict pollutant loads associated with stormwater flows from this area with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. 
This uniqueness in land use results in distinctive stormwater quality characteristics different to those of other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or to consider it as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration processes were involved, due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce the runoff volume generated as well as the frequency of runoff events. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies.
Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly due to the fact that only one specific location was considered, namely the Port of Brisbane, unlike in the case of the MUSIC model where a range of areas with different geographic and climatic conditions were investigated. Consequently, the assumptions used in MUSIC are not totally applicable for the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that this may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting-edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in suspended solids for pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
Abstract:
Monotony has been identified as a contributing factor to road crashes. Drivers’ ability to react to unpredictable events deteriorates when they are exposed to highly predictable and uneventful driving tasks, such as driving on Australian rural roads, many of which are monotonous by nature. Highway design in particular attempts to reduce the driver’s task to a mere lane-keeping one. Such a task provides little stimulation and is monotonous, thus affecting the driver’s attention, which is no longer directed towards the road. Inattention contributes to crashes, especially for professional drivers. Monotony has been studied mainly from the endogenous perspective (for instance through sleep deprivation) without taking into account the influence of the task itself (repetitiveness) or the surrounding environment. The aim and novelty of this thesis is to develop a methodology (mathematical framework) able to predict driver lapses of vigilance under monotonous environments in real time, using endogenous and exogenous data collected from the driver, the vehicle and the environment. Existing approaches have tended to neglect the specificity of task monotony, leaving the question of the existence of a “monotonous state” unanswered. Furthermore, the issue of detecting vigilance decrement before it occurs (prediction) has not been investigated in the literature, let alone in real time. A multidisciplinary approach is necessary to explain how vigilance evolves in monotonous conditions. Such an approach needs to draw on psychology, physiology, road safety, computer science and mathematics. The systemic approach proposed in this study is unique in its predictive dimension and allows us to define, in real time, the impacts of monotony on the driver’s ability to drive. The methodology is based on mathematical models integrating data available in vehicles with the vigilance state of the driver during a monotonous driving task in various environments.
The model integrates different data measuring the driver’s endogenous and exogenous factors (related to the driver, the vehicle and the surrounding environment). Electroencephalography (EEG) is used to measure driver vigilance, since it has been shown to be the most reliable real-time methodology for assessing vigilance level. There are a variety of mathematical models suitable to provide a framework for prediction; to find the most accurate, a collection of mathematical models was trained in this thesis and the most reliable identified. The methodology developed in this research was first applied to a theoretically sound measure of sustained attention, the Sustained Attention to Response Task (SART), as adapted by Michael (2010) and Michael and Meuter (2006, 2007). This experiment induced impairments due to monotony during a vigilance task. Analyses performed in this thesis confirm and extend findings from Michael (2010) that monotony leads to an important vigilance impairment independent of fatigue. This thesis is also the first to show that monotony changes the dynamics of vigilance evolution and tends to create a “monotonous state” characterised by reduced vigilance. Personality traits such as being a low sensation seeker can mitigate this vigilance decrement. It is also evident that lapses in vigilance can be predicted accurately with Bayesian modelling and Neural Networks. This framework was then applied to the driving task by designing a simulated monotonous driving task. The design of such a task requires multidisciplinary knowledge and involved psychologist Rebecca Michael. Monotony was varied through both road design and road environment variables. This experiment demonstrated that road monotony can lead to driving impairment. In particular, monotonous road scenery was shown to have more impact than monotonous road design.
Next, this study identified a variety of surrogate measures that are correlated with vigilance levels obtained from the EEG. Such vigilance states can be predicted with these surrogate measures. This means that vigilance decrement can be detected in a car without the use of an EEG device. Amongst the different mathematical models tested in this thesis, only Neural Networks predicted the vigilance levels accurately. The results of both these experiments provide valuable information about the methodology to predict vigilance decrement. Such an issue is quite complex and requires modelling that can adapt to highly inter-individual differences. Only Neural Networks proved accurate in both studies, suggesting that these models are the most likely to be accurate when used on real roads or for further research on vigilance modelling. This research provides a better understanding of the driving task under monotonous conditions. Results demonstrate that mathematical modelling can be used to determine the driver’s vigilance state when driving using surrogate measures identified during this study. This research has opened up avenues for future research and could result in the development of an in-vehicle device predicting driver vigilance decrement. Such a device could contribute to a reduction in crashes and therefore improve road safety.
Abstract:
Public and private sector organisations are now able to capture and utilise data on a vast scale, thus heightening the importance of adequate measures for protecting unauthorised disclosure of personal information. In this respect, data breach notification has emerged as an issue of increasing importance throughout the world. It has been the subject of law reform in the United States and in other jurisdictions. This article reviews US, Australian and EU legal developments regarding the mandatory notification of data breaches. The authors highlight areas of concern based on the extant US experience that require further consideration in Australia and in the EU.
Abstract:
In this article our starting point is the current context of national curriculum change and intense speculation about assessment, standards and reporting. It is written against a background of accountability measures and improvement imperatives, and focuses attention on standards as offering representations of quality. We understand standards to be constructs that aim to achieve public credibility and utility. Further, they can be examined for the purposes they seek to serve and also their expected functions. Fitness for purpose is therefore a useful notion in considering the nature of standards. Our interest in the discussion is the ‘fit’ between how standards are formulated and how they are used in practice, by whom and for what purposes. A related interest is in the matter of how standards can be harnessed to realise improvement.
Abstract:
A study was conducted to develop macrolevel crash prediction models that can be used to understand and identify effective countermeasures for improving signalized highway intersections and multilane stop-controlled highway intersections in rural areas. Poisson and negative binomial regression models were fitted to intersection crash data from Georgia, California, and Michigan. To assess the suitability of the models, several goodness-of-fit measures were computed. The statistical models were then used to shed light on the relationships between crash occurrence and the traffic and geometric features of the rural signalized intersections. The results revealed that traffic flow variables significantly affected the overall safety performance of the intersections regardless of intersection type, and that the geometric features of intersections varied across intersection type and also influenced crash type.
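A minimal sketch of the kind of model form commonly used in such macrolevel studies is shown below. The log-linear Poisson mean function and the coefficient values are illustrative assumptions, not the fitted models from the Georgia, California and Michigan data; the Pearson chi-square statistic is one example of the goodness-of-fit measures such studies compute.

```python
import math

def predicted_crashes(major_aadt, minor_aadt, b0=-8.0, b1=0.6, b2=0.4):
    """Expected annual crash frequency at an intersection under a
    typical macrolevel Poisson regression form:
        mu = exp(b0 + b1*ln(F_major) + b2*ln(F_minor))
    where F_major and F_minor are entering traffic volumes (AADT).
    Coefficients here are placeholders for illustration only."""
    return math.exp(b0 + b1 * math.log(major_aadt) + b2 * math.log(minor_aadt))

def pearson_chi_square(observed, expected):
    """Pearson chi-square goodness-of-fit statistic:
    sum over sites of (observed - expected)^2 / expected."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

With positive flow coefficients, predicted crash frequency rises with traffic volume, mirroring the finding that traffic flow variables dominate the safety performance of these intersections.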
Abstract:
To maximise the capacity of the rail line and provide a reliable service for passengers throughout the day, regulation of the train service to maintain steady service headway is essential. In most current metro systems, a train usually starts coasting at a fixed distance from the departed station to achieve service regulation. However, this approach is only effective with respect to a nominal operational condition of the train schedule, not necessarily the current service demand. Moreover, it is not simple to identify the necessary starting point for coasting under the run-time constraints of current service conditions, since train movement is determined by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of classical measures to search for the appropriate coasting point to meet a specified inter-station run time; these can be integrated into the on-board Automatic Train Operation (ATO) system and have the potential for on-line implementation in making a set of coasting command decisions.
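The coasting-point search can be illustrated with a deliberately simplified run-time model (constant cruise speed followed by constant-deceleration coasting) and a bisection search. The dynamics, parameter values and search method below are assumptions chosen for illustration, not the paper's train model or its ATO algorithms.

```python
import math

def run_time(coast_point, length=2000.0, v_cruise=20.0, coast_decel=0.05):
    """Seconds to cover `length` metres: cruise at v_cruise up to
    coast_point, then coast with constant deceleration to the next
    station (braking at the platform is omitted in this toy model)."""
    t_cruise = coast_point / v_cruise
    remaining = length - coast_point
    # Coast phase: solve remaining = v*t - 0.5*r*t^2 for t (smaller root).
    disc = v_cruise ** 2 - 2.0 * coast_decel * remaining
    t_coast = (v_cruise - math.sqrt(disc)) / coast_decel
    return t_cruise + t_coast

def find_coast_point(target_time, lo=0.0, hi=2000.0, tol=1e-6):
    """Bisection search for the coasting point meeting a target
    inter-station run time; coasting later shortens the run time,
    so run_time is monotone in the coasting point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if run_time(mid) > target_time:
            lo = mid  # coasting too early -> run too slow; coast later
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice the run-time function is not a closed-form expression like this but the output of a detailed train movement simulator, which is why the paper's search problem is non-trivial; the monotone-search idea, however, carries over.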
Abstract:
On 12 June 2006, the lights went out in New Zealand’s largest city and major commercial centre, Auckland. Business was disrupted and many thousands of people were inconvenienced. The unscheduled power cut was the latest in a series of electric power problems in New Zealand over the past decade. Attention turned to state-owned enterprise (SOE) Transpower, which was in charge of maintaining and developing New Zealand’s national electricity grid. The problem of 12 June was traced to two shackles in poor condition, small but essential parts of the electricity grid infrastructure. Closer examination of New Zealand’s electricity sector indicated these shackles were merely the tip of a power supply iceberg. Transpower’s Chief Executive, Ralph Craven, was now answerable to the Prime Minister for the issues creating the problems, and for a workable solution to fix them. Craven needed to produce answers that went well beyond the problem of the two faulty shackles. The power crisis had brought to the fore wider issues of roles, responsibilities, and expectations in relation to the supply of electric power in New Zealand. Transpower was contending with these issues on a daily basis; however, the incident on 12 June publicly highlighted the urgent need for solutions that served the stakeholders in this critical industry.