134 results for time monitoring
Abstract:
Staphylococcus aureus is a common pathogen that causes a variety of infections including soft tissue infections, impetigo, septicaemia, toxic shock syndrome and scalded skin syndrome. Traditionally, methicillin-resistant Staphylococcus aureus (MRSA) was considered a hospital-acquired (HA) infection. It is now recognised that the frequency of MRSA infections is increasing in the community, and that these infections are not originating from hospital environments. A 2007 report by the Centers for Disease Control and Prevention (CDC) stated that Staphylococcus aureus is the most important cause of serious and fatal infections in the USA. Community-acquired MRSA (CA-MRSA) strains are genetically diverse and distinct, meaning they can be identified and tracked by genotyping. Genotyping of MRSA using single nucleotide polymorphisms (SNPs) is a rapid and robust method for monitoring MRSA, specifically the dissemination of ST93 (Queensland clone) in the community. It has been shown that a large proportion of CA-MRSA infections in Queensland and New South Wales are caused by ST93. The rationale for this project was that SNP analysis of MLST genes is a rapid and cost-effective method for genotyping and monitoring MRSA dissemination in the community. In this study, 16 different sequence types (STs) were identified, with 41% of isolates identified as ST93, making it the predominant clone. Males and females were infected equally, with an average patient age of 45 years. Phenotypically, all of the ST93 isolates had an identical antimicrobial resistance pattern: they were resistant to the β-lactams penicillin, flu(di)cloxacillin and cephalothin, but sensitive to all other antibiotics tested. Virulence factors play an important role in allowing S. aureus to cause disease by colonising, replicating in and damaging the host. One virulence factor of particular interest is the toxin Panton-Valentine leukocidin (PVL), which is composed of two separate proteins encoded by two adjacent genes. PVL-positive CA-MRSA strains have been shown to cause recurrent, chronic or severe skin and soft tissue infections. It is therefore important that PVL-positive CA-MRSA is genotyped and tracked, especially now that CA-MRSA infections are more prevalent than HA-MRSA infections and are deemed endemic in Australia. In this study, 98% of all isolates tested positive for the PVL toxin gene, showing that PVL is present in many different community-based STs, not just ST93 (all of whose isolates were PVL positive). With this toxin becoming entrenched in CA-MRSA, genotyping would provide more accurate data and a way of tracking its dissemination. The PVL gene can be subtyped using an allele-specific real-time PCR (RT-PCR) followed by high-resolution melt analysis, which allows the identification of PVL subtypes within the CA-MRSA population and the tracking of these clones in the community.
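The high-resolution melt step lends itself to a brief illustration. Below is a minimal sketch (Python) of typical HRM post-processing: differentiate the fluorescence-temperature curve, find the peak melting temperature, and call the subtype by the nearest reference Tm. The reference temperatures and subtype names are hypothetical placeholders, not the assay values used in this study.

```python
import numpy as np

# Hypothetical reference melting temperatures (degrees C) for two PVL
# allele groups; a real assay would calibrate these against known controls.
REFERENCE_TM = {"PVL subtype R": 78.4, "PVL subtype H": 80.1}

def call_subtype(temps, fluorescence, references=REFERENCE_TM):
    """Assign a PVL subtype from a high-resolution melt curve.

    temps: 1-D array of temperatures (degrees C), ascending.
    fluorescence: 1-D array of fluorescence readings at those temperatures.
    """
    # Melting is where fluorescence drops fastest, i.e. the peak of -dF/dT.
    dF_dT = np.gradient(fluorescence, temps)
    tm = temps[np.argmin(dF_dT)]  # peak of the negative derivative
    # Call the subtype whose reference Tm is closest to the observed peak.
    subtype = min(references, key=lambda k: abs(references[k] - tm))
    return subtype, tm

# Example with a synthetic sigmoid melt curve centred at 78.5 C:
t = np.linspace(70, 90, 401)
f = 1.0 / (1.0 + np.exp((t - 78.5) / 0.4))
print(call_subtype(t, f))  # -> ('PVL subtype R', ~78.5)
```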
Abstract:
The authors are currently engaged in two projects to improve human-computer interaction (HCI) designs that can help conserve resources. The projects explore motivation and persuasion strategies relevant to ubiquitous computing systems that bring real-time consumption data into the homes and hands of residents in Brisbane, Australia. The first project seeks to increase understanding among university staff of the tangible and negative effects that excessive printing has on the workplace and local environment. The second project seeks to shift attitudes toward domestic energy conservation through software and hardware that monitor real-time, in situ electricity consumption in homes across Queensland. The insights drawn from these projects will help develop resource consumption user archetypes, providing a framework linking people to differing interface design requirements.
Abstract:
Managing the sustainability of urban infrastructure requires regular health monitoring of key infrastructure such as bridges. The process of structural health monitoring involves monitoring a structure over a period of time using appropriate sensors, extracting damage-sensitive features from the measurements made by the sensors, and analysing these features to determine the current state of the structure. Various techniques are available for structural health monitoring, and acoustic emission is one technique that is finding increasing use in the monitoring of civil infrastructure such as bridges. The acoustic emission technique is based on recording the stress waves generated by the rapid release of energy inside a material, followed by analysis of the recorded signals to locate and identify the source of emission and assess its severity. This chapter first provides a brief background of the acoustic emission technique and the process of source localization. Results from laboratory experiments conducted to explore several aspects of the source localization process are also presented. The findings from the study can be expected to enhance knowledge of the acoustic emission process and to aid the development of effective bridge diagnostic systems.
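Source localization in its simplest, one-dimensional form follows directly from arrival-time differences. The sketch below (Python) locates a source on a line between two sensors from the difference in first-arrival times; the sensor spacing and wave speed are illustrative values, not the chapter's experimental setup.

```python
def locate_source_1d(t1, t2, sensor_spacing, wave_speed):
    """1-D acoustic emission source localization between two sensors.

    t1, t2: first-arrival times (s) at sensors placed at x=0 and
            x=sensor_spacing (m); wave_speed in m/s.
    Returns the estimated source position x (m), from
        t2 - t1 = (sensor_spacing - 2*x) / wave_speed.
    """
    dt = t2 - t1
    x = (sensor_spacing - wave_speed * dt) / 2.0
    if not 0.0 <= x <= sensor_spacing:
        raise ValueError("source falls outside the sensor pair")
    return x

# Example: steel member, wave speed ~5000 m/s, sensors 1 m apart.
# A source 0.3 m from sensor 1 arrives 80 microseconds earlier there.
print(locate_source_1d(60e-6, 140e-6, 1.0, 5000.0))  # -> 0.3
```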
Abstract:
Background: A number of epidemiological studies have been conducted to investigate the adverse effects of air pollution on mortality and morbidity. Hypertension is the most important risk factor for cardiovascular mortality. However, few previous studies have examined the relationship between gaseous air pollution and morbidity for hypertension. Methods: Daily data on emergency hospital visits (EHVs) for hypertension were collected from the Peking University Third Hospital. Daily data on gaseous air pollutants (sulfur dioxide (SO2) and nitrogen dioxide (NO2)) and particulate matter less than 10 μm in aerodynamic diameter (PM10) were collected from the Beijing Municipal Environmental Monitoring Center. A time-stratified case-crossover design was used to evaluate the relationship between urban gaseous air pollution and EHVs for hypertension, controlling for temperature and relative humidity. Results: In the single-pollutant models, a 10 μg/m3 increase in SO2 or NO2 was significantly associated with EHVs for hypertension. The odds ratios (ORs) were 1.037 (95% confidence interval (CI): 1.004-1.071) for SO2 at lag 0 days, and 1.101 (95% CI: 1.038-1.168) for NO2 at lag 3 days. After controlling for PM10, the ORs associated with SO2 and NO2 were 1.025 (95% CI: 0.987-1.065) and 1.114 (95% CI: 1.037-1.195), respectively. Conclusion: Elevated urban gaseous air pollution was associated with increased EHVs for hypertension in Beijing, China.
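The reported per-10 μg/m3 odds ratios follow mechanically from a fitted conditional logistic regression coefficient. A minimal sketch of that conversion (Python; the coefficient and standard error below are illustrative numbers, not the study's estimates):

```python
import math

def or_per_increment(beta, se, increment=10.0, z=1.96):
    """Convert a log-odds coefficient per 1 ug/m3 into an odds ratio
    (with 95% CI) per `increment` ug/m3, as commonly reported in
    time-stratified case-crossover studies."""
    point = math.exp(increment * beta)
    lo = math.exp(increment * (beta - z * se))
    hi = math.exp(increment * (beta + z * se))
    return point, lo, hi

# Illustrative only: beta = 0.0036 per ug/m3 gives OR ~ 1.037 per 10 ug/m3.
print(or_per_increment(0.0036, 0.0016))
```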
Abstract:
This thesis employs the theoretical fusion of disciplinary knowledge, interlacing an analysis from both functional and interpretive frameworks, and applies these paradigms to three concepts—organisational identity, the balanced scorecard performance measurement system, and control. As an applied thesis, this study highlights how particular public sector organisations are using a range of multi-disciplinary forms of knowledge constructed for their needs to achieve practical outcomes. The practical evidence of this study is not bound by a single disciplinary field, or by the concerns raised by academics about the rigorous application of academic knowledge. The study’s value lies in its ability to explore how current communication and accounting knowledge is being used for practical purposes in organisational life. The main focus of this thesis is on identities in an organisational communication context. In exploring the theoretical and practical challenges, the research questions for this thesis were formulated as: 1. Is it possible to effectively control identities in organisations by the use of an integrated performance measurement system—the balanced scorecard—and if so, how? 2. What is the relationship between identities and an integrated performance measurement system—the balanced scorecard—in the identity construction process? Identities in the organisational context have been extensively discussed in the graphic design, corporate communication and marketing, strategic management, organisational behaviour, and social psychology literatures. Corporate identity is the self-presentation of the personality of an organisation (Van Riel, 1995; Van Riel & Balmer, 1997), and organisational identity is the statement of central characteristics described by members (Albert & Whetten, 2003). In this study, identity management is positioned as a strategically complex task, embracing not only logo and name, but also multiple dimensions, levels and facets of organisational life. Responding to the collaborative efforts of researchers and practitioners in identity conceptualisation and methodological approaches, this dissertation argues that analysis can be achieved through the use of an integrated framework of identity products, patternings and processes (Cornelissen, Haslam, & Balmer, 2007), transforming conceptualisations of corporate identity, organisational identity and identification studies. Likewise, the performance measurement literature from the accounting field now emphasises the importance of ‘soft’ non-financial measures in gauging performance—potentially allowing the monitoring and regulation of ‘collective’ identities (Cornelissen et al., 2007). The balanced scorecard (BSC) (Kaplan & Norton, 1996a), as the selected integrated performance measurement system, quantifies organisational performance under the four perspectives of finance, customer, internal process, and learning and growth. Broadening the traditional performance measurement boundary, the BSC transforms how organisations perceive themselves (Vaivio, 2007). The rhetorical and communicative value of the BSC has also been emphasised in organisational self-understanding (Malina, Nørreklit, & Selto, 2007; Malmi, 2001; Nørreklit, 2000, 2003). Thus, this study establishes a theoretical connection between the controlling effects of the BSC and organisational identity construction. Common to both literatures, the aspects of control became the focus of this dissertation, as ‘the exercise or act of achieving a goal’ (Tompkins & Cheney, 1985, p. 180).
This study explores not only traditional technical and bureaucratic control (Edwards, 1981), but also concertive control (Tompkins & Cheney, 1985), shifting the locus of control to employees who make their own decisions towards desired organisational premises (Simon, 1976). The controlling effects on collective identities are explored through the lens of the rhetorical frames mobilised through the power of organisational enthymemes (Tompkins & Cheney, 1985) and identification processes (Ashforth, Harrison, & Corley, 2008). In operationalising the concept of control, two guiding questions were developed to support the research questions: 1.1 How does the use of the balanced scorecard monitor identities in public sector organisations? 1.2 How does the use of the balanced scorecard regulate identities in public sector organisations? This study adopts qualitative multiple case studies using ethnographic techniques. Data were gathered from interviews with 41 managers, organisational documents, and participant observation from 2003 to 2008, to inform an understanding of organisational practices and members’ perceptions in the five cases of two public sector organisations in Australia. Drawing on the functional and interpretive paradigms, the effective design and use of the systems, as well as the understanding of shared meanings of identities and identifications, are simultaneously recognised. The analytical structure guided by the ‘bracketing’ (Lewis & Grimes, 1999) and ‘interplay’ strategies (Schultz & Hatch, 1996) preserved, connected and contrasted the unique findings from the multiple paradigms. The ‘temporal bracketing’ strategy (Langley, 1999) from the process view supports the comparative exploration of the analysis over the periods under study. The findings suggest that the effective use of the BSC can monitor and regulate identity products, patternings and processes. In monitoring identities, the flexible BSC framework allowed the case study organisations to monitor various aspects of finance, customer, improvement and organisational capability that included identity dimensions. Such inclusion legitimises identity management as organisational performance. In regulating identities, the use of the BSC created a mechanism to form collective identities by articulating various perspectives and causal linkages, and through the cascading and alignment of multiple scorecards. The BSC—directly reflecting organisationally valued premises and legitimised symbols—acted as an identity product of communication, visual symbols and behavioural guidance. The selective promotion of the BSC measures filtered organisational focus to shape unique identity multiplicity and characteristics within the cases. Further, the use of the BSC facilitated the assimilation of multiple identities by controlling the direction and strength of identifications, engaging different groups of members. More specifically, the tight authority of the BSC framework and systems is explained by both technical and bureaucratic controls, while subtle communication of organisational premises and information filtering is achieved through concertive control. This study confirms that these macro top-down controls mediated the sensebreaking and sensegiving process of organisational identification, supporting research by Ashforth, Harrison and Corley (2008).
This study pays attention to members’ power of self-regulation, filling minor premises of the derived logic of their organisation through the playing out of organisational enthymemes (Tompkins & Cheney, 1985). Members are then encouraged to make their own decisions towards the organisational premises embedded in the BSC, through micro bottom-up identification processes including: enacting organisationally valued identities; sensemaking; and the construction of identity narratives aligned with those organisationally valued premises. Within the process, the self-referential effect of communication encouraged members to believe the organisational messages embedded in the BSC, transforming collective and individual identities. Therefore, communication through the use of the BSC sustained the self-production of normative performance mechanisms, established meanings of identities, and enabled members’ self-regulation in identity construction. Further, this research establishes the relationship between identity and the use of the BSC in terms of identity multiplicity and attributes. The BSC framework constrained and enabled case study organisations and members to monitor and regulate identity multiplicity across a number of dimensions, levels and facets. The use of the BSC constantly heightened the identity attributes of distinctiveness, relativity, visibility, fluidity and manageability in identity construction over time. Overall, this research explains the reciprocal controlling relationships of multiple structures in organisations to achieve a goal. It bridges the gap between corporate and organisational identity theories by adopting Cornelissen, Haslam and Balmer’s (2007) integrated identity framework, and reduces the gap in understanding between identity and performance measurement studies. A parallel review of the process of monitoring and regulating identities from both literatures synthesised their theoretical strengths to conceptualise and operationalise identities. This study extends the discussion on positioning identity, culture, commitment, and image and reputation measures in integrated performance measurement systems as organisational capital. Further, this study applies understanding of the multiple forms of control (Edwards, 1979; Tompkins & Cheney, 1985), emphasising the power of organisational members in identification processes, using the notion of rhetorical organisational enthymemes. This highlights the value of the collaborative theoretical power of identity, communication and performance measurement frameworks. These case studies provide practical insights into the public sector, where existing bureaucracy and desired organisational identity directions compete within a large organisational setting. Further research on personal identity and simple control in organisations that fully cascade the BSC down to individual members would provide enriched data. The extended application of the conceptual framework to other public and private sector organisations, with a longitudinal view, will also contribute to further theory building.
Abstract:
Aims: Driving Under the Influence (DUI) enforcement can serve as a broad screening mechanism for alcohol and other drug problems. The current response to DUI is focused on using mechanical means to prevent inebriated persons from driving, with little attention to the underlying substance abuse problems. Methods: This is a secondary analysis of an administrative dataset of over 345,000 individuals who entered Texas substance abuse treatment between 2005 and 2008. Of these, 36,372 were either on DUI probation, referred to treatment by probation, or had a DUI arrest in the past year. The DUI offenders were compared on demographic characteristics, substance use patterns, and levels of impairment with those who were not DUI offenders, and first DUI offenders were compared with those with more than one past-year offense. T-tests and chi-square tests were used to determine significance. Results: DUI offenders were more likely to be employed, to have a problem with alcohol, to report more past-year arrests for any offense, to be older, and to have used alcohol and drugs longer than the non-DUI clients, who reported higher ASI scores and were more likely to use daily. Those with one past-year DUI arrest were more likely to have problems with drugs other than alcohol and, based on their ASI scores and daily use, were less impaired than those with two or more arrests. Non-DUI clients reported higher levels of mood disorders than DUI clients, but there was no difference in diagnoses of anxiety. Similar patterns were found between those with one and those with multiple DUI arrests. Conclusion: Although first-time DUI offenders were not as impaired as non-DUI clients, their levels of impairment were sufficient to warrant treatment. Screening and brief intervention at arrest for all DUI offenders, and treatment in combination with abstinence monitoring, could decrease future recidivism.
Abstract:
Ocean processes are dynamic and complex events that occur on multiple spatial and temporal scales. To obtain a synoptic view of such events, ocean scientists focus on the collection of long-term time series data sets. Generally, these time series measurements are continually provided in real or near-real time by fixed sensors, e.g., buoys and moorings. In recent years, mobile sensor platforms, e.g., Autonomous Underwater Vehicles, have increasingly been utilized to enable dynamic acquisition of time series data sets. However, these mobile assets are not utilized to their full capabilities, generally only performing repeated transects or user-defined patrolling loops. Here, we provide an extension to repeated patrolling of a designated area. Our algorithms provide the ability to adapt a standard mission to increase information gain in areas of greater scientific interest. By implementing a velocity control optimization along the predefined path, we are able to increase or decrease the spatiotemporal sampling resolution to satisfy the sampling requirements necessary to properly resolve an oceanic phenomenon. We present a path planning algorithm that defines a sampling path optimized for repeatability. This is followed by the derivation of a velocity controller that defines how the vehicle traverses the given path. The application of these tools is motivated by an ongoing research effort to understand the oceanic region off the coast of Los Angeles, California. The computed paths and velocities are implemented on autonomous vehicles for data collection during sea trials. Results from this data collection are presented and compared for analysis of the proposed technique.
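The velocity-control idea can be illustrated with a small allocation problem: slow down where interest is high, speed up elsewhere, while still finishing a lap within a fixed time budget. A minimal sketch (Python), assuming a discretized path with per-segment lengths and scalar "interest" weights; it is not the paper's actual optimization:

```python
import numpy as np

def allocate_speeds(seg_lengths, interest, total_time, v_min=0.5, v_max=2.0):
    """Assign a speed to each path segment so high-interest areas are
    sampled densely (slow traversal) while the lap fits the time budget.

    seg_lengths: segment lengths (m); interest: positive weights, higher
    means more scientific interest; total_time: lap time budget (s).
    """
    seg_lengths = np.asarray(seg_lengths, dtype=float)
    # Start with speed inversely proportional to interest ...
    v = 1.0 / np.asarray(interest, dtype=float)
    # ... rescale so the lap takes exactly total_time ...
    v *= np.sum(seg_lengths / v) / total_time
    # ... then clip to the vehicle's feasible speed envelope
    # (clipping may leave some slack in the time budget).
    return np.clip(v, v_min, v_max)

lengths = np.array([100.0, 50.0, 100.0])   # m
weights = np.array([1.0, 4.0, 1.0])        # hotspot in the middle segment
print(allocate_speeds(lengths, weights, total_time=250.0))
# -> ~[1.6, 0.5, 1.6] m/s: the vehicle lingers over the hotspot
```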
Abstract:
Autonomous Underwater Vehicles (AUVs) are revolutionizing oceanography through their versatility, autonomy and endurance. However, they are still an underutilized technology. For coastal operations, the ability to track a certain feature is of interest to ocean scientists. Adaptive and predictive path planning requires frequent communication with significant data transfer. Currently, most AUVs rely on satellite phones as their primary communication link, a protocol that is expensive and slow. To reduce communication costs and provide adequate data transfer rates, we present a hardware modification along with a software system that provides an alternative, robust, disruption-tolerant communications framework enabling cost-effective glider operation in coastal regions. The framework is specifically designed to address multi-sensor deployments. We provide a system overview and present testing and coverage data for the network. Additionally, we include an application of ocean-model-driven trajectory design, which can benefit from the use of this network and communication system. Simulation and implementation results are presented for single and multiple vehicle deployments. The presented combination of infrastructure, software development and deployment experience brings us closer to the goal of providing a reliable and cost-effective data transfer framework to enable real-time, optimal trajectory design, based on ocean model predictions, to gather in situ measurements of interesting and evolving ocean features and phenomena.
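The disruption-tolerant part of such a framework reduces, at its core, to store-and-forward: data products are queued locally and flushed opportunistically whenever a link happens to be up. A minimal sketch (Python) of that pattern, with hypothetical `link_up()`/`send()` callbacks standing in for whatever radio or modem the vehicle carries; it is not the authors' actual software stack:

```python
import collections

class StoreAndForward:
    """Queue science/telemetry messages while disconnected; flush
    opportunistically when the (intermittent) link comes back up."""

    def __init__(self, link_up, send):
        self.link_up = link_up      # () -> bool : is the link usable now?
        self.send = send            # (bytes) -> bool : True on success
        self.queue = collections.deque()

    def submit(self, message: bytes):
        self.queue.append(message)  # never block the sampling loop

    def pump(self):
        """Call periodically; drains the queue while the link holds."""
        while self.queue and self.link_up():
            msg = self.queue[0]
            if self.send(msg):
                self.queue.popleft()    # confirmed, drop our copy
            else:
                break                   # link dropped mid-transfer; retry later
```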
Abstract:
Autonomous underwater vehicles (AUVs) are increasingly used in both military and civilian applications. These vehicles are limited mainly by the intelligence we give them and the life of their batteries. Active research seeks to extend vehicle autonomy in both respects. Our intent is to give the vehicle the ability to adapt its behavior under different mission scenarios (emergency maneuvers versus long-duration monitoring). This involves a search for optimal trajectories minimizing time, energy or a combination of both. Despite some success stories in AUV control, optimal control is still a very underdeveloped area. Adaptive control research has contributed to cost minimization problems, but vehicle design has been the driving force for advancement in optimal control research. We look to advance the development of optimal control theory by expanding the motions along which AUVs travel. Traditionally, AUVs have taken the role of performing long data-gathering missions in the open ocean with little to no interaction with their surroundings (MacIver et al., 2004). The AUV is used to find the shipwreck, and the remotely operated vehicle (ROV) handles the exploration up close. AUV mission profiles of this sort are best suited to a torpedo-shaped AUV (Bertram and Alvarez, 2006), since straight lines and minimal (0-30 deg) angular displacements are all that are necessary to perform the transects and grid lines for these applications. However, the torpedo-shaped AUV lacks the ability to perform low-speed maneuvers in cluttered environments, such as autonomous exploration close to the seabed and around obstacles (MacIver et al., 2004). Thus, we consider an agile vehicle capable of movement in six degrees of freedom without any preference of direction.
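The time-energy trade-off mentioned above is commonly posed as a single weighted cost functional. The following is a sketch of one standard formulation from the optimal control literature, not necessarily the exact criterion used by the authors:

```latex
% Weighted time-energy cost over a trajectory with controls u(t) on [0, T]:
% epsilon = 1 recovers the time-minimal problem, epsilon = 0 the
% energy-minimal one, and intermediate values blend the two objectives.
J(u) = \int_{0}^{T} \left( \varepsilon + (1 - \varepsilon)\, \lVert u(t) \rVert^{2} \right) dt,
\qquad \varepsilon \in [0, 1]
```

Sweeping the weight over its range traces out the trade-off curve between the fastest trajectory and the most energy-efficient one.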
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is on the level of a few centimeters, and the uncertainty of the vertical component is 1.5-2 times worse than those of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects, and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter that yields an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their “true” values. As a result, the regularization parameter is adaptively computed as the observation geometry varies. The experimental results show that the new method can efficiently alleviate the model’s ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher. Several geoscience applications that require subcentimeter real-time solutions can benefit substantially from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to effective recovery of tropospheric slant path delays in order to establish a 4-D troposphere tomography.
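The regularization step can be made concrete. With design matrix A, weight matrix P and observations l, the regularized estimate replaces the normal equations with (A^T P A + alpha*R) x = A^T P l. The snippet below (Python/NumPy, with an identity regularizer; the paper's actual parameter choice is driven by the geometry-dependent covariance matrix described above, which this sketch does not reproduce) shows the mechanics:

```python
import numpy as np

def regularized_lsq(A, P, l, alpha):
    """Tikhonov-regularized weighted least squares:
    solve (A^T P A + alpha*I) x = A^T P l.

    alpha > 0 stabilizes an ill-conditioned normal matrix at the cost
    of a small, controlled bias (here with an identity regularizer)."""
    N = A.T @ P @ A
    rhs = A.T @ P @ l
    return np.linalg.solve(N + alpha * np.eye(N.shape[0]), rhs)

# alpha = 0 reduces to ordinary weighted least squares; an ill-conditioned
# N (e.g., strong correlation between the height and a zenith-delay
# parameter) makes that solution unstable, which a small alpha tames.
```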
Abstract:
Bridges are valuable assets of every nation. They deteriorate with age and are often subjected to additional loads or different load patterns than those originally designed for. These changes in loads can cause localized distress and may result in bridge failure if not corrected in time. Early detection of damage and appropriate retrofitting will aid in preventing bridge failures. Large amounts of money are spent on bridge maintenance all around the world. A need exists for a reliable technology capable of monitoring the structural health of bridges, thereby ensuring they operate safely and efficiently throughout their intended lives. Monitoring of bridges has traditionally been done by means of visual inspection. Visual inspection alone is not capable of locating and identifying all signs of damage, hence a variety of structural health monitoring (SHM) techniques are now regularly used to monitor performance and to assess the condition of bridges for early damage detection. Acoustic emission (AE) is one technique that is finding increasing use in SHM applications for bridges all around the world. The chapter starts with a brief introduction to structural health monitoring and the techniques commonly used for monitoring purposes. The acoustic emission technique, the wave nature of the AE phenomenon, previous applications, and the limitations and challenges of its use as an SHM technique are also discussed. The scope of the project and the work carried out are then explained, followed by some recommendations for work planned in the future.
Abstract:
One of the main challenges in slow-speed machinery condition monitoring is that the energy generated by an incipient defect is too weak, because of its low impact energy, to be detected by traditional vibration measurements. Acoustic emission (AE) measurement is an alternative, as it can detect crack initiation or rubbing between moving surfaces. However, AE measurement requires a high sampling frequency, and consequently a huge amount of data must be processed. It also requires expensive hardware to capture and store these data, and signal processing techniques to retrieve valuable information on the state of the machine. AE signals have been utilised for early detection of defects in bearings and gears. This paper presents an online condition monitoring (CM) system for slow-speed machinery that attempts to overcome these challenges. The system incorporates signal processing techniques relevant to slow-speed CM, including noise removal to enhance the signal-to-noise ratio and peak-holding down-sampling to reduce the burden of massive data handling. The analysis software runs in the LabVIEW environment, which enables online remote control of data acquisition, real-time analysis, offline analysis and diagnostic trending. The system has been fully implemented on a site machine and is contributing significantly to improved maintenance efficiency and safer, more reliable operation.
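Peak-holding down-sampling is simple to state: divide the raw AE record into blocks and keep only each block's peak, so short burst events survive while the data rate drops by the block factor. A minimal sketch (Python/NumPy; the block size and signal are illustrative, not the deployed system's parameters):

```python
import numpy as np

def peak_hold_downsample(signal, factor):
    """Reduce an AE record by `factor`, keeping the peak absolute
    amplitude of each block so short transient bursts are preserved
    (a plain decimator would likely skip over them)."""
    signal = np.asarray(signal, dtype=float)
    n = (len(signal) // factor) * factor          # drop the ragged tail
    blocks = signal[:n].reshape(-1, factor)
    idx = np.argmax(np.abs(blocks), axis=1)       # peak location per block
    return blocks[np.arange(blocks.shape[0]), idx]  # keep signed peaks

# Example: a 1 MHz record reduced 1000x to 1 kHz; a 5-sample synthetic
# burst at t = 0.5 s still appears in the down-sampled output.
fs = 1_000_000
x = 0.01 * np.random.randn(fs)
x[500_000:500_005] += 5.0                          # synthetic AE burst
y = peak_hold_downsample(x, 1000)
print(len(y), y[500])                              # 1000 samples; burst retained
```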
Abstract:
This is the first outdoor test of small-scale dye-sensitized solar cells (DSCs) powering a standalone nanosensor node. A solar cell test station (SCTS) has been developed using standard DSCs to power a gas nanosensor, a radio transmitter, and the control electronics (CE) for battery charging. The station is remotely monitored through a wired (Ethernet cable) or wireless (radio transmitter) connection in order to evaluate in real time the performance of the solar cells powering a nanosensor and a transmitter under different weather conditions. We analyze trends in energy conversion efficiency after 60 days of operation. The module, with an active surface of 408 cm2, produces enough energy to power a gas nanosensor and a radio transmitter during the day and part of the night. Also, by using a variable programmable load, we keep the system working at the maximum power point (MPP), quantifying the total energy generated and stored in a battery. Although this technology is at an early stage of development, these experiments provide useful data for future outdoor applications such as nanosensor network nodes.
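Holding an array at its maximum power point with a programmable load is commonly done with a perturb-and-observe loop: nudge the load, keep the nudge direction if power rose, reverse it if power fell. A minimal sketch (Python) of that logic, with hypothetical `read_voltage`, `read_current` and `set_load` functions standing in for the station's instruments; the abstract does not specify which MPP-tracking algorithm the control electronics use:

```python
def perturb_and_observe(read_voltage, read_current, set_load,
                        r_init=50.0, step=1.0, iters=1000):
    """Track the maximum power point by perturbing a programmable
    load resistance and keeping whichever direction increases power."""
    r, direction = r_init, +1
    set_load(r)
    last_power = read_voltage() * read_current()
    for _ in range(iters):
        r = max(0.1, r + direction * step)   # perturb the load
        set_load(r)
        power = read_voltage() * read_current()
        if power < last_power:               # got worse: reverse direction
            direction = -direction
        last_power = power
    return r                                  # load value near the MPP
```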
Abstract:
Suburbanisation has been a major phenomenon internationally in recent decades. Suburb-to-suburb routes are now the most widespread road journeys, and this has increased the distances travelled, particularly on faster suburban highways. The design of highways tends to over-simplify the driving task, which can result in decreased alertness. Driving behaviour is consequently impaired, and drivers are then more likely to be involved in road crashes. This is particularly dangerous on highways, where the speed limit is high. While effective countermeasures to this decrement in alertness do not currently exist, the development of in-vehicle sensors opens avenues for monitoring driving behaviour in real time. The aim of this study is to evaluate in real time the level of alertness of the driver through surrogate measures that can be collected from in-vehicle sensors. Slow EEG activity is used as a reference to evaluate the driver's alertness. Data were collected in a driving simulator instrumented with an eye-tracking system, a heart rate monitor and an electrodermal activity device (N=25 participants). Four different types of highways (driving scenarios of 40 minutes each) were implemented through variation of the road design (amount of curves and hills) and the roadside environment (amount of buildings and traffic). We show with neural networks that reduced alertness can be detected in real time with an accuracy of 92% using lane positioning, steering wheel movement, head rotation, blink frequency, heart rate variability and skin conductance level. These results show that it is possible to assess a driver's alertness with surrogate measures. This methodology could be used to warn drivers of their alertness level through an in-vehicle device that monitors drivers' behaviour on highways in real time, and could therefore result in improved road safety.
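A classifier over those six surrogate measures can be prototyped in a few lines. The sketch below (Python/scikit-learn, trained on synthetic stand-in data; the study does not specify its network architecture, so the layer sizes here are placeholders) shows the overall shape of such a model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Six surrogate measures per observation window: lane positioning,
# steering wheel movement, head rotation, blink frequency, heart rate
# variability, skin conductance level. Labels: 1 = reduced alertness
# (reference from slow EEG activity), 0 = alert. Synthetic data only:
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X @ rng.normal(size=6) + 0.5 * rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),   # the sensor channels have very different units
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```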
Abstract:
This paper proposes a novel approach for identifying risks in executable business processes and detecting them at run time. The approach considers risks in all phases of the business process management lifecycle, and is realized via a distributed, sensor-based architecture. At design time, sensors are defined to specify risk conditions which, when fulfilled, are a likely indicator that a fault will occur. Both historical and current execution data can be used to compose such conditions. At run time, each sensor independently notifies a sensor manager when a risk is detected. In turn, the sensor manager interacts with the monitoring component of a process automation suite to present the results to the user, who may take remedial actions. The proposed architecture has been implemented in the YAWL system and its performance has been evaluated in practice.
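The sensor-based architecture can be illustrated with a small sketch: each sensor independently evaluates a risk condition over current and historical execution data and notifies a manager when the condition holds. The Python below is an illustration of that design, not the YAWL implementation or its API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskSensor:
    """A design-time risk definition: a named condition over process
    execution data that, when fulfilled, likely indicates a coming fault."""
    name: str
    condition: Callable[[Dict], bool]   # current + historical execution data

class SensorManager:
    """Run-time side: sensors report independently; the manager forwards
    detected risks to the monitoring component for remedial action."""
    def __init__(self, notify: Callable[[str], None]):
        self.sensors: List[RiskSensor] = []
        self.notify = notify

    def register(self, sensor: RiskSensor):
        self.sensors.append(sensor)

    def poll(self, execution_data: Dict):
        for s in self.sensors:
            if s.condition(execution_data):
                self.notify(f"risk detected: {s.name}")

# Example: flag a case likely to miss its deadline, combining current
# elapsed time with the historical mean duration of similar cases.
mgr = SensorManager(notify=print)
mgr.register(RiskSensor(
    "deadline overrun",
    lambda d: d["elapsed"] > 0.8 * d["historical_mean_duration"],
))
mgr.poll({"elapsed": 90.0, "historical_mean_duration": 100.0})
```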