999 results for Alarm processing
Abstract:
Texture enhancement is an important component of image processing, with extensive application in science and engineering. The quality of medical images, quantified using the texture of the images, plays a significant role in the routine diagnosis performed by medical practitioners. Previously, image texture enhancement was performed using classical integer-order differential mask operators. More recently, first-order fractional differential operators have been used to enhance images. Experiments show that the fractional differential not only preserves the low-frequency contour features in the smooth areas of an image, but also nonlinearly enhances the edges and textures corresponding to its high-frequency components. However, while these methods perform well in particular cases, they are not routinely useful across all applications. To this end, we applied the second-order Riesz fractional differential operator to improve upon existing approaches to texture enhancement. Compared with the classical integer-order differential mask operators and other fractional differential operators, our new algorithms provide higher signal-to-noise ratios and hence superior image quality.
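As a rough, hedged illustration of the idea (a Grünwald-Letnikov fractional-difference mask, not the authors' Riesz operator; the order v, mask length and test image are assumptions), a fractional-order mask can be built and convolved with an image like this:

    # Sketch: fractional-order differential mask for texture enhancement.
    # Assumptions: Gruenwald-Letnikov coefficients; order v, mask length and the
    # test image are illustrative. This is NOT the Riesz operator from the paper.
    import numpy as np
    from scipy.ndimage import convolve1d

    def gl_mask(v, length=5):
        """Coefficients c_k = (-1)^k * binom(v, k) of the fractional difference."""
        c = np.zeros(length)
        c[0] = 1.0
        for k in range(1, length):
            c[k] = c[k - 1] * (k - 1 - v) / k   # recurrence for (-1)^k * C(v, k)
        return c

    def enhance(image, v=0.5):
        mask = gl_mask(v)
        d_rows = convolve1d(image.astype(float), mask, axis=0, mode='reflect')
        d_cols = convolve1d(image.astype(float), mask, axis=1, mode='reflect')
        return image + np.abs(d_rows) + np.abs(d_cols)   # add back high-frequency detail

    img = np.random.rand(64, 64)   # placeholder image
    out = enhance(img, v=0.5)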
Abstract:
Organizations make increasing use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online social media data in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the needs of individual customers. We further contribute by tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it operates under severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination.

Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be devised to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras that are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types and so on of the cameras must be chosen such that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
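As a loose illustration of the ground-plane calibration step described above (the point correspondences and coordinates are made-up assumptions, not the dissertation's implementation), each camera could estimate its image-to-ground homography from a handful of broadcast robot positions paired with its own detections of the robot:

    # Sketch: estimate an image-plane -> ground-plane homography from
    # correspondences between robot detections (pixels) and the robot's
    # broadcast global positions (metres). Data are illustrative assumptions.
    import numpy as np
    import cv2

    # (u, v) pixel locations where the robot was detected in the image
    image_pts = np.array([[120, 400], [310, 385], [500, 410], [330, 250]], dtype=np.float32)
    # (x, y) ground-plane coordinates broadcast by the robot at the same instants
    ground_pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.2], [1.0, 2.0]], dtype=np.float32)

    H, _ = cv2.findHomography(image_pts, ground_pts, method=0)  # least-squares DLT

    def image_to_ground(u, v):
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]   # dehomogenise

    print(image_to_ground(330, 250))  # should land near (1.0, 2.0)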
Death of a five-year-old from meningococcal disease in Darwin: a case of unprecedented public alarm
Abstract:
On Saturday, 25 October 1997, a five-year-old boy died in the Intensive Care Unit (ICU) of Royal Darwin Hospital (RDH) from meningococcal disease. While the disease is expected throughout Australia during the late winter and early spring months, and deaths do occur, this case was remarkable in the Northern Territory for the unprecedented public response to media reports of the death.
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks; most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical framework for the problem, together with a trans-dimensional simulated annealing algorithm to deal with it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that it offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach to handle large-scale problems, and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
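A heavily simplified sketch of this kind of trans-dimensional annealing move set is given below; the coverage model, move probabilities and cooling schedule are illustrative assumptions, not the algorithm reported in the paper:

    # Sketch: trans-dimensional simulated annealing over camera configurations.
    # The state is a variable-length list of camera positions; birth/death moves
    # change its dimension. Utility model and schedule are illustrative assumptions.
    import math, random

    targets = [(random.random(), random.random()) for _ in range(200)]  # points to cover
    RADIUS, CAMERA_COST = 0.25, 5.0

    def utility(cams):
        covered = sum(any((tx - cx) ** 2 + (ty - cy) ** 2 <= RADIUS ** 2 for cx, cy in cams)
                      for tx, ty in targets)
        return covered - CAMERA_COST * len(cams)   # coverage minus per-camera cost

    cams = [(random.random(), random.random())]
    temp = 10.0
    for step in range(20000):
        proposal = list(cams)
        move = random.random()
        if move < 0.2:                                  # birth: add a camera (dimension +1)
            proposal.append((random.random(), random.random()))
        elif move < 0.4 and len(proposal) > 1:          # death: remove a camera (dimension -1)
            proposal.pop(random.randrange(len(proposal)))
        else:                                           # perturb one camera in place
            i = random.randrange(len(proposal))
            x, y = proposal[i]
            proposal[i] = (x + random.gauss(0, 0.05), y + random.gauss(0, 0.05))
        delta = utility(proposal) - utility(cams)
        if delta > 0 or random.random() < math.exp(delta / temp):
            cams = proposal
        temp *= 0.9995                                  # geometric cooling

    print(len(cams), utility(cams))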
Abstract:
The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industry. Indeed, the signal processing techniques developed to analyse stationary data are either not applicable or suffer a loss of effectiveness when applied to signals acquired in transient conditions. In this paper, a suitable and original signal processing tool (named EEMED), which can be used for mechanical component diagnostics in any operating condition and at any noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD), Minimum Entropy Deconvolution (MED) and the analytical approach based on the Hilbert transform. The proposed tool supplies diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in the traction equipment of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now been the benchmark for bearing diagnostics.
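For context, here is a minimal sketch of the envelope-analysis baseline that the proposed EEMED tool is compared against; the synthetic signal, band-pass range and fault frequency are illustrative assumptions, not the authors' code or data:

    # Sketch: envelope analysis of a bearing vibration signal via the Hilbert transform.
    # The synthetic signal, band-pass range and fault frequency are assumptions.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 20000.0                      # sampling frequency [Hz]
    t = np.arange(0, 1.0, 1.0 / fs)
    fault_freq = 87.0                 # assumed bearing fault frequency [Hz]
    carrier = np.sin(2 * np.pi * 3000 * t)                     # structural resonance
    impacts = (np.sin(2 * np.pi * fault_freq * t) > 0.995).astype(float)
    signal = carrier * np.convolve(impacts, np.exp(-np.arange(200) / 20.0), 'same')
    signal += 0.2 * np.random.randn(len(t))                    # measurement noise

    b, a = butter(4, [2000 / (fs / 2), 4000 / (fs / 2)], btype='band')
    band = filtfilt(b, a, signal)             # isolate the resonance band
    envelope = np.abs(hilbert(band))          # magnitude of the analytic signal
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), 1.0 / fs)
    print(freqs[np.argmax(spectrum[1:]) + 1])  # expected near fault_freq or a harmonic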
Abstract:
The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often not applicable, or suffer a loss of effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition, Minimum Entropy Deconvolution and the analytical approach based on the Hilbert transform. The tool has been developed to detect localized faults on bearings in the traction systems of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.
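The other baseline mentioned here, spectral kurtosis, can be sketched as the kurtosis of each STFT frequency band over time; the window length and the synthetic test signal below are illustrative assumptions:

    # Sketch: spectral kurtosis as the kurtosis of each STFT band over time,
    # used here to pick the most impulsive band. Test signal and window length
    # are illustrative assumptions.
    import numpy as np
    from scipy.signal import stft
    from scipy.stats import kurtosis

    fs = 20000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    signal = 0.3 * np.random.randn(len(t))     # broadband noise
    signal[::230] += 5.0                       # repetitive impacts (roughly 87 Hz)
    signal = np.convolve(signal, np.exp(-np.arange(100) / 10.0), 'same')

    freqs, times, Z = stft(signal, fs=fs, nperseg=256)
    sk = kurtosis(np.abs(Z), axis=1, fisher=True)   # kurtosis of each band over time
    print(f"most impulsive band around {freqs[np.argmax(sk)]:.0f} Hz")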
Abstract:
Incorporating a learner’s level of cognitive processing into Learning Analytics presents opportunities for obtaining rich data on the learning process. We propose a framework called COPA that provides a basis for mapping levels of cognitive operation into a learning analytics system. We utilise Bloom’s taxonomy, a theoretically respected conceptualisation of cognitive processing, and apply it in a flexible structure that can be implemented incrementally and with varying degrees of complexity within an educational organisation. We outline how the framework is applied, along with its key benefits and limitations. Finally, we apply COPA to a university undergraduate unit and demonstrate its utility in identifying key missing elements in the structure of the course.
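A toy illustration of what mapping learning-analytics events onto Bloom's levels might look like in code; the event names and level assignments are made-up assumptions, not the COPA specification:

    # Sketch: tagging learning-analytics events with Bloom's taxonomy levels.
    # The event-to-level mapping is a hypothetical assumption for illustration,
    # not the COPA framework's actual mapping.
    from collections import Counter

    BLOOM_LEVELS = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

    EVENT_TO_LEVEL = {                 # hypothetical event types from an LMS log
        "view_lecture_slides": "remember",
        "post_forum_summary": "understand",
        "submit_quiz": "apply",
        "annotate_case_study": "analyse",
        "peer_review": "evaluate",
        "upload_project": "create",
    }

    def cognitive_profile(event_log):
        """Count how often a learner operates at each Bloom level."""
        counts = Counter(EVENT_TO_LEVEL.get(e, "remember") for e in event_log)
        return {level: counts.get(level, 0) for level in BLOOM_LEVELS}

    log = ["view_lecture_slides", "submit_quiz", "peer_review", "submit_quiz"]
    print(cognitive_profile(log))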
Abstract:
Sugar cane processing sites are characterised by high sugar/hemicellulose levels, available moisture and warm conditions, and are relatively unexplored, unique microbial environments. The PhyloChip microarray was used to investigate bacterial diversity and community composition in three Australian sugar cane processing plants. These ecosystems were highly complex and dominated by four main phyla: Firmicutes (the most dominant), followed by Proteobacteria, Bacteroidetes and Chloroflexi. Significant variation (p < 0.05) in community structure occurred between samples collected from ‘floor dump sediment’, ‘cooling tower water’ and ‘bagasse leachate’. Many bacterial classes contributed to these differences; however, most were of low numerical abundance. Separation in community composition was also linked to classes of Firmicutes, particularly Bacillales, Lactobacillales and Clostridiales, whose dominance is likely linked to their physiology as ‘lactic acid bacteria’ capable of fermenting the sugars present. This process may help displace other bacterial taxa, providing a competitive advantage for Firmicutes bacteria.
Abstract:
Using Gray and McNaughton’s revised Reinforcement Sensitivity Theory (RST), this study investigated the extent to which the Behavioural Approach System (BAS) and the Fight-Flight-Freeze System (FFFS) influence the processing of gain-framed and loss-framed road safety messages and subsequent message acceptance. It was predicted that stronger BAS sensitivity and stronger FFFS sensitivity would be associated with greater processing and acceptance of the gain-framed and loss-framed messages, respectively. Young drivers (N = 80, aged 17–25 years) viewed one of four road safety messages and completed a lexical decision task to assess message processing. Both self-report (e.g., the Corr-Cooper RST-PQ) and behavioural measures (i.e., the CARROT and Q-Task) were used to assess BAS and FFFS traits. Message acceptance was measured via self-report ratings of message effectiveness, behavioural intentions, attitudes and subsequent driving behaviour. The results are discussed in the context of the effect that differences in reward and punishment sensitivities may have on message processing and message acceptance.
Abstract:
The introduction of chalcone synthase A transgenes into petunia plants can result in degradation of chalcone synthase A RNAs and loss of chalcone synthase, a process called cosuppression or post-transcriptional gene silencing. Here we show that this RNA degradation is associated with changes in pre-mRNA processing, i.e. loss of tissue specificity in transcript cleavage patterns, accumulation of unspliced molecules, and use of template-specific secondary poly(A) sites. These changes can also be observed, at a lower level, in leaves but not flowers of non-transgenic petunias. Based on this, a model is presented for how transgenes may disturb the carefully evolved, developmentally controlled post-transcriptional regulation of chalcone synthase gene expression by influencing the survival rate of the endogenous mRNAs as well as their own.
Abstract:
We have previously reported that concanavalin A (ConA)-induced MMP-2 activation involves both transcriptional and non-transcriptional mechanisms. Here we examined the effects of calcium influx on MT1-MMP expression and MMP-2 activation in MDA-MB-231 cells. The calcium ionophore ionomycin caused a dose-dependent inhibition of ConA-induced MMP-2 activation but had no effect on MT1-MMP mRNA levels. However, Western analysis revealed an accumulation of pro-MT1-MMP (63 kDa), indicating that ionomycin blocked the conversion of pro-MT1-MMP to the active 60 kDa form. This suggests that increased calcium levels inhibit the processing of MT1-MMP. This finding may help to elucidate the mechanism(s) that regulate MT1-MMP activation.
Abstract:
There has been a recent rapid expansion in the range of applications of low-temperature plasma processing in Si-based photovoltaic (PV) technologies. The desire to produce Si-based PV materials at an acceptable cost, with consistent performance and reproducibility, has stimulated a large number of major research and research-infrastructure programs, and a rapidly increasing number of publications in the field of low-temperature plasma processing for Si photovoltaics. In this article, we introduce the low-temperature plasma sources used for Si photovoltaic applications and discuss the effects of low-temperature plasma dissociation and deposition on the synthesis of Si-based thin films. We also examine the relevant growth mechanisms and plasma diagnostics, Si thin-film solar cells, Si heterojunction solar cells, and silicon nitride materials for antireflection and surface passivation. Special attention is paid to low-temperature plasma interactions with Si materials, including hydrogen interaction, wafer cleaning, masked or mask-free surface texturization, the direct formation of p-n junctions, and the removal of phosphorus silicate glass or parasitic emitters. The chemical and physical interactions of such plasmas with Si surfaces are analyzed. Several examples of plasma processes and techniques are selected to represent a variety of applications aimed at improving Si-based solar cell performance. © 2014 Elsevier B.V.