947 results for input method
Abstract:
This paper proposes a new method for online secondary path modeling in feedback active noise control (ANC) systems. In practical cases the secondary path is usually time varying, so online modeling of the secondary path is required to ensure convergence of the system. In the literature, secondary path estimation is usually performed offline, prior to online modeling, whereas the proposed system needs no offline estimation. The proposed method consists of two parts: a noise controller based on the FxLMS algorithm, and a variable step size (VSS) LMS algorithm used to adapt the modeling filter to the secondary path. To achieve faster convergence and more accurate performance, we stop the VSS-LMS algorithm at the optimum point. Computer simulation results presented in this paper indicate the effectiveness of the proposed method.
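The adaptive modeling step can be illustrated with a toy sketch. This is not the paper's exact algorithm: the Kwong–Johnston-style step-size update, the filter length and all rates below are illustrative assumptions, and the "secondary path" is reduced to a known FIR filter identified from white-noise excitation.

```python
import random

def vss_lms_identify(path, n_iter=20000, alpha=0.97, gamma=1e-4,
                     mu_min=1e-4, mu_max=0.05):
    """Identify an unknown FIR 'secondary path' with a variable
    step size (VSS) LMS filter (Kwong-Johnston style mu update)."""
    L = len(path)
    w = [0.0] * L          # adaptive model of the secondary path
    x = [0.0] * L          # input tap-delay line
    mu = mu_max
    random.seed(1)
    for _ in range(n_iter):
        x = [random.gauss(0, 1)] + x[:-1]          # white excitation
        d = sum(p * xi for p, xi in zip(path, x))  # true path output
        y = sum(wi * xi for wi, xi in zip(w, x))   # model output
        e = d - y                                  # modeling error
        # step size grows with the error power and decays as it shrinks
        mu = min(mu_max, max(mu_min, alpha * mu + gamma * e * e))
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
    return w

true_path = [0.5, -0.3, 0.2, 0.1]
est = vss_lms_identify(true_path)
```

In a real feedback ANC system the modeling filter would run alongside the FxLMS controller, and adaptation would be stopped once the step size (and hence the error) reaches its optimum point, as the abstract describes.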
Abstract:
Visual abnormalities, both at the sensory input and the higher interpretive levels, have been associated with many of the symptoms of schizophrenia. Individuals with schizophrenia typically experience distortions of sensory perception, resulting in perceptual hallucinations and delusions that are related to the observed visual deficits. Disorganised speech, thinking and behaviour are commonly experienced by sufferers of the disorder, and have also been attributed to perceptual disturbances associated with anomalies in visual processing. Compounding these issues are marked deficits in cognitive functioning that are observed in approximately 80% of those with schizophrenia. Cognitive impairments associated with schizophrenia include difficulty with concentration and memory (i.e. working, visual and verbal memory), an impaired ability to process complex information, impaired response inhibition, and deficits in speed of processing and in visual and verbal learning. Deficits in sustained attention or vigilance, poor executive functioning (such as poor reasoning and problem solving), and social cognition are all influenced by impaired visual processing. These symptoms impact on the internal perceptual world of those with schizophrenia, and hamper their ability to navigate their external environment. Visual processing abnormalities in schizophrenia are likely to worsen personal, social and occupational functioning. Binocular rivalry provides a unique opportunity to investigate the processes involved in visual awareness and visual perception. Binocular rivalry is the alternation of perceptual images that occurs when conflicting visual stimuli are presented to each eye in the same retinal location. The observer perceives the opposing images in an alternating fashion, despite the sensory input to each eye remaining constant. Binocular rivalry tasks have been developed to investigate specific parts of the visual system.
The research presented in this Thesis provides an explorative investigation into binocular rivalry in schizophrenia, using the method of Pettigrew and Miller (1998) and comparing individuals with schizophrenia to healthy controls. This method allows manipulations to the spatial and temporal frequency, luminance contrast and chromaticity of the visual stimuli. Manipulations to the rival stimuli affect the rate of binocular rivalry alternations and the time spent perceiving each image (dominance duration). Binocular rivalry rate and dominance durations provide useful measures to investigate aspects of visual neural processing that lead to the perceptual disturbances and cognitive dysfunction attributed to schizophrenia. However, despite this promise, the binocular rivalry phenomenon has not been extensively explored in schizophrenia to date. Following a review of the literature, the research in this Thesis examined individual variation in binocular rivalry. The initial study (Chapter 2) explored the effect of systematically altering the properties of the stimuli (i.e. spatial and temporal frequency, luminance contrast and chromaticity) on binocular rivalry rate and dominance durations in healthy individuals (n=20). The findings showed that altering the stimuli with respect to temporal frequency and luminance contrast significantly affected rate. This is significant as processing of temporal frequency and luminance contrast has consistently been demonstrated to be abnormal in schizophrenia. The current research then explored binocular rivalry in schizophrenia. The primary research question was, "Are binocular rivalry rates and dominance durations recorded in participants with schizophrenia different to those of the controls?"
In this second study, binocular rivalry data that were collected using low- and high-strength binocular rivalry tasks were compared to alternations recorded during a monocular rivalry task, the Necker cube task, to replicate and advance the work of Miller et al. (2003). Participants with schizophrenia (n=20) recorded fewer alternations (i.e. slower alternation rates) than control participants (n=20) on both binocular rivalry tasks; however, no difference was observed between the groups on the Necker cube task. Magnocellular and parvocellular visual pathways, thought to be abnormal in schizophrenia, were also investigated in binocular rivalry. The binocular rivalry stimuli used in this third study (Chapter 4) were altered to bias the task towards one of these two pathways. Participants with schizophrenia recorded slower binocular rivalry rates than controls in both binocular rivalry tasks. Using a within-subject design, binocular rivalry data were compared to data collected from a backward-masking task widely accepted to bias both these pathways. Based on these data, a model of binocular rivalry, based on the magnocellular and parvocellular pathways that contribute to the dorsal and ventral visual streams, was developed. Binocular rivalry rates were then compared with performance on the Benton's Judgment of Line Orientation task, in individuals with schizophrenia compared to healthy controls (Chapter 5). The Benton's Judgment of Line Orientation task is widely accepted to be processed within the right cerebral hemisphere, making it an appropriate task to investigate the role of the cerebral hemispheres in binocular rivalry, and to investigate the inter-hemispheric switching hypothesis of binocular rivalry proposed by Pettigrew and Miller (1998, 2003). The data were suggestive of intra-hemispheric rather than inter-hemispheric visual processing in binocular rivalry.
Neurotransmitter involvement in binocular rivalry, backward masking and Judgment of Line Orientation in schizophrenia was investigated using a genetic indicator of dopamine receptor distribution and functioning: the presence of the Taq1 allele of the dopamine D2 receptor (DRD2) gene. This final study (Chapter 6) explored whether the presence of the Taq1 allele of the DRD2 gene, and thus, by inference, the distribution of dopamine receptors and dopamine function, accounted for the large individual variation in binocular rivalry. The presence of the Taq1 allele was associated with the slower binocular rivalry rates, or poorer performance in the backward masking and Judgment of Line Orientation tasks, seen in the group with schizophrenia. This Thesis has contributed to what is known about binocular rivalry in schizophrenia. Consistently slower binocular rivalry rates were observed in participants with schizophrenia, indicating abnormally slow visual processing in this group. These data support previous studies reporting visual processing abnormalities in schizophrenia and suggest that a slow binocular rivalry rate is not a feature specific to bipolar disorder, but may be a feature of disorders with psychotic features generally. The contributions of the magnocellular or dorsal pathways and parvocellular or ventral pathways to binocular rivalry, and therefore to perceptual awareness, were investigated. The data presented supported the view that the magnocellular system initiates perceptual awareness of an image and the parvocellular system maintains the perception of the image, making it available to higher-level processing occurring within the cortical hemispheres. Abnormal magnocellular and parvocellular processing may both contribute to perceptual disturbances that ultimately contribute to the cognitive dysfunction associated with schizophrenia. An alternative model of binocular rivalry based on these observations was proposed.
Abstract:
We directly constructed reduced graphene oxide–titanium oxide nanotube (RGO–TNT) film using a single-step, combined electrophoretic deposition–anodization (CEPDA) method. This method, based on the simultaneous anodic growth of tubular TiO2 and the electrophoretic-driven motion of RGO, allowed the formation of an effective interface between the two components, thus improving the electron transfer kinetics. Composites of these graphitic carbons with different levels of oxygen-containing groups, electron conductivity and interface reaction time were investigated; a fine balance of these parameters was achieved.
Abstract:
In order to develop more inclusive products and services, designers need a means of assessing the inclusivity of existing products and new concepts. Following previous research on the development of scales for inclusive design at the University of Cambridge Engineering Design Centre (EDC) [1], this paper presents the latest version of the exclusion audit method. For a specific product interaction, the method estimates the proportion of the Great British population who would be excluded from using a product or service, due to the demands the product places on key user capabilities. A critical part of the method is rating the level of demand that a task places on a range of key user capabilities, so this assessment procedure was operationalised and its reliability tested with 31 participants. There was no evidence that participants rated the same demands consistently. The qualitative results from the experiment suggest that the consistency of participants' demand level ratings could be significantly improved if the audit materials and their instructions better guided the participant through the judgement process.
Abstract:
Recent empirical studies of gender discrimination point to the importance of accurately controlling for accumulated labour market experience. Unfortunately, in Australia most data sets do not include information on actual experience. The current paper, using data from the National Social Science Survey 1984, examines the efficacy of imputing female labour market experience via the Zabalza and Arrufat (1985) method. The results suggest that the method provides a more accurate measure of experience than the traditional Mincer proxy. However, the imputation method is sensitive to the choice of identification restrictions. We suggest a novel alternative to a choice between arbitrary restrictions.
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving ST-FBTE in three-dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider ST-FBTE on a finite domain where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for ST-FBTE are given to confirm our theoretical findings.
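For readers unfamiliar with discretising fractional derivatives, a minimal one-dimensional sketch follows. It uses the standard Grünwald–Letnikov finite difference rather than the paper's FADM scheme; the test function and step size are illustrative assumptions. For a function with f(0) = 0 this approximation agrees with the Caputo–Djrbashian derivative used in the ST-FBTE.

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Approximate the order-alpha fractional derivative of f at t
    with the Grunwald-Letnikov finite difference (first-order accurate).
    For f(0) = 0 this coincides with the Caputo-Djrbashian derivative."""
    n = round(t / h)
    w, acc = 1.0, f(t)                 # w_0 = 1
    for k in range(1, n + 1):
        w *= (k - 1 - alpha) / k       # recursion for (-1)^k * C(alpha, k)
        acc += w * f(t - k * h)
    return acc / h ** alpha

# D^0.5 of f(t) = t at t = 1 is 1/Gamma(1.5), approximately 1.1284
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
```

The three-dimensional FADM in the paper sweeps an analogous one-dimensional operator along each spatial direction in turn, which is what makes the scheme computationally tractable.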
Abstract:
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed in which the transitional density is a multivariate Gaussian density with first and second moments approximating the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density and for nonlinear drift and diffusion functions the approximation is extremely good and is as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
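In the affine case mentioned above, the Gaussian transitional density is exact, so the quasi-maximum likelihood estimator can be written in closed form. The sketch below does this for a scalar Ornstein–Uhlenbeck process; the model, parameter values and sample size are illustrative assumptions, not the paper's application.

```python
import math, random

def simulate_ou(kappa, theta, sigma, x0, dt, n, seed=0):
    """Simulate an Ornstein-Uhlenbeck process with its exact
    Gaussian transition density (affine drift/diffusion case)."""
    rng = random.Random(seed)
    b = math.exp(-kappa * dt)
    sd = sigma * math.sqrt((1 - b * b) / (2 * kappa))
    x = [x0]
    for _ in range(n):
        x.append(theta + (x[-1] - theta) * b + sd * rng.gauss(0, 1))
    return x

def qml_estimate_ou(x, dt):
    """Gaussian QML for the OU model: exact first and second
    conditional moments reduce the likelihood to the regression
    X_{t+dt} = a + b X_t + eps, solved here in closed form."""
    xs, ys = x[:-1], x[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) \
        / sum((u - mx) ** 2 for u in xs)
    a = my - b * mx
    resid_var = sum((v - a - b * u) ** 2 for u, v in zip(xs, ys)) / n
    kappa = -math.log(b) / dt
    theta = a / (1 - b)
    sigma = math.sqrt(2 * kappa * resid_var / (1 - b * b))
    return kappa, theta, sigma

path = simulate_ou(kappa=2.0, theta=1.0, sigma=0.5, x0=1.0,
                   dt=0.1, n=20000, seed=42)
k_hat, th_hat, s_hat = qml_estimate_ou(path, dt=0.1)
```

For nonlinear drift or diffusion the same machinery applies with the true moments replaced by the approximating Gaussian moments, which is the step the paper shows to be extremely accurate.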
Abstract:
Biological systems involving proliferation, migration and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behaviour. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pair-wise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification, in the form of a partial differential equation description for the evolution of pair-wise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behaviour in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, which has not been examined in detail before, and our method is successful at correcting the deviations observed in the mean-field model in these parameter regimes.
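The uncorrected mean-field (logistic) approximation can be written down in a few lines. The sketch below integrates it numerically; the rates and initial density are illustrative assumptions, and no pair-correlation correction is included.

```python
def mean_field_density(p, d, c0, t_end, dt=0.01):
    """Integrate the logistic mean-field ODE for agent density on a
    lattice, dC/dt = p*C*(1 - C) - d*C, with classic RK4.
    p: proliferation rate, d: death rate, C is occupancy in [0, 1]."""
    f = lambda c: p * c * (1 - c) - d * c
    c, t = c0, 0.0
    while t < t_end:
        k1 = f(c)
        k2 = f(c + 0.5 * dt * k1)
        k3 = f(c + 0.5 * dt * k2)
        k4 = f(c + dt * k3)
        c += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return c

# the steady state is C* = 1 - d/p; here 1 - 0.4 = 0.6
c_final = mean_field_density(p=1.0, d=0.4, c0=0.05, t_end=50.0)
```

This is the prediction that the corrected models improve upon: when spatial correlations develop (for example, when migration is slow relative to proliferation), the true lattice dynamics deviate from this ODE, and the pair-correlation corrections described above capture that deviation.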
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on the Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are recommended first to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions, based on full scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full scale fire tests and finite element analyses. This paper presents the details of this study, including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
Abstract:
Authenticated Encryption (AE) is the cryptographic process of providing simultaneous confidentiality and integrity protection to messages. This approach is more efficient than applying a two-step process of providing confidentiality for a message by encrypting the message, and in a separate pass providing integrity protection by generating a Message Authentication Code (MAC). AE using symmetric ciphers can be provided by either stream ciphers with built-in authentication mechanisms or block ciphers using appropriate modes of operation. However, stream ciphers have the potential for higher performance and smaller footprint in hardware and/or software than block ciphers. This property makes stream ciphers suitable for resource-constrained environments, where storage and computational power are limited. There have been several recent stream cipher proposals that claim to provide AE. These ciphers can be analysed using existing techniques that consider confidentiality or integrity separately; however, there is currently no framework for the analysis of AE stream ciphers that analyses these two properties simultaneously. This thesis introduces a novel framework for the analysis of AE using stream cipher algorithms. This thesis analyses the mechanisms for providing confidentiality and for providing integrity in AE algorithms using stream ciphers. There is a greater emphasis on the analysis of the integrity mechanisms, as there is little in the public literature on this in the context of authenticated encryption. The thesis has four main contributions, as follows. The first contribution is the design of a framework that can be used to classify AE stream ciphers based on three characteristics. The first classification applies Bellare and Namprempre's work on the order in which the encryption and authentication processes take place.
The second classification is based on the method used for accumulating the input message (either directly or indirectly) into the internal state of the cipher to generate a MAC. The third classification is based on whether the sequence that is used to provide encryption and authentication is generated using a single key and initial vector, or two keys and two initial vectors. The second contribution is the application of an existing algebraic method to analyse the confidentiality algorithms of two AE stream ciphers, namely SSS and ZUC. The algebraic method is based on considering the nonlinear filter (NLF) of these ciphers as a combiner with memory. This method enables us to construct equations for the NLF that relate the inputs, outputs and memory of the combiner to the output keystream. We show that both of these ciphers are secure from this type of algebraic attack. We conclude that using a key-dependent S-box in the NLF twice, and using two different S-boxes in the NLF of ZUC, prevents this type of algebraic attack. The third contribution is a new general matrix-based model for MAC generation where the input message is injected directly into the internal state. This model describes the accumulation process when the input message is injected directly into the internal state of a nonlinear filter generator. We show that three recently proposed AE stream ciphers, namely SSS, NLSv2 and SOBER-128, can be considered as instances of this model. Our model is more general than previous investigations into direct injection. Possible forgery attacks against this model are investigated. It is shown that using a nonlinear filter in the accumulation process of the input message, when either the input message or the initial state of the register is unknown, prevents forgery attacks based on collisions. The last contribution is a new general matrix-based model for MAC generation where the input message is injected indirectly into the internal state.
This model uses the input message as a controller to accumulate a keystream sequence into an accumulation register. We show that three current AE stream ciphers can be considered as instances of this model; namely ZUC, Grain-128a and Sfinks. We establish the conditions under which the model is susceptible to forgery and side-channel attacks.
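The two accumulation styles (direct versus indirect injection) can be illustrated with deliberately toy models. These are not SSS, NLSv2, SOBER-128, ZUC, Grain-128a or Sfinks: the register sizes, taps and keystream below are illustrative assumptions, and these toy "MACs" are linear, so they would not resist the collision-based forgery attacks the thesis analyses.

```python
def toy_direct_mac(message_bits, state, taps=(0, 2, 3), n_extra=16):
    """Toy illustration of *direct* injection: each message bit is
    XORed straight into the internal state of a (linear) shift
    register, which is then clocked. Purely illustrative."""
    state = list(state)
    for m in message_bits + [0] * n_extra:   # extra clocks to diffuse
        state[0] ^= m                        # inject message bit
        fb = 0
        for t in taps:
            fb ^= state[t]                   # linear feedback
        state = state[1:] + [fb]
    return state

def toy_indirect_mac(message_bits, keystream, acc_len=8):
    """Toy illustration of *indirect* injection: message bits act as
    a controller deciding whether the current keystream bit is
    accumulated into a separate register (cf. the thesis model)."""
    acc = [0] * acc_len
    for i, m in enumerate(message_bits):
        if m:                                # accumulate only when m == 1
            acc[i % acc_len] ^= keystream[i]
    return acc
```

In the direct model the message alters the state that generates the tag; in the indirect model the message only steers how an independently generated keystream is folded into the accumulation register, which is what distinguishes the two classifications above.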
Abstract:
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the ‘zero-scan’ method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner’s x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
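The per-pixel zero-scan extrapolation amounts to fitting a least-squares line through each pixel's Hounsfield values across the repeated scans and keeping the intercept. A minimal sketch (indexing the first scan as 0 is an assumption about the fitting convention, not taken from the paper):

```python
def zero_scan_pixel(values):
    """Least-squares line through (scan index, Hounsfield value)
    pairs for one pixel; returns the intercept, i.e. the value
    extrapolated back to 'zero scans', before any imaging dose
    has accumulated in the gel."""
    n = len(values)
    xs = range(n)
    mx = (n - 1) / 2.0                 # mean of 0..n-1
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, values))
    slope = sxy / sxx
    return my - slope * mx             # intercept at scan index 0

# a pixel drifting +0.5 HU per scan extrapolates back to 10.0
hu0 = zero_scan_pixel([10.0, 10.5, 11.0, 11.5])
```

Applying this fit independently to every pixel of the repeatedly scanned volume yields the noise-reduced zero-scan image, since the fit averages out scan-to-scan noise while removing the dose-induced drift.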
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing: current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, which is detrimental to the security aspects required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
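The pipeline described above (feature extraction, linear key-dependent randomization, quantizer training, binarization) can be sketched in a few functions. This is a toy illustration, not any of the baseline algorithms studied: the random projection, the median-threshold training rule and all sizes are illustrative assumptions.

```python
import random

def random_projection(features, n_bits, seed=7):
    """Key-dependent linear randomization stage: project the feature
    vector onto seeded random directions (a toy version of the linear
    randomization most schemes use; the seed plays the role of a key)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_bits):
        w = [rng.gauss(0, 1) for _ in features]
        out.append(sum(a * b for a, b in zip(w, features)))
    return out

def train_thresholds(projected_set):
    """Quantizer training: learn one threshold per bit as the median
    of that projection over a training set. This is the stage whose
    training the dissertation argues affects accuracy and security."""
    thresholds = []
    for i in range(len(projected_set[0])):
        col = sorted(p[i] for p in projected_set)
        thresholds.append(col[len(col) // 2])
    return thresholds

def binarize(projected, thresholds):
    """Quantization and binary encoding to a binary hash output."""
    return [1 if v > t else 0 for v, t in zip(projected, thresholds)]

def hamming(h1, h2):
    """Hashes of perceptually similar inputs should be close here."""
    return sum(a != b for a, b in zip(h1, h2))

rng = random.Random(0)
train = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(20)]
projected = [random_projection(f, 16) for f in train]
th = train_thresholds(projected)
hash0 = binarize(projected[0], th)
```

The learned thresholds `th` are exactly the data-dependent quantities the dissertation identifies as a leakage source: anyone holding them gains information about the training distribution of the projected features.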
Abstract:
Piezoelectric transducers are finding rapidly expanding use for energy conversion in several applications. Some of the industrial applications for which a high power ultrasound transducer can be used are surface cleaning, water treatment, plastic welding and food sterilization. A high power ultrasound transducer also plays an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are applied as a transmitter, as a receiver, or as both. As a transmitter, the transducer converts electrical energy to mechanical energy, while a receiver converts mechanical energy to electrical energy, acting as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. A portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted to electrical energy. To drive an ultrasound transducer, the excitation signal should be properly designed; otherwise, an undesired (low-quality) signal can degrade the performance of the transducer (its energy conversion) and increase the power consumption of the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical ones. To achieve better transducer performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulation and experimental tests are carried out in this research to model high power ultrasound transducers and systems.
During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. Also, to analyse the behaviour of the ultrasound system, the voltage ratio of the system is measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage measured across the receiver. The results for the transducer characteristics and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis. Each piezoelectric transducer has several resonance frequencies at which its impedance magnitude is lower than at non-resonance frequencies. Among these resonance frequencies, the impedance magnitude is minimum at just one; this is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency. It is therefore important to identify this frequency and the other resonance frequencies, and a frequency detection method is proposed in this research, discussed in chapter 2. An extended electrical model of the ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor, with each RLC leg representing one of the resonance frequencies of the transducer. At a resonance frequency the inductive and capacitive reactances cancel each other, and the resistor of that leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4. To excite a high power ultrasound transducer, a high power signal is required.
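The extended electrical model lends itself to a short numerical sketch: compute the impedance of a parallel capacitor plus several series-RLC legs, and scan for the global minimum of |Z|, i.e. the main resonance. The component values below are illustrative assumptions, not measured transducer parameters; the first leg resonates near 40 kHz.

```python
import math

def transducer_impedance(f, c0, legs):
    """Impedance of the extended electrical model: a parallel
    capacitance c0 alongside several series-RLC legs, one leg per
    resonance of the transducer. legs = [(R, L, C), ...]."""
    w = 2 * math.pi * f
    y = 1j * w * c0                       # admittance of the capacitor
    for r, l, c in legs:
        y += 1 / (r + 1j * w * l + 1 / (1j * w * c))
    return 1 / y

def main_resonance(c0, legs, f_lo, f_hi, n=20000):
    """Scan |Z(f)| and return the frequency of its global minimum,
    i.e. the main resonance used to drive the transducer."""
    best_f, best_z = f_lo, float("inf")
    for i in range(n + 1):
        f = f_lo + (f_hi - f_lo) * i / n
        z = abs(transducer_impedance(f, c0, legs))
        if z < best_z:
            best_f, best_z = f, z
    return best_f

# two illustrative legs, series-resonant near 40 kHz and 100 kHz
legs = [(50.0, 80e-3, 0.2e-9), (200.0, 20e-3, 0.125e-9)]
f_main = main_resonance(c0=2e-9, legs=legs, f_lo=10e3, f_hi=150e3)
```

At each leg's series resonance the inductive and capacitive reactances cancel and the leg's impedance collapses to its resistance, so the leg with the smallest R sets the global minimum of |Z|, the main resonance.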
Multilevel converters are usually applied to generate such a high power signal, but its quality is low in comparison with a sinusoidal signal. In some applications, such as ultrasound, it is especially important to generate a high quality signal. Several control and modulation techniques have been introduced in the literature to control the output voltage of multilevel converters; one of them is the harmonic elimination technique, in which the switching angles are chosen so as to reduce the harmonic content of the output. Increasing the number of switching angles results in more harmonic reduction, but more switching angles require more output voltage levels, which increases the number of components and the cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research. In this new technique, more variables (DC voltage levels as well as switching angles) are chosen to eliminate more low order harmonics than conventional harmonic elimination techniques allow. In the conventional harmonic elimination method, the DC voltage levels are equal and only the switching angles are calculated to eliminate harmonics, so the number of eliminated harmonics is limited by the number of switching angles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics. The DC voltage levels are therefore not equal and must be regulated; to achieve this, a DC/DC converter is applied to adjust the DC link voltages with several capacitors. The effect of the new harmonic elimination technique on the output quality of several single phase multilevel converters is explained in chapters 3 and 6 of this thesis.
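For a quarter-wave symmetric staircase output, the harmonic amplitudes have a standard closed form, which makes the extra degrees of freedom easy to see: the conventional method varies only the angles a_k, while the proposed method also varies the levels V_k. A minimal sketch (the example angle and single level are illustrative, not the thesis's computed solutions):

```python
import math

def harmonic_amp(n, angles, volts):
    """Peak amplitude of the n-th (odd) harmonic of a quarter-wave
    symmetric multilevel staircase:
        b_n = (4 / (n * pi)) * sum_k V_k * cos(n * a_k).
    'angles' are switching angles in radians, 'volts' the DC level
    steps. Letting both vary is the extra freedom exploited above."""
    return (4 / (n * math.pi)) * sum(
        v * math.cos(n * a) for a, v in zip(angles, volts))

# a single switching angle at 30 degrees kills the 3rd harmonic,
# since cos(3 * pi/6) = cos(pi/2) = 0, while the fundamental survives
b3 = harmonic_amp(3, [math.pi / 6], [1.0])
b1 = harmonic_amp(1, [math.pi / 6], [1.0])
```

With K angles and K levels free, the off-line optimisation can impose roughly twice as many constraints of the form b_n = 0 as the equal-level case, which is the essence of the proposed technique.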
According to the electrical model of the high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create a significant leakage current through the transducer, owing to the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications. However, for applications such as ultrasound, an LC filter can degrade the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such cases a current source converter is a suitable choice to overcome this problem. In this regard, a current source converter is implemented and applied to excite the high power ultrasound transducer. To control the output current and voltage, hysteresis control and unipolar modulation are used, respectively. The results of this test are explained in chapter 7.
Abstract:
Purpose: Electronic Portal Imaging Devices (EPIDs) are available with most linear accelerators (Amonuk, 2002), the current technology being amorphous silicon flat panel imagers. EPIDs are currently used routinely in patient positioning before radiotherapy treatments. There has been increasing interest in using EPID technology for dosimetric verification of radiotherapy treatments (van Elmpt, 2008). A straightforward technique involves using the EPID panel to measure the fluence exiting the patient during a treatment, which is then compared to a prediction of the fluence based on the treatment plan. However, a number of significant limitations exist in this method, resulting in a limited proliferation of the technique in a clinical environment. In this paper, we aim to present a technique for simulating IMRT fields using Monte Carlo methods to predict the dose in an EPID, which can then be compared to the measured dose in the EPID. Materials: Measurements were made using an iView GT flat panel a-Si EPID mounted on an Elekta Synergy linear accelerator. The images from the EPID were acquired using the XIS software (Heimann Imaging Systems). Monte Carlo simulations were performed using the BEAMnrc and DOSXYZnrc user codes. The IMRT fields to be delivered were taken from the treatment planning system in DICOM-RT format and converted into BEAMnrc and DOSXYZnrc input files using an in-house application (Crowe, 2009). Additionally, all image processing and analysis was performed using another in-house application written in the Interactive Data Language (IDL) (ITT Visual Information Systems). Comparison between the measured and Monte Carlo EPID images was performed using a gamma analysis (Low, 1998) incorporating dose and distance-to-agreement criteria. Results: The fluence maps recorded by the EPID were found to provide good agreement between measured and simulated data.
Figure 1 shows an example of measured and simulated IMRT dose images and profiles in the x and y directions. References: D. A. Low et al., "A technique for the quantitative evaluation of dose distributions", Med Phys, 25(5), May 1998. S. Crowe, T. Kairn, A. Fielding, "The development of a Monte Carlo system to verify radiotherapy treatment dose calculations", Radiotherapy & Oncology, Volume 92, Supplement 1, August 2009, Pages S71-S71.
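The gamma analysis used for the comparison combines a dose-difference and a distance-to-agreement criterion. A minimal one-dimensional sketch in the spirit of Low et al. (1998) follows; the 3 mm / 3% criteria and the flat test profiles are illustrative assumptions, and a clinical implementation would operate on 2-D images with interpolation between sample points.

```python
import math

def gamma_1d(positions, dose_ref, dose_eval, dta=3.0, dd=0.03):
    """1-D gamma analysis: for each reference point, take the minimum
    combined dose-difference / distance-to-agreement metric over all
    evaluated points; a point passes when gamma <= 1.
    dta is in the units of 'positions', dd is a fraction of max dose."""
    d_crit = dd * max(dose_ref)
    gammas = []
    for xr, dr in zip(positions, dose_ref):
        g = min(math.sqrt(((xe - xr) / dta) ** 2 +
                          ((de - dr) / d_crit) ** 2)
                for xe, de in zip(positions, dose_eval))
        gammas.append(g)
    return gammas

pos = [i * 1.0 for i in range(11)]       # positions in mm
ref = [100.0] * 11                       # flat reference profile
g_same = gamma_1d(pos, ref, [100.0] * 11)   # identical: all gamma = 0
g_off = gamma_1d(pos, ref, [90.0] * 11)     # 10% low everywhere: fails
```

For the identical profiles every gamma value is zero, while the uniformly 10%-low profile yields gamma well above 1 at every point, illustrating the pass/fail behaviour used to compare the measured and Monte Carlo EPID images.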
Abstract:
The invention relates to a method for monitoring user activity on a mobile device comprising an input and an output unit, comprising the following steps, preferably in the following order: detecting and/or logging user activity on said input unit, identifying a foreground running application, hashing a user-interface-element management list of the foreground running application, and creating a screenshot comprising items displayed on said input unit. The invention also relates to a method for analyzing user activity at a server, comprising the following step: obtaining, from a mobile device, at least one of information about detected and/or logged user activity, information about a foreground running application, a hashed user-interface-element management list, and a screenshot. Further, a computer program product is provided, comprising one or more computer readable media having computer executable instructions for performing the steps of at least one of the aforementioned methods.
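As one concrete (and purely illustrative) reading of the hashing step, the user-interface-element management list can be serialised canonically and digested, so the server can detect UI changes without receiving the raw list. The element structure and the use of SHA-256 below are assumptions; the patent does not prescribe them.

```python
import hashlib
import json

def hash_ui_element_list(elements):
    """Hash a (serialised) user-interface-element management list.
    Canonical JSON (sorted keys, fixed separators) makes the digest
    deterministic, so equal lists always produce equal hashes and the
    server can compare digests instead of raw UI data."""
    canonical = json.dumps(elements, sort_keys=True,
                           separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# hypothetical element structure for illustration only
ui_list = [{"id": "btn_send", "type": "Button", "visible": True},
           {"id": "txt_msg", "type": "EditText", "visible": True}]
digest = hash_ui_element_list(ui_list)
```

Transmitting only the digest alongside the logged activity and the screenshot also reduces the payload and avoids exposing the full UI structure to the server.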