955 results for DETECTION CELL
Abstract:
One of the fundamental motivations underlying computational cell biology is to gain insight into the complicated dynamical processes taking place, for example, on the plasma membrane or in the cytosol of a cell. These processes are often so complicated that purely temporal mathematical models cannot adequately capture the complex chemical kinetics and transport processes of, for example, proteins or vesicles. On the other hand, spatial models such as Monte Carlo approaches can have very large computational overheads. This chapter gives an overview of the state of the art in the development of stochastic simulation techniques for the spatial modelling of dynamic processes in a living cell.
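As a concrete illustration of the kind of spatial stochastic (Monte Carlo) technique the chapter surveys, the following is a minimal compartment-based simulation sketch in the spirit of the reaction-diffusion master equation, with molecules hopping between voxels and degrading, advanced by the Gillespie algorithm. All species, rates and geometry are invented for illustration and are not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal compartment-based stochastic reaction-diffusion sketch
# (Gillespie SSA on a 1-D chain of voxels). All parameters are
# illustrative placeholders, not values from the chapter.
n_vox = 20          # number of voxels along the membrane/cytosol axis
d_rate = 1.0        # diffusion (hop) rate per molecule per direction
k_deg = 0.05        # first-order degradation rate per molecule
state = np.zeros(n_vox, dtype=int)
state[0] = 200      # molecules injected at one end (e.g. at the membrane)

t, t_end = 0.0, 50.0
while t < t_end:
    # Propensities: hop right, hop left, degrade, per voxel.
    hop_right = d_rate * state[:-1]
    hop_left = d_rate * state[1:]
    degrade = k_deg * state
    a = np.concatenate([hop_right, hop_left, degrade])
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)            # time to the next event
    r = rng.choice(a.size, p=a / a0)          # which event fires
    if r < n_vox - 1:                         # hop right out of voxel r
        state[r] -= 1; state[r + 1] += 1
    elif r < 2 * (n_vox - 1):                 # hop left into voxel i
        i = r - (n_vox - 1)
        state[i + 1] -= 1; state[i] += 1
    else:                                     # degradation in one voxel
        state[r - 2 * (n_vox - 1)] -= 1

print("final voxel occupancies:", state)
```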
Abstract:
Endocytosis is the process by which cells internalise molecules, including nutrient proteins, from the extracellular medium. In one form, macropinocytosis, the membrane at the cell surface ruffles and folds over to give rise to an internalised vesicle. Negatively charged phospholipids within the membrane, called phosphoinositides, then undergo a series of transformations that are critical for the correct trafficking of the vesicle within the cell and that are often pirated by pathogens such as Salmonella. Advanced fluorescent video microscopy imaging now allows the detailed observation and quantification of these events in live cells over time. Here we use these observations as a basis for building differential equation models of the transformations. In an initial investigation, the interactions were modelled with reaction rates proportional to the sum of the concentrations of the individual constituents; this yields a first-order linear system for the concentrations. The structure of the system enables analytical expressions to be obtained, and the problem becomes one of determining the reaction rates that generate the observed data plots. We present results with reaction rates that capture the general behaviour of the reactions, so that we now have a complete mathematical model of phosphoinositide transformations that fits the experimental observations. Some excellent fits are obtained with modulated exponential functions; however, these are not solutions of the linear system. The question arises as to how the model may be modified to obtain a system whose solution provides a more accurate fit.
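The first-order linear system mentioned above has the analytical solution c(t) = exp(At) c(0), which is what makes fitting the observed time courses a matter of choosing the rate constants. The sketch below illustrates this for a hypothetical three-species transformation chain; the species ordering and rate values are invented and are not the fitted values from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the kind of first-order linear system the abstract describes:
# a chain of phosphoinositide transformations x1 -> x2 -> x3 with rate
# constants k1, k2. Species labels and rate values are illustrative only.
k1, k2 = 0.8, 0.3                       # hypothetical rate constants (1/min)
A = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [ 0.0,  k2, 0.0]])
c0 = np.array([1.0, 0.0, 0.0])          # normalised initial concentrations

# Because dc/dt = A c is linear, the solution is c(t) = expm(A t) c0,
# which is the analytical form one fits to the fluorescence time courses.
for t in (0.0, 1.0, 5.0, 10.0):
    c = expm(A * t) @ c0
    print(f"t = {t:5.1f} min  concentrations = {np.round(c, 3)}")
```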
Abstract:
The construction of timelines of computer activity is a part of many digital investigations. These timelines of events are composed of traces of historical activity drawn from system logs and potentially from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory, thus compromising its value. This work introduces a software tool (CAT Detect) for the detection of inconsistency within timelines of computer activity. We examine the impact of deliberate tampering through experiments conducted with our prototype software tool. Based on the results of these experiments, we discuss techniques which can be employed to deal with such temporal inconsistencies.
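For readers unfamiliar with timeline inconsistency checking, the toy sketch below flags one simple class of contradiction (a file apparently modified before it was created). It is only an illustration of the kind of rule such a tool might apply, not the CAT Detect algorithm, and the event records are invented.

```python
from datetime import datetime

# Toy illustration of a temporal contradiction in a computer-activity
# timeline; not the CAT Detect algorithm, and the events are invented.
events = [
    {"file": "report.doc", "action": "created",  "time": datetime(2011, 3, 2, 10, 15)},
    {"file": "report.doc", "action": "modified", "time": datetime(2011, 3, 2, 9, 40)},
    {"file": "notes.txt",  "action": "created",  "time": datetime(2011, 3, 1, 8, 0)},
    {"file": "notes.txt",  "action": "modified", "time": datetime(2011, 3, 1, 9, 30)},
]

def find_inconsistencies(events):
    """Flag files whose 'modified' timestamp precedes their 'created' timestamp."""
    created = {e["file"]: e["time"] for e in events if e["action"] == "created"}
    issues = []
    for e in events:
        if e["action"] == "modified" and e["file"] in created:
            if e["time"] < created[e["file"]]:
                issues.append((e["file"], created[e["file"]], e["time"]))
    return issues

for f, c, m in find_inconsistencies(events):
    print(f"INCONSISTENT: {f} modified at {m} but created at {c}")
```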
Abstract:
Unusual event detection in crowded scenes remains challenging because of the diversity of events and noise. In this paper, we present a novel approach for unusual event detection via sparse reconstruction of dynamic textures over an overcomplete basis set, with the dynamic texture described by local binary patterns from three orthogonal planes (LBP-TOP). The overcomplete basis set is learnt from training data in which only normal items are observed. In the detection process, given a new observation, we compute the sparse coefficients using the Dantzig Selector algorithm, which was proposed in the compressed sensing literature. The reconstruction errors are then computed, and on this basis we detect the abnormal items. Our approach can be used to detect both local and global abnormal events. We evaluate our algorithm on the UCSD Abnormality Datasets for local anomaly detection, where it is shown to outperform current state-of-the-art approaches, and we also obtain promising results for rapid escape detection using the PETS2009 dataset.
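A minimal sketch of the reconstruction-error idea follows. It learns an overcomplete dictionary from "normal" feature vectors and flags test vectors whose sparse reconstruction error is large. Random vectors stand in for LBP-TOP descriptors, and scikit-learn's OMP-based sparse coding is used as a simple stand-in for the Dantzig Selector used in the paper; sizes and the thresholding rule are illustrative.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Reconstruction-error-based anomaly detection over an overcomplete basis.
# Random features stand in for LBP-TOP descriptors; OMP sparse coding
# stands in for the Dantzig Selector; all sizes are illustrative.
latent = rng.normal(0, 1, (10, 24))
normal_train = rng.normal(0, 1, (400, 10)) @ latent        # "normal" training features
dico = DictionaryLearning(n_components=48, transform_algorithm="omp",
                          transform_n_nonzero_coefs=4, max_iter=100,
                          random_state=0)
dico.fit(normal_train)

def reconstruction_error(x):
    """Sparse-code x over the learnt basis and return the squared error."""
    code = dico.transform(x.reshape(1, -1))
    return float(np.sum((x - code @ dico.components_) ** 2))

# Threshold taken from the distribution of errors on normal training data.
train_errors = [reconstruction_error(x) for x in normal_train[:50]]
threshold = np.percentile(train_errors, 95)

normal_sample = rng.normal(0, 1, 10) @ latent               # fits the basis
unusual_sample = rng.normal(0, 3, 24)                       # does not fit the basis
for name, x in [("normal", normal_sample), ("unusual", unusual_sample)]:
    err = reconstruction_error(x)
    print(f"{name}: error = {err:.2f} -> {'ABNORMAL' if err > threshold else 'ok'}")
```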
Abstract:
Experimental action potential (AP) recordings in isolated ventricular myocytes display significant temporal beat-to-beat variability in morphology and duration. Furthermore, significant cell-to-cell differences in AP also exist, even for isolated cells originating from the same region of the same heart. However, current mathematical models of the ventricular AP fail to replicate the temporal and cell-to-cell variability in AP observed experimentally. In this study, we propose a novel mathematical framework for the development of phenomenological AP models capable of capturing cell-to-cell and temporal variability in cardiac APs. A novel stochastic phenomenological model of the AP is developed, based on the deterministic Bueno-Orovio/Fenton model. Experimental recordings of AP are fit to the model to produce AP models of individual cells from the apex and the base of the guinea-pig ventricles. Our results show that the phenomenological model is able to capture the considerable differences in AP recorded from isolated cells originating from the two locations. We demonstrate the closeness of fit to the available experimental data that may be achieved using a phenomenological model, and also demonstrate the ability of the stochastic form of the model to capture the observed beat-to-beat variability in action potential duration.
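To illustrate how a stochastic term can generate beat-to-beat variability in a phenomenological model, the sketch below integrates a two-variable FitzHugh-Nagumo-style model with an Euler-Maruyama step. This is a stand-in for, not an implementation of, the four-variable Bueno-Orovio/Fenton formulation, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Beat-to-beat variability via a stochastic term in a simple excitable
# model (FitzHugh-Nagumo style), integrated with Euler-Maruyama.
# Parameters and units are illustrative, not fitted values.
eps, a, b, sigma = 0.08, 0.7, 0.8, 0.05
dt, n_steps = 0.1, 4000
v, w = -1.2, -0.6

apd_threshold = 0.0
above = []
for step in range(n_steps):
    stim = 1.0 if (step % 1000) < 20 else 0.0           # periodic stimulus
    dv = v - v**3 / 3 - w + stim
    dw = eps * (v + a - b * w)
    v += dv * dt + sigma * np.sqrt(dt) * rng.normal()    # Euler-Maruyama step
    w += dw * dt
    above.append(v > apd_threshold)

# Crude AP duration per beat: supra-threshold time within each cycle.
per_beat = np.array(above).reshape(4, 1000).sum(axis=1) * dt
print("AP durations per beat (arbitrary time units):", per_beat)
```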
Abstract:
Modelling events in densely crowded environments remains challenging, due to the diversity of events and the noise in the scene. We propose a novel approach for anomalous event detection in crowded scenes using dynamic textures described by the Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) descriptor. The scene is divided into spatio-temporal patches from which LBP-TOP based dynamic textures are extracted. We apply hierarchical Bayesian models to detect the patches containing unusual events. Our method is an unsupervised approach, and it does not rely on object tracking or background subtraction. We show that our approach outperforms existing state-of-the-art algorithms for anomalous event detection on the UCSD dataset.
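The sketch below shows a simplified LBP-TOP-style descriptor for one spatio-temporal patch: a basic 8-neighbour LBP is computed on the central XY, XT and YT planes and the three histograms are concatenated. It is a reduced illustration of the descriptor, not the exact variant used in the paper (which typically aggregates over all plane positions rather than only the central planes).

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=int)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(int) << bit
    return codes

def lbp_top_histogram(patch):
    """Concatenate LBP histograms from the XY, XT and YT central planes
    of a (T, H, W) spatio-temporal patch (a simplified LBP-TOP variant)."""
    t, h, w = patch.shape
    planes = [patch[t // 2, :, :],        # XY plane
              patch[:, h // 2, :],        # XT plane
              patch[:, :, w // 2]]        # YT plane
    hists = [np.bincount(lbp_8(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists).astype(float)

# Example: describe one spatio-temporal patch of a (frames, rows, cols) volume.
rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(20, 120, 160))      # stand-in video volume
patch = volume[0:10, 0:24, 0:24]
descriptor = lbp_top_histogram(patch)
print(descriptor.shape)    # (768,) = 3 planes x 256-bin histograms
```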
Abstract:
Solar ultraviolet (UV) radiation causes a range of skin disorders as well as affecting vision and the immune system. It also inhibits the development of plants and animals. UV radiation monitoring is used routinely in some locations in order to alert the population to harmful solar radiation levels. There is ongoing research to develop UV-selective sensors [1–3]. A personal, inexpensive and simple UV-selective sensor would be desirable to measure UV intensity exposure. A prototype of such a detector has been developed and evaluated in our laboratory. It comprises a sealed two-electrode photoelectrochemical cell (PEC) based on nanocrystalline TiO2. This abundant semiconducting oxide, which is innocuous and very stable, is the subject of intense study at present due to its application in dye-sensitized solar cells (DSSC) [4]. Since TiO2 has a wide band gap (EG = 3.0 eV for rutile and EG = 3.2 eV for anatase), it is inherently UV-selective, so that UV filters are not required. This further reduces the cost of the proposed photodetector in comparison with conventional silicon detectors. The PEC is a semiconductor–electrolyte device that generates a photovoltage when it is illuminated and a corresponding photocurrent if the external circuit is closed. The device does not require external bias, and the short-circuit current is generally a linear function of illumination intensity. This greatly simplifies the electrical circuit needed when using the PEC as a photodetector. DSSC technology, which is based on a PEC containing nanocrystalline TiO2 sensitized with a ruthenium dye, holds out the promise of solar cells that are significantly cheaper than traditional silicon solar cells. The UV sensor proposed in this paper relies on the creation of electron–hole pairs in the TiO2 by UV radiation, so it would be even cheaper than a DSSC since no sensitizer dye is needed. Although TiO2 has been reported as a suitable material for UV sensing [3], to the best of our knowledge, the PEC configuration described in the present paper is a new approach. In the present study, a novel double-layer TiO2 structure has been investigated. Fabrication is based on a simple and inexpensive technique for nanostructured TiO2 deposition using microwave-activated chemical bath deposition (MW-CBD) that has been reported recently [5]. The highly transparent TiO2 (anatase) films obtained are densely packed, and they adhere very well to the transparent conducting oxide (TCO) substrate [6]. These compact layers have been studied as contacting layers in double-layer TiO2 structures for DSSC, since improvement of electron extraction at the TiO2–TCO interface is expected [7]. Here we compare devices incorporating a single mesoporous nanocrystalline TiO2 structure with devices based on a double structure in which a MW-CBD film is situated between the TCO and the mesoporous nanocrystalline TiO2 layer. Besides improving electron extraction, this film could also help to block recombination of electrons transferred to the TCO with oxidized species in the electrolyte, as has been reported in the case of DSSC for compact TiO2 films obtained by other deposition techniques [8,9]. The two types of UV-selective sensors were characterized in detail. The current–voltage characteristics, spectral response, intensity dependence of the short-circuit current and response times were measured and analyzed in order to evaluate the potential of sealed mesoporous TiO2-based photoelectrochemical cells (PEC) as low-cost personal UV photodetectors.
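For orientation, the inherent UV selectivity follows directly from the quoted band gaps through the standard photon-energy relation; the back-of-the-envelope conversion below is ours, not from the paper. Only photons with wavelengths below the cut-off, which lies at the UV/violet boundary, have enough energy to create electron–hole pairs in TiO2.

```latex
\[
  \lambda_{c} = \frac{hc}{E_G} \approx \frac{1240\ \text{eV nm}}{E_G}
  \;\;\Rightarrow\;\;
  \lambda_{c}^{\text{rutile}} \approx \frac{1240}{3.0} \approx 413\ \text{nm},
  \qquad
  \lambda_{c}^{\text{anatase}} \approx \frac{1240}{3.2} \approx 388\ \text{nm}.
\]
```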
Abstract:
Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage using frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all of the available FRF data as input to artificial neural networks makes training and convergence impossible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices, corresponding to different damage locations and severities, are introduced as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using the finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
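A compact sketch of the data-reduction-plus-ANN idea is given below: synthetic FRF vectors are compressed with PCA and a small neural network maps the reduced features to a damage severity. The FRF data, network size and severity parameterisation are invented for illustration and do not reproduce the beam model or damage index from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Illustrative PCA + ANN pipeline: high-dimensional FRF vectors are
# compressed with PCA, and a small network maps the reduced features
# to a damage severity. The FRF data below are synthetic.
n_scenarios, n_freq_points = 200, 1024
severity = rng.uniform(0.0, 0.3, n_scenarios)              # e.g. stiffness reduction
base_frf = np.sin(np.linspace(0, 40, n_freq_points))
frfs = (base_frf
        + severity[:, None] * np.cos(np.linspace(0, 40, n_freq_points))
        + rng.normal(0, 0.01, (n_scenarios, n_freq_points)))  # damage-dependent FRFs

pca = PCA(n_components=10)
features = pca.fit_transform(frfs)                          # data reduction step

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann.fit(features[:150], severity[:150])                     # train on 150 scenarios
pred = ann.predict(features[150:])
print("mean absolute error on held-out scenarios:",
      float(np.mean(np.abs(pred - severity[150:]))))
```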
Abstract:
We develop a new analytical solution for a reactive transport model that describes the steady-state distribution of oxygen subject to diffusive transport and nonlinear uptake in a sphere. This model was originally reported by Lin (Journal of Theoretical Biology, 1976 v60, pp449–457) to represent the distribution of oxygen inside a cell and has since been studied extensively by both the numerical analysis and formal analysis communities. Here we extend these previous studies by deriving an analytical solution to a generalized reaction-diffusion equation that encompasses Lin’s model as a particular case. We evaluate the solution for the parameter combinations presented by Lin and show that the new solutions are identical to a grid-independent numerical approximation.
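For reference, this class of model is conventionally written as steady-state diffusion in a sphere with saturable (Michaelis–Menten-type) uptake; a standard statement is given below. The exact non-dimensionalisation, and the generalised kinetics treated in the paper, may differ from this form.

```latex
\[
  \frac{D}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r^{2}\frac{\mathrm{d}c}{\mathrm{d}r}\right)
  = \frac{V_{\max}\, c}{K_m + c}, \qquad 0 < r < R,
  \qquad
  \left.\frac{\mathrm{d}c}{\mathrm{d}r}\right|_{r=0} = 0, \qquad c(R) = c_{0}.
\]
```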
Abstract:
Visual detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted in visual voice activity detection (VVAD) has neglected addressing variabilities in the visual domain such as viewpoint variation. In this paper we investigate the effectiveness of the visual information from the speaker's frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker's face may not always be frontal.
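A minimal sketch of a GMM-based VVAD decision rule follows: one mixture is fit to visual features from speaking frames, another to silent frames, and a new frame is labelled by comparing log-likelihoods. The 20-dimensional features are random stand-ins for real mouth-region descriptors, and the model sizes are illustrative rather than those used with the CUAVE data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# GMM-based visual voice activity detection sketch: one GMM per class,
# decision by log-likelihood comparison. Features are synthetic stand-ins
# for real mouth-region descriptors (e.g. DCT or appearance features).
speaking_feats = rng.normal(1.0, 1.0, (500, 20))
silent_feats = rng.normal(-1.0, 1.0, (500, 20))

gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speaking_feats)
gmm_silence = GaussianMixture(n_components=4, random_state=0).fit(silent_feats)

def is_speaking(frame_feat):
    """Return True if the speech GMM explains the frame better."""
    x = frame_feat.reshape(1, -1)
    return gmm_speech.score(x) > gmm_silence.score(x)

test_frame = rng.normal(1.0, 1.0, 20)        # a frame drawn from the "speaking" model
print("speaking" if is_speaking(test_frame) else "silent")
```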
Abstract:
This paper presents a preliminary flight-test-based characterisation of detection range versus false alarm performance for a morphological-hidden Markov model filtering approach to vision-based airborne dim-target collision detection. On the basis of compelling in-flight collision scenario data, we calculate system operating characteristic (SOC) curves that concisely illustrate the detection range versus false alarm rate design trade-offs. These preliminary SOC curves provide a more complete description of dim-target detection performance than previous studies (due to the experimental difficulties involved, previous studies have been limited to very short flight data sample sets and hence have not been able to quantify false alarm behaviour). The preliminary investigation here is based on data collected from 4 controlled collision encounters and supporting non-target flight data. This study suggests head-on detection ranges of approximately 2.22 km under blue sky background conditions (1.26 km in cluttered background conditions), whilst experiencing false alarms at a rate of less than 1.7 false alarms/hour (i.e., less than once every 36 minutes). Further data collection is currently in progress.
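The toy sketch below shows how points on such an SOC curve can be assembled: sweeping a detection threshold trades a longer head-on detection range against a higher false alarm rate on target-free flight data. The per-frame scores, ranges and durations are synthetic, and per-frame thresholding is a simplification of the morphological/HMM filtering pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SOC-curve assembly: for each threshold, record (a) the range at which
# a closing target is first declared and (b) the false alarm rate on
# target-free flight data. All scores, ranges and durations are synthetic.
fps = 15.0
ranges_km = np.linspace(6.0, 0.5, 1200)                 # target closing over 80 s
target_scores = 1.0 / ranges_km + rng.normal(0, 0.05, ranges_km.size)
clutter_hours = 2.0
clutter_scores = rng.normal(0.2, 0.06, int(clutter_hours * 3600 * fps))

for thr in (0.4, 0.5, 0.7, 1.0):
    detected = np.nonzero(target_scores > thr)[0]
    det_range = ranges_km[detected[0]] if detected.size else 0.0
    fa_per_hour = float(np.sum(clutter_scores > thr)) / clutter_hours
    print(f"threshold {thr:.2f}: detection range {det_range:.2f} km, "
          f"{fa_per_hour:.1f} false alarms/hour")
```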
Abstract:
It is recognised that individuals do not always respond honestly when completing psychological tests. One of the foremost issues for research in this area is the inability to detect individuals attempting to fake. While a number of strategies have been identified in faking, a commonality of these strategies is the latent role of long term memory. Seven studies were conducted in order to examine whether it is possible to detect the activation of faking related cognitions using a lexical decision task. Study 1 found that engagement with experiential processing styles predicted the ability to fake successfully, confirming the role of associative processing styles in faking. After identifying appropriate stimuli for the lexical decision task (Studies 2A and 2B), Studies 3 to 5 examined whether a cognitive state of faking could be primed and subsequently identified, using a lexical decision task. Throughout the course of these studies, the experimental methodology was increasingly refined in an attempt to successfully identify the relevant priming mechanisms. The results were consistent and robust throughout the three priming studies: faking good on a personality test primed positive faking related words in the lexical decision tasks. Faking bad, however, did not result in reliable priming of negative faking related cognitions. To more completely address potential issues with the stimuli and the possible role of affective priming, two additional studies were conducted. Studies 6A and 6B revealed that negative faking related words were more arousing than positive faking related words, and that positive faking related words were more abstract than negative faking related words and neutral words. Study 7 examined whether the priming effects evident in the lexical decision tasks occurred as a result of an unintentional mood induction while faking the psychological tests. Results were equivocal in this regard. This program of research aligned the fields of psychological assessment and cognition to inform the preliminary development and validation of a new tool to detect faking. Consequently, an implicit technique to identify attempts to fake good on a psychological test has been identified, using long established and robust cognitive theories in a novel and innovative way. This approach represents a new paradigm for the detection of individuals responding strategically to psychological testing. With continuing development and validation, this technique may have immense utility in the field of psychological assessment.
Abstract:
Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood. There is an increasing need for the control of waterborne microbial pathogens. Methods allowing the determination of the origin of faecal contamination in water are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including the detection of different microorganisms and various host-specific markers. However, to date there is no universal MST method that can reliably determine the source (human or animal) of faecal contamination. Therefore, the use of multiple approaches is frequently advised. MST is currently recognised as a research tool, rather than something to be included in routine practices. The main focus of this research was to develop novel and universally applicable methods to meet the demands for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the object organism for our studies as, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeats (CRISPR) screening using high resolution melt analysis (HRMA), and phage detection development based on CRISPR types. The advantage of combining SNP genotyping and CRISPR genes is discussed in this study. For the first time, a highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in the field investigations of the Coomera watershed, South-East Queensland, Australia. Four human-specific profiles [11], [29], [32] and [45] and the animal-specific SNP profile [7] were detected in water. Two human-specific profiles, [29] and [11], were found to be prevalent in the samples over a time period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method is able to identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host specificity in these mixed profiles, a CRISPR screening protocol was developed, to be used on the set of E. coli isolates previously analysed for SNP profiles. CRISPR loci, which record previous attacks by DNA coliphages, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli as well as in other pathogens in water. Although host specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful in detecting the geographical site of sources. HRMA allows determination of ‘different’ and ‘same’ CRISPR alleles and can be introduced into water monitoring as a cost-effective and rapid method.
Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful as marker genotypes globally for the identification of human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers was developed to search for unknown phages and to examine host specificity in phage sequences. Preliminary experiments on recombinant plasmids showed the possibility of using this method for recovering phage sequences. Future studies will determine the host specificity of DNA phage genotyping as soon as the first reliable sequences can be acquired. No doubt, only the application of multiple approaches in MST will allow identification of the character of microbial contamination with higher confidence and reliability.
Abstract:
Influenza is a widespread disease occurring in seasonal epidemics, and each year it is responsible for up to 500,000 deaths worldwide. Influenza can develop into strains which cause severe symptoms and high mortality rates, and could potentially reach pandemic status if the virus's properties allow easy transmission. Influenza is transmissible via contact with the virus, either directly (infected people) or indirectly (contaminated objects); via reception of large droplets over short distances (one metre or less); or through inhalation of aerosols containing the virus expelled by infected individuals during respiratory activities, which can remain suspended in the air and travel distances of more than one metre (the aerosol route). Aerosol transmission of viruses involves three stages: production of the droplets containing viruses; transport of the droplets and the ability of a virus to remain intact and infectious; and reception of the droplets (via inhalation). Our understanding of the transmission of influenza viruses via the aerosol route is poor, and thus our ability to prevent a widespread outbreak is limited. This study explored the fate of viruses in droplets by investigating the effects of some physical factors on the recovery of both a bacteriophage model and influenza virus. Experiments simulating respiratory droplets were carried out using different types of droplets, generated from a commonly used water-like matrix, and also from an ‘artificial mucous’ matrix which was used to more closely resemble respiratory fluids. To detect viruses in droplets, we used traditional plaque assay techniques, and also a sensitive, quantitative PCR assay specifically developed for this study. Our results showed that the artificial mucous suspension enhanced the recovery of infectious bacteriophage. We were able to report detection limits of infectious bacteriophage (no bacteriophage was detected by the plaque assay when aerosolised from a suspension of 10³ PFU/mL, for three of the four droplet types tested), and that bacteriophage could remain infectious in suspended droplets for up to 20 minutes. We also showed that the nested real-time PCR assay was able to detect the presence of bacteriophage RNA where the plaque assay could not detect any intact particles. Finally, applying knowledge from the bacteriophage experiments, we reported the quantitative recoveries of influenza viruses in droplets, which were more consistent and stable than we had anticipated. Influenza viruses could be detected in suspended aerosols for up to 20 minutes after aerosolisation, and possibly beyond. The virus was also detectable when nebulised from suspensions with relatively low virus concentrations.
Abstract:
Many existing schemes for malware detection are signature-based. Although they can effectively detect known malware, they cannot detect variants of known malware or new malware. Most network services, such as online shopping malls, Picasa, YouTube, Blogger, etc., do not expect executable code in their inbound network traffic. Therefore, such network applications can be protected from malware infection by monitoring their ports to see whether incoming packets contain any executable content. This paper proposes a content-classification scheme that identifies executable content in incoming packets. The proposed scheme analyzes the packet payload in two steps. It first analyzes the payload to determine whether it contains multimedia-type data; if not, it classifies the payload as either text-type or executable. Although in our experiments the proposed scheme shows low rates of false negatives and false positives (4.69% and 2.53%, respectively), the presence of inaccuracies still requires further inspection to efficiently detect the occurrence of malware. In this paper, we also propose simple statistical and combinatorial analyses to deal with false positives and negatives.
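As a simple illustration of the idea of screening payloads for executable content (not the classification scheme proposed in the paper), the sketch below checks for well-known executable signatures and otherwise falls back to a crude printable-byte ratio to separate text from other binary data.

```python
# Toy payload screening: check well-known executable magic bytes, then use
# a printable-byte ratio to separate text from other binary content.
# This is an illustration of the general idea, not the paper's scheme.
EXEC_SIGNATURES = [b"MZ", b"\x7fELF", b"\xfe\xed\xfa\xce", b"\xca\xfe\xba\xbe"]

def classify_payload(payload: bytes) -> str:
    for sig in EXEC_SIGNATURES:
        if payload.startswith(sig):
            return "executable"
    if not payload:
        return "empty"
    printable = sum(1 for b in payload if 32 <= b < 127 or b in (9, 10, 13))
    return "text" if printable / len(payload) > 0.9 else "binary/multimedia"

print(classify_payload(b"MZ\x90\x00..."))             # -> executable
print(classify_payload(b"GET /index.html HTTP/1.1"))  # -> text
print(classify_payload(bytes(range(256))))            # -> binary/multimedia
```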