66 results for Radon anomalies


Relevance: 10.00%

Abstract:

Housing demand and supply in Australia are persistently out of balance. The Australian situation counters the experience of many other parts of the world in the aftermath of the Global Financial Crisis, with residential housing prices proving particularly resilient. A seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of population growth fuelled by immigration, and further buoyed by sustained historically low interest rates, rising income levels, and increased government assistance for first home buyers, this strong demand ensures that problems related to housing affordability continue almost unabated. A significant but less visible factor affecting housing affordability is holding costs. Although only one contributor in the housing affordability matrix, the nature and extent of their impact requires elucidation: for example, the computation and methodology behind the calculation of holding costs varies widely, and in some instances holding costs are ignored altogether. In addition, ambiguity exists over which elements comprise holding costs, which affects the assessment of their relative contribution. Such anomalies may be explained by the fact that assessment is conducted over time in an ever-changing environment. A strong relationship with opportunity cost, in turn dependent inter alia upon prevailing inflation and/or interest rates, adds further complexity. By extending research in the general area of housing affordability, this thesis provides a detailed investigation of the elements of holding costs in the context of mid-sized (15-200 lot) greenfield residential property developments in South East Queensland. With the dimensions of holding costs and their influence over housing affordability determined, the null hypothesis H0, that holding costs are not passed on, can be addressed.
Arriving at these conclusions involves the development of robust economic and econometric models that clarify the component impacts of holding cost elements. An explanatory sequential design research methodology has been adopted, whereby the compilation and analysis of quantitative data and the development of an economic model are informed by the subsequent collection and analysis of primarily qualitative data derived from surveying development-related organisations. Ultimately, there are significant policy implications for the framework used in Australian jurisdictions to promote, retain, or otherwise maximise opportunities for affordable housing.
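The opportunity-cost component of holding costs described above can be illustrated with a minimal sketch. The land value, interest rate, holding period and lot count below are hypothetical figures chosen for illustration, not data or methodology from the thesis:

```python
# Illustrative sketch only: holding cost modelled as the opportunity cost of
# capital tied up in land, compounded at a prevailing interest rate.
# All figures below are hypothetical, not data from the thesis.

def holding_cost(land_value, annual_rate, years):
    """Compound opportunity cost of holding land for `years` at `annual_rate`."""
    return land_value * ((1 + annual_rate) ** years - 1)

# A hypothetical mid-sized greenfield parcel of 100 lots held for 3 years at 6% p.a.
cost = holding_cost(land_value=5_000_000, annual_rate=0.06, years=3)
per_lot = cost / 100
```

Because the cost compounds with both the prevailing rate and the holding period, small changes in either assumption shift the per-lot figure materially, which illustrates why computed holding costs vary so widely between assessments.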

Relevance: 10.00%

Abstract:

Background: Type 1 Neurofibromatosis (NF1) is a genetic disorder caused by mutations of the NF1 gene. Clinical symptoms are varied, but hallmark features of the disease include skin pigmentation anomalies (café au lait macules, skinfold freckling) and dermal neurofibromas. Method: These dermal manifestations of NF1 have previously been reported in a mouse model in which Nf1+/− mice are topically treated with dimethylbenz[a]anthracene (DMBA) and 12-O-tetradecanoylphorbol-13-acetate (TPA). We adopted this mouse model to test the protective effects of a nitroxide antioxidant, 5-carboxy-1,1,3,3-tetramethylisoindolin-2-yloxyl (CTMIO). Antioxidants have previously been shown to increase longevity in nf1-deficient fruit flies. Doses of 4 μM and 40 μM CTMIO, provided ad libitum in drinking water, were given to Nf1-deficient mice. Results: Consistent with previous reports, Nf1-deficient mice showed a 4.7-fold increase in papilloma formation (P < 0.036). However, neither dose of CTMIO had any significant effect on papilloma formation. A non-significant decrease in skin pigmentation abnormalities was seen with 4 μM but not 40 μM CTMIO. Subsequent analysis of genomic DNA isolated from papillomas indicated that DMBA/TPA-induced tumors did not exhibit local loss of heterozygosity (LOH) at the Nf1 locus. Conclusion: These data reveal that oral antioxidant therapy with CTMIO does not reduce tumor formation in this multistage cancer model, and also that the model does not feature LOH for Nf1.

Relevance: 10.00%

Abstract:

Aim: As molecular and cytogenetic testing becomes increasingly sophisticated, more individuals are being diagnosed with rare chromosome disorders. Yet despite a burgeoning knowledge of biomedical aspects, little is known about the implications for psychosocial development. The scant literature gives a general impression of deficits and adverse developmental outcomes. Method: Developmental data were obtained from two 16-year-olds diagnosed with a rare chromosome disorder: a girl with 8p23.1 and a boy with 16q11.2q12.1. Measures of intellectual ability, academic achievement, and other aspects of functioning were administered at multiple time points from early childhood to adolescence. Results: Both adolescents experienced initial delays in motor and language development. Although the girl's intelligence is assessed as being in the average range, she experiences difficulties with motor planning, spelling and writing. The boy has been diagnosed with a mild intellectual disability and demonstrates mild autistic features. Conclusions: The two case descriptions are in marked contrast to the published literature on these two chromosome anomalies. Both adolescents are developing much more positively than would be expected on the basis of the grim predictions of their paediatricians and the negative reports in the literature. It is concluded that, for most rare chromosome disorders, the range of possible developmental outcomes is currently unknown.

Relevance: 10.00%

Abstract:

Stromatolites consist primarily of trapped and bound ambient sediment and/or authigenic mineral precipitates, but discrimination of the two constituents is difficult where stromatolites have a fine texture. We used laser ablation-inductively coupled plasma-mass spectrometry to measure trace element (rare earth element (REE), Y and Th) concentrations in both stromatolites (domical and branched) and closely associated particulate carbonate sediment in interspaces (spaces between columns or branches) from bioherms within the Neoproterozoic Bitter Springs Formation, central Australia. Our high-resolution sampling allows discrimination of shale-normalised REE patterns between carbonate in stromatolites and immediately adjacent, fine-grained ambient particulate carbonate sediment from interspaces. Whereas all samples show similar negative La and Ce anomalies, positive Gd anomalies and chondritic Y/Ho ratios, the stromatolites and non-stromatolite sediment are distinguishable on the basis of consistently elevated light REEs (LREEs) in the stromatolitic laminae and relatively depleted LREEs in the particulate sediment samples. Additionally, concentrations of the lithophile element Th are higher in ambient sediment samples than in stromatolites, consistent with the accumulation of some fine siliciclastic detrital material in the ambient sediment but a near absence in the stromatolites. These findings are consistent with the stromatolites consisting dominantly of in situ carbonate precipitates rather than trapped and bound ambient sediment. Hence, high-resolution trace element (REE + Y, Th) geochemistry can discriminate fine-grained carbonates in these stromatolites from coeval non-stromatolitic carbonate sediment, and demonstrates that the sampled stromatolites formed primarily by in situ precipitation, presumably within microbial mats/biofilms, rather than by trapping and binding of ambient sediment.
Identification of the source of fine carbonate in stromatolites is significant because, if it is not too heavily contaminated by trapped ambient sediment, it may contain geochemical biosignatures and/or direct evidence of the local water chemistry in which the precipitates formed.
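The shale-normalisation and anomaly calculations behind REE diagnostics of this kind can be sketched as follows. The PAAS reference values are approximate literature figures, the sample concentrations are invented for illustration, and the Ce/Ce* convention shown (interpolating between La and Pr) is only one of several in use; none of this is data from the study:

```python
# Sketch of shale normalisation and a Ce-anomaly estimate of the kind used in
# REE + Y diagnostics. PAAS reference values are approximate literature figures;
# the sample concentrations below are invented.

PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}  # ppm

def shale_normalise(sample):
    """Divide each measured concentration by its shale reference value."""
    return {el: sample[el] / PAAS[el] for el in sample}

def ce_anomaly(sn):
    """Ce/Ce*: measured Ce_SN over the value interpolated from La_SN and Pr_SN."""
    return sn["Ce"] / (0.5 * sn["La"] + 0.5 * sn["Pr"])

sample = {"La": 3.0, "Ce": 4.5, "Pr": 0.70}  # hypothetical carbonate, ppm
sn = shale_normalise(sample)
anomaly = ce_anomaly(sn)  # values below 1 indicate a negative Ce anomaly
```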

Relevance: 10.00%

Abstract:

Security cues in web browsers are meant to alert users to potential online threats, yet many studies demonstrate that security indicators are largely ineffective in this regard. Those studies have depended upon subjects' self-reports or upon aggregate experiments that correlate responses to sites with and without indicators. We report on a laboratory experiment using eye-tracking to follow the behavior of self-identified computer experts as they share information across popular social media websites. Eye-tracking equipment allows us to explore possible behavioral differences in the way experts, as opposed to non-experts, perceive web browser security cues. Unfortunately, owing to the use of self-identified experts, technological issues with the setup, and demographic anomalies, our results are inconclusive. We describe our initial experimental design and the lessons learned from it, and provide a set of steps for others implementing experiments that involve unfamiliar technologies (eye-tracking specifically), subjects with differing experience with the laboratory tasks, and individuals with varying security expertise. We also discuss recruitment, and how our design will address the inherent uncertainties of recruitment rather than assume an ideal population. Some of these modifications are generalizable; together they will allow us to run a larger 2×2 study, rather than a study of only experts using two different single sign-on systems.

Relevance: 10.00%

Abstract:

Online social networks can be modelled as graphs; in this paper, we analyze the use of graph metrics for identifying users with anomalous relationships to other users. A framework is proposed for analyzing the effectiveness of various graph theoretic properties such as the number of neighbouring nodes and edges, betweenness centrality, and community cohesiveness in detecting anomalous users. Experimental results on real-world data collected from online social networks show that the majority of users typically have friends who are friends themselves, whereas anomalous users’ graphs typically do not follow this common rule. Empirical analysis also shows that the relationship between average betweenness centrality and edges identifies anomalies more accurately than other approaches.
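The "friends who are friends themselves" property described above is the local clustering coefficient. A minimal sketch, using a toy graph rather than the paper's real-world data:

```python
# Sketch: most users' friends know each other (high clustering coefficient),
# while an anomalous user's contacts typically do not. Toy data, pure Python.
from itertools import combinations

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

# Toy network: a tight friend group a-b-c-d (a clique), plus an "anomalous"
# user x connected to strangers who do not know each other.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d"), ("c", "d"),
         ("x", "a"), ("x", "e"), ("x", "f"), ("x", "g")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

cc_normal = clustering_coefficient(adj, "b")     # all of b's friends know each other
cc_anomalous = clustering_coefficient(adj, "x")  # none of x's contacts are linked
```

In a library such as networkx this is `nx.clustering`; the paper's stronger result combines such metrics with betweenness centrality, which the sketch omits.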

Relevance: 10.00%

Abstract:

Highly sensitive infrared (IR) cameras provide high-resolution diagnostic images of temperature and vascular changes in the breast. These images can be processed to emphasize hot spots that exhibit early and subtle changes owing to pathology. The resulting images show clusters that appear random in shape and spatial distribution but carry class-dependent information in shape and texture. Automated pattern recognition techniques are challenged by changes in the location, size and orientation of these clusters. Higher-order spectral invariant features provide robustness to such transformations and are suited to extracting texture- and shape-dependent information from noisy images. In this work, the effectiveness of bispectral invariant features in the diagnostic classification of breast thermal images into malignant, benign and normal classes is evaluated, and a phase-only variant of these features is proposed. High-resolution IR images of breasts, captured with a measurement accuracy of ±0.4% (full scale) and a temperature resolution of 0.1 °C (black body), depicting malignant, benign and normal pathologies, are used in this study. Breast images are registered using their lower boundaries, automatically extracted using landmark points whose locations are learned during training. Boundaries are extracted using Canny edge detection and elimination of inner edges. Breast images are then segmented using fuzzy c-means clustering and the hottest regions are selected for feature extraction. Bispectral invariant features are extracted from Radon projections of these images. An AdaBoost classifier is used to select and fuse the best features during training, and then to classify unseen test images into malignant, benign and normal classes. A data set comprising 9 malignant, 12 benign and 11 normal cases is used to evaluate performance. Malignant cases are detected with 95% accuracy.
A variant of the features using the normalized bispectrum, which discards all magnitude information, is shown to perform better for classification between benign and normal cases, with 83% accuracy compared to 66% for the original.
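The phase-only variant mentioned above can be sketched for a 1-D signal, such as a single Radon projection. This is an illustrative definition of one bispectrum sample and its magnitude-discarding normalisation, not the paper's full invariant-feature pipeline:

```python
# Sketch: one sample of the bispectrum of a 1-D signal,
# B(f1, f2) = X(f1) X(f2) X*(f1 + f2),
# and a phase-only (normalised) variant that discards all magnitude information.
import numpy as np

def bispectrum(x, f1, f2):
    """Triple product of Fourier coefficients at f1, f2 and f1 + f2."""
    X = np.fft.fft(x)
    return X[f1] * X[f2] * np.conj(X[f1 + f2])

def phase_only_bispectrum(x, f1, f2):
    """Normalised variant: keep only the biphase on the unit circle."""
    b = bispectrum(x, f1, f2)
    return b / abs(b)

rng = np.random.default_rng(0)
proj = rng.standard_normal(64)          # stand-in for one Radon projection
b = bispectrum(proj, 3, 5)
p = phase_only_bispectrum(proj, 3, 5)   # unit magnitude: phase information only
```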

Relevance: 10.00%

Abstract:

Visual abnormalities, both at the sensory input and the higher interpretive levels, have been associated with many of the symptoms of schizophrenia. Individuals with schizophrenia typically experience distortions of sensory perception, resulting in perceptual hallucinations and delusions that are related to the observed visual deficits. Disorganised speech, thinking and behaviour are commonly experienced by sufferers of the disorder, and have also been attributed to perceptual disturbances associated with anomalies in visual processing. Compounding these issues are the marked deficits in cognitive functioning observed in approximately 80% of those with schizophrenia. Cognitive impairments associated with schizophrenia include difficulty with concentration and memory (working, visual and verbal), an impaired ability to process complex information, poor response inhibition, and deficits in processing speed and in visual and verbal learning. Deficits in sustained attention or vigilance, and poor executive functioning such as poor reasoning, problem solving and social cognition, are all influenced by impaired visual processing. These symptoms impact on the internal perceptual world of those with schizophrenia and hamper their ability to navigate their external environment. Visual processing abnormalities in schizophrenia are likely to worsen personal, social and occupational functioning. Binocular rivalry provides a unique opportunity to investigate the processes involved in visual awareness and visual perception. Binocular rivalry is the alternation of perceptual images that occurs when conflicting visual stimuli are presented to each eye in the same retinal location. The observer perceives the opposing images in an alternating fashion, despite the sensory input to each eye remaining constant. Binocular rivalry tasks have been developed to investigate specific parts of the visual system.
The research presented in this Thesis provides an exploratory investigation of binocular rivalry in schizophrenia, using the method of Pettigrew and Miller (1998) and comparing individuals with schizophrenia to healthy controls. This method allows manipulation of the spatial and temporal frequency, luminance contrast and chromaticity of the visual stimuli. Manipulations of the rival stimuli affect the rate of binocular rivalry alternations and the time spent perceiving each image (dominance duration). Binocular rivalry rate and dominance durations provide useful measures for investigating aspects of visual neural processing that lead to the perceptual disturbances and cognitive dysfunction attributed to schizophrenia. Despite this promise, however, the binocular rivalry phenomenon has not been extensively explored in schizophrenia to date. Following a review of the literature, the research in this Thesis examined individual variation in binocular rivalry. The initial study (Chapter 2) explored the effect of systematically altering the properties of the stimuli (i.e. spatial and temporal frequency, luminance contrast and chromaticity) on binocular rivalry rate and dominance durations in healthy individuals (n=20). The findings showed that altering the temporal frequency and luminance contrast of the stimuli significantly affected rate. This is significant because the processing of temporal frequency and luminance contrast has consistently been demonstrated to be abnormal in schizophrenia. The current research then explored binocular rivalry in schizophrenia. The primary research question was, "Are binocular rivalry rates and dominance durations recorded in participants with schizophrenia different to those of the controls?"
In this second study, binocular rivalry data collected using low- and high-strength binocular rivalry stimuli were compared to alternations recorded during a monocular rivalry task, the Necker cube task, to replicate and advance the work of Miller et al. (2003). Participants with schizophrenia (n=20) recorded fewer alternations (i.e. slower alternation rates) than control participants (n=20) on both binocular rivalry tasks; however, no difference was observed between the groups on the Necker cube task. The magnocellular and parvocellular visual pathways, thought to be abnormal in schizophrenia, were also investigated in binocular rivalry. The binocular rivalry stimuli used in this third study (Chapter 4) were altered to bias the task toward one of these two pathways. Participants with schizophrenia recorded slower binocular rivalry rates than controls in both binocular rivalry tasks. Using a within-subject design, binocular rivalry data were compared to data collected from a backward-masking task widely accepted to bias both these pathways. Based on these data, a model of binocular rivalry founded on the magnocellular and parvocellular pathways that contribute to the dorsal and ventral visual streams was developed. Binocular rivalry rates were then compared with performance on Benton's Judgment of Line Orientation task in individuals with schizophrenia and in healthy controls (Chapter 5). Benton's Judgment of Line Orientation task is widely accepted to be processed within the right cerebral hemisphere, making it an appropriate task for investigating the role of the cerebral hemispheres in binocular rivalry, and for testing the inter-hemispheric switching hypothesis of binocular rivalry proposed by Pettigrew and Miller (1998, 2003). The data were suggestive of intra-hemispheric rather than inter-hemispheric visual processing in binocular rivalry.
Neurotransmitter involvement in binocular rivalry, backward masking and Judgment of Line Orientation in schizophrenia was investigated using a genetic indicator of dopamine receptor distribution and functioning: the presence of the Taq1 allele of the dopamine D2 receptor (DRD2) gene. This final study (Chapter 6) explored whether the presence of the Taq1 allele, and by inference the distribution of dopamine receptors and dopamine function, accounted for the large individual variation in binocular rivalry. The presence of the Taq1 allele was associated with the slower binocular rivalry rates and poorer backward-masking and Judgment of Line Orientation performance seen in the group with schizophrenia. This Thesis has contributed to what is known about binocular rivalry in schizophrenia. Consistently slower binocular rivalry rates were observed in participants with schizophrenia, indicating abnormally slow visual processing in this group. These data support previous studies reporting visual processing abnormalities in schizophrenia and suggest that a slow binocular rivalry rate is not a feature specific to bipolar disorder, but may be a feature of disorders with psychotic features generally. The contributions of the magnocellular or dorsal pathways and the parvocellular or ventral pathways to binocular rivalry, and therefore to perceptual awareness, were investigated. The data supported the view that the magnocellular system initiates perceptual awareness of an image and the parvocellular system maintains the perception of the image, making it available to higher-level processing within the cortical hemispheres. Abnormal magnocellular and parvocellular processing may both contribute to perceptual disturbances that ultimately contribute to the cognitive dysfunction associated with schizophrenia. An alternative model of binocular rivalry based on these observations was proposed.

Relevance: 10.00%

Abstract:

Robust hashing is an emerging field concerned with hashing data types unsuited to traditional cryptographic hashing methods. Traditional hash functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, so the hashing process is sensitive to any change in the input. Unfortunately, in certain applications input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data in which such changes exist but do not alter the meaning or appearance of the input. For these data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are based not on cryptographic methods but on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features better suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
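The quantization-and-encoding stage under discussion can be sketched as follows: per-dimension thresholds are learnt from training features, then used to binarise a new feature vector. This is a minimal illustration (median thresholds, invented feature values), not the dissertation's actual scheme:

```python
# Sketch of learnt-threshold quantisation and binary encoding for a robust
# hash. Median thresholds and the feature values below are illustrative only.
from statistics import median

def learn_thresholds(training_features):
    """One threshold per feature dimension: the median over the training set."""
    return [median(dim) for dim in zip(*training_features)]

def binarise(features, thresholds):
    """Binary encoding: emit 1 where a feature exceeds its learnt threshold."""
    return "".join("1" if f > t else "0" for f, t in zip(features, thresholds))

train = [[0.1, 5.0, -2.0], [0.3, 4.0, -1.0], [0.2, 6.0, -3.0]]
thresholds = learn_thresholds(train)                  # [0.2, 5.0, -2.0]
hash_bits = binarise([0.25, 4.5, -0.5], thresholds)   # "101"
```

The sketch also makes the leakage point concrete: anyone who knows the thresholds knows which side of each boundary the original features fell on.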

Relevance: 10.00%

Abstract:

Increases in the functionality, power and intelligence of modern engineered systems have led to complex systems with large numbers of interconnected dynamic subsystems. In such machines, faults in one subsystem can cascade and affect the behavior of numerous other subsystems. This complicates traditional fault monitoring procedures because of the need to train models of the faults that the monitoring system must detect and recognize. Unavoidable design defects, quality variations and differing usage patterns make it infeasible to foresee all possible faults, resulting in limited diagnostic coverage that can only deal with previously anticipated and modeled failures. This leads to missed detections and costly blind swapping of acceptable components because of one's inability to accurately isolate the source of previously unseen anomalies. To circumvent these difficulties, a new paradigm for diagnostic systems is proposed and discussed in this paper. Its feasibility is demonstrated through application examples in automotive engine diagnostics.

Relevance: 10.00%

Abstract:

The T-box family transcription factor gene TBX20 acts in a conserved regulatory network, guiding heart formation and patterning in diverse species. Mouse Tbx20 is expressed in cardiac progenitor cells, differentiating cardiomyocytes, and developing valvular tissue, and its deletion or RNA interference-mediated knockdown is catastrophic for heart development. TBX20 interacts physically, functionally, and genetically with other cardiac transcription factors, including NKX2-5, GATA4, and TBX5, mutations of which cause congenital heart disease (CHD). Here, we report nonsense (Q195X) and missense (I152M) germline mutations within the T-box DNA-binding domain of human TBX20 that were associated with a family history of CHD and a complex spectrum of developmental anomalies, including defects in septation, chamber growth, and valvulogenesis. Biophysical characterization of wild-type and mutant proteins indicated how the missense mutation disrupts the structure and function of the TBX20 T-box. Dilated cardiomyopathy was a feature of the TBX20 mutant phenotype in humans and mice, suggesting that mutations in developmental transcription factors can provide a sensitized template for adult-onset heart disease. Our findings are the first to link TBX20 mutations to human pathology. They provide insights into how mutations of different genes in an interactive regulatory circuit lead to diverse clinical phenotypes, with implications for diagnosis, genetic screening, and patient follow-up.

Relevance: 10.00%

Abstract:

This article argues that a semantic shift in the crowd in Vietnam over the last decade has allowed public space to become a site through which transgressive ideologies and desires may find an outlet. At a time of accelerating social change, the state has effectively delimited public criticism, yet a fragile but assertive form of Vietnamese democratic practice has arisen in public space, at the margins of official society, in sites previously equated with state control. Official state functions attract only small audiences and, rather than celebrating the dominance of the party, reveal the disengagement of the populace from the party's activities. Where crowds were once a component of state (stage)-managed events, public spaces now attract large numbers of people for supposedly non-political activities that may become transgressive acts condemned by the regime. In support of the notion that crowding opens up the possibility of more subversive political action, the paper presents an analysis of recent crowd formations and the state's reactions to them. The analysis reveals the modalities through which popular culture has provided the public with the means to transcend the constraints of official, authorized, and legitimate codes of behaviour in public space. Changes in the use of public space, it is argued, map the sets of relations between the public and the state, making these transforming relationships visible, although fraught with contradictions and anomalies.

Relevance: 10.00%

Abstract:

Purpose: Videokeratoscopy images can be used for non-invasive assessment of the tear film. In this work, the applicability of an image processing technique, textural analysis, to the assessment of the tear film in Placido disc images is investigated. Methods: In the presence of tear film thinning/break-up, the pattern reflected from the videokeratoscope is disturbed in the region of tear film disruption. The Placido pattern therefore carries information about the stability of the underlying tear film, and by characterizing the pattern regularity, tear film quality can be inferred. In this paper, a textural-features approach is used to process the Placido images. This method provides a set of texture features from which an estimate of tear film quality can be obtained. The method is tested for the detection of dry eye in a retrospective dataset of 34 subjects (22 normal and 12 dry eye), with measurements taken under suppressed blinking conditions. Results: To assess the capability of each texture feature to discriminate dry-eye from normal subjects, the receiver operating characteristic (ROC) curve was calculated and the area under the curve (AUC), specificity and sensitivity extracted. Across the features examined, the AUC ranged from 0.77 to 0.82, while sensitivity typically showed values above 0.9 and specificity values around 0.6. Overall, the estimated ROCs indicate that the proposed technique provides good discrimination performance. Conclusions: Texture analysis of videokeratoscopy images is applicable to the study of tear film anomalies in dry-eye subjects. The proposed technique appears to have demonstrated its clinical relevance and utility.
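The reported sensitivity and specificity figures come from thresholding a feature's scores against the known labels. A sketch with invented scores and labels (not the study's data):

```python
# Sketch: sensitivity and specificity of a scored classifier at one threshold.
# Scores and labels below are invented for illustration.

def sens_spec(scores, labels, threshold):
    """labels: 1 = dry eye, 0 = normal; a score above threshold flags dry eye."""
    tp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.5]  # invented texture-feature scores
labels = [1,   1,   1,   1,   0,   0,   0,   0]
sensitivity, specificity = sens_spec(scores, labels, threshold=0.35)
```

Sweeping the threshold and plotting sensitivity against 1 − specificity traces the ROC curve whose area (AUC) the abstract reports.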

Relevance: 10.00%

Abstract:

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories, and ultimately in clinical laboratory settings, require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low-multiplex analyses as well as high-multiplex approaches in which software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics, including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range of variation in each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as a peak area coefficient of variation <0.15, a peak width coefficient of variation <0.15, a standard deviation of RT <0.15 min (9 s), and an RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of, and need for, an SSP to establish robust and reliable system performance.
Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories, and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
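The quoted pass/fail criteria map directly onto a simple check. The metric names below are our own shorthand for the four measures the abstract defines:

```python
# Sketch: the four system suitability pass/fail criteria quoted above,
# applied to a platform's summary metrics (metric names are our shorthand).

def system_suitable(peak_area_cv, peak_width_cv, rt_sd_min, rt_drift_min):
    """True only if all four criteria are met: CVs < 0.15, RT SD < 0.15 min,
    RT drift < 0.5 min."""
    return (peak_area_cv < 0.15 and peak_width_cv < 0.15
            and rt_sd_min < 0.15 and rt_drift_min < 0.5)

ok = system_suitable(0.08, 0.10, 0.05, 0.2)        # meets all four criteria
drifting = system_suitable(0.08, 0.10, 0.05, 0.6)  # RT drift of 0.6 min fails
```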

Relevance: 10.00%

Abstract:

The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators, making automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task, in which the system is trained on normal data and is required to detect events that do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene, so different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. The extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects of interest. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
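The novelty-detection formulation described above can be sketched with a single Gaussian standing in for the paper's GMM/HMM models: fit the model to features from "normal" frames only, then flag test vectors whose distance from the model is too large. Features and threshold below are illustrative, not the paper's:

```python
# Sketch: novelty detection with a single Gaussian fit to "normal" features
# (the paper uses GMM and semi-2D HMM models; the data here are synthetic).
import numpy as np

rng = np.random.default_rng(1)
normal_train = rng.standard_normal((500, 2))  # e.g. 2-D flow/texture features

mu = normal_train.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(normal_train, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a feature vector from the normal model."""
    d = x - mu
    return float(d @ inv_cov @ d)

THRESHOLD = 9.21  # roughly the chi-square(2 dof) cutoff at the 1% level

is_normal = mahalanobis2(np.array([0.1, -0.3])) < THRESHOLD
is_abnormal = mahalanobis2(np.array([6.0, 6.0])) >= THRESHOLD
```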