119 results for Group theoretical based techniques
at Queensland University of Technology - ePrints Archive
Abstract:
The existing Collaborative Filtering (CF) technique that has been widely applied by e-commerce sites requires a large amount of ratings data to make meaningful recommendations. It is not directly applicable for recommending products that are not frequently purchased by users, such as cars and houses, as it is difficult to collect rating data for such products from the users. Many of the e-commerce sites for infrequently purchased products are still using basic search-based techniques whereby the products that match the attributes given in the target user's query are retrieved and recommended to the user. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they provide the same query, regardless of any difference in their online navigation behaviour. This paper proposes to integrate collaborative filtering and search-based techniques to provide personalized recommendations for infrequently purchased products. Two different techniques are proposed, namely CFRRobin and CFAg Query. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products by merging and ranking the returned products using the Round Robin method. The CFAg Query technique uses the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques, CFRRobin and CFAg Query, perform better than the standard Collaborative Filtering (CF) and Basic Search (BS) approaches, which are widely applied by current e-commerce applications. The CFRRobin and CFAg Query approaches also outperform the existing query expansion (QE) technique that was proposed for recommending infrequently purchased products.
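As an illustration of the Round Robin merging step described above, here is a minimal Python sketch. The function, the product identifiers and the per-neighbour result lists are hypothetical stand-ins for illustration; the abstract does not specify implementation details.

```python
from itertools import zip_longest

def round_robin_merge(result_lists, top_n=10):
    """Interleave ranked result lists (one per neighbour query) into a
    single recommendation list, skipping duplicates (CFRRobin-style merge)."""
    merged, seen = [], set()
    # Take the 1st result from every list, then the 2nd, and so on.
    for tier in zip_longest(*result_lists):
        for product in tier:
            if product is not None and product not in seen:
                seen.add(product)
                merged.append(product)
            if len(merged) == top_n:
                return merged
    return merged

# Hypothetical example: ranked results retrieved using three neighbours'
# liked products as queries.
results = [["car_3", "car_7", "car_1"],
           ["car_7", "car_2"],
           ["car_5", "car_3", "car_9"]]
print(round_robin_merge(results, top_n=5))
# ['car_3', 'car_7', 'car_5', 'car_2', 'car_1']
```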
Abstract:
Complex flow datasets are often difficult to represent in detail using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel techniques for visualising such complex, time-dependent flows. In this paper, we review two popular texture-based techniques and their application to flow datasets sourced from real research projects. The texture-based techniques investigated were Line Integral Convolution (LIC) and Image-Based Flow Visualisation (IBFV). We evaluated these techniques and report on their visualisation effectiveness (compared with traditional techniques), their ease of implementation, and their computational overhead.
Abstract:
Detailed representations of complex flow datasets are often difficult to generate using traditional vector visualisation techniques such as arrow plots and streamlines. This is particularly true when the flow regime changes in time. Texture-based techniques, which are based on the advection of dense textures, are novel techniques for visualising such flows. We review two popular texture-based techniques and their application to flow datasets sourced from active research projects. The techniques investigated were Line Integral Convolution (LIC) [1] and Image-Based Flow Visualisation (IBFV) [18]. We evaluated these and report on their effectiveness from a visualisation perspective, as well as on their ease of implementation and computational overheads.
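For readers unfamiliar with LIC, the following is a minimal, unoptimised Python sketch of the core idea, assuming a white-noise input texture and naive fixed-step streamline tracing. The papers reviewed here describe the technique itself, not this particular code; production implementations use fast-LIC variants and proper boundary handling.

```python
import numpy as np

def lic(u, v, noise, length=20):
    """Minimal Line Integral Convolution: for each pixel, average a
    white-noise texture along the local streamline of the field (u, v)."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for sign in (1.0, -1.0):           # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(py), int(px)
                    if not (0 <= i < h and 0 <= j < w):
                        break                  # streamline left the image
                    total += noise[i, j]
                    count += 1
                    mag = np.hypot(u[i, j], v[i, j]) or 1.0  # avoid /0
                    px += sign * u[i, j] / mag  # unit step along the flow
                    py += sign * v[i, j] / mag
            out[y, x] = total / max(count, 1)
    return out

# Hypothetical example: a circular (vortex) field on a 64x64 grid.
ys, xs = np.mgrid[0:64, 0:64]
u, v = -(ys - 32.0), (xs - 32.0)
image = lic(u, v, np.random.rand(64, 64))
```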
Abstract:
Background: An increasing body of evidence associates a high level of sitting time with poor health outcomes. The benefits of moderate- to vigorous-intensity physical activity for various aspects of health are now well documented; however, individuals may engage in moderate-intensity physical activity for at least 30 minutes on five or more days of the week and still exhibit a high level of sitting time. The purpose of this study was to examine differences in total wellness among adults relative to high/low levels of sitting time combined with insufficient/sufficient physical activity (PA). The construct of total wellness incorporates a holistic approach to the body, mind and spirit components of life, an approach which may be more encompassing than some definitions of health.
Methods: Data were obtained from 226 adult respondents (27 ± 6 years), including 116 (51%) males and 110 (49%) females. Total PA and total sitting time were assessed with the International Physical Activity Questionnaire (IPAQ, short version). The Wellness Evaluation of Lifestyle Inventory was used to assess total wellness. An analysis of covariance (ANCOVA) was used to assess the effects of the sitting time/physical activity group on total wellness, with covariates included to partial out the effects of age, sex and work status (student or employed). Cross-tabulations were used to show associations between the IPAQ-derived high/low levels of sitting time combined with insufficient/sufficient PA and the three total wellness groups (i.e. high level of wellness, moderate wellness, and wellness development needed).
Results: The majority of the participants were located in the high total sitting time and sufficient PA group. There were statistical differences among the IPAQ groups for total wellness [F(2,220) = 32.5, p < 0.001]. A chi-square test revealed a significant difference in the distribution of the IPAQ categories within the classification of wellness [χ²(N = 226) = 54.5, p < .001]. All (100%) of the participants who self-rated as high total sitting time/insufficient PA were found in the wellness development needed group. In contrast, 72% of participants in the low total sitting time/sufficient PA group were situated in the moderate wellness group.
Conclusion: Many participants in this sample who meet the physical activity guidelines sit for longer periods of time than the median Australian sitting time. An understanding of the effects of enhanced PA and reduced sitting time on total wellness can add to the development of public health initiatives.
Keywords: IPAQ; The Wellness Evaluation of Lifestyle (WEL); Sedentary lifestyle
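As a minimal sketch of the kind of chi-square analysis reported above, the snippet below uses scipy. The contingency counts are invented purely for illustration; only the statistic quoted in the abstract (χ² = 54.5) comes from the actual study.

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = IPAQ sitting-time/PA groups,
# columns = wellness categories (high, moderate, development needed).
# Counts are invented; the study reports chi2(N=226) = 54.5, p < .001
# on its real data.
table = [[10, 40, 15],   # low sitting / sufficient PA
         [ 5, 60, 40],   # high sitting / sufficient PA
         [ 0, 16, 40]]   # high sitting / insufficient PA
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```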
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion, a significant result since radiometric distortion commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match; this was termed the rank constraint. The theoretical derivation of this constraint is in contrast to existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs; in all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas: the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that it is able to remove a large proportion of invalid matches and improve match accuracy.
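To make the census transform concrete, here is a minimal Python sketch under stated assumptions (wrap-around borders via np.roll for brevity, and a 5x5 window so the 24 comparison bits fit in a uint64). It illustrates the transform itself, not the thesis's rank-constraint algorithm.

```python
import numpy as np

def census_transform(img, window=5):
    """Census transform: encode each pixel as a bit string of comparisons
    against its neighbours. Matching then uses Hamming distance, which is
    robust to radiometric (gain/offset) differences between stereo images.
    Borders wrap around via np.roll here for brevity; real code pads."""
    r = window // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue                       # skip the centre pixel
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def census_cost(a, b):
    """Matching cost between two census codes: the Hamming distance."""
    return bin(int(a) ^ int(b)).count("1")
```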
Abstract:
Texture-based techniques for the visualisation of unsteady vector fields have been applied to the visualisation of a finite volume model for variably saturated groundwater flow through porous media. This model has been developed by staff in the School of Mathematical Sciences, QUT, for the study of salt water intrusion into coastal aquifers. This presentation discusses the implementation and effectiveness of the IBFV algorithm in the context of visualising the groundwater simulation outputs.
Abstract:
Vector field visualisation is one of the classic sub-fields of scientific data visualisation. The need for effective visualisation of flow data arises in many scientific domains, ranging from the medical sciences to aerodynamics. Though there has been much research on the topic, the question of how to communicate flow information effectively in real, practical situations is still largely an unsolved problem. This is particularly true for complex 3D flows. In this presentation we give a brief introduction and background to vector field visualisation and comment on the effectiveness of the most common solutions. We then give some examples of current developments in texture-based techniques, with practical examples of their use in CFD research and hydrodynamic applications.
Abstract:
Recommender systems (RS) are now widely applied in commercial e-commerce sites to help users deal with the information overload problem. Recommender systems provide personalized recommendations to users and thus help them make good decisions about which product to buy from the vast number of product choices. Many current recommender systems are developed for simple and frequently purchased products like books and videos, using collaborative-filtering and content-based approaches. These approaches are not directly applicable for recommending infrequently purchased products such as cars and houses, as it is difficult to collect a large number of ratings for such products from users. Many of the e-commerce sites for infrequently purchased products are still using basic search-based techniques whereby the products that match the attributes given in the target user's query are retrieved and recommended. However, search-based recommenders cannot provide personalized recommendations: for different users, the recommendations will be the same if they provide the same query, regardless of any difference in their interests. In this article, a simple user profiling approach is proposed to generate users' preferences for product attributes (i.e., user profiles) based on user product click-stream data. The user profiles can be used to find similar-minded users (i.e., neighbours) accurately. Two recommendation approaches are proposed, namely the Round-Robin fusion algorithm (CFRRobin) and the Collaborative Filtering-based Aggregated Query algorithm (CFAgQuery), to generate personalized recommendations based on the user profiles. Instead of using the target user's query to search for products as normal search-based systems do, the CFRRobin technique uses the attributes of the products in which the target user's neighbours have shown interest as queries to retrieve relevant products, and then recommends to the target user a list of products by merging and ranking the returned products using the Round Robin method. The CFAgQuery technique uses the attributes of the products that the user's neighbours have shown interest in to derive an aggregated query, which is then used to retrieve products to recommend to the target user. Experiments conducted on a real e-commerce dataset show that both the proposed techniques, CFRRobin and CFAgQuery, perform better than the standard Collaborative Filtering and Basic Search approaches, which are widely applied by current e-commerce applications.
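A minimal sketch of how such an aggregated query might be derived from neighbours' clicked products follows. The per-attribute majority-vote rule and the data layout are assumptions made for illustration; the article's exact aggregation scheme is not spelled out in this abstract.

```python
from collections import Counter

def aggregated_query(neighbour_products, top_k=3):
    """Derive one aggregated attribute query from the products the target
    user's neighbours clicked on (CFAgQuery-style), keeping for each
    attribute the values that occur most often."""
    votes = {}
    for product in neighbour_products:
        for attr, value in product.items():
            votes.setdefault(attr, Counter())[value] += 1
    return {attr: [v for v, _ in counts.most_common(top_k)]
            for attr, counts in votes.items()}

# Hypothetical example: attribute vectors of cars the neighbours viewed.
clicked = [{"make": "toyota", "body": "sedan", "fuel": "petrol"},
           {"make": "toyota", "body": "hatch", "fuel": "petrol"},
           {"make": "mazda",  "body": "sedan", "fuel": "diesel"}]
print(aggregated_query(clicked, top_k=2))
# {'make': ['toyota', 'mazda'], 'body': ['sedan', 'hatch'],
#  'fuel': ['petrol', 'diesel']}
```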
Abstract:
Detect and Avoid (DAA) technology is widely acknowledged as a critical enabler for unsegregated Remotely Piloted Aircraft (RPA) operations, particularly Beyond Visual Line of Sight (BVLOS). Image-based DAA, in the visible spectrum, is a promising technological option for addressing the challenges DAA presents. Two impediments to progress for this approach are the scarcity of available video footage to train and test algorithms, and the lack of testing regimes and specifications which facilitate repeatable, statistically valid performance assessment. This paper makes three key contributions towards addressing these impediments. First, we detail our progress towards the creation of a large hybrid collision and near-collision encounter database. Second, we explore the suitability of techniques employed by the biometric research community (Speaker Verification and Language Identification) for DAA performance optimisation and assessment; these techniques include Detection Error Trade-off (DET) curves, Equal Error Rates (EER), and the Detection Cost Function (DCF). Finally, the hybrid database and the speech-based techniques are combined and employed in the assessment of a contemporary, image-based DAA system comprising stabilisation, morphological filtering and a Hidden Markov Model (HMM) temporal filter.
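As a brief illustration of one of the speech-community metrics mentioned above, the following Python sketch computes an Equal Error Rate from detection scores. The score distributions are synthetic and the simple threshold sweep is an assumption for illustration, not the paper's evaluation code.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Equal Error Rate (EER): the operating point on a DET curve where the
    miss rate equals the false-alarm rate."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    thresholds = np.sort(scores)
    # Miss rate: targets scored below the threshold.
    miss = np.array([(scores[labels] < t).mean() for t in thresholds])
    # False-alarm rate: non-targets scored at or above the threshold.
    fa = np.array([(scores[~labels] >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(miss - fa))
    return (miss[idx] + fa[idx]) / 2.0, thresholds[idx]

# Synthetic scores: target detections around 2.0, clutter around 0.0.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2, 1, 500), rng.normal(0, 1, 500)])
labels = np.concatenate([np.ones(500, bool), np.zeros(500, bool)])
eer, threshold = equal_error_rate(scores, labels)
print(f"EER = {eer:.3f} at threshold {threshold:.2f}")
```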
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are generative classifiers trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM), a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate the classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. The research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over its GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection.
Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative techniques for speaker verification.
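To illustrate the hybrid GMM mean supervector SVM classifier referred to above, here is a toy Python sketch using scikit-learn: a universal background model (UBM) is trained, its means are relevance-MAP-adapted to each utterance, and the stacked means form the supervector fed to a linear SVM. The feature dimensions, relevance factor and data are invented for illustration; real systems adapt from cepstral features and add session-variability compensation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def mean_supervector(ubm, features, relevance=16.0):
    """MAP-adapt the UBM means to one utterance and stack them into a
    single GMM mean supervector (the representation fed to the SVM)."""
    post = ubm.predict_proba(features)             # (frames, components)
    n = post.sum(axis=0)                           # soft counts per component
    fx = post.T @ features                         # weighted feature sums
    alpha = (n / (n + relevance))[:, None]         # adaptation coefficients
    adapted = alpha * (fx / np.maximum(n[:, None], 1e-8)) \
        + (1 - alpha) * ubm.means_
    return adapted.ravel()

# Hypothetical toy data: 2-D "acoustic features" for a background set,
# one target speaker, and some impostor utterances.
rng = np.random.default_rng(0)
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(rng.normal(size=(2000, 2)))
target = [mean_supervector(ubm, rng.normal(0.5, 1, (200, 2)))
          for _ in range(5)]
impostor = [mean_supervector(ubm, rng.normal(-0.5, 1, (200, 2)))
            for _ in range(20)]
svm = SVC(kernel="linear").fit(target + impostor, [1] * 5 + [0] * 20)
test = mean_supervector(ubm, rng.normal(0.5, 1, (200, 2)))
print(svm.decision_function([test]))   # > 0 suggests the target speaker
```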
Abstract:
Project-focused group work is significant in developing social and personal skills, as well as extending the ability to identify, formulate and solve engineering problems. As a result of increasing undergraduate class sizes, along with the requirement for many students to work part-time, group projects and peer and collaborative learning are seen as a fundamental part of engineering education. Group formation, connection to learning objectives and fairness of assessment have been widely reported as major issues that leave students dissatisfied with group-project-based units. Several strategies were trialled, including a study of group formation by different methods across two engineering disciplines over the past two years. Other strategies involved a more structured approach to assessment practices in civil and electrical engineering design units. A confidential online teamwork management tool was used to collect and collate student self- and peer-assessment ratings, which were used for both formative feedback and assessment purposes. Student satisfaction and overall academic results in these subjects have improved since the introduction of these interventions. Both student and staff feedback highlight this approach as enhancing student engagement and satisfaction, improving student understanding of group roles, and reducing the number of dysfunctional groups, whilst requiring less commitment of academic resources.
Abstract:
Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community, due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: wavelets; Gabor/Log-Gabor filters; and the Discrete Cosine Transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly increases face recognition performance for both eigen-face and fisher-face approaches.
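The following Python sketch illustrates the general idea of frequency-domain partitioning prior to dimensionality reduction, using the Discrete Cosine Transform (one of the three decompositions named above): the DCT coefficients are split into low- and high-frequency bands and an eigen-subspace is built per band. The band boundary, component counts and random stand-in images are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.decomposition import PCA

def dct_bands(img, cut=8):
    """2-D DCT of a face image, partitioned into a low-frequency band
    (top-left cut x cut coefficients) and the remaining high band."""
    c = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    mask = np.zeros(c.shape, dtype=bool)
    mask[:cut, :cut] = True
    return c[mask], c[~mask]

# Hypothetical stand-ins: 40 random 32x32 "faces" instead of a real dataset.
rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 32, 32))
low = np.array([dct_bands(f)[0] for f in faces])
high = np.array([dct_bands(f)[1] for f in faces])

# One eigen-subspace per band, then concatenate the projections so each
# frequency band contributes its own information to the classifier.
features = np.hstack([PCA(n_components=10).fit_transform(low),
                      PCA(n_components=10).fit_transform(high)])
print(features.shape)  # (40, 20)
```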
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance, and is the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular, due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques: lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while LSI appeared to be the most sensitive method for analyzing the tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during this clinical study, was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an additional metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully understand the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
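To illustrate the polar-transform block-processing metric described above, here is a minimal Python sketch: the Placido-disk pattern is resampled into polar coordinates so the rings become quasi-straight lines, and a block statistic is computed over the result. The choice of variance as the block statistic, and all sampling parameters, are assumptions for illustration; the thesis abstract does not specify them.

```python
import numpy as np

def polar_block_metric(img, center, n_rings=32, n_angles=180, block=8):
    """Resample a Placido-disk image from Cartesian to polar coordinates,
    so the concentric rings become quasi-straight lines, then score
    tear-film surface quality as the mean per-block intensity variance."""
    h, w = img.shape
    cy, cx = center
    radii = np.linspace(5, min(h, w) // 2 - 1, n_rings)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    polar = img[ys, xs]                            # rings -> rows
    # Block statistics: variance within each block of the polar image;
    # a degraded tear film perturbs the rings and raises the variance.
    blocks = [polar[i:i + block, j:j + block].var()
              for i in range(0, n_rings - block + 1, block)
              for j in range(0, n_angles - block + 1, block)]
    return float(np.mean(blocks))

# Hypothetical example: a synthetic ring pattern with added noise.
ys, xs = np.mgrid[0:200, 0:200]
r = np.hypot(ys - 100, xs - 100)
frame = np.sin(r) + 0.1 * np.random.rand(200, 200)
print(polar_block_metric(frame, center=(100, 100)))
```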
Abstract:
Cell-based therapies require cells capable of self-renewal and differentiation, and a prerequisite is the ability to prepare an effective dose of ex vivo expanded cells for autologous transplants. The in vivo identification of a source of physiologically relevant cell types suitable for cell therapies is therefore an integral part of tissue engineering. Bone marrow is the most easily accessible source of mesenchymal stem cells (MSCs), and harbours two distinct populations of adult stem cells, namely hematopoietic stem cells (HSCs) and bone mesenchymal stem cells (BMSCs). Unlike HSCs, there are as yet no rigorous criteria for characterizing BMSCs. A changing understanding of the pluripotency of BMSCs in recent studies has expanded their potential application; however, the underlying molecular pathways which impart the features distinctive to BMSCs remain elusive. Furthermore, the sparse in vivo distribution of these cells imposes a clear limitation to their in vitro study. Also, when BMSCs are cultured in vitro, there is a loss of the in vivo microenvironment, which results in a progressive decline in proliferation potential and multipotentiality. This is further exacerbated with increased passage number, characterized by the onset of senescence-related changes. Accordingly, establishing protocols for generating large numbers of BMSCs without affecting their differentiation potential is necessary. The principal aims of this thesis were to identify potential molecular factors for characterizing BMSCs from osteoarthritic patients, and to establish culture protocols favourable for generating large numbers of BMSCs while retaining their proliferation and differentiation potential. Previously published studies concerning clonal cells have demonstrated that BMSCs are heterogeneous populations of cells at various stages of growth. Some cells are higher in the hierarchy and represent the progenitors, while other cells occupy a lower position and are therefore more committed to a particular lineage. This feature of BMSCs was made evident by the work of Mareddy et al., which involved generating clonal populations of BMSCs from the bone marrow of osteoarthritic patients by a single-cell clonal culture method. Proliferation potential and differentiation capabilities were used to group cells into fast-growing and slow-growing clones. The study presented here is a continuation of the work of Mareddy et al. and employed immunological and array-based techniques to identify the primary molecular factors involved in regulating the phenotypic characteristics exhibited by the contrasting clonal populations. Subtractive immunization (SI) was used to generate novel antibodies against favourably expressed proteins in the fast-growing clonal cell population. The difference between the clonal populations at the transcriptional level was determined using a Stem Cell RT² Profiler™ PCR Array, which focuses on stem cell pathway gene expression. Monoclonal antibodies (mAb) generated by SI were able to effectively highlight differentially expressed antigenic determinants, as was evident from Western blot analysis and confocal microscopy. Co-immunoprecipitation, followed by mass spectroscopy analysis, identified one favourably expressed protein as the cytoskeletal protein vimentin. The stem cell gene array highlighted genes that were highly upregulated in the fast-growing clonal cell population.
Based on their functions, these genes were grouped into growth factors, cell fate determination, and maintenance of embryonic and neural stem cell renewal. Furthermore, on closer analysis it was established that the cytoskeletal protein vimentin and nine out of the ten genes identified by the gene array were associated with chondrogenesis or cartilage repair, consistent with the potential role played by BMSCs in defect repair and the maintenance of tissue homeostasis, modulating their gene expression pattern to compensate for degenerated cartilage in osteoarthritic tissues. The gene array also detected transcripts for embryonic lineage markers such as FOXA2 and Sox2, both of which were significantly overexpressed in the fast-growing clonal populations. A recent groundbreaking study by Yamanaka et al. imparted embryonic stem cell (ESC)-like characteristics to somatic cells, in a process termed nuclear reprogramming, by the ectopic expression of the genes Sox2, cMyc and Oct4. The expression of embryonic lineage markers in adult stem cells may be a mechanism by which the favourable behaviour of fast-growing clonal cells is determined, and suggests a possible active phenomenon of spontaneous reprogramming in fast-growing clonal cells. The expression pattern of these critical molecular markers could be indicative of the competence of BMSCs. For this reason, the expression pattern of Sox2, Oct4 and cMyc at various passages in heterogeneous BMSC populations and tissue-derived cells (osteoblasts and chondrocytes) was investigated by real-time PCR and immunofluorescence staining. Strong nuclear staining was observed for Sox2, Oct4 and cMyc, which gradually weakened, accompanied by cytoplasmic translocation, after several passages. The mRNA and protein expression of Sox2, Oct4 and cMyc peaked at the third passage for osteoblasts, chondrocytes and BMSCs, and declined with each subsequent passage, indicating a possible mechanism of spontaneous reprogramming. This study proposes that the progressive decline in proliferation potential and multipotentiality associated with increased passaging of BMSCs in vitro might be a consequence of the loss of these pro-pluripotency factors. We therefore hypothesise that the expression of these master genes is not an intrinsic cell function, but rather an outcome of the interaction of the cells with their microenvironment; this was evident from the fact that, when removed from their in vivo microenvironment, BMSCs undergo a rapid loss of stemness after only a few passages. One of the most interesting aspects of this study was the integration of factors into the culture conditions which, to some extent, mimicked the in vivo microenvironmental niche of BMSCs. A number of studies have successfully established that the cellular niche is not an inert tissue component but is of prime importance: the total sum of stimuli from the microenvironment underpins the complex interplay of regulatory mechanisms which control multiple functions in stem cells, most importantly stem cell renewal. Therefore, well-characterised factors which affect BMSC characteristics, such as fibronectin (FN) coating and morphogens such as FGF2 and BMP4, were incorporated into the cell culture conditions. The experimental setup was designed to provide insight into the expression pattern of the stem cell-related transcription factors Sox2, cMyc and Oct4 in BMSCs with respect to passaging and changes in culture conditions.
The induction of these pluripotency markers in somatic cells by retroviral transfection has been shown to confer pluripotency and an ESC-like state. Our study demonstrated that all treatments could transiently induce the expression of Sox2, cMyc and Oct4, and favourably affect the proliferation potential of BMSCs. The combined effect of these treatments was able to induce and retain the endogenous nuclear expression of stem cell transcription factors in BMSCs over an extended number of in vitro passages. Our results therefore suggest that the transient induction and manipulation of the endogenous expression of transcription factors critical for stemness can be achieved by modulating the culture conditions, the benefit of which is to circumvent the need for genetic manipulation. In summary, this study has explored the role of BMSCs in the diseased state of osteoarthritis by employing transcriptional profiling along with SI. In particular, this study pioneered the use of primary cells for generating novel antibodies by SI. We established that somatic cells and BMSCs have a basal level of expression of pluripotency markers. Furthermore, our study indicates that the intrinsic signalling mechanisms of BMSCs are intimately linked with extrinsic cues from the microenvironment, and that these signals appear to be critical for retaining the expression of the genes that maintain cell stemness in long-term in vitro culture. This project provides a basis for developing an "artificial niche" required for the reversion of commitment and the maintenance of BMSCs in their uncommitted homeostatic state.