135 results for Minkowski sum


Relevance:

10.00%

Publisher:

Abstract:

Young drivers are at higher risk of crashes than other drivers when carrying passengers. Graduated Driver Licensing has demonstrated effectiveness in reducing fatalities; however, there is considerable potential for additional strategies to complement the approach. A survey of 276 young adults (aged 17-25 years, 64% female) was conducted to examine the potential and importance of strategies delivered via the Internet, as well as potential strategies for passengers. Strategies delivered via the Internet represent an opportunity for widespread dissemination and greater reach to young people at times convenient to them. The current study found significant differences between males and females in the ways the Internet is used to obtain road safety information and in the components valued in trusted road safety sites. There were also significant differences between males and females in the kinds of strategies used as passengers to promote driver safety and the contexts in which they were used, with females tending to adopt more proactive strategies than males. In sum, young people see value in Internet delivery of passenger safety information (80% agreed/strongly agreed), and more than 90% thought it was important to intervene while a passenger of a risky driver. Thus, while tailoring of Internet road safety strategies to young people may differ for males and females, there is considerable potential for a passenger focus in strategies aimed at reducing young driver crashes.


Camera calibration information is required in order for multiple-camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time-consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view covariant local feature extractors. The second involves finding methods to extract scene information from a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. This new method is successful in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
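As a rough illustration of the Hessian-based shape estimation idea described above, the sketch below estimates the anisotropy of a synthetic Gaussian blob from the 2x2 Hessian at its centre. This is a minimal numpy example under my own assumptions (the blob, the central-difference scheme and the eigen-analysis), not the thesis's implementation:

```python
import numpy as np

def hessian_shape(img, x, y):
    """Estimate local shape from the 2x2 Hessian via central differences."""
    dxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    dyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    dxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    H = np.array([[dxx, dxy], [dxy, dyy]])
    evals, evecs = np.linalg.eigh(H)
    # The ratio of eigenvalue magnitudes reflects the blob's anisotropy;
    # the eigenvectors give its orientation.
    return np.abs(evals), evecs

# Synthetic anisotropic Gaussian blob: twice as wide in x as in y,
# so the eigenvalue magnitude ratio should come out near (8/4)^2 = 4.
ys, xs = np.mgrid[-20:21, -20:21]
blob = np.exp(-(xs**2 / (2 * 8.0**2) + ys**2 / (2 * 4.0**2)))
mags, _ = hessian_shape(blob, 20, 20)
```

For blob-like structures such as this, the Hessian recovers the elliptical shape directly from second derivatives, without the integration window the second moment matrix requires.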


The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying the concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In the modern industrialized countries, several hundred thousand people die each year due to sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an Analysis of Variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test.
An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onset would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by the higher order spectra (HOS) has been reported to be a promising approach to differentiate between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a Support Vector Machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study. This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns. The patterns are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners.
This thesis makes original contributions in extracting features from HRV and EEG signals using HOS and entropy measures, in analyzing the statistical properties of these features on real data, and in automated classification using these features with GMM and SVM classifiers.
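A minimal sketch of the central higher order spectral quantity used throughout this work, the bispectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]. The direct FFT-based estimator and the synthetic phase-coupled test signal below are illustrative assumptions, not the thesis's pipeline:

```python
import numpy as np

def bispectrum(x, nfft=64, seg_len=64):
    """Direct (FFT-based) bispectrum estimate, averaged over segments:
    B(f1, f2) = E[X(f1) X(f2) conj(X(f1 + f2))]."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    f1, f2 = np.meshgrid(np.arange(nfft), np.arange(nfft), indexing="ij")
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(seg_len), nfft)
        # Outer product gives X(f1)X(f2); index arithmetic gives X(f1+f2).
        B += np.outer(X, X) * np.conj(X[(f1 + f2) % nfft])
    return B / len(segs)

# Quadratic phase coupling: components at bins 6 and 10 are coupled with a
# component at bin 16, so |B| should show a strong peak at (f1, f2) = (6, 10).
rng = np.random.default_rng(0)
n = np.arange(64)
segments = []
for _ in range(100):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    segments.append(np.cos(2 * np.pi * 6 * n / 64 + p1)
                    + np.cos(2 * np.pi * 10 * n / 64 + p2)
                    + np.cos(2 * np.pi * 16 * n / 64 + p1 + p2))
x = np.concatenate(segments)
B = np.abs(bispectrum(x))
```

Phase-coupled components add coherently in the bispectrum while uncoupled ones average out, which is why HOS features can separate signal classes that ordinary power spectra cannot.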


This paper discusses the content, origin and development of Tendering Theory as a theory of price determination. It demonstrates how tendering theory determines market prices, how it differs from game and decision theories, and that in the tendering process, with non-cooperative, simultaneous, single sealed bids with individual private valuations, extensive public information, a large number of bidders and a long sequence of tendering occasions, a competitive equilibrium develops. The development of a competitive equilibrium means that the concept of the tender as the sum of a valuation and a strategy, which is at the core of tendering theory, cannot be supported, and that there are serious empirical, theoretical and methodological inconsistencies in the theory.
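The equilibrium effect described above can be illustrated with a toy simulation under entirely hypothetical assumptions (uniform private costs, uniform strategic markups, lowest bid wins): as the number of bidders grows, the markup carried by the winning tender is squeezed toward zero, undermining a stable valuation-plus-strategy decomposition of the tender:

```python
import numpy as np

def expected_winning_markup(n_bidders, n_rounds=2000, seed=1):
    """Simulate non-cooperative, simultaneous, single sealed-bid tenders:
    each bidder holds a private valuation (cost) and adds a strategic markup;
    the lowest tender wins. Returns the mean winning markup over many
    tendering occasions."""
    rng = np.random.default_rng(seed)
    winners = []
    for _ in range(n_rounds):
        costs = rng.uniform(90, 110, n_bidders)      # private valuations
        markups = rng.uniform(0.0, 0.3, n_bidders)   # individual strategies
        bids = costs * (1 + markups)
        winners.append(markups[np.argmin(bids)])
    return float(np.mean(winners))

few = expected_winning_markup(2)     # thin competition
many = expected_winning_markup(20)   # heavy competition
```

With many bidders the winner is almost always a low-cost, low-markup bid, so the strategic component of the winning price shrinks, consistent with the competitive-equilibrium argument.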


Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics as well as the rank and census transforms were implemented, in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion, in addition to having low computational complexity. They are therefore prime candidates for a matching algorithm for a stereo sensor for real-time mining applications. 
A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms, in order to improve robustness.
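The relative sensitivity of the matching metrics discussed above to radiometric distortion is easy to demonstrate on a toy 1D window (a minimal sketch; the sample values are arbitrary):

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: cheapest metric, but sensitive to
    radiometric (brightness) differences between cameras."""
    return float(np.sum(np.abs(a - b)))

def zsad(a, b):
    """Zero Mean SAD: subtracting window means cancels a uniform offset."""
    return float(np.sum(np.abs((a - a.mean()) - (b - b.mean()))))

def ncc(a, b):
    """Normalised Cross Correlation: 1.0 for windows identical up to
    gain and offset, at extra computational cost."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.dot(a0, b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))

left = np.array([10.0, 40.0, 30.0, 20.0, 50.0])
right = left + 25.0  # same patch seen by a uniformly brighter camera

# SAD is inflated by the offset (5 pixels x 25 = 125), while the
# zero-mean and normalised metrics see a perfect match.
```

This is exactly the trade-off the report describes: the cheaper metric fails under radiometric distortion, while the robust metrics pay for their invariance with extra arithmetic per window.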


This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of cross-correlation and the zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The scores output by the two systems are then merged using weighted sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods, and it was shown to outperform the best of these with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
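A minimal sketch of the MaxPeak feature described above (the frame length, lag range and test signals are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def max_peak(frame, min_lag=20, max_lag=160):
    """MaxPeak feature: maximum of the normalised time-domain autocorrelation
    over a plausible pitch-lag range. Close to 1 for periodic (voiced)
    frames, low for aperiodic noise."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]  # normalise so the zero-lag value is 1
    return float(ac[min_lag:max_lag].max())

rng = np.random.default_rng(0)
t = np.arange(800)
voiced = np.sin(2 * np.pi * t / 80)  # periodic frame, pitch lag of 80 samples
noise = rng.standard_normal(800)     # aperiodic noise frame
```

Thresholding such a score per frame (and fusing it with a second, complementary score via a weighted sum) is the overall structure the paper describes.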


Anthropometric assessment is a simple, safe, and cost-efficient method to examine the health status of individuals. The Japanese obesity classification based on the sum of two skinfolds (Σ2SF) was proposed nearly 40 years ago; its applicability to Japanese people living today is therefore unknown. The current study aimed to determine Σ2SF cut-off values that correspond to percent body fat (%BF) and BMI values, using two datasets from young Japanese adults (233 males and 139 females). Using regression analysis, Σ2SF and height-corrected Σ2SF (HtΣ2SF) values corresponding to %BF of 20, 25, and 30% for males and 30, 35, and 40% for females were determined. In addition, cut-off values of both Σ2SF and HtΣ2SF corresponding to BMI values of 23 kg/m2, 25 kg/m2 and 30 kg/m2 were determined. In comparison with the original Σ2SF values, the proposed values are smaller by up to about 10 mm. The proposed values improve sensitivity from about 25% to above 90% for identifying individuals with ≥20% body fat in males and ≥30% body fat in females, with high specificity of about 95% in both sexes. The results indicate that the original Σ2SF cut-off values for screening obese individuals cannot be applied to young Japanese adults living today and that modification is required. Application of the proposed values may assist screening in the clinical setting.
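The regression-inversion step can be sketched as follows. The data, slope and intercept here are entirely synthetic and only illustrate the procedure, not the study's values:

```python
import numpy as np

# Hypothetical illustration: synthetic %BF vs sum-of-two-skinfolds data with
# an assumed linear trend (slope 0.45 %BF per mm, intercept 8 %BF).
rng = np.random.default_rng(42)
s2sf = rng.uniform(10, 60, 233)                    # skinfold sums, mm
pbf = 8.0 + 0.45 * s2sf + rng.normal(0, 2.0, 233)  # percent body fat

# Fit %BF = a * S2SF + b by least squares, then invert the fit to obtain
# the S2SF cut-off corresponding to a %BF screening threshold of 20%.
a, b = np.polyfit(s2sf, pbf, 1)
cutoff_20 = (20.0 - b) / a  # mm of summed skinfold at the 20% BF threshold
```

With the assumed parameters the true cut-off is (20 - 8) / 0.45 ≈ 26.7 mm, and the fitted inversion recovers it closely; the study applies the same inversion to real anthropometric data for each %BF and BMI threshold.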


Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance, through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at these locations, and to have their output validated with data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used. The models also needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models under a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this behaviour. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections are located upstream of on-ramps than when signalised intersections are, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
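Cowan's M3 headway model used in this work can be sketched as a simple sampler. The parameter values below are illustrative only, not the calibrated Brisbane values:

```python
import numpy as np

def m3_headways(q, alpha, delta, n, seed=7):
    """Sample headways from Cowan's M3 model: a proportion (1 - alpha) of
    vehicles travel bunched at the minimum headway delta, while free vehicles
    follow a shifted exponential with rate lam = alpha * q / (1 - delta * q),
    so that the mean headway equals 1 / q."""
    rng = np.random.default_rng(seed)
    lam = alpha * q / (1 - delta * q)
    free = rng.random(n) < alpha          # which vehicles are free-flowing
    h = np.full(n, float(delta))          # bunched vehicles: exactly delta
    h[free] += rng.exponential(1 / lam, free.sum())
    return h

# q = 0.4 veh/s (1440 veh/h), 1 s minimum headway, 60% free vehicles:
# the sample mean headway should be close to 1 / q = 2.5 s.
h = m3_headways(q=0.4, alpha=0.6, delta=1.0, n=200_000)
```

Gap acceptance analysis then asks, for each sampled major-stream headway, whether it exceeds the minor stream's critical gap, with follow-on vehicles entering at the minimum follow-on time.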


A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault-prone and not fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when rank sum is applied to training data to convert it into a 2D rank sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
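One plausible reading of the Rank Sum method described above is sketched below; the binning scheme, tie handling and the synthetic two-class data are my own assumptions, not the thesis's exact formulation:

```python
import numpy as np

class RankSum:
    """Per feature, estimate bin densities for each class; a test point
    scores, for each class, the rank of that class's density in the bin the
    point falls into, summed over features. Highest rank sum wins."""
    def __init__(self, n_bins=10):
        self.n_bins = n_bins

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.edges, self.ranks = [], []
        for j in range(X.shape[1]):
            edges = np.histogram_bin_edges(X[:, j], bins=self.n_bins)
            dens = np.stack([
                np.histogram(X[y == c, j], bins=edges, density=True)[0]
                for c in self.classes])
            # Within each bin, rank the classes by density (denser -> higher).
            self.edges.append(edges)
            self.ranks.append(dens.argsort(axis=0).argsort(axis=0))
        return self

    def predict(self, X):
        scores = np.zeros((len(X), len(self.classes)))
        for j in range(X.shape[1]):
            idx = np.clip(np.searchsorted(self.edges[j], X[:, j]) - 1,
                          0, self.n_bins - 1)
            scores += self.ranks[j][:, idx].T  # add each class's bin rank
        return self.classes[scores.argmax(axis=1)]

# Two well-separated synthetic classes in two features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
               rng.normal(4.0, 1.0, (200, 2))])
y = np.repeat([0, 1], 200)
acc = float((RankSum().fit(X, y).predict(X) == y).mean())
```

Because only the ranks of the densities matter, the method is insensitive to the absolute scale of the per-class densities, which is the attraction of the ranking abstraction.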


Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and correctly screen out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The value of our approach is illustrated in a case study in which several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of the decisions about positive and negative introduction. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that the multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
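The simulation idea can be sketched as follows. The latent-risk model, noise level, scale and thresholds below are illustrative assumptions, not the article's stochastic model of pest introduction:

```python
import numpy as np

rng = np.random.default_rng(5)
n_pests, n_items, scale_max = 2000, 4, 5

# A latent introduction risk drives both the true outcome and the experts'
# noisy ordinal scores (assumed generative model, for illustration only).
risk = rng.random(n_pests)
introduced = rng.random(n_pests) < risk
noisy = np.clip(risk[:, None] + rng.normal(0, 0.15, (n_pests, n_items)), 0, 1)
scores = np.ceil(noisy * scale_max).clip(1, scale_max)  # ordinal 1..5

def sens_spec(combined, threshold):
    """Sensitivity and specificity of flagging pests whose combined score
    exceeds the given threshold."""
    flagged = combined > threshold
    sens = (flagged & introduced).sum() / introduced.sum()
    spec = (~flagged & ~introduced).sum() / (~introduced).sum()
    return float(sens), float(spec)

# Compare a sum-based and a multiplication-based combination rule,
# each thresholded at its own median combined score.
sum_comb = scores.sum(axis=1)
prod_comb = scores.prod(axis=1)
sum_sens, sum_spec = sens_spec(sum_comb, np.median(sum_comb))
prod_sens, prod_spec = sens_spec(prod_comb, np.median(prod_comb))
```

Sweeping the threshold for each combination rule and comparing the resulting sensitivity/specificity pairs is the essence of the proposed assessment method.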


We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional--the minimizer over the player's actions of expected loss--defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary. Peter L. Bartlett, Alexander Rakhlin
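In symbols (the notation here is my own assumption: F the player's action set, l the loss, Z_t the adversary's actions, p their joint distribution), the identity described above can be written:

```latex
R_n \;=\; \sup_{p}\; \mathbb{E}_{Z_1,\dots,Z_n \sim p}
\left[ \sum_{t=1}^{n} \inf_{f \in \mathcal{F}}
       \mathbb{E}\!\left[ \ell(f, Z_t) \mid Z_1,\dots,Z_{t-1} \right]
  \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{n} \ell(f, Z_t) \right]
```

The first term is the sum of minimal expected losses under the conditional distributions, the second the minimal empirical loss; their gap, maximised over joint distributions, is the Jensen-gap interpretation mentioned in the abstract.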


This paper describes and evaluates the novel utility of network methods for understanding human interpersonal interactions within social neurobiological systems such as sports teams. We show how collective system networks are supported by the sum of the interpersonal interactions that emerge from the activity of system agents (such as players in a sports team). To test this idea we trialled the methodology in analyses of intra-team collective behaviours in the team sport of water polo. We observed that the number of interactions between team members resulted in varied intra-team coordination patterns of play, differentiating between successful and unsuccessful performance outcomes. Future research on small-world network methodologies needs to formalise measures of node connections in analyses of collective behaviours in sports teams, to verify whether a high frequency of interactions between players is needed in order to achieve competitive performance outcomes.
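The interaction-counting idea can be sketched with a simple adjacency matrix; the pass counts below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical pass counts between 7 water polo players
# (entry [i, j] = passes from player i to player j).
passes = np.array([
    [0, 5, 3, 0, 1, 0, 2],
    [4, 0, 6, 2, 0, 1, 0],
    [2, 5, 0, 4, 3, 0, 1],
    [0, 1, 3, 0, 5, 2, 0],
    [1, 0, 2, 4, 0, 3, 1],
    [0, 2, 0, 1, 4, 0, 5],
    [3, 0, 1, 0, 2, 4, 0],
])

# Node "strength": total interactions each player is involved in
# (passes made plus passes received).
strength = passes.sum(axis=0) + passes.sum(axis=1)

# The team-level network measure is the sum of all pairwise interactions;
# each pass contributes once to the team total and twice across strengths.
total = passes.sum()
```

Comparing such matrices between winning and losing phases of play is one way to operationalise the claim that coordination patterns differentiate performance outcomes.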


The Electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart by the sympathetic and parasympathetic branches of the autonomic nervous system. The HRV signal can be used as a base signal to observe the heart's functioning. These signals are non-linear and non-stationary in nature. So, higher order spectral (HOS) analysis, which is more suitable for non-linear systems and is robust to noise, was used. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, we have extracted seven features from the heart rate signals using HOS and fed them to a support vector machine (SVM) for classification. Our performance evaluation protocol uses 330 subjects consisting of five different kinds of cardiac disease conditions. We demonstrate a sensitivity of 90% for the classifier with a specificity of 87.93%. Our system is ready to run on larger data sets.


Cell-based therapies require cells capable of self-renewal and differentiation, and a prerequisite is the ability to prepare an effective dose of ex vivo expanded cells for autologous transplants. The in vivo identification of a source of physiologically relevant cell types suitable for cell therapies is therefore an integral part of tissue engineering. Bone marrow is the most easily accessible source of mesenchymal stem cells (MSCs), and harbours two distinct populations of adult stem cells: hematopoietic stem cells (HSCs) and bone mesenchymal stem cells (BMSCs). Unlike HSCs, there are as yet no rigorous criteria for characterizing BMSCs. A changing understanding of the pluripotency of BMSCs in recent studies has expanded their potential application; however, the underlying molecular pathways which impart the features distinctive to BMSCs remain elusive. Furthermore, the sparse in vivo distribution of these cells imposes a clear limitation on their in vitro study. In addition, when BMSCs are cultured in vitro, the loss of the in vivo microenvironment results in a progressive decline in proliferation potential and multipotentiality. This is further exacerbated with increased passage number, characterized by the onset of senescence-related changes. Accordingly, establishing protocols for generating large numbers of BMSCs without affecting their differentiation potential is necessary. The principal aims of this thesis were to identify potential molecular factors for characterizing BMSCs from osteoarthritic patients, and to establish culture protocols favourable for generating large numbers of BMSCs while retaining their proliferation and differentiation potential. Previously published studies concerning clonal cells have demonstrated that BMSCs are heterogeneous populations of cells at various stages of growth.
Some cells are higher in the hierarchy and represent the progenitors, while other cells occupy a lower position in the hierarchy and are therefore more committed to a particular lineage. This feature of BMSCs was made evident by the work of Mareddy et al., which involved generating clonal populations of BMSCs from the bone marrow of osteoarthritic patients by a single-cell clonal culture method. Proliferation potential and differentiation capabilities were used to group cells into fast-growing and slow-growing clones. The study presented here is a continuation of the work of Mareddy et al. and employed immunological and array-based techniques to identify the primary molecular factors involved in regulating the phenotypic characteristics exhibited by the contrasting clonal populations. Subtractive immunization (SI) was used to generate novel antibodies against favourably expressed proteins in the fast-growing clonal cell population. The difference between the clonal populations at the transcriptional level was determined using a Stem Cell RT2 Profiler PCR Array, which focuses on stem cell pathway gene expression. Monoclonal antibodies (mAbs) generated by SI were able to effectively highlight differentially expressed antigenic determinants, as was evident from Western blot analysis and confocal microscopy. Co-immunoprecipitation, followed by mass spectroscopy analysis, identified one favourably expressed protein as the cytoskeletal protein vimentin. The stem cell gene array highlighted genes that were highly upregulated in the fast-growing clonal cell population. Based on their functions, these genes were grouped into growth factors, cell fate determination, and maintenance of embryonic and neural stem cell renewal.
Furthermore, on closer analysis it was established that the cytoskeletal protein vimentin and nine out of the ten genes identified by the gene array were associated with chondrogenesis or cartilage repair, consistent with the potential role played by BMSCs in defect repair and in maintaining tissue homeostasis by modulating the gene expression pattern to compensate for degenerated cartilage in osteoarthritic tissues. The gene array also detected transcripts for embryonic lineage markers such as FOXA2 and Sox2, both of which were significantly overexpressed in fast-growing clonal populations. A recent groundbreaking study by Yamanaka et al. imparted embryonic stem cell (ESC)-like characteristics to somatic cells, in a process termed nuclear reprogramming, by the ectopic expression of the genes Sox2, cMyc and Oct4. The expression of embryonic lineage markers in adult stem cells may be a mechanism by which the favourable behaviour of fast-growing clonal cells is determined, and suggests a possible active phenomenon of spontaneous reprogramming in fast-growing clonal cells. The expression pattern of these critical molecular markers could be indicative of the competence of BMSCs. For this reason, the expression pattern of Sox2, Oct4 and cMyc at various passages in heterogeneous BMSC populations and tissue-derived cells (osteoblasts and chondrocytes) was investigated by real-time PCR and immunofluorescence staining. Strong nuclear staining was observed for Sox2, Oct4 and cMyc, which gradually weakened, accompanied by cytoplasmic translocation, after several passages. The mRNA and protein expression of Sox2, Oct4 and cMyc peaked at the third passage for osteoblasts, chondrocytes and BMSCs, and declined with each subsequent passage, indicating a possible mechanism of spontaneous reprogramming.
This study proposes that the progressive decline in proliferation potential and multipotentiality associated with increased passaging of BMSCs in vitro might be a consequence of the loss of these pro-pluripotency factors. We therefore hypothesise that the expression of these master genes is not an intrinsic cell function but rather an outcome of the interaction of the cells with their microenvironment; this was evident from the fact that, when removed from their in vivo microenvironment, BMSCs undergo a rapid loss of stemness after only a few passages. One of the most interesting aspects of this study was the integration of factors into the culture conditions which, to some extent, mimicked the in vivo microenvironmental niche of BMSCs. A number of studies have established that the cellular niche is not an inert tissue component but is of prime importance: the total sum of stimuli from the microenvironment underpins the complex interplay of regulatory mechanisms that control multiple functions in stem cells, most importantly stem cell renewal. Therefore, well-characterised factors that affect BMSC characteristics, such as fibronectin (FN) coating and the morphogens FGF2 and BMP4, were incorporated into the cell culture conditions. The experimental set-up was designed to provide insight into the expression pattern of the stem cell-related transcription factors Sox2, cMyc and Oct4 in BMSCs with respect to passaging and changes in culture conditions. Induction of these pluripotency markers in somatic cells by retroviral transfection has been shown to confer pluripotency and an ESC-like state. Our study demonstrated that each treatment could transiently induce the expression of Sox2, cMyc and Oct4 and favourably affect the proliferation potential of BMSCs. The combined effect of these treatments induced and retained the endogenous nuclear expression of stem cell transcription factors in BMSCs over an extended number of in vitro passages.
Our results therefore suggest that the transient induction and manipulation of the endogenous expression of transcription factors critical for stemness can be achieved by modulating the culture conditions, the benefit of which is to circumvent the need for genetic manipulation. In summary, this study has explored the role of BMSCs in the diseased state of osteoarthritis by employing transcriptional profiling along with SI. In particular, this study pioneered the use of primary cells for generating novel antibodies by SI. We established that somatic cells and BMSCs have a basal level of expression of pluripotency markers. Furthermore, our study indicates that the intrinsic signalling mechanisms of BMSCs are intimately linked with extrinsic cues from the microenvironment, and that these signals appear to be critical for retaining the expression of the genes that maintain cell stemness in long-term in vitro culture. This project provides a basis for developing the “artificial niche” required for reversion of commitment and maintenance of BMSCs in their uncommitted, homeostatic state.

Relevance:

10.00%

Publisher:

Abstract:

Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
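As a rough illustration only (not part of the paper), the penalty term quoted in the abstract can be evaluated numerically to see how it scales with the weight bound A, input dimension n, and sample size m. The function name and the example values are hypothetical, and the log A and log m factors that the abstract explicitly ignores are omitted here as well:

```python
import math

def generalization_penalty(A, n, m):
    """Sketch of the abstract's additive term A^3 * sqrt(log(n) / m),
    where A bounds the per-unit sum of weight magnitudes, n is the
    input dimension, and m is the number of training patterns.
    Log A and log m factors are ignored, as in the abstract."""
    return A ** 3 * math.sqrt(math.log(n) / m)

# Hypothetical example: A = 2, n = 100 inputs, m = 10_000 patterns.
# The bound on misclassification probability is then (training error
# estimate) + this penalty.
penalty = generalization_penalty(A=2, n=100, m=10_000)
```

Note that the penalty grows with A but, for fixed A, only logarithmically with the input dimension n, which is the abstract's point: the size of the weights, not the number of weights, drives the bound.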