304 results for Factor decomposition
Abstract:
The use of animal sera for the culture of therapeutically important cells impedes the clinical use of the cells. We sought to characterize the functional response of human mesenchymal stem cells (hMSCs) to specific proteins known to exist in bone tissue, with a view to eliminating the requirement for animal sera. Insulin-like growth factor-I (IGF-I), via IGF binding protein-3 or -5 (IGFBP-3 or -5), and transforming growth factor-beta1 (TGF-beta1) are known to associate with the extracellular matrix (ECM) protein vitronectin (VN) and elicit functional responses in a range of cell types in vitro. We found that specific combinations of VN, IGFBP-3 or -5, and IGF-I or TGF-beta1 could stimulate initial functional responses in hMSCs, and that IGF-I or TGF-beta1 induced hMSC aggregation, although VN concentration modulated this effect. We speculated that the aggregation effect may be due to endogenous protease activity, although we found that neither IGF-I nor TGF-beta1 affected the functional expression of matrix metalloprotease-2 or -9, two common proteases expressed by hMSCs. In summary, combinations of the ECM and growth factors described herein may form the basis of defined cell culture media supplements, although the effect of endogenous protease expression on the function of such proteins requires investigation.
Abstract:
In this thesis, a new technique has been developed for determining the composition of a collection of loads including induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertia and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterize it in terms of the different sizes and amounts of motor loads.
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4-Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution). Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP.
This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres. The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
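Since MSA* builds on the seminal A* algorithm, a minimal multi-objective A* sketch over a small grid may help fix ideas. Everything concrete below (the grid world, the two illustrative objectives of path length and cell risk, the step sizes standing in for a variable successor operator) is an assumption for illustration, not the thesis's actual MSA* cost model:

```python
import heapq
import math

# Hedged sketch of multi-objective A* with a variable successor operator.
# The grid, objectives and weights are illustrative assumptions, not the
# thesis's MSA* formulation.

def a_star(start, goal, risk, weights=(1.0, 1.0), steps=(1, 2)):
    """Weighted-sum A*: track cost = w0 * length + w1 * risk at the
    arrival cell. steps=(1, 2) lets the planner jump 1 or 2 cells along
    each of the 8 track angles, mimicking a multi-resolution lattice."""
    rows, cols = len(risk), len(risk[0])

    def h(n):  # admissible heuristic: weighted straight-line distance
        return weights[0] * math.dist(n, goal)

    open_set = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        if g > best.get(node, float("inf")):
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                for k in steps:  # variable successor operator
                    nxt = (node[0] + k * dr, node[1] + k * dc)
                    if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                        continue
                    # Risk is charged at the arrival cell only in this
                    # simplified sketch.
                    ng = (g + weights[0] * k * math.hypot(dr, dc)
                            + weights[1] * risk[nxt[0]][nxt[1]])
                    if ng < best.get(nxt, float("inf")):
                        best[nxt] = ng
                        heapq.heappush(open_set,
                                       (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

path, cost = a_star((0, 0), (9, 9), [[0.0] * 10 for _ in range(10)])
```

Note that charging risk only at the arrival cell means a two-cell step is not guaranteed to cost the same as two one-cell steps; the additive consistency property identified in the thesis addresses exactly this kind of planner bias.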
Abstract:
This paper proposes the use of the Bayes factor to replace the Bayesian Information Criterion (BIC) as a criterion for speaker clustering within a speaker diarization system. The BIC is one of the most popular decision criteria used in speaker diarization systems today. However, it will be shown in this paper that the BIC is only an approximation to the Bayes factor, the ratio of the marginal likelihoods of the data given each hypothesis. This paper uses the Bayes factor directly as a decision criterion for speaker clustering, thus removing the error introduced by the BIC approximation. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, leading to a 14.7% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
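As a sketch of the relationship exploited here (in generic notation, which is an assumption rather than the paper's own), the BIC of a hypothesis is a large-sample approximation to its log marginal likelihood, while the Bayes factor compares the marginal likelihoods directly:

```latex
% Generic notation (assumed, not the paper's): X is the data, H_i a clustering
% hypothesis with d_i free parameters, maximum-likelihood estimate \hat\theta_i,
% and N observations.
\log p(X \mid H_i) \;\approx\; \log p(X \mid \hat{\theta}_i, H_i) - \tfrac{d_i}{2}\log N \;=\; \mathrm{BIC}_i,
\qquad
K_{12} \;=\; \frac{p(X \mid H_1)}{p(X \mid H_2)}.
```

Deciding cluster merges with the Bayes factor $K_{12}$ itself, rather than with the difference $\mathrm{BIC}_1 - \mathrm{BIC}_2$, avoids the error terms dropped by the approximation.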
Abstract:
The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
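One representative specification of this kind, written for a single yield factor (illustrative only; the paper's exact functional form and regime structure may differ), combines a GARCH(1,1) recursion, a level effect through the lagged short rate, and a regime-dependent intercept:

```latex
% Illustrative level-GARCH model with regime switching for a yield factor f_t;
% r_{t-1} is the lagged short rate and s_t a latent volatility regime.
f_t = \mu + \phi f_{t-1} + \epsilon_t, \qquad
\epsilon_t = r_{t-1}^{\,\gamma}\sqrt{h_t}\, z_t, \quad z_t \sim N(0,1),
\qquad
h_t = \omega_{s_t} + \alpha\,\epsilon_{t-1}^2 + \beta\, h_{t-1}.
```

Here $\gamma$ captures the level effect and the regime $s_t$ shifts the volatility intercept; setting $\gamma = 0$ or freezing $s_t$ recovers nested models without the level effect or regime shifts.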
Abstract:
The main contribution of this paper is the decomposition/separation of the composite induction motor load from measurements at a system bus. In power system transmission buses, the load is represented by static and dynamic loads. Induction motors are considered the main dynamic loads, and in practice there will be many and various induction motors contributing at major transmission buses. Particularly at an industrial bus, most of the load is of the dynamic type. Rather than trying to extract models of many machines, this paper seeks to identify three groups of induction motors to represent the dynamic loads. Three groups of induction motors are used to characterize the load: the small group (4 kW to 11 kW), the medium group (15 kW to 180 kW) and the large group (above 630 kW). First, these groups are combined into a composite model with a different percentage contribution for each group. Then, from the composite model, each motor group's percentage contribution is decomposed using least squares algorithms. In power system commercial and residential buses, the static load percentage is higher than the dynamic load percentage. To apply this theory to other types of buses, such as residential and commercial, it is good practice to represent the total load as a combination of composite motor loads, constant impedance loads and constant power loads. To validate the theory, 24 hours of Sydney West data is decomposed according to the three groups of motor models.
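A minimal sketch of the least-squares step described above, with random placeholder signatures standing in for the actual small/medium/large motor-group response models:

```python
import numpy as np

# Hedged sketch of the least-squares decomposition step. The columns of R
# are assumed response signatures of the small, medium and large motor
# groups (random placeholders here, not real motor models); y is the
# measured composite response at the bus.

rng = np.random.default_rng(0)
n_samples = 200
R = rng.standard_normal((n_samples, 3))   # small, medium, large group responses
true_w = np.array([0.5, 0.3, 0.2])        # true percentage contributions
y = R @ true_w + 0.01 * rng.standard_normal(n_samples)  # noisy composite load

# Ordinary least squares recovers each group's contribution.
w_hat, *_ = np.linalg.lstsq(R, y, rcond=None)
print("estimated contributions:", w_hat)  # close to [0.5, 0.3, 0.2]
```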
Abstract:
Bayer hydrotalcites prepared using the seawater neutralisation (SWN) process of Bayer liquors are characterised using X-ray diffraction and thermal analysis techniques. The Bayer hydrotalcites are synthesised at four different temperatures (0, 25, 55, 75 °C) to determine the effect on the thermal stability of the hydrotalcite structure, and to identify other precipitates that form at these temperatures. The interlayer distance increased with increasing synthesis temperature, up to 55 °C, and then decreased by 0.14 Å for Bayer hydrotalcites prepared at 75 °C. The three mineralogical phases identified in this investigation are: 1) Bayer hydrotalcite, 2) calcium carbonate species, and 3) hydromagnesite. The DTG curve can be separated into four decomposition steps: 1) the removal of adsorbed water and free interlayer water in hydrotalcite (30 – 230 °C), 2) the dehydroxylation and decarbonation of hydrotalcite (250 – 400 °C), 3) the decarbonation of hydromagnesite (400 – 550 °C), and 4) the decarbonation of aragonite (550 – 650 °C).
Brain-derived neurotrophic factor (BDNF) gene: no major impact on antidepressant treatment response
Abstract:
The brain-derived neurotrophic factor (BDNF) has been suggested to play a pivotal role in the aetiology of affective disorders. In order to further clarify the impact of BDNF gene variation on major depression as well as antidepressant treatment response, association of three BDNF polymorphisms [rs7103411, Val66Met (rs6265) and rs7124442] with major depression and antidepressant treatment response was investigated in an overall sample of 268 German patients with major depression and 424 healthy controls. False discovery rate (FDR) was applied to control for multiple testing. Additionally, ten markers in BDNF were tested for association with citalopram outcome in the STAR*D sample. While BDNF was not associated with major depression as a categorical diagnosis, the BDNF rs7124442 TT genotype was significantly related to worse treatment outcome over 6 wk in major depression (p=0.01) particularly in anxious depression (p=0.003) in the German sample. However, BDNF rs7103411 and rs6265 similarly predicted worse treatment response over 6 wk in clinical subtypes of depression such as melancholic depression only (rs7103411: TT
Abstract:
Multipotent mesenchymal stem cells (MSCs), first identified in the bone marrow, have subsequently been found in many other tissues, including fat, cartilage, muscle, and bone. Adipose tissue has been identified as an alternative to bone marrow as a source for the isolation of MSCs, as it is neither limited in volume nor as invasive in the harvesting. This study compares the multipotentiality of bone marrow-derived mesenchymal stem cells (BMSCs) with that of adipose-derived mesenchymal stem cells (AMSCs) from 12 age- and sex-matched donors. Phenotypically, the cells are very similar, with only three surface markers, CD106, CD146, and HLA-ABC, differentially expressed in the BMSCs. Although colony-forming unit-fibroblast numbers were higher in BMSCs than in AMSCs, the expression of multiple stem cell-related genes, like that of fibroblast growth factor 2 (FGF2), the Wnt pathway effectors FRAT1 and frizzled 1, and other self-renewal markers, was greater in AMSCs. Furthermore, AMSCs displayed enhanced osteogenic and adipogenic potential, whereas BMSCs formed chondrocytes more readily than AMSCs. However, when the effects of proliferation were removed from the experiment, AMSCs no longer outperformed BMSCs in their ability to undergo osteogenic and adipogenic differentiation. Inhibition of the FGF2/fibroblast growth factor receptor 1 signaling pathway demonstrated that FGF2 is required for the proliferation of both AMSCs and BMSCs, yet blocking FGF2 signaling had no direct effect on osteogenic differentiation.
Abstract:
Background: Topical administration of growth factors (GFs) has displayed some potential in wound healing, but variable efficacy, high doses and costs have hampered their implementation. Moreover, this approach ignores the fact that wound repair is driven by interactions between multiple GFs and extracellular matrix (ECM) proteins. The Problem: Deep dermal partial thickness burn (DDPTB) injuries are the most common burn presentation to pediatric hospitals and also represent the most difficult burn injury to manage clinically. DDPTBs often repair with a hypertrophic scar. Wounds that close rapidly exhibit reduced scarring. Thus treatments that shorten the time taken to close DDPTBs may coincidentally reduce scarring. Basic/Clinical Science Advances: We have observed that multi-protein complexes comprised of IGF and IGF-binding proteins bound to the ECM protein vitronectin (VN) significantly enhance cellular functions relevant to wound repair in human skin keratinocytes. These responses require activation of both the IGF-1R and the VN-binding αv integrins. We have recently evaluated the wound healing potential of these GF:VN complexes in a porcine model of DDPTB injury. Clinical Care Relevance: This pilot study demonstrates that GF:VN complexes hold promise as a wound healing therapy. Enhanced healing responses were observed after treatment with nanogram doses of the GF:VN complexes in vitro and in vivo. Critically, healing was achieved using substantially less GF than in studies in which GFs alone have been used. Conclusion: These data suggest that coupling GFs to ECM proteins, such as VN, may ultimately prove to be an improved technique for the delivery of novel GF-based wound therapies.
Abstract:
Confirmatory factor analyses were conducted to evaluate the factorial validity of the 20-item Toronto Alexithymia Scale (TAS-20) in an alcohol-dependent sample. Several factor models were examined, but all were rejected given their poor fit. A revision of the TAS-20 for alcohol-dependent populations may be needed.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers and calculation of moving averages, as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy but markedly increase complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) having an MR of 9.8 and raw time-series summary data (dataset A) 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets consist of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR (10.1 and 10.28), while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by adding risk factor variables to time-series-based models is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
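As an illustration of the evaluation approach described above (majority-class under-sampling, with Kappa, AUC and misclassification rate computed for a decision tree comparable to J48), a minimal scikit-learn sketch on synthetic stand-in data might look as follows; the features, class balance and hyperparameters are placeholders, not the thesis's actual variables:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Synthetic stand-in for the anaesthesia time-series + risk-factor features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class so the training set is balanced.
rng = np.random.default_rng(0)
maj = np.flatnonzero(y_tr == 0)
mino = np.flatnonzero(y_tr == 1)
keep = np.concatenate([rng.choice(maj, size=mino.size, replace=False), mino])
X_bal, y_bal = X_tr[keep], y_tr[keep]

# Decision tree standing in for the J48 models discussed above.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_bal, y_bal)

# Evaluate with metrics that are less sensitive to class distribution.
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("kappa:", cohen_kappa_score(y_te, pred))
print("AUC:  ", roc_auc_score(y_te, proba))
print("MR:   ", np.mean(pred != y_te))  # misclassification rate
```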
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
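A minimal sketch of the overall pipeline shape (a fixed wavelet decomposition followed by per-subband quantization and reconstruction) using PyWavelets; the wavelet, level and step sizes are arbitrary illustrative choices, and the lattice vector quantizer, bit allocation and entropy coding stages of the thesis are omitted:

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative pipeline only: fixed wavelet decomposition plus per-subband
# uniform scalar quantization. The thesis's wavelet-packet structure,
# lattice VQ, bit allocation and entropy coder are not reproduced here.

img = np.random.default_rng(0).uniform(0, 255, (128, 128))  # stand-in image

# Fixed 3-level decomposition (the thesis designs a specific wavelet-packet
# structure; an ordinary dyadic wavedec2 is used here for brevity).
coeffs = pywt.wavedec2(img, "bior4.4", level=3)

def quantize(band, step):
    """Uniform scalar quantizer: round to the nearest multiple of step."""
    return np.round(band / step) * step

# Finer quantization for the low-frequency approximation band, coarser for
# the high-frequency detail subbands.
q_coeffs = [quantize(coeffs[0], 2.0)]
for level_bands in coeffs[1:]:
    q_coeffs.append(tuple(quantize(b, 8.0) for b in level_bands))

recon = pywt.waverec2(q_coeffs, "bior4.4")
print("reconstruction MSE:", np.mean((img - recon) ** 2))
```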