818 results for Linear matrix inequalities (LMI) techniques
Abstract:
Extracellular matrix (ECM) materials are widely used in cartilage tissue engineering. However, current ECM materials are unsatisfactory for clinical practice because most of them are derived from allogeneic or xenogeneic tissue. This study was designed to develop a novel autologous ECM scaffold for cartilage tissue engineering. The autologous bone marrow mesenchymal stem cell-derived ECM (aBMSC-dECM) membrane was collected and fabricated into a three-dimensional porous scaffold via cross-linking and freeze-drying techniques. Articular chondrocytes were seeded into the aBMSC-dECM scaffold and an atelocollagen scaffold, respectively. In vitro culture and in vivo implantation in a nude mouse model were performed to evaluate the influence on the engineered cartilage. The results showed that the aBMSC-dECM scaffold had a good microstructure and biocompatibility. After 4 weeks of in vitro culture, the engineered cartilage in the aBMSC-dECM scaffold group formed thicker cartilage tissue with a more homogeneous structure and higher expression of cartilaginous genes and proteins than the atelocollagen scaffold group. Furthermore, the engineered cartilage based on the aBMSC-dECM scaffold showed better cartilage formation in terms of volume and homogeneity, cartilage matrix content, and compressive modulus after 3 weeks of in vivo implantation. These results indicate that the aBMSC-dECM scaffold could be a successful novel candidate scaffold for cartilage tissue engineering.
Abstract:
Objective: To discuss generalized estimating equations as an extension of generalized linear models, by commenting on the paper of Ziegler and Vens, "Generalized Estimating Equations: Notes on the Choice of the Working Correlation Matrix". Methods: An international group of experts was invited to comment on this paper. Results: Several perspectives were taken by the discussants. Econometricians established parallels to the generalized method of moments (GMM). Statisticians discussed model assumptions and the aspect of missing data. Applied statisticians commented on practical aspects of data analysis. Conclusions: In general, careful modeling of the correlation is encouraged when considering estimation efficiency and other implications, and a comparison of choosing instruments in GMM and in generalized estimating equations (GEE) would be worthwhile. Some theoretical drawbacks of GEE need to be further addressed and require careful analysis of the data; this applies particularly to the situation in which data are missing at random.
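As a hedged illustration of the kind of choice discussed above, the sketch below fits a GEE with an exchangeable working correlation structure using the statsmodels Python package; the data set, variable names, and the compound-symmetry assumption are all invented for the example, not taken from the paper.

```python
# Minimal GEE sketch with an assumed exchangeable working correlation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects, n_visits = 50, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "x": rng.normal(size=n_subjects * n_visits),
})
# Correlated responses within subject: random intercept plus noise.
df["y"] = (2.0 + 1.5 * df["x"]
           + np.repeat(rng.normal(scale=1.0, size=n_subjects), n_visits)
           + rng.normal(scale=0.5, size=len(df)))

X = sm.add_constant(df[["x"]])
# The "working correlation matrix" is the object discussed in the paper;
# here an exchangeable (compound-symmetry) structure is assumed.
model = sm.GEE(df["y"], X, groups=df["subject"],
               family=sm.families.Gaussian(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```

Swapping cov_struct for another structure (for example, sm.cov_struct.Independence()) is how the choice of working correlation matrix would be explored in practice.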
Abstract:
Analogue and digital techniques for the linearization of the non-linear input-output relationship of transducers are briefly reviewed. The condition required for linearizing a non-linear function y = f(x) using a non-linear analogue-to-digital converter is explained. A simple technique for constructing a non-linear digital-to-analogue converter, based on 'segments of equal digital interval', is described. The technique was used to build an N-DAC which can be employed in a successive-approximation or counter-ramp type ADC to linearize the non-linear transfer function of a thermistor-resistor combination. The possibility of achieving an order of magnitude higher accuracy in the measurement of temperature is shown.
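The abstract does not give the circuit details, but the idea of a piecewise-linear ("segments of equal digital interval") non-linear DAC inside a successive-approximation loop can be sketched numerically; the thermistor model, bit width, and segment count below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical thermistor in a voltage divider; beta-model parameters are assumed.
def divider_voltage(t_celsius, beta=3950.0, r0=10e3, t0=298.15, r_fixed=10e3, v_ref=1.0):
    t_kelvin = t_celsius + 273.15
    r_th = r0 * np.exp(beta * (1.0 / t_kelvin - 1.0 / t0))
    return v_ref * r_fixed / (r_fixed + r_th)   # rises monotonically with temperature

# Non-linear DAC table: segments of equal digital interval. Code d is meant to
# read linearly in temperature, so the DAC stores v(T(d)) at equally spaced codes.
T_MIN, T_MAX, N_BITS, N_SEGMENTS = 0.0, 100.0, 10, 16
n_codes = 2 ** N_BITS
seg_codes = np.linspace(0, n_codes - 1, N_SEGMENTS + 1)
seg_temps = T_MIN + (T_MAX - T_MIN) * seg_codes / (n_codes - 1)
seg_volts = divider_voltage(seg_temps)          # breakpoint analogue levels

def nonlinear_dac(code):
    """Piecewise-linear DAC output for a digital code."""
    return np.interp(code, seg_codes, seg_volts)

def successive_approximation_adc(v_in):
    """Standard SAR loop comparing v_in against the non-linear DAC."""
    code = 0
    for bit in reversed(range(N_BITS)):
        trial = code | (1 << bit)
        if nonlinear_dac(trial) <= v_in:
            code = trial
    return code

# The resulting code is (approximately) linear in temperature.
for t in (10.0, 40.0, 70.0):
    code = successive_approximation_adc(divider_voltage(t))
    print(t, code, T_MIN + (T_MAX - T_MIN) * code / (n_codes - 1))
```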
Abstract:
In this paper we study two problems in feedback stabilization. The first is the simultaneous stabilization problem, which can be stated as follows: given plants G_0, G_1, ..., G_l, does there exist a single compensator C that stabilizes all of them? The second is that of stabilization by a stable compensator or, more generally, a "least unstable" compensator: given a plant G, we would like to know whether or not there exists a stable compensator C that stabilizes G; if not, what is the smallest number of right half-plane poles (counted according to their McMillan degree) that any stabilizing compensator must have? We show that the two problems are equivalent in the following sense. The problem of simultaneously stabilizing l + 1 plants can be reduced to the problem of simultaneously stabilizing l plants using a stable compensator, which in turn can be stated as the following purely algebraic problem: given 2l matrices A_1, ..., A_l, B_1, ..., B_l, where A_i, B_i are right-coprime for all i, does there exist a matrix M such that A_i + MB_i is unimodular for all i? Conversely, the problem of simultaneously stabilizing l plants using a stable compensator can be formulated as one of simultaneously stabilizing l + 1 plants. The problem of determining whether or not there exists an M such that A + BM is unimodular, given a right-coprime pair (A, B), turns out to be a special case of a question concerning a matrix division algorithm in a proper Euclidean domain. We give an answer to this question, and we believe this result might be of some independent interest. We show that, given two n × m plants G_0 and G_1, we can generically stabilize them simultaneously provided either n or m is greater than one. In contrast, simultaneous stabilizability of two single-input single-output plants, g_0 and g_1, is not generic.
Abstract:
In a letter, Rau proposed a new method for designing state-feedback controllers using eigenvalue sensitivity matrices. However, there appears to be a conceptual mistake in the procedure, or else it is unduly restricted in its applicability, in particular in the equation -BR^{-1}B^{T}K = ΔA, in which K is a positive-definite symmetric matrix.
Latent TGF-β binding proteins -3 and -4: transcriptional control and extracellular matrix targeting
Abstract:
Extracellular matrix (ECM) is a complex network of various proteins and proteoglycans which provides tissues with structural strength and resilience. By harboring signaling molecules such as growth factors, the ECM has the capacity to control cellular functions including proliferation, differentiation and cell survival. Latent transforming growth factor β (TGF-β) binding proteins (LTBPs) associate with fibrillar structures of the ECM and mediate the efficient secretion and ECM deposition of latent TGF-β. The current work was conducted to determine the regulatory regions of the LTBP-3 and -4 genes in order to gain insight into their tissue-specific expression, which also has an impact on TGF-β biology. Furthermore, the research aimed at defining the ECM targeting of the N-terminal variants of LTBP-4 (LTBP-4S and -4L), which is required to understand their functions in tissues and to gain insight into the conditions in which TGF-β is activated. To characterize the regulatory regions of the LTBP-3 and -4 genes, in silico and functional promoter analysis techniques were employed. It was found that the expression of LTBP-4S and -4L is under the control of two independent promoters. This finding was in accordance with the observed expression patterns of LTBP-4S and -4L in human tissues. All promoter regions characterized in this study were TATA-less, GC-rich and highly conserved between human and mouse. Putative binding sites for the Sp1 and GATA families of transcription factors were recognized in all of these regulatory regions. It is possible that these transcription factors control the basal expression of the LTBP-3 and -4 genes. A Smad binding element was found within the LTBP-3 and -4S promoter regions, but it was not present in the LTBP-4L promoter. Although this element, important for TGF-β signaling, was present in the LTBP-4S promoter, TGF-β did not induce its transcriptional activity. LTBP-3 promoter activity and mRNA expression, in contrast, were stimulated by TGF-β1 in osteosarcoma cells. The stimulatory effect of TGF-β was found to be mediated by the Smad and Erk MAPK signaling pathways. The current work also explored the ECM targeting of LTBP-4S and identified binding partners of this protein. The N-terminal end of LTBP-4S was found to possess fibronectin (FN) binding sites which are critical for its ECM targeting. FN-deficient fibroblasts incorporated LTBP-4S into their ECM only after the addition of exogenous FN. Furthermore, LTBP-4S was found to have heparin binding regions, of which the C-terminal binding site mediated fibroblast adhesion. Soluble heparin prevented the ECM association of LTBP-4S in fibroblast cultures. Significant differences were observed in the secretion, processing and ECM targeting of LTBP-4S and -4L. Interestingly, most of the secreted LTBP-4L was associated with latent TGF-β1, whereas LTBP-4S was mainly secreted as a free form from CHO cells. This thesis provides information on the transcriptional regulation of the LTBP-3 and -4 genes, which is required for a deeper understanding of their tissue-specific functions. Further, the work elucidates the structural variability of LTBPs, which appears to have an impact on the secretion and ECM targeting of TGF-β. These findings may advance understanding of the abnormal activation of TGF-β associated with connective tissue disorders and cancer.
Miniaturized mass spectrometric ionization techniques for environmental analysis and bioanalysis
Abstract:
Novel miniaturized mass spectrometric ionization techniques based on atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were studied and evaluated in the analysis of environmental samples and biosamples. The three analytical systems investigated here were gas chromatography-microchip atmospheric pressure chemical ionization-mass spectrometry (GC-µAPCI-MS) and gas chromatography-microchip atmospheric pressure photoionization-mass spectrometry (GC-µAPPI-MS), where sample pretreatment and chromatographic separation precede ionization, and desorption atmospheric pressure photoionization-mass spectrometry (DAPPI-MS), where the samples are analyzed either as such or after minimal pretreatment. The gas chromatography-microchip atmospheric pressure ionization-mass spectrometry (GC-µAPI-MS) instrumentation was used in the analysis of polychlorinated biphenyls (PCBs) in negative ion mode and of 2-quinolinone-derived selective androgen receptor modulators (SARMs) in positive ion mode. The analytical characteristics (i.e., limits of detection, linear ranges, and repeatabilities) of the methods were evaluated with PCB standards and with SARMs in urine. All methods showed good analytical characteristics and potential for quantitative environmental analysis or bioanalysis. Desorption and ionization mechanisms in DAPPI were studied. Desorption was found to be a thermal process, with the efficiency strongly depending on the thermal conductivity of the sampling surface; the size and polarity of the analyte probably also play a role. In positive ion mode, the ionization depends on the ionization energy and proton affinity of the analyte and the spray solvent, while in negative ion mode the ionization mechanism is determined by the electron affinity and gas-phase acidity of the analyte and the spray solvent. DAPPI-MS was tested in the fast screening of environmental, food, and forensic samples, and the results demonstrated the feasibility of DAPPI-MS for rapid screening of authentic samples.
Abstract:
Radioactive particles from three locations were investigated for elemental composition, oxidation states of matrix elements, and origin. Instrumental techniques applied to the task were scanning electron microscopy, X-ray and gamma-ray spectrometry, secondary ion mass spectrometry, and synchrotron radiation based microanalytical techniques comprising X-ray fluorescence spectrometry, X-ray fluorescence tomography, and X-ray absorption near-edge structure spectroscopy. Uranium-containing low activity particles collected from Irish Sea sediments were characterized in terms of composition and distribution of matrix elements and the oxidation states of uranium. Indications of the origin were obtained from the intensity ratios and the presence of thorium, uranium, and plutonium. Uranium in the particles was found to exist mostly as U(IV). Studies on plutonium particles from Runit Island (Marshall Islands) soil indicated that the samples were weapon fuel fragments originating from two separate detonations: a safety test and a low-yield test. The plutonium in the particles was found to be of similar age. The distribution and oxidation states of uranium and plutonium in the matrix of weapon fuel particles from Thule (Greenland) sediments were investigated. The variations in intensity ratios observed with different techniques indicated more than one origin. Uranium in particle matrixes was mostly U(IV), but plutonium existed in some particles mainly as Pu(IV), and in others mainly as oxidized Pu(VI). The results demonstrated that the various techniques were effectively applied in the characterization of environmental radioactive particles. An on-line method was developed for separating americium from environmental samples. The procedure utilizes extraction chromatography to separate americium from light lanthanides, and cation exchange to concentrate americium before the final separation in an ion chromatography column. The separated radiochemically pure americium fraction is measured by alpha spectrometry. The method was tested with certified sediment and soil samples and found to be applicable for the analysis of environmental samples containing a wide range of Am-241 activity. Proceeding from the on-line method developed for americium, a method was also developed for separating plutonium and americium. Plutonium is reduced to Pu(III), and separated together with Am(III) throughout the procedure. Pu(III) and Am(III) are eluted from the ion chromatography column as anionic dipicolinate and oxalate complexes, respectively, and measured by alpha spectrometry.
Abstract:
The transfer matrix method is known to be well suited for the complete analysis of a lumped as well as distributed element, one-dimensional, linear dynamical system with a marked chain topology. However, general subroutines of the type available for classical matrix methods are not available in the current literature on transfer matrix methods. In the present article, general expressions for various aspects of the analysis (viz., the natural frequency equation, modal vectors, forced response and filter performance) have been evaluated in terms of a single parameter, referred to as the velocity ratio. Subprograms have been developed for use with the transfer matrix method for the evaluation of the velocity ratio and related parameters. It is shown that a given system, branched or straight-through, can be completely analysed in terms of these basic subprograms on a stored-program digital computer. It is observed that the transfer matrix method with the velocity ratio approach has certain advantages over the existing general matrix methods in the analysis of one-dimensional systems.
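The article's subprograms are not reproduced here, but the following minimal sketch (an assumed fixed-free lumped spring-mass chain, not the article's examples) shows the kind of computation involved: building element transfer matrices, multiplying them, and locating natural frequencies from the resulting frequency equation.

```python
import numpy as np

def field_matrix(k):
    """Massless spring of stiffness k: relates the state [x, F] across the element."""
    return np.array([[1.0, 1.0 / k],
                     [0.0, 1.0]])

def point_matrix(m, omega):
    """Point mass m vibrating at circular frequency omega."""
    return np.array([[1.0, 0.0],
                     [-m * omega**2, 1.0]])

def overall_transfer_matrix(springs, masses, omega):
    T = np.eye(2)
    for k, m in zip(springs, masses):
        T = point_matrix(m, omega) @ field_matrix(k) @ T
    return T

def natural_frequencies(springs, masses, omega_max=200.0, n=20000):
    """Fixed-free chain: x = 0 at the wall and F = 0 at the free end, so the
    frequency equation is T[1, 1](omega) = 0; roots are located by sign changes."""
    omegas = np.linspace(1e-3, omega_max, n)
    vals = [overall_transfer_matrix(springs, masses, w)[1, 1] for w in omegas]
    roots = []
    for i in range(1, n):
        if vals[i - 1] * vals[i] < 0:
            roots.append(0.5 * (omegas[i - 1] + omegas[i]))
    return roots

# Three equal masses on three equal springs (assumed values).
print(natural_frequencies(springs=[1e4, 1e4, 1e4], masses=[1.0, 1.0, 1.0]))
```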
Abstract:
In an earlier paper [1], it has been shown that velocity ratio, defined with reference to the analogous circuit, is a basic parameter in the complete analysis of a linear one-dimensional dynamical system. In this paper it is shown that the terms constituting velocity ratio can be readily determined by means of an algebraic algorithm developed from a heuristic study of the process of transfer matrix multiplication. The algorithm permits the set of most significant terms at a particular frequency of interest to be identified from a knowledge of the relative magnitudes of the impedances of the constituent elements of a proposed configuration. This feature makes the algorithm a potential tool in a first approach to a rational design of a complex dynamical filter. This algorithm is particularly suited for the desk analysis of a medium size system with lumped as well as distributed elements.
Abstract:
This paper presents a method of designing a minimax filter in the presence of large plant uncertainties and constraints on the mean-squared values of the estimates. The minimax filtering problem is reformulated in the framework of a deterministic optimal control problem, and the method of solution employed invokes the matrix Minimum Principle. The constrained linear filter and its relation to singular control problems is illustrated. For the class of problems considered here, it is shown that the filter can be constrained separately after carrying out the minimaximization. Numerical examples are presented to illustrate the results.
Abstract:
Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
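A small, self-contained illustration of the Boolean matrix product mentioned above (not an algorithm from the thesis): for binary factor matrices, the Boolean product replaces sums with logical ORs, so entries never exceed one. The example matrices are invented.

```python
import numpy as np

B = np.array([[1, 0],
              [1, 1],
              [0, 1]], dtype=int)      # binary factor matrix (3 x 2)
C = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=int)   # binary factor matrix (2 x 3)

ordinary = B @ C                        # ordinary product: entries can exceed 1
boolean = (B @ C) > 0                   # Boolean product: OR of ANDs

print(ordinary)
print(boolean.astype(int))
# In Boolean matrix factorization one seeks binary B, C whose Boolean product
# reproduces a given binary matrix as closely as possible.
```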
Abstract:
An iterative algorithm based on probabilistic estimation is described for obtaining the minimum-norm solution of a very large, consistent linear system of equations Ax = g, where A is an (m × n) matrix with non-negative elements, and x and g are, respectively, (n × 1) and (m × 1) vectors with positive components.
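The abstract does not spell out the algorithm, so the sketch below only illustrates the target quantity: the minimum-norm solution of a consistent system Ax = g, obtained via the pseudoinverse and, alternatively, via a simple Kaczmarz-type row-action iteration (a related scheme, not the paper's probabilistic method). The example matrix is invented.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])         # non-negative, m < n, consistent system
g = np.array([3.0, 2.0])

# 1) Pseudoinverse: x = A^+ g is the minimum-norm solution.
x_pinv = np.linalg.pinv(A) @ g

# 2) Kaczmarz iteration: project onto one equation at a time; started from zero
#    it converges to the same minimum-norm solution for consistent systems.
x = np.zeros(A.shape[1])
for _ in range(200):
    for i in range(A.shape[0]):
        a = A[i]
        x += (g[i] - a @ x) / (a @ a) * a

print(x_pinv, x)
```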
Abstract:
When a uniform flow of any nature is interrupted, the readjustment of the flow results in concentrations and rarefactions, so that the peak value of the flow parameter will be higher than that which an elementary computation would suggest. When stress flow in a structure is interrupted, there are stress concentrations. These are generally localized and often large in relation to the values indicated by simple equilibrium calculations. With the advent of the industrial revolution, dynamic and repeated loading of materials became commonplace in engine parts and fast-moving vehicles of locomotion. This led to serious fatigue failures arising from stress concentrations. Also, many metal-forming processes, fabrication techniques and weak-link type safety systems benefit substantially from the intelligent use or avoidance, as appropriate, of stress concentrations. As a result, in the last 80 years, the study and evaluation of stress concentrations has been a primary objective in the study of solid mechanics. Exact mathematical analysis of stress concentrations in finite bodies presents considerable difficulty for all but a few problems of infinite fields, concentric annuli and the like, treated under the presumption of small-deformation, linear elasticity. A whole series of techniques have been developed to deal with different classes of shapes and domains, causes and sources of concentration, material behaviour, phenomenological formulation, etc. These include real and complex functions, conformal mapping, transform techniques, integral equations, finite differences and relaxation, and, more recently, finite element methods. With the advent of large high-speed computers, the development of finite element concepts and a good understanding of functional analysis, it is now, in principle, possible to obtain with economy satisfactory solutions to a whole range of concentration problems by intelligently combining theory and computer application. An example is the hybridization of continuum concepts with computer-based finite element formulations. This new situation also makes possible a more direct approach to the problem of design, which is the primary purpose of most engineering analyses. The trend would appear to be clear: the computer will shape the theory, analysis and design.
Abstract:
An error-free computational approach is employed for finding the integer solution to a system of linear equations, using finite-field arithmetic. This approach is also extended to find the optimum solution for linear inequalities such as those arising in interval linear programming problems.
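As a hedged sketch of what "error-free" computation means here, the snippet below solves a small integer system exactly using rational (Fraction) arithmetic; the paper's method instead relies on finite-field (residue) arithmetic, which this example does not implement. The system solved is invented.

```python
from fractions import Fraction

def exact_solve(A, b):
    """Gauss-Jordan elimination over the rationals for a square system; no rounding error."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)   # partial pivot search
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [v / M[col][col] for v in M[col]]                 # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [vr - factor * vc for vr, vc in zip(M[r], M[col])]
    return [row[n] for row in M]

print(exact_solve([[2, 1], [1, 3]], [5, 10]))   # exact Fractions: [1, 3]
```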