956 results for T-matrix method
Abstract:
A maximum likelihood estimator based on the coalescent for unequal migration rates and different subpopulation sizes is developed. The method uses a Markov chain Monte Carlo approach to investigate possible genealogies with branch lengths and with migration events. Properties of the new method are shown by using simulated data from a four-population n-island model and a source–sink population model. Our estimation method as coded in migrate is tested against genetree; both programs deliver a very similar likelihood surface. The algorithm converges to the estimates fairly quickly, even when the Markov chain is started from unfavorable parameters. The method was used to estimate gene flow in the Nile valley by using mtDNA data from three human populations.
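The Markov chain Monte Carlo machinery the abstract describes can be illustrated with a toy random-walk Metropolis sampler over a single parameter. Everything below is a hypothetical stand-in: the Gaussian surrogate log-likelihood, the starting value, and the step size are illustrative only, not taken from migrate.

```python
import math
import random

random.seed(1)

def log_likelihood(theta):
    # Hypothetical stand-in for a coalescent log-likelihood of the data
    # given a migration rate theta (a real sampler would average over
    # genealogies with migration events).
    return -((theta - 2.0) ** 2)

def metropolis_hastings(theta0, n_steps=5000, step=0.5):
    """Random-walk Metropolis sampler for a single scalar parameter."""
    theta = theta0
    ll = log_likelihood(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + random.uniform(-step, step)
        ll_prop = log_likelihood(proposal)
        # Accept with probability min(1, L(proposal) / L(theta)).
        if math.log(random.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        samples.append(theta)
    return samples

# Start far from the optimum to mimic "unfavorable" initial parameters.
samples = metropolis_hastings(theta0=10.0)
post_burn_in = samples[1000:]
estimate = sum(post_burn_in) / len(post_burn_in)
```

Even from the deliberately poor starting value, the chain drifts to the high-likelihood region within a few hundred steps, mirroring the convergence behaviour the abstract reports.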
Abstract:
Matrix-assisted laser desorption/ionization (MALDI) time of flight mass spectrometry was used to detect and order DNA fragments generated by Sanger dideoxy cycle sequencing. This was accomplished by improving the sensitivity and resolution of the MALDI method using a delayed ion extraction technique (DE-MALDI). The cycle sequencing chemistry was optimized to produce as much as 100 fmol of each specific dideoxy terminated fragment, generated from extension of a 13-base primer annealed on 40- and 50-base templates. Analysis of the resultant sequencing mixture by DE-MALDI identified the appropriate termination products. The technique provides a new non-gel-based method to sequence DNA which may ultimately have considerable speed advantages over traditional methodologies.
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) to evaluate the construct validity of the TIMSS mathematics assessment by retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS) and comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (or IT), based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed-response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1 ~ QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (i.e., 64% to 81%). The cognitive process attributes contributed to the item difficulties more than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent.
Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3. The specified Q-matrices explained the items better than the random Q-matrices.
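The multiple-regression side of a Q-matrix analysis can be made concrete with a minimal sketch: regress item difficulty on binary attribute columns and read off the variance explained. The Q-matrix, attribute weights, and noise level below are invented for illustration; they are not the TIMSS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Q-matrix: 8 items x 3 attributes; Q[i, k] = 1 when item i
# requires attribute k (the attributes are placeholders).
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
], dtype=float)

# Invented item difficulties generated from invented attribute weights.
true_weights = np.array([0.3, 0.8, 1.2])
difficulty = Q @ true_weights + rng.normal(0.0, 0.05, size=len(Q))

# Multiple regression of difficulty on the attribute columns (plus intercept).
X = np.column_stack([np.ones(len(Q)), Q])
beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
fitted = X @ beta
ss_res = np.sum((difficulty - fitted) ** 2)
ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot  # share of difficulty variance explained
```

The R² here plays the same role as the 64% to 81% figures in the abstract: a measure of how well the specified attributes account for item difficulty.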
Abstract:
This paper proposes a new feature representation method based on the construction of a Confidence Matrix (CM). This representation consists of posterior probability values provided by several weak classifiers, each one trained on a different set of features extracted from the original sample. The CM allows the final classifier to abstract itself from discovering underlying groups of features. In this work the CM is applied to isolated character image recognition, for which several sets of features can be extracted from each sample. Experimentation has shown that the use of the CM yields a significant improvement in accuracy in most cases, while accuracy in the remaining cases is unchanged. The results were obtained after experimenting with four well-known corpora, using evolved meta-classifiers with the k-Nearest Neighbor rule as a weak classifier and by applying statistical significance tests.
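A minimal sketch of the Confidence Matrix idea, on synthetic data: each of two hypothetical feature groups trains its own weak k-NN classifier, the per-class posterior estimates are concatenated into the CM, and a final classifier operates only on those confidence values. The data, group sizes, and k are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def knn_posteriors(train_X, train_y, test_X, k=3, n_classes=2):
    """Posterior estimates from a k-NN weak classifier: the class
    frequencies among the k nearest training samples."""
    out = np.zeros((len(test_X), n_classes))
    for i, x in enumerate(test_X):
        nearest = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
        for c in train_y[nearest]:
            out[i, int(c)] += 1.0 / k
    return out

# Synthetic two-class data with two hypothetical feature groups extracted
# from the same samples (e.g. two descriptor sets of a character image).
n = 60
y = rng.integers(0, 2, size=n)
group1 = y[:, None] + rng.normal(0.0, 0.4, size=(n, 4))
group2 = 2.0 * y[:, None] + rng.normal(0.0, 0.8, size=(n, 3))
train, test = slice(0, 40), slice(40, 60)

# Confidence Matrix: concatenated posteriors, one weak classifier per group.
cm_train = np.hstack([
    knn_posteriors(group1[train], y[train], group1[train]),
    knn_posteriors(group2[train], y[train], group2[train]),
])
cm_test = np.hstack([
    knn_posteriors(group1[train], y[train], group1[test]),
    knn_posteriors(group2[train], y[train], group2[test]),
])

# The final classifier (here 1-NN over CM rows) never sees raw features,
# only the weak classifiers' confidence values.
preds = np.array([
    y[train][np.argmin(np.linalg.norm(cm_train - row, axis=1))]
    for row in cm_test
])
accuracy = (preds == y[test]).mean()
```

The design point is that the final classifier is insulated from the structure of the original feature groups: it only consumes per-group posterior estimates.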
Abstract:
Frequently, population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms, it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to get time-series data to calculate size structure and densities of each size class, in order to determine the matrix parameters. This approach is known as the “demographic inverse problem” and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data of a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies to lower the population to pre-2008 levels, when there was no significant interaction with bathers. Those strategies were direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected the prey of all medusa sizes. Our model fit the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or reduction of prey. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
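The inverse estimation of a projection matrix from census time series can be sketched as follows. The two-stage matrix and the census series are fabricated and noiseless, so ordinary least squares recovers the matrix exactly; the approach described in the abstract instead solves a quadratic program that additionally enforces biological constraints such as nonnegativity and survival bounds.

```python
import numpy as np

# Hypothetical 2-stage (juvenile, adult) projection matrix: fecundity on
# the top row, survival/transition rates below. Values are invented.
A_true = np.array([[0.0, 1.5],
                   [0.4, 0.8]])

# Fabricated census time series of stage abundances, n_{t+1} = A n_t.
N = [np.array([100.0, 50.0])]
for _ in range(10):
    N.append(A_true @ N[-1])
X = np.array(N[:-1])  # stage vectors at time t      (10 x 2)
Y = np.array(N[1:])   # stage vectors at time t + 1  (10 x 2)

# Each row i of A solves the linear system X @ A[i, :] = Y[:, i]. With
# noiseless data, plain least squares recovers the row exactly; with
# field data, a constrained quadratic program takes its place.
A_est = np.vstack([
    np.linalg.lstsq(X, Y[:, i], rcond=None)[0] for i in range(2)
])
```

Once the matrix is recovered, management scenarios (removing individuals, reducing prey) can be explored by perturbing its entries and re-projecting the population.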
Abstract:
Multicellular tumor spheroids (MCTS) are used as organotypic models of normal and solid tumor tissue. Traditional techniques for generating MCTS, such as growth on nonadherent surfaces, in suspension, or on scaffolds, have a number of drawbacks, including the need for manual selection to achieve a homogeneous population and the use of nonphysiological matrix compounds. In this study we describe a mild method for the generation of MCTS, in which individual spheroids form in hanging drops suspended from a microtiter plate. The method has been successfully applied to a broad range of cell lines and shows nearly 100% efficiency (i.e., one spheroid per drop). Using the hepatoma cell line HepG2, the hanging drop method generated well-rounded MCTS with a narrow size distribution (coefficient of variation [CV] 10% to 15%, compared with 40% to 60% for growth on nonadherent surfaces). Structural analysis of spheroids composed of HepG2 or of a mammary gland adenocarcinoma cell line, MCF-7, revealed highly organized, three-dimensional, tissue-like structures with an extensive extracellular matrix. The hanging drop method represents an attractive alternative for MCTS production, because it is mild, can be applied to a wide variety of cell lines, and can produce spheroids of a homogeneous size without the need for sieving or manual selection. The method has applications for basic studies of physiology and metabolism, tumor biology, toxicology, cellular organization, and the development of bioartificial tissue. (C) 2003 Wiley Periodicals, Inc.
Abstract:
Pseudo-ternary phase diagrams of the polar lipids Quil A, cholesterol (Chol) and phosphatidylcholine (PC) in aqueous mixtures prepared by the lipid film hydration method (where dried lipid films of phospholipids and cholesterol are hydrated by an aqueous solution of Quil A) were investigated in terms of the types of particulate structures formed therein. Negative staining transmission electron microscopy and polarized light microscopy were used to characterize the colloidal and coarse dispersed particles present in the systems. Pseudo-ternary phase diagrams were established for lipid mixtures hydrated in water and in Tris buffer (pH 7.4). The effect of equilibration time was also studied with respect to systems hydrated in water, where the samples were stored for 2 months at 4 °C. Depending on the mass ratio of Quil A, Chol and PC in the systems, various colloidal particles including ISCOM matrices, liposomes, ring-like micelles and worm-like micelles were observed. Other colloidal particles were also observed as minor structures in the presence of these predominant colloids, including helices, layered structures and lamellae (hexagonal pattern of ring-like micelles). In terms of the conditions which appeared to promote the formation of ISCOM matrices, the area of the phase diagrams associated with systems containing these structures increased in the order: hydrated in water/short equilibration period < hydrated in buffer/short equilibration period < hydrated in water/prolonged equilibration period. ISCOM matrices appeared to form over time from samples which initially contained a high concentration of ring-like micelles, suggesting that these colloidal structures may be precursors to ISCOM matrix formation. Helices were also frequently found as a minor colloidal structure in samples containing ISCOM matrices. Equilibration time and the presence of buffer salts also promoted the formation of liposomes in systems not containing Quil A.
These parameters, however, did not appear to significantly affect the occurrence and predominance of other structures present in the pseudo-binary systems containing Quil A. Pseudo-ternary phase diagrams of PC, Chol and Quil A are important to identify combinations which will produce different colloidal structures, particularly ISCOM matrices, by the method of lipid film hydration. Colloidal structures comprising these three components are readily prepared by hydration of dried lipid films and may have application in vaccine delivery, where the functionality of ISCOMs has clearly been demonstrated. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
A rapid method has been developed for the quantification of the prototypic cyclotide kalata B1 in water and plasma utilizing matrix-assisted laser desorption ionisation time-of-flight (MALDI-TOF) mass spectrometry. The unusual structure of the cyclotides means that they do not ionise as readily as linear peptides, and as a result of their low ionisation efficiency, traditional LC/MS analyses were not able to reach the levels of detection required for the quantification of cyclotides in plasma for pharmacokinetic studies. MALDI-TOF-MS analysis showed linearity (R² > 0.99) in the concentration range 0.05-10 μg/mL, with a limit of detection of 0.05 μg/mL (9 fmol) in plasma. This paper highlights the applicability of MALDI-TOF mass spectrometry for the rapid and sensitive quantification of peptides in biological samples without the need for extensive extraction procedures. (c) 2005 Elsevier B.V. All rights reserved.
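The linearity assessment and back-calculation behind such a quantification method can be sketched with invented numbers; the concentrations and response values below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: spiked concentrations (ug/mL) and the
# corresponding instrument response (peak-area ratio to an internal
# standard). None of these numbers come from the paper.
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])
response = np.array([0.011, 0.021, 0.104, 0.198, 1.02, 1.99])

# Least-squares calibration line and coefficient of determination.
slope, intercept = np.polyfit(conc, response, 1)
fitted = slope * conc + intercept
ss_res = np.sum((response - fitted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot  # linearity criterion, e.g. r2 > 0.99

# Back-calculate the concentration of an unknown from its response.
unknown_conc = (0.55 - intercept) / slope
```

An R² above 0.99 across the calibration range is the usual acceptance criterion mirrored in the abstract's reported linearity.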
Abstract:
The purpose of this study was to systematically investigate the effect of lipid chain length and number of lipid chains present on lipopeptides on their ability to be incorporated within liposomes. The peptide KAVYNFATM was synthesized and conjugated to lipoamino acids having acyl chain lengths of C-8, C-12 and C-16. The C-12 construct was also prepared in the monomeric, dimeric and trimeric form. Liposomes were prepared by two techniques: hydration of dried lipid films (Bangham method) and hydration of freeze-dried monophase systems. Encapsulation of lipopeptide within liposomes prepared by hydration of dried lipid films was incomplete in all cases ranging from an entrapment efficiency of 70% for monomeric lipoamino acids at a 5% (w/w) loading to less than 20% for di- and trimeric forms at loadings of 20% (w/w). The incomplete entrapment of lipopeptides within liposomes appeared to be a result of the different solubilities of the lipopeptide and the phospholipids in the solvent used for the preparation of the lipid film. In contrast, encapsulation of lipopeptide within liposomes prepared by hydration of freeze-dried monophase systems was high, even up to a loading of 20% (w/w) and was much less affected by the acyl chain length and number than when liposomes were prepared by hydration of dried lipid films. Freeze drying of monophase systems is better at maintaining a molecular dispersion of the lipopeptide within the solid phospholipid matrix compared to preparation of lipid film by evaporation, particularly if the solubility of the lipopeptide in solvents is markedly different from that of the polar lipids used for liposome preparation. Consequently, upon hydration, the lipopeptide is more efficiently intercalated within the phospholipid bilayers. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054] J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
High-performance liquid chromatography coupled by an electrospray ion source to a tandem mass spectrometer (HPLC-ESI-MS/MS) is the current analytical method of choice for quantitation of analytes in biological matrices. With its high selectivity, sensitivity, and throughput, this technology is being increasingly used in the clinical laboratory. An important issue to be addressed in method development, validation, and routine use of HPLC-ESI-MS/MS is matrix effects. Matrix effects are the alteration of ionization efficiency by the presence of coeluting substances. These effects are unseen in the chromatogram but have a deleterious impact on method accuracy and sensitivity. The two common ways to assess matrix effects are the postextraction addition method and the postcolumn infusion method. To remove or minimize matrix effects, modification of the sample extraction methodology and improved chromatographic separation must be performed. These two parameters are linked and form the basis of developing a successful and robust quantitative HPLC-ESI-MS/MS method. Owing to the heterogeneous nature of the population being studied, the variability of a method must be assessed in samples taken from a variety of subjects. In this paper, the major aspects of matrix effects are discussed, and an approach to addressing matrix effects during method validation is proposed. (c) 2004 The Canadian Society of Clinical Chemists. All rights reserved.
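The postextraction addition assessment mentioned above reduces to simple ratios of peak areas. The areas below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Post-extraction addition assessment of matrix effects, using
# hypothetical peak areas for a single analyte concentration.
area_neat = 1.00e6        # standard in pure solvent
area_post_spiked = 7.2e5  # standard spiked into blank matrix AFTER extraction
area_pre_spiked = 6.1e5   # standard spiked into blank matrix BEFORE extraction

# A matrix effect below 100% indicates ion suppression; above 100%,
# ion enhancement.
matrix_effect = area_post_spiked / area_neat * 100.0
recovery = area_pre_spiked / area_post_spiked * 100.0  # extraction recovery
process_efficiency = area_pre_spiked / area_neat * 100.0
```

Here the 72% matrix effect signals ion suppression that the chromatogram alone would not reveal, which is exactly why the assessment belongs in method validation.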
Abstract:
Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established, using a method for the determination of the dimensionality of the effect space from a multivariate general linear model introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and by factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well as or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
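The core of the dimensionality question, how many eigenvalues of G carry genuine genetic variance, can be sketched with a fabricated covariance matrix of known rank. This is only the eigenvalue-counting intuition, not Amemiya's test itself.

```python
import numpy as np

# Fabricated genetic covariance matrix G for 8 traits, built from two
# orthogonal underlying genetic dimensions plus a small diagonal term
# standing in for estimation noise.
d1 = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
d2 = np.array([0.0, 0.0, 0.0, 0.0, 0.7, 0.7, 0.7, 0.7])
G = np.outer(d1, d1) + np.outer(d2, d2) + 1e-3 * np.eye(8)

# Effective dimensionality read off the eigenvalue spectrum of G:
# count the leading eigenvalues needed to explain 95% of the variance.
eigvals = np.linalg.eigvalsh(G)[::-1]               # descending order
explained = np.cumsum(eigvals) / eigvals.sum()
n_dims = int(np.searchsorted(explained, 0.95) + 1)
```

With estimated G matrices the hard part, which Amemiya's method, factor-analytic modeling, and bootstrapping address in different ways, is deciding which small eigenvalues are noise rather than signal.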
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as a simple function of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
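The eigenvector calculus behind the sensitivity and elasticity analyses that the authors extend with costs looks like this for a hypothetical two-stage matrix; the vital rates are invented.

```python
import numpy as np

# Hypothetical two-stage (juvenile, adult) projection matrix: fecundity
# on the top row, survival/transition rates below. Values are invented.
A = np.array([[0.0, 1.2],
              [0.5, 0.8]])

# Dominant eigenvalue = asymptotic growth rate; right/left eigenvectors
# give the stable stage distribution and reproductive values.
eigvals, V = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
lam = eigvals[i].real
w = V[:, i].real                     # stable stage distribution
ev_t, U = np.linalg.eig(A.T)
j = int(np.argmax(ev_t.real))
v = U[:, j].real                     # reproductive values

# Sensitivities d(lambda)/d(a_ij) and elasticities (which sum to 1).
S = np.outer(v, w) / (v @ w)
E = S * A / lam
```

For this matrix the adult-survival elasticity E[1, 1] is the largest entry, exactly the kind of cost-blind "manage adult survival" recommendation the abstract argues against when costs differ between vital rates.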
Abstract:
We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newton's method, by extending a recent statistical mechanics framework to include second order algorithms. We study the efficacy of this method when the Hessian is available and also consider a practical implementation which uses a single example estimate of the Hessian. The method is shown to provide excellent asymptotic performance, although the single example implementation is sensitive to the choice of training parameters. We conjecture that matrix momentum could provide efficient matrix inversion for other second order algorithms.
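The claim that momentum can stand in for matrix inversion can be illustrated with Polyak's heavy-ball iteration on a fixed quadratic: with suitable momentum coefficients, the iterate converges to H⁻¹g without the inverse ever being formed. The 2x2 Hessian and gradient below are invented stand-ins.

```python
import numpy as np

# Heavy-ball iteration for the quadratic (1/2) x'Hx - g'x: the fixed
# point is the Newton step x = H^{-1} g, computed without inverting H.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])
g = np.array([1.0, -1.0])

# Optimal step size and momentum from the extreme eigenvalues of H.
lmin, lmax = np.linalg.eigvalsh(H)
alpha = 4.0 / (np.sqrt(lmax) + np.sqrt(lmin)) ** 2
beta = ((np.sqrt(lmax) - np.sqrt(lmin)) / (np.sqrt(lmax) + np.sqrt(lmin))) ** 2

x_prev = np.zeros(2)
x = np.zeros(2)
for _ in range(100):
    x, x_prev = x + alpha * (g - H @ x) + beta * (x - x_prev), x

newton_step = x  # converges to H^{-1} g
```

In the on-line setting analysed in the abstract, H itself is only available through noisy single-example estimates, which is where the sensitivity to training parameters enters.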
Abstract:
Natural gradient learning is an efficient and principled method for improving on-line learning. In practical applications, however, there is an increased cost in estimating and inverting the Fisher information matrix. We propose to use the matrix momentum algorithm in order to carry out efficient inversion, and we study the efficacy of a single-step estimation of the Fisher information matrix. We analyse the proposed algorithm in a two-layer network, using a statistical mechanics framework which allows us to describe the learning dynamics analytically, and compare performance with true natural gradient learning and standard gradient descent.
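A minimal natural-gradient sketch for a model where the Fisher matrix is available in closed form: logistic regression on synthetic, badly scaled data (all numbers invented). Here the Fisher matrix is inverted directly with a linear solve at each step; the matrix momentum proposal replaces that solve with an iterative, momentum-based approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, badly scaled logistic-regression data (values invented).
n, d = 2000, 3
X = rng.normal(size=(n, d)) * np.array([1.0, 5.0, 0.2])
w_true = np.array([1.0, -0.3, 4.0])
p_true = 1.0 / (1.0 + np.exp(-X @ w_true))
y = (p_true > rng.random(n)).astype(float)

w = np.zeros(d)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n
    # Fisher information of the logistic model: X' diag(p(1-p)) X / n.
    F = (X * (p * (1.0 - p))[:, None]).T @ X / n
    # Natural gradient step preconditions the gradient by F^{-1};
    # matrix momentum would approximate this solve iteratively.
    w -= np.linalg.solve(F + 1e-6 * np.eye(d), grad)
```

Preconditioning by the inverse Fisher matrix makes the update invariant to the poor feature scaling, which plain gradient descent handles badly; the cost of the solve is what motivates the momentum-based approximation.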