131 results for tau-leap methods
Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. In order to characterize ignition, related parameters such as ignition time, ease of ignition, flash ignition temperature, and self-ignition temperature are measured. For studying the propagation of the flame, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining parameters such as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system which could be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping phenomena during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen concentration (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no single small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation. The conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
Abstract:
We compare two popular methods for estimating the power spectrum from short data windows, namely the adaptive multivariate autoregressive (AMVAR) method and the multitaper method. By analyzing a simulated signal (embedded in a background Ornstein-Uhlenbeck noise process) we demonstrate that the AMVAR method performs better at detecting short bursts of oscillations compared to the multitaper method. However, both methods are immune to jitter in the temporal location of the signal. We also show that coherence can still be detected in noisy bivariate time series data by the AMVAR method even if the individual power spectra fail to show any peaks. Finally, using data from two monkeys performing a visuomotor pattern discrimination task, we demonstrate that the AMVAR method is better able to determine the termination of the beta oscillations when compared to the multitaper method.
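Neither estimator's implementation is given in the abstract; as a rough, self-contained sketch of the multitaper idea only (DPSS tapering and averaging over tapers), with an invented 40 Hz burst in white noise standing in for the paper's Ornstein-Uhlenbeck-plus-signal setup:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=4, K=7):
    """Multitaper PSD: average periodograms computed with K orthogonal DPSS tapers."""
    n = len(x)
    tapers = dpss(n, NW, K)                      # (K, n) Slepian tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs              # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# invented test signal: a short 40 Hz burst embedded in white noise
fs, n = 200, 400
rng = np.random.default_rng(0)
t = np.arange(n) / fs
x = rng.normal(0.0, 1.0, n)
x[100:300] += 2.0 * np.sin(2 * np.pi * 40 * t[100:300])
freqs, psd = multitaper_psd(x, fs)
```

The burst shows up as a spectral peak near 40 Hz; the AMVAR estimator, which instead fits a time-varying autoregressive model over short windows, is not sketched here.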
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determination of liquid limit water contents. It has been shown from the basic characteristics of soils and associated physico-chemical factors that critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This leads to the formation of a scientific basis for liquid limit determination by one-point methods, which hitherto was formulated purely on statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination have been critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit has been suggested and compared with other methods. The experimental data of Sherwood & Ryley (1970) have been employed for comparison of the different cone penetration methods. Results indicate that, apart from mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by liquid limit values, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macro level.
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to the other flowgraph methods in such cases.
Abstract:
The control of shapes of nanocrystals is crucial for using them as building blocks for various applications. In this paper, we present a critical overview of the issues involved in shape-controlled synthesis of nanostructures. In particular, we focus on the mechanisms by which anisotropic structures of high-symmetry materials (fcc crystals, for instance) could be realized. Such structures require a symmetry-breaking mechanism to be operative that typically leads to selection of one of the facets/directions for growth over all the other symmetry-equivalent crystallographic facets. We show how this selection could arise for the growth of one-dimensional structures leading to ultrafine metal nanowires and for the case of two-dimensional nanostructures where the layer-by-layer growth takes place at low driving forces leading to plate-shaped structures. We illustrate morphology diagrams to predict the formation of two-dimensional structures during wet chemical synthesis. We show the generality of the method by extending it to predict the growth of plate-shaped inorganics produced by a precipitation reaction. Finally, we present the growth of crystals under high driving forces that can lead to the formation of porous structures with large surface areas.
Abstract:
A new approach to Penrose's twistor algebra is given. It is based on the use of a generalised quaternion algebra for the translation of statements in projective five-space into equivalent statements in twistor (conformal spinor) space. The formalism leads to SO(4, 2)-covariant formulations of the Pauli-Kofink and Fierz relations among Dirac bilinears, and generalisations of these relations.
Abstract:
Further improvement in performance, to achieve near-transparent-quality LSF quantization, is shown to be possible by using a higher-order two-dimensional (2-D) prediction in the coefficient domain. The prediction is performed in a closed-loop manner so that the LSF reconstruction error is the same as the quantization error of the prediction residual. We show that an optimum 2-D predictor, exploiting both inter-frame and intra-frame correlations, performs better than existing predictive methods. A computationally efficient split vector quantization technique is used to implement the proposed 2-D prediction based method. We show further improvement in performance by using a weighted Euclidean distance.
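The closed-loop property stated above (reconstruction error equal to the residual quantization error) holds for any predictor that runs off reconstructed values. A minimal 1-D first-order sketch makes this concrete; the paper's 2-D inter/intra-frame predictor is not reproduced, and the coefficient and step size here are illustrative:

```python
import numpy as np

def dpcm_encode(x, a=0.9, step=0.1):
    """Closed-loop predictive quantization: predict each sample from the
    *reconstructed* past, so reconstruction error == residual quantization error."""
    recon = np.zeros_like(x, dtype=float)
    prev = 0.0
    for i, s in enumerate(x):
        pred = a * prev                               # first-order prediction
        resid_q = step * np.round((s - pred) / step)  # uniform residual quantizer
        recon[i] = pred + resid_q
        prev = recon[i]
    return recon
```

Because the decoder forms the same predictions from the same reconstructed samples, the per-sample error never exceeds half a quantizer step and no error accumulates.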
Abstract:
Aluminium exposure has been shown to result in aggregation of the microtubule-associated protein tau in vitro. In the light of recent observations that the native random structure of tau protein is maintained in its monomeric and dimeric states as well as in the paired helical filaments characteristic of Alzheimer's disease, it is likely that factors playing a causative role in neurofibrillary pathology would not drastically alter the native conformation of tau protein. We have studied the interaction of tau protein with aluminium using circular dichroism (CD) and 27Al NMR spectroscopy. The CD studies revealed a five-fold increase in the observed ellipticity of the tau-aluminium assembly. The increase in ellipticity was not associated with a change in the general conformation of the protein and was most likely due to an aggregation of the tau protein induced by aluminium. 27Al NMR spectroscopy confirmed the binding of aluminium to tau protein. Hyperphosphorylation of tau in Alzheimer's disease is known to be associated with defective microtubule assembly in this condition. Abnormally phosphorylated tau exists in a polymerized form in the paired helical filaments (PHF) which constitute the neurofibrillary tangles found in Alzheimer's disease. While it is hypothesized that its altered biophysical characteristics render abnormally phosphorylated tau resistant to proteolysis, causing the formation of stable deposits, the sequence of events resulting in the polymerization of tau is little understood, as are the additional factors or modifications required for this process. Based on the results of our spectroscopic studies, a model for the sequence of events occurring in neurofibrillary pathology is proposed.
Abstract:
In this paper, non-linear programming techniques are applied to the problem of controlling the vibration pattern of a stretched string. First, the problem of finding the magnitudes of two control forces applied at two points l1 and l2 on the string to reduce the energy of vibration over the interval (l1, l2) relative to the energy outside the interval (l1, l2) is considered. For this problem the relative merits of various methods of non-linear programming are compared. The more complicated problem of finding the positions and magnitudes of two control forces to obtain the desired energy pattern is then solved by using the sequential unconstrained minimization technique with the Fletcher-Powell search. In the discussion of the results it is shown that the position of the control force is very important in controlling the energy pattern of the string.
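The string-control objective is not given in closed form in the abstract; as a generic illustration of the quasi-Newton search family used there (the Davidon-Fletcher-Powell update is a precursor of BFGS), applied to a standard unconstrained test function rather than the authors' energy functional:

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    """Classic non-linear programming test function; its minimum is at (1, 1)."""
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

# quasi-Newton search from the standard starting point
res = minimize(rosenbrock, x0=[-1.2, 1.0], method="BFGS")
```

In the paper's setting, the decision variables would be the control-force magnitudes (and, in the harder problem, positions), and the objective the ratio of vibration energies inside and outside (l1, l2).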
Abstract:
The novel multidomain organization of the multimeric Escherichia coli AHAS I (ilvBN) enzyme has been dissected to generate polypeptide fragments. These fragments, when cloned, expressed and purified, reassemble in the presence of cofactors to yield a catalytically competent enzyme. Structural characterization of AHAS has been impeded by the fact that the holoenzyme is prone to dissociation, leading to heterogeneity in samples. Our approach has enabled structural characterization using high-resolution nuclear magnetic resonance methods. Near-complete sequence-specific NMR assignments for the backbone H-N, N-15, C-13(alpha) and C-13(beta) atoms of the FAD binding domain of ilvB have been obtained on samples isotopically enriched in H-2, C-13 and N-15. The secondary structure determined on the basis of the observed C-13(alpha) secondary chemical shifts and sequential NOEs indicates that the secondary structure of the FAD binding domain of the E. coli AHAS large subunit (ilvB) is similar to the structure of this domain in the catalytic subunit of yeast AHAS. Protein-protein interactions involving the regulatory subunit (ilvN) and the domains of the catalytic subunit (ilvB) were studied using circular dichroic and isotope-edited solution nuclear magnetic resonance spectroscopic methods. Observed changes in circular dichroic spectra indicate that the regulatory subunit (ilvN) interacts with the ilvB alpha and ilvB beta domains of the catalytic subunit and not with the ilvB gamma domain. NMR chemical shift mapping shows that ilvN binds close to the FAD binding site in ilvB beta and proximal to the intrasubunit ilvB alpha/ilvB beta domain interface. The implication of this interaction for the role of the regulatory subunit in the activity of the holoenzyme is discussed. NMR studies of the regulatory domains show that these domains are structured in solution. Preliminary evidence for the interaction of ilvN with the metabolic end product of the pathway, viz. valine, is also presented.
Abstract:
The prognosis of patients with glioblastoma, the most malignant adult glial brain tumor, remains poor in spite of advances in treatment procedures, including surgical resection, irradiation and chemotherapy. Genetic heterogeneity of glioblastoma warrants extensive studies in order to gain a thorough understanding of the biology of this tumor. While there have been several studies of global transcript profiling of glioma with the identification of gene signatures for diagnosis and disease management, translation into clinics is yet to happen. Serum biomarkers have the potential to revolutionize the process of cancer diagnosis, grading, prognostication and treatment response monitoring. Besides having the advantage that serum can be obtained through a less invasive procedure, it contains molecules at an extraordinary dynamic range of ten orders of magnitude in terms of their concentrations. While conventional methods, such as 2DE, have been in use for many years, the ability to identify proteins through mass spectrometry techniques such as MALDI-TOF led to an explosion of interest in proteomics. Relatively new high-throughput proteomics methods such as SELDI-TOF and protein microarrays are expected to hasten the process of serum biomarker discovery. This review will highlight the recent advances in the proteomics platform in discovering serum biomarkers and the current status of glioma serum markers. We aim to provide the principles and potential of the latest proteomic approaches and their applications in the biomarker discovery process. Besides providing a comprehensive list of available serum biomarkers of glioma, we will also propose how these markers will revolutionize the clinical management of glioma patients.
Abstract:
Transparent glasses of SrBi2B2O7 (SBBO) were fabricated via the conventional melt-quenching technique. The amorphous and glassy nature of the as-quenched samples were confirmed by X-ray powder diffraction (XRD) and differential scanning calorimetry (DSC), respectively. The glass transition temperature (Tg) and the crystallization parameters [crystallization activation energy (Ecr) and Avrami exponent (n)] were evaluated under non-isothermal conditions using DSC. There was close agreement between the activation energies for the crystallization process determined by the Augis-Bennett and Kissinger methods. The local activation energy [Ec(x)], determined by the Ozawa method, decreased with the fraction of crystallization (x). The Avrami exponent (n(x)) increased with the fraction of crystallization (x), suggesting a changeover in the crystallization process from surface to bulk crystallization.
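The Kissinger relation behind the analysis above is ln(beta/Tp^2) = -Ecr/(R*Tp) + const, so Ecr follows from a straight-line fit of ln(beta/Tp^2) against 1/Tp over the DSC heating rates. A sketch on synthetic peak temperatures generated from an assumed Ecr (all numbers hypothetical, not the paper's data):

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(betas, peak_temps):
    """Fit ln(beta/Tp^2) vs 1/Tp; the slope of the line is -E/R."""
    x = 1.0 / np.asarray(peak_temps)
    y = np.log(np.asarray(betas) / np.asarray(peak_temps) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R  # activation energy in J/mol

# synthetic data: invert ln(b/Tp^2) = C - E/(R*Tp) for each heating rate
E_true, C = 250e3, 26.5
betas = np.array([5.0, 10.0, 20.0, 40.0])  # heating rates
peak_temps = [brentq(lambda T: np.log(b / T**2) + E_true / (R * T) - C, 500.0, 1500.0)
              for b in betas]
E_est = kissinger_activation_energy(betas, peak_temps)
```

With real DSC data, betas are the programmed heating rates and peak_temps the exothermic peak maxima at each rate.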
Abstract:
Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms which explicitly take the number of clusters as input and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than standard k-means algorithms. An important contribution of this paper is the establishment of a relation between the number of clusters and the size of the clusters in a dataset through the analysis of our algorithm. We also demonstrate that the integration of this algorithm as a pre-processing step in classification algorithms reduces their running-time complexity.
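The abstract does not spell out the proposed variant or the exact relation between cluster count and cluster size; as a baseline sketch only, plain Lloyd's k-means driven by the first-order proxy k ≈ n/expected_size (both function names are ours, not the paper's):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means (the paper's more efficient variant is not reproduced)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute centers, keeping the old one if a cluster goes empty
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

def k_from_expected_size(n_points, expected_size):
    """First-order proxy for the paper's k-size relation: k ~ n / size."""
    return max(1, round(n_points / expected_size))

# three well-separated blobs of 50 points each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in ([0, 0], [5, 5], [10, 0])])
k = k_from_expected_size(len(X), 50)
labels, centers = kmeans(X, k)
```

Either input convention (cluster count or expected cluster size) then drives the same partitional machinery.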
Abstract:
Past studies that have compared LBB-stable discontinuous- and continuous-pressure finite element formulations on a variety of problems have concluded that both methods yield solutions of comparable accuracy, and that the choice of interpolation is dictated by which of the two is more efficient. In this work, we show that using discontinuous-pressure interpolations can yield inaccurate solutions at large times on a class of transient problems, while the continuous-pressure formulation yields solutions that are in good agreement with the analytical solution.
Abstract:
Foliage density and leaf area index are important vegetation structure variables. They can be measured by several methods, but few have been tested in tropical forests, which have high structural heterogeneity. In this study, foliage density estimates by two indirect methods, the point quadrat and photographic methods, were compared with those obtained by direct leaf counts in the understorey of a wet evergreen forest in southern India. The point quadrat method has a tendency to overestimate, whereas the photographic method consistently and significantly underestimates foliage density. There was stratification within the understorey, with areas close to the ground having higher foliage densities.