977 results for Linear multistep methods


Relevance:

30.00%

Publisher:

Abstract:

In this article a two-dimensional transient boundary element formulation based on the mass matrix approach is discussed. The implicit formulation of the method for elastoplastic analysis is considered, as well as the treatment of viscous damping effects. The time integration processes are based on the Newmark and Houbolt methods, while the domain integrals for the mass, elastoplastic and damping effects are carried out by the well-known cell approximation technique. The boundary element algebraic relations are also coupled with finite element frame relations to solve stiffened domains. Examples illustrating the accuracy and efficiency of the proposed formulation are also presented.

Relevance:

30.00%

Publisher:

Abstract:

Linear prediction is an established numerical method of signal processing. In the field of optical spectroscopy it is used mainly to extrapolate known parts of an optical signal, either to obtain a longer signal or to deduce missing samples. The former is needed particularly when narrowing spectral lines for the purpose of spectral information extraction. In the present paper, coherent anti-Stokes Raman scattering (CARS) spectra were investigated. The spectra were significantly distorted by the presence of a nonlinear nonresonant background, and the line shapes were far from Gaussian/Lorentzian profiles. To overcome these disadvantages, the maximum entropy method (MEM) was used for phase spectrum retrieval. The broad MEM spectra obtained were then subjected to linear prediction analysis in order to narrow them.
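The extrapolation step described above can be sketched as an autoregressive model fitted by least squares: each sample is predicted as a linear combination of its predecessors, and the fitted recurrence is iterated past the end of the record. This is a minimal illustration, not the authors' exact procedure; the model order and fitting method are assumptions.

```python
import numpy as np

def lp_extrapolate(x, order, n_extra):
    """Extrapolate signal x by n_extra samples using a linear prediction
    (autoregressive) model fitted by least squares: each sample is
    modelled as a linear combination of the `order` preceding samples."""
    x = np.asarray(x, dtype=float)
    # Build the least-squares system: each row is a lagged window of the
    # signal, and the target is the sample that follows that window.
    A = np.array([x[i:i + order] for i in range(len(x) - order)])
    b = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Iterate the fitted recurrence to extend the signal.
    out = list(x)
    for _ in range(n_extra):
        out.append(float(np.dot(coeffs, out[-order:])))
    return np.array(out)
```

A pure sinusoid obeys a second-order recurrence exactly, so an order-2 model extrapolates it without error, which makes a convenient sanity check.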

Relevance:

30.00%

Publisher:

Abstract:

Baroreflex sensitivity was studied in the same group of conscious rats using vasoactive drugs (phenylephrine and sodium nitroprusside) administered by three different approaches: 1) bolus injection, 2) steady-state (blood pressure (BP) changes produced in steps), and 3) ramp infusion (brief infusion over 30 s). The heart rate (HR) responses were evaluated by the mean index (mean ratio of HR changes to mean arterial pressure (MAP) changes), by linear regression, and by the logistic method (maximum gain of the sigmoid curve fitted by a logistic function). The experiments were performed on three consecutive days. Basal MAP and resting HR were similar on all days of the study. Bradycardic responses evaluated by the mean index (-1.5 ± 0.2, -2.1 ± 0.2 and -1.6 ± 0.2 bpm/mmHg) and by linear regression (-1.8 ± 0.3, -1.4 ± 0.3 and -1.7 ± 0.2 bpm/mmHg) were similar for all three approaches used to change blood pressure. The tachycardic responses to decreases in MAP were similar when evaluated by linear regression (-3.9 ± 0.8, -2.1 ± 0.7 and -3.8 ± 0.4 bpm/mmHg). However, the tachycardic mean index (-3.1 ± 0.4, -6.6 ± 1 and -3.6 ± 0.5 bpm/mmHg) was higher when assessed by the steady-state method. The average gain evaluated by the logistic function (-3.5 ± 0.6, -7.6 ± 1.3 and -3.8 ± 0.4 bpm/mmHg) was similar to the reflex tachycardic values, but different from the bradycardic values. Since different ways of changing BP may alter afferent baroreceptor function, MAP changes obtained during short periods of time (up to 30 s: bolus and ramp infusion) are more appropriate to prevent acute resetting. Assessment of baroreflex sensitivity by the mean index and by linear regression permits a separate analysis of gain for reflex bradycardia and reflex tachycardia. Although two values of baroreflex sensitivity cannot be evaluated by a single symmetric logistic function, this method has the advantage of better comparing the baroreflex sensitivity of animals with different basal blood pressures.
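The two simpler gain estimates contrasted above are easy to state concretely: the mean index averages the ratio of each reflex HR change to the MAP change that evoked it, while the regression estimate is the fitted slope of HR change on MAP change. A minimal sketch (the sample values in the check are hypothetical, not data from the study):

```python
import numpy as np

def mean_index(d_map, d_hr):
    """Baroreflex 'mean index': mean ratio of reflex HR changes to the
    MAP changes that evoked them, in bpm/mmHg."""
    d_map = np.asarray(d_map, dtype=float)
    d_hr = np.asarray(d_hr, dtype=float)
    return float(np.mean(d_hr / d_map))

def regression_gain(d_map, d_hr):
    """Baroreflex gain as the slope of the least-squares regression line
    of HR change on MAP change."""
    slope, _intercept = np.polyfit(d_map, d_hr, 1)
    return float(slope)
```

The two estimates coincide when every response has the same gain, and diverge when the gain varies with the size of the pressure change, which is one reason the abstract reports them separately.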

Relevance:

30.00%

Publisher:

Abstract:

Several methods are used to estimate the anaerobic threshold (AT) during exercise. The aim of the present study was to compare the AT obtained by a graphic visual method based on ventilatory and metabolic variables (the gold standard) with that obtained from a bi-segmental linear regression mathematical model, Hinkley's algorithm, applied to heart rate (HR) and carbon dioxide output (VCO2) data. Thirteen young (24 ± 2.63 years old) and 16 postmenopausal (57 ± 4.79 years old) healthy and sedentary women were submitted to a continuous ergospirometric incremental test on an electromagnetically braked cycloergometer, with 10 to 20 W/min increases until physical exhaustion. The ventilatory variables were recorded breath-to-breath and HR was obtained beat-to-beat in real time. Data were analyzed by the nonparametric Friedman test and the Spearman correlation test with the level of significance set at 5%. Power output (W), HR (bpm), oxygen uptake (VO2; mL kg-1 min-1), VO2 (mL/min), VCO2 (mL/min), and minute ventilation (VE; L/min) observed at the AT were similar for both methods and both groups studied (P > 0.05). The VO2 (mL kg-1 min-1) data showed a significant correlation (P < 0.05) between the gold standard method and the mathematical model when applied to HR (rs = 0.75) and VCO2 (rs = 0.78) data for the subjects as a whole (N = 29). The proposed mathematical method for detecting changes in the response patterns of VCO2 and HR was adequate and promising for AT detection in young and middle-aged women, representing a semi-automatic, non-invasive and objective AT measurement.
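A bi-segmental change-point search of the kind described can be sketched as follows: for each candidate breakpoint, fit one straight line to each side and keep the split with the smallest total squared error. This is a simplified stand-in for Hinkley's algorithm, not the authors' implementation; the minimum segment length is an assumption.

```python
import numpy as np

def bisegmental_breakpoint(x, y, min_pts=3):
    """Return the index that splits (x, y) into two segments whose
    separate least-squares lines give the smallest total squared error --
    a simple bi-segmental change-point detector."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    best_i, best_sse = None, np.inf
    for i in range(min_pts, len(x) - min_pts):
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            c = np.polyfit(xs, ys, 1)          # fit a line to the segment
            resid = ys - np.polyval(c, xs)
            sse += float(resid @ resid)        # accumulate squared error
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i
```

On VCO2-versus-work-rate data the detected breakpoint plays the role of the AT; the check below uses synthetic data with an abrupt slope change at index 10.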

Relevance:

30.00%

Publisher:

Abstract:

The objectives of this study were to evaluate and compare the use of linear and nonlinear methods for the analysis of heart rate variability (HRV) in healthy subjects and in patients after acute myocardial infarction (AMI). Heart rate (HR) was recorded for 15 min in the supine position in 10 patients with AMI taking β-blockers (aged 57 ± 9 years) and in 11 healthy subjects (aged 53 ± 4 years). HRV was analyzed in the time domain (RMSSD and RMSM) and in the frequency domain using the low- and high-frequency bands in normalized units (LFnu and HFnu) and the LF/HF ratio, and approximate entropy (ApEn) was determined. There was a correlation (P < 0.05) of the RMSSD, RMSM, LFnu, HFnu, and LF/HF ratio indexes with the ApEn of the AMI group on the 2nd day (r = 0.87, 0.65, 0.72, 0.72, and 0.64) and 7th day (r = 0.88, 0.70, 0.69, 0.69, and 0.87) and of the healthy group (r = 0.63, 0.71, 0.63, 0.63, and 0.74), respectively. The median HRV indexes of the AMI group on the 2nd and 7th days differed from those of the healthy group (P < 0.05): RMSSD = 10.37, 19.95, 24.81; RMSM = 23.47, 31.96, 43.79; LFnu = 0.79, 0.79, 0.62; HFnu = 0.20, 0.20, 0.37; LF/HF ratio = 3.87, 3.94, 1.65; ApEn = 1.01, 1.24, 1.31, respectively. There was agreement between the methods, suggesting that they have the same power to evaluate autonomic modulation of HR in both AMI patients and healthy subjects. AMI contributed to a reduction in cardiac signal irregularity, higher sympathetic modulation and lower vagal modulation.
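Two of the indexes compared above are easy to state concretely: RMSSD in the time domain and approximate entropy as the nonlinear irregularity measure. The sketch below uses one common formulation of ApEn, with the tolerance r given as a fraction of the series' standard deviation; the parameter choices (m = 2, r = 0.2) are conventional defaults, not necessarily the authors' settings.

```python
import numpy as np

def rmssd(rr):
    """Root mean square of successive RR-interval differences (ms)."""
    d = np.diff(np.asarray(rr, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a series x, with tolerance r
    expressed as a fraction of the standard deviation of x."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def phi(mm):
        n = len(x) - mm + 1
        templates = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        counts = np.mean(dist <= tol, axis=1)   # self-matches included
        return np.mean(np.log(counts))

    return float(phi(m) - phi(m + 1))
```

A strictly periodic series should score near zero (highly regular), while white noise scores substantially higher, mirroring the abstract's reading of lower ApEn as reduced signal irregularity.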

Relevance:

30.00%

Publisher:

Abstract:

DNA extraction is a critical step in the analysis of Genetically Modified Organisms based on real-time PCR. In this study, the CTAB and DNeasy methods provided DNA of good quality and quantity from the texturized soy protein, infant formula, and soy milk samples. For the Certified Reference Material consisting of 5% Roundup Ready® soybean, neither method yielded DNA of good quality. However, the dilution test applied to the CTAB extracts showed no interference from inhibitory substances. The PCR efficiencies of lectin target amplification were not statistically different, and the coefficients of determination (R²) demonstrated a high degree of correlation between copy numbers and threshold cycle (Ct) values. ANOVA showed suitable adjustment of the regression and the absence of significant linear deviations. The efficiencies of p35S amplification were not statistically different, and all R² values obtained using DNeasy extracts were above 0.98 with no significant linear deviations. Two out of three R² values obtained using CTAB extracts were lower than 0.98, corresponding to a lower degree of correlation, and the lack-of-fit test showed a significant linear deviation in one run. The comparative analysis of the Ct values for the p35S and lectin targets demonstrated no statistically significant differences between the analytical curves of each target.
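The quantities being compared (amplification efficiency and R²) come from the standard-curve regression of Ct on log10 copy number: a slope of about -3.32 corresponds to perfect doubling per cycle, via E = 10^(-1/slope) - 1. A minimal sketch with made-up numbers:

```python
import numpy as np

def pcr_efficiency(copy_numbers, ct_values):
    """Amplification efficiency and R**2 from a real-time PCR standard
    curve: regress Ct on log10(copies), then E = 10**(-1/slope) - 1.
    R**2 values above ~0.98 are conventionally taken as acceptable."""
    logs = np.log10(np.asarray(copy_numbers, dtype=float))
    ct = np.asarray(ct_values, dtype=float)
    slope, intercept = np.polyfit(logs, ct, 1)
    resid = ct - (slope * logs + intercept)
    ss_res = float(resid @ resid)
    ss_tot = float(np.sum((ct - ct.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return efficiency, r2
```

With a synthetic dilution series whose Ct drops by exactly one cycle per doubling (slope -1/log10(2)), the efficiency comes out as 1.0, i.e. 100%.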

Relevance:

30.00%

Publisher:

Abstract:

This study aimed to compare wheat flour quality results obtained with the new Wheat Gluten Quality Analyser (WGQA) with those obtained with the extensigraph and farinograph. Fifty-nine wheat samples were evaluated for protein and gluten contents; the rheological properties of gluten and wheat flour were assessed using the WGQA and the extensigraph/farinograph methods, respectively, in addition to the baking test. Principal component analysis (PCA) and linear regression were used to evaluate the results. The parameters of energy and maximum resistance to extension determined by the extensigraph and the WGQA showed an acceptable linear correlation, in the range from 0.6071 to 0.6511. The PCA results obtained using the WGQA and the other rheological instruments showed values similar to those expected for the wheat flours in the baking test. Although all the equipment used was effective in assessing the behavior of strong and weak flours, the results for medium-strength wheat flours varied. The WGQA required a smaller sample amount and proved faster and easier to use than the other instruments.

Relevance:

30.00%

Publisher:

Abstract:

In this work we look at two different one-dimensional quantum systems. The potentials for these systems are a linear potential in an infinite well and an inverted harmonic oscillator in an infinite well. We solve the Schrödinger equation for both systems and obtain the energy eigenvalues and eigenfunctions. The solutions are found by applying the boundary conditions together with numerical methods; the motivation for our study comes from an experimental background. For the linear potential we use two different boundary conditions. The first is the so-called normal boundary condition, in which the wave function goes to zero at the edge of the well. The second is the derivative boundary condition, in which the derivative of the wave function goes to zero at the edge of the well. The actual solutions are Airy functions. In the case of the inverted oscillator the solutions are parabolic cylinder functions, and they are solved only with the normal boundary condition. Both potentials are compared with the particle-in-a-box solutions, and figures and tables are presented showing what the solutions look like; the similarities and differences with the particle-in-a-box solutions are also shown visually. The figures and calculations are done using mathematical software. We also compare the linear potential with the case where the infinite wall is only on the left side, and show graphical information on its different properties. With the inverted harmonic oscillator we take a closer look at quantum mechanical tunneling. We present some of the history of quantum tunneling theory and its developers, and finally we present the Feynman path integral theory, which enables us to obtain the instanton solutions. The instanton solutions are a way to examine the tunneling properties of a quantum system.
The results are compared with the solutions of the double-well potential, which as a quantum system is very similar to our case. Those solutions are obtained using the same methods, which makes the comparison relatively straightforward. All in all, we go through some of the stages of quantum theory and the different ways of interpreting it. We also present the special functions needed in our solutions and examine their properties and relations to other special functions. It is worth noting that different mathematical formalisms can be used to reach the desired result: quantum theory has been built up for over one hundred years, and its different approaches make different aspects of the theory accessible.
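For the linear potential with the normal boundary condition, the eigenvalue condition can be written down directly: with ψ(x) = c1·Ai(x − E) + c2·Bi(x − E), requiring ψ to vanish at both walls forces a 2×2 determinant to zero. The sketch below works in dimensionless units (ħ²/2m = 1, unit potential slope) with an assumed well width L = 10; these choices are for illustration only and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import airy

L = 10.0  # dimensionless well width (assumed for illustration)

def det(E):
    """Vanishes when psi = c1*Ai(x - E) + c2*Bi(x - E) can satisfy
    psi(0) = psi(L) = 0 simultaneously (normal boundary condition)."""
    ai0, _, bi0, _ = airy(0.0 - E)
    aiL, _, biL, _ = airy(L - E)
    return ai0 * biL - aiL * bi0

# Scan for sign changes of det(E), then refine each bracket numerically.
grid = np.linspace(0.5, 20.0, 2000)
vals = np.array([det(E) for E in grid])
eigenvalues = [brentq(det, a, b)
               for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
               if fa * fb < 0]
```

For a wide well the lowest levels approach the triangular-well values fixed by the zeros of Ai (ground state near E ≈ 2.338), since the growing Bi component must be suppressed at the far wall; higher levels feel the second wall and drift toward particle-in-a-box behaviour.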


Relevance:

30.00%

Publisher:

Abstract:

The problem of solving sparse linear systems over the field GF(2) remains a challenge. A popular approach is to improve existing methods such as the block Lanczos method (the Montgomery method) and the Wiedemann-Coppersmith method. Both methods are considered in detail in this thesis: modifications and computational cost estimates are given for each process, the most complicated parts of the methods are identified, and ideas for improving the computations from a software point of view are presented. The research provides an implementation of a computer library for accelerated binary matrix operations which speeds up the core steps of the Montgomery and Wiedemann-Coppersmith methods.
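Both methods spend most of their time in matrix-vector products over GF(2), which is where word-level bit tricks pay off: addition is XOR, and a dot product is AND followed by a parity count. A minimal sketch (not the library's actual interface) packing each matrix row and the vector into integers:

```python
def gf2_matvec(rows, v):
    """Multiply a GF(2) matrix by a vector over GF(2).  Each row and the
    vector v are bit-packed into integers (bit j = column j); the dot
    product of a row with v is AND followed by popcount mod 2, so whole
    rows are processed with word-level operations instead of bit by bit."""
    out = 0
    for i, row in enumerate(rows):
        # Parity of the bitwise AND is the GF(2) dot product.
        if bin(row & v).count("1") & 1:
            out |= 1 << i
    return out
```

In an optimized C implementation the same idea is applied with fixed-width machine words and hardware popcount, and block methods amplify it further by multiplying against many vectors at once.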

Relevance:

30.00%

Publisher:

Abstract:

Linear alkylbenzenes (LAB), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates), followed by analyses of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificates of Analysis (C.O.A.) provided by the manufacturers of the commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by between 0.41 and 3.29 amu.
The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular ions were confirmed by a 'soft ionization' technique, chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percent of the total sample weight as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest level found in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full-scan and single ion monitoring data acquisition modes. The latter mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-quantitatively. The GC/MS method developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study; further investigation of these compounds will be the subject of a future study.
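The average molecular weights being compared are, in effect, area-weighted means over the homologue peaks of the chromatogram. A sketch of that calculation with hypothetical peak data (the weights 232/246/260 are the nominal masses of the phenylundecane, phenyldodecane and phenyltridecane homologues; the areas are made up):

```python
def average_molecular_weight(peaks):
    """Area-weighted average molecular weight of a homologue mixture,
    as computed from chromatographic peak areas:
    sum(area_i * MW_i) / sum(area_i).
    `peaks` is a list of (molecular_weight, area) pairs."""
    total_area = sum(area for _, area in peaks)
    return sum(mw * area for mw, area in peaks) / total_area
```

Differences between GC- and GC/MS-derived averages, like those reported above, come down to the two detectors assigning different relative areas (responses) to the same homologues.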

Relevance:

30.00%

Publisher:

Abstract:

Behavioral researchers commonly use single-subject designs to evaluate the effects of a given treatment. Several different methods of data analysis are used, each with its own set of methodological strengths and limitations. Visual inspection is commonly used as a method of analyzing data, assessing the variability, level, and trend both within and between conditions (Cooper, Heron, & Heward, 2007). In an attempt to quantify treatment outcomes, researchers developed two methods of analysis called Percentage of Non-overlapping Data Points (PND) and Percentage of Data Points Exceeding the Median (PEM). The purpose of the present study is to compare and contrast the use of Hierarchical Linear Modelling (HLM), PND and PEM in single-subject research. The present study used 39 behaviours across 17 participants to compare treatment outcomes of a group cognitive behavioural therapy program, using PND, PEM, and HLM, on three response classes of obsessive-compulsive behaviour in children with Autism Spectrum Disorder. Findings suggest that PEM and HLM complement each other, and both add invaluable information to the overall treatment results. Future research should consider using both PEM and HLM when analysing single-subject designs, specifically grouped data with variability.
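The two overlap metrics are simple to state. A sketch assuming that a decrease in the target behaviour marks improvement (the data in the check are made up):

```python
from statistics import median

def pnd(baseline, treatment):
    """Percentage of Non-overlapping Data: the share of treatment points
    that fall below every baseline point (a decrease is assumed to mark
    improvement)."""
    floor = min(baseline)
    return 100.0 * sum(t < floor for t in treatment) / len(treatment)

def pem(baseline, treatment):
    """Percentage of data points Exceeding the Median: the share of
    treatment points on the improvement side of the baseline median."""
    mid = median(baseline)
    return 100.0 * sum(t < mid for t in treatment) / len(treatment)
```

Because PND depends on the single most extreme baseline point while PEM depends on the median, PEM is less distorted by a single outlying baseline observation, which is one reason the two can disagree on the same data.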

Relevance:

30.00%

Publisher:

Abstract:

In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points needed to perform the tests have been derived for the location-scale model; hence, reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
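The Monte Carlo test technique replaces the unavailable finite-sample null distribution with simulated draws of the test statistic: with N replications, p = (1 + #{simulated ≥ observed}) / (N + 1) yields an exactly sized test when the statistic is pivotal. A generic sketch (the function names are illustrative, not from the paper):

```python
import numpy as np

def mc_pvalue(observed_stat, simulate_stat, n_rep=999, seed=0):
    """Monte Carlo test p-value: draw the test statistic n_rep times
    under the null via simulate_stat(rng), then rank the observed value
    among the draws.  Using (count + 1)/(n_rep + 1) gives an exact-level
    test when the statistic is pivotal under the null."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + np.sum(sims >= observed_stat)) / (n_rep + 1)
```

For a normality test on regression residuals, `simulate_stat` would regenerate the disturbances under the null, refit the regression, and recompute the statistic on the simulated residuals, which is what sidesteps the location-scale critical-point tables.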

Relevance:

30.00%

Publisher:

Abstract:

In the context of multivariate linear regression (MLR) and seemingly unrelated regressions (SURE) models, it is well known that commonly employed asymptotic test criteria are seriously biased towards overrejection. In this paper, we propose finite- and large-sample likelihood-based test procedures for possibly non-linear hypotheses on the coefficients of MLR and SURE systems.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the "maximized MC" (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.