995 results for Linear transverse waves
Abstract:
Various studies in the field of econophysics have shown that fluid flow has analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which market dynamics play out (corporate space) is not yet known. In this thesis, utilizing a seven-year time series of the prices of stocks used to compute the S&P500 index on the New York Stock Exchange, we have created local charts of the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P500 index. After first calculating the correlation matrix of normalized stock price deviations from the S&P500 index, we performed a local singular value decomposition over a set of four different time windows as guides to the nature of the patterns that may emerge. It turns out that in almost all cases, each singular vector is essentially determined by a relatively small set of companies with large positive or negative weights on that singular vector. Over particular time windows, these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics whereas others sustain longer standing waves.
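As a rough illustration of the windowed analysis the abstract describes (not the thesis code), the sketch below builds normalized deviations of simulated stock prices from an index, forms the correlation matrix over each time window, and reports which companies dominate the leading singular vector. The number of stocks, the window length, and the synthetic price data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_stocks, window = 1750, 50, 250          # roughly 7 years of daily data (assumed)
prices = np.cumsum(rng.normal(0, 1, (n_days, n_stocks)), axis=0) + 100.0
index = prices.mean(axis=1, keepdims=True)        # stand-in for the S&P500 level

# Normalized deviations of each stock from the index
dev = (prices - index) / index
dev = (dev - dev.mean(axis=0)) / dev.std(axis=0)

for start in range(0, n_days - window + 1, window):
    chunk = dev[start:start + window]
    corr = np.corrcoef(chunk, rowvar=False)       # local correlation matrix
    u, s, vt = np.linalg.svd(corr)                # local singular value decomposition
    # Companies with the largest |weight| on the leading singular vector
    leading = np.argsort(np.abs(vt[0]))[::-1][:5]
    print(f"window starting day {start}: dominant companies {leading.tolist()}")
```

In the thesis setting the dominant-weight companies would then be compared against industrial sector labels window by window.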
Abstract:
In this study, finite element analyses and experimental tests are carried out in order to investigate the effect of loading type and symmetry on the fatigue strength of three different non-load-carrying welded joints. The current codes and recommendations do not give explicit instructions on how to consider the degree of bending in the loading, or the effect of symmetry, in the fatigue assessment of welded joints. The fatigue assessment is done using the effective notch stress method and linear elastic fracture mechanics. Transverse attachment and cover plate joints are analyzed using 2D plane strain element models in FEMAP/NxNastran and Franc2D software, and the longitudinal gusset case is analyzed using solid element models in Abaqus and Abaqus/XFEM software. By means of the evaluated effective notch stress range and stress intensity factor range, the nominal fatigue strength is assessed. The experimental tests consist of fatigue tests of transverse attachment joints, with a total of 12 specimens. In the tests, the effect of both loading type and symmetry on the fatigue strength is studied. The finite element analyses showed that, in terms of the nominal and hot spot stress methods, the fatigue strength of the asymmetric joint is higher under tensile loading and the fatigue strength of the symmetric joint is higher under bending loading. Linear elastic fracture mechanics indicated that bending reduces stress intensity factors when the crack size is relatively large, since the normal stress decreases at the crack tip due to the stress gradient. Under tensile loading, the experimental tests corresponded with the finite element analyses. However, the fatigue-tested joints subjected to bending showed that bending increased the fatigue strength of non-load-carrying welded joints, and the fatigue test results did not fully agree with the fatigue assessment. According to the results, it can be concluded that under tensile loading the symmetry of the joint distinctly affects the fatigue strength. The fatigue life assessment of bending-loaded joints is challenging since it depends on whether crack initiation or propagation is predominant.
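To show how a stress intensity factor range feeds a linear elastic fracture mechanics life estimate of the kind referred to above, here is a minimal Paris-law integration sketch. It is not the study's FE-based procedure: the geometry factor, Paris constants, stress range, and crack sizes are generic assumed values for a steel weld toe, and the stress intensity expression is a simplified constant-Y form.

```python
import numpy as np

C, m = 3.0e-13, 3.0            # Paris constants (mm/cycle, MPa*sqrt(mm)) -- assumed
Y = 1.12                       # geometry factor, assumed constant here
delta_sigma = 100.0            # nominal stress range in MPa (assumed)
a0, af = 0.1, 10.0             # initial and final crack depths in mm (assumed)

def delta_K(a, stress_range):
    """Stress intensity factor range for a shallow surface crack (simplified)."""
    return Y * stress_range * np.sqrt(np.pi * a)

# Propagation life N = integral of da / (C * dK^m), trapezoidal rule
a_grid = np.linspace(a0, af, 20000)
dN_da = 1.0 / (C * delta_K(a_grid, delta_sigma) ** m)
N = np.sum(0.5 * (dN_da[:-1] + dN_da[1:]) * np.diff(a_grid))
print(f"estimated propagation life: {N:.3e} cycles")
```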
Abstract:
The problem of solving sparse linear systems over the field GF(2) remains a challenge. The popular approach is to improve existing methods such as the block Lanczos method (the Montgomery method) and the Wiedemann-Coppersmith method. Both methods are considered in detail in the thesis, with modifications and computational cost estimates given for each procedure. The thesis identifies the most computationally demanding parts of these methods and shows how the computations can be improved from a software point of view. The research provides an implementation of a library of accelerated binary matrix operations which makes the iteration steps of both the Montgomery and the Wiedemann-Coppersmith methods faster.
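As a small sketch of the kind of operation such a library accelerates (written in Python for readability, not taken from the thesis), both the block Lanczos and Wiedemann-Coppersmith iterations spend most of their time multiplying a sparse GF(2) matrix by a block of vectors. Packing 64 vectors per machine word lets a single XOR perform 64 GF(2) additions.

```python
import numpy as np

def sparse_gf2_matvec_block(rows, n_rows, v_block):
    """rows[i] is the list of column indices with a 1 in row i;
    v_block is a uint64 array, each word holding the i-th bit of 64 different vectors."""
    out = np.zeros(n_rows, dtype=np.uint64)
    for i, cols in enumerate(rows):
        acc = np.uint64(0)
        for j in cols:
            acc ^= v_block[j]          # one XOR = 64 additions over GF(2)
        out[i] = acc
    return out

# Tiny example: a 3x4 sparse matrix over GF(2) applied to 64 random vectors at once
rows = [[0, 2], [1, 3], [0, 1, 2]]
rng = np.random.default_rng(1)
v_block = rng.integers(0, 2**63, size=4, dtype=np.uint64)
print(sparse_gf2_matvec_block(rows, 3, v_block))
```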
Abstract:
Linear alkylbenzenes (LAB), formed by the AlCl3- or HF-catalyzed alkylation of benzene, are common raw materials for surfactant manufacture. Normally they are sulphonated using SO3 or oleum to give the corresponding linear alkylbenzene sulphonates in >95% yield. As concern has grown about the environmental impact of surfactants, questions have been raised about the trace levels of unreacted raw materials, linear alkylbenzenes, and the minor impurities present in them. With the advent of modern analytical instruments and techniques, namely GC/MS, the opportunity has arisen to identify the exact nature of these impurities and to determine the actual levels at which they are present in commercial linear alkylbenzenes. The object of the proposed study was to separate, identify and quantify major and minor components (1-10%) in commercial linear alkylbenzenes. The focus of this study was on the structure elucidation and determination of impurities and on their qualitative determination in all analyzed linear alkylbenzene samples. A gas chromatography/mass spectrometry (GC/MS) study was performed on five samples from the same manufacturer (different production dates) and was then followed by the analyses of ten commercial linear alkylbenzenes from four different suppliers. All the major components, namely the linear alkylbenzene isomers, followed the same elution pattern, with the 2-phenyl isomer eluting last. The individual isomers were identified by interpretation of their electron impact and chemical ionization mass spectra. The percent isomer distribution was found to differ from sample to sample. Average molecular weights were calculated using two methods, GC and GC/MS, and compared with the results reported on the Certificate of Analysis (C.O.A.) provided by the manufacturers of the commercial linear alkylbenzenes. The GC results in most cases agreed with the reported values, whereas the GC/MS results were significantly lower, by 0.41 to 3.29 amu. The minor components, impurities such as branched alkylbenzenes and dialkyltetralins, eluted according to their molecular weights. Their fragmentation patterns were studied using the electron impact ionization mode, and their molecular weight ions were confirmed by a soft ionization technique, chemical ionization. The level of impurities present in the analyzed commercial linear alkylbenzenes was expressed as a percent of the total sample weight as well as in mg/g. The percentage of impurities was observed to vary between 4.5% and 16.8%, with the highest being in sample "I". Quantitation (mg/g) of impurities such as branched alkylbenzenes and dialkyltetralins was done using cis/trans-1,4,6,7-tetramethyltetralin as an internal standard. Samples were analyzed using a GC/MS system operating under full scan and single ion monitoring data acquisition modes. The latter data acquisition mode, which offers higher sensitivity, was used to analyze all samples under investigation for the presence of linear dialkyltetralins. Dialkyltetralins were reported quantitatively, whereas branched alkylbenzenes were reported semi-qualitatively. The GC/MS method that was developed during the course of this study allowed identification of some other trace impurities present in commercial LABs. Compounds such as non-linear dialkyltetralins, dialkylindanes, diphenylalkanes and alkylnaphthalenes were identified, but their detailed structure elucidation and quantitation were beyond the scope of this study. However, further investigation of these compounds will be the subject of a future study.
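A minimal sketch of the internal-standard quantitation described above: the analyte amount follows from its peak area relative to that of the internal standard, scaled by a relative response factor from calibration. All peak areas, masses and the response factor below are made-up illustrative numbers, not data from the study.

```python
def quantify_mg_per_g(area_analyte, area_istd, mass_istd_mg, rrf, sample_mass_g):
    """Return analyte concentration in mg per g of sample.
    rrf = (area_analyte / area_istd) / (mass_analyte / mass_istd), from calibration."""
    mass_analyte_mg = (area_analyte / area_istd) * mass_istd_mg / rrf
    return mass_analyte_mg / sample_mass_g

# Example: a dialkyltetralin peak measured against the tetramethyltetralin internal standard
conc = quantify_mg_per_g(area_analyte=4.2e5, area_istd=1.0e6,
                         mass_istd_mg=0.50, rrf=0.95, sample_mass_g=0.100)
print(f"dialkyltetralin content: {conc:.2f} mg/g")
```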
Abstract:
Low temperature (77 K) linear dichroism spectroscopy was used to characterize pigment orientation changes accompanying the light state transition in the cyanobacterium Synechococcus sp. PCC 6301, and cold-hardening in winter rye (Secale cereale L. cv. Puma). Samples were oriented for spectroscopy using the gel squeezing method (Abdourakhmanov et al., 1979) and brought to 77 K in liquid nitrogen. The linear dichroism (LD) spectra of Synechococcus 6301 phycobilisome/thylakoid membrane fragments cross-linked in light state 1 and light state 2 with glutaraldehyde showed differences in both chlorophyll a and phycobilin orientation. A decrease in the relative amplitude of the 681 nm chlorophyll a positive LD peak was observed in membrane fragments in state 2. Reorientation of the phycobilisome (PBS) during the transition to state 2 resulted in an increase in core allophycocyanin absorption parallel to the membrane, and a decrease in rod phycocyanin parallel absorption. This result supports the "spillover" and "PBS detachment" models of the light state transition in PBS-containing organisms, but not the "mobile PBS" model. A model was proposed for PBS reorientation upon transition to state 2, consisting of a tilt in the antenna complex with respect to the membrane plane. Linear dichroism spectra of PBS/thylakoid fragments from the red alga Porphyridium cruentum, grown in green light (containing relatively more PSI) and red light (containing relatively more PSII), were compared to identify chlorophyll a absorption bands associated with each photosystem. Spectra from red light-grown samples had a larger positive LD signal on the short-wavelength side of the 686 nm chlorophyll a peak than those from green light-grown fragments. These results support the identification of the difference in linear dichroism seen at 681 nm in the Synechococcus spectra as a reorientation of PSII chromophores. Linear dichroism spectra were taken of thylakoid membranes isolated from winter rye grown at 20°C (non-hardened) and 5°C (cold-hardened). Differences were seen in the orientation of chlorophyll b relative to chlorophyll a. An increase in parallel absorption was identified at the long-wavelength chlorophyll a absorption peak, along with a decrease in parallel absorption from chlorophyll b chromophores. The same changes in relative pigment orientation were seen in the LD of isolated hardened and non-hardened light-harvesting antenna complexes (LHCII). It was concluded that orientational differences in LHCII pigments were responsible for the thylakoid LD differences. Changes in pigment orientation, along with the differences observed in long-wavelength absorption and in the overall magnitude of LD in hardened and non-hardened complexes, could be explained by the higher LHCII monomer:oligomer ratio in hardened rye (Huner et al., 1987) if differences in this ratio affect differential light scattering properties or the fluctuation of chromophore orientation in the isolated LHCII sample.
Abstract:
Behavioral researchers commonly use single subject designs to evaluate the effects of a given treatment. Several different methods of data analysis are used, each with its own set of methodological strengths and limitations. Visual inspection, which assesses the variability, level, and trend both within and between conditions, is commonly used as a method of analyzing the data (Cooper, Heron, & Heward, 2007). In an attempt to quantify treatment outcomes, researchers developed two methods for analysing data, called Percentage of Non-overlapping Data Points (PND) and Percentage of Data Points Exceeding the Median (PEM). The purpose of the present study is to compare and contrast the use of Hierarchical Linear Modelling (HLM), PND and PEM in single subject research. The present study used 39 behaviours across 17 participants to compare treatment outcomes of a group cognitive behavioural therapy program, using PND, PEM, and HLM, on three response classes of obsessive compulsive behaviour in children with Autism Spectrum Disorder. Findings suggest that PEM and HLM complement each other and both add invaluable information to the overall treatment results. Future research should consider using both PEM and HLM when analysing single subject designs, specifically grouped data with variability.
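A minimal sketch of the two overlap metrics named above, for a behaviour that treatment is expected to reduce (e.g., the frequency of an obsessive-compulsive behaviour). The data points are invented, and the HLM step is not shown since it requires fitting a mixed-effects model rather than a few lines of arithmetic.

```python
import statistics

def pnd(baseline, treatment):
    """Percentage of Non-overlapping Data: treatment points below the lowest baseline point."""
    floor = min(baseline)
    return 100.0 * sum(x < floor for x in treatment) / len(treatment)

def pem(baseline, treatment):
    """Percentage of data points Exceeding the Median: treatment points below the baseline median."""
    med = statistics.median(baseline)
    return 100.0 * sum(x < med for x in treatment) / len(treatment)

baseline = [8, 7, 9, 8, 10]           # invented baseline observations
treatment = [6, 5, 7, 4, 3, 5, 2]     # invented treatment-phase observations
print(f"PND = {pnd(baseline, treatment):.0f}%, PEM = {pem(baseline, treatment):.0f}%")
```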
Abstract:
In a linear production model, we characterize the class of efficient and strategy-proof allocation functions, and the class of efficient and coalition strategy-proof allocation functions. In the former class, requiring equal treatment of equals allows us to identify a unique allocation function. This function is also the unique member of the latter class which satisfies uniform treatment of uniforms.
Abstract:
In the literature on tests of normality, much concern has been expressed over the problems associated with residual-based procedures. Indeed, the specialized tables of critical points which are needed to perform the tests have been derived for the location-scale model; hence reliance on the available significance points in the context of regression models may cause size distortions. We propose a general solution to the problem of controlling the size of normality tests for the disturbances of the standard linear regression model, based on the technique of Monte Carlo tests.
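A minimal sketch of the Monte Carlo test idea referred to above: because OLS residuals do not depend on the regression coefficients or the error scale, the null distribution of a residual-based normality statistic can be simulated exactly for the given design matrix. The statistic below is a Jarque-Bera-type skewness/kurtosis statistic and the data are simulated; both are illustrative choices, not the paper's specific tests.

```python
import numpy as np

def residual_normality_stat(y, X):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta
    e = (e - e.mean()) / e.std()
    skew, kurt = (e**3).mean(), (e**4).mean()
    return len(e) * (skew**2 / 6 + (kurt - 3)**2 / 24)

def mc_normality_pvalue(y, X, n_rep=99, seed=0):
    rng = np.random.default_rng(seed)
    s0 = residual_normality_stat(y, X)
    # Simulate the statistic under the null: i.i.d. standard normal disturbances
    sims = [residual_normality_stat(rng.standard_normal(len(y)), X) for _ in range(n_rep)]
    return (1 + sum(s >= s0 for s in sims)) / (n_rep + 1)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal((100, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.standard_t(df=3, size=100)   # non-normal errors
print(f"Monte Carlo p-value: {mc_normality_pvalue(y, X):.3f}")
```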
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist of combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
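A minimal toy sketch of the "maximized MC" idea mentioned above: when the null distribution of a statistic depends on a nuisance parameter (here, the Student t degrees of freedom), compute the Monte Carlo p-value at each candidate value of that parameter and reject only if the largest p-value is below the significance level. The statistic, the grid of degrees of freedom, and the data are illustrative assumptions, not the paper's multivariate procedures.

```python
import numpy as np

def mc_pvalue(stat_fn, observed, simulate_null, n_rep=99, seed=0):
    rng = np.random.default_rng(seed)
    sims = [stat_fn(simulate_null(rng)) for _ in range(n_rep)]
    return (1 + sum(s >= observed for s in sims)) / (n_rep + 1)

def lag1_autocorr(e):
    """Absolute lag-1 autocorrelation: a simple no-serial-correlation statistic."""
    e = e - e.mean()
    return abs(np.dot(e[1:], e[:-1]) / np.dot(e, e))

n = 200
rng = np.random.default_rng(3)
residuals = rng.standard_t(df=8, size=n)           # toy "observed" residuals
s0 = lag1_autocorr(residuals)

# MC p-value under i.i.d. Student t errors, for each candidate degrees-of-freedom value
p_by_nu = {nu: mc_pvalue(lag1_autocorr, s0,
                         lambda g, nu=nu: g.standard_t(df=nu, size=n))
           for nu in (5, 8, 12, 20, 50)}
print(p_by_nu, "-> MMC p-value:", max(p_by_nu.values()))
```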
Abstract:
It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and in general does not yield inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these projection techniques can in general be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics” and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
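A minimal sketch of what a projection-based confidence set built from an Anderson-Rubin confidence set means. The paper's contribution is an analytic, quadric-based construction; the brute-force grid below only illustrates the object being computed, on simulated data with two endogenous regressors. The sample size, instruments, grid, and parameter values are all assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 200, 4                                            # observations, instruments
Z = rng.standard_normal((n, k))
errs = rng.multivariate_normal([0, 0, 0],
                               [[1.0, 0.5, 0.5], [0.5, 1.0, 0.3], [0.5, 0.3, 1.0]], n)
Y = Z @ rng.normal(0, 0.3, (k, 2)) + errs[:, 1:]         # two endogenous regressors
y = Y @ np.array([1.0, -0.5]) + errs[:, 0]

proj = Z @ np.linalg.pinv(Z)                             # projection onto the instrument space

def ar_stat(b):
    """Anderson-Rubin F statistic for H0: beta = b (no included exogenous regressors)."""
    u = y - Y @ b
    Pu = proj @ u
    return (Pu @ Pu / k) / ((u - Pu) @ (u - Pu) / (n - k))

crit = stats.f.ppf(0.95, k, n - k)
grid = np.linspace(-2.0, 3.0, 101)
# Keep every beta_1 value that appears in some point of the joint AR confidence set
accepted_b1 = [b1 for b1 in grid for b2 in grid if ar_stat(np.array([b1, b2])) <= crit]
if accepted_b1:
    print(f"projection-based 95% CI for beta_1: [{min(accepted_b1):.2f}, {max(accepted_b1):.2f}]")
else:
    print("AR confidence set is empty at the 5% level")
```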
Abstract:
In this paper, we study the asymptotic distribution of a simple two-stage (Hannan-Rissanen-type) linear estimator for stationary invertible vector autoregressive moving average (VARMA) models in the echelon form representation. General conditions for consistency and asymptotic normality are given. A consistent estimator of the asymptotic covariance matrix of the estimator is also provided, so that tests and confidence intervals can easily be constructed.
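A minimal sketch of the two-stage (Hannan-Rissanen-type) idea in its simplest univariate form, ARMA(1,1): a long autoregression first recovers innovation estimates, which then make the ARMA equation linear in its parameters. The echelon-form VARMA estimator studied in the paper follows the same logic with matrix coefficients; the data below are simulated and the lag choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, theta = 2000, 0.6, 0.4
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t] + theta * eps[t - 1]

# Stage 1: long AR(p) regression to approximate the innovations
p = 15
X1 = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
a_hat = np.linalg.lstsq(X1, y[p:], rcond=None)[0]
e_hat = y[p:] - X1 @ a_hat                      # estimated innovations

# Stage 2: regress y_t on y_{t-1} and the estimated innovation e_{t-1}
y2, ylag, elag = y[p + 1:], y[p:-1], e_hat[:-1]
X2 = np.column_stack([ylag, elag])
phi_hat, theta_hat = np.linalg.lstsq(X2, y2, rcond=None)[0]
print(f"phi_hat = {phi_hat:.3f}, theta_hat = {theta_hat:.3f}")
```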
Abstract:
We derive conditions that must be satisfied by the primitives of the problem in order for an equilibrium in linear Markov strategies to exist in some common property natural resource differential games. These conditions impose restrictions on the admissible form of the natural growth function, given a benefit function, or on the admissible form of the benefit function, given a natural growth function.
Abstract:
Affiliation: Institut de recherche en immunologie et en cancérologie, Université de Montréal