922 results for Earnings Variability
Abstract:
Crystalline complexes of succinic acid with DL- and L-lysine have been prepared and analysed by X-ray diffraction. DL-lysine complex: C6H15N2O2⁺ · ½C4H4O4²⁻ · ½C4H6O4, Mr = 264.2, P1̄, a = 5.506 (4), b = 8.070 (2), c = 14.089 (2) Å, α = 92.02 (1), β = 100.69 (3), γ = 95.85 (3)°, Z = 2, Dx = 1.44 g cm⁻³, R = 0.059 for 2546 observed reflections. Form I of the L-lysine complex: C6H15N2O2⁺ · C4H5O4⁻, Mr = 264.2, P1, a = 5.125 (2), b = 8.087 (1), c = 8.689 (1) Å, α = 112.06 (1), β = 99.08 (2), γ = 93.77 (2)°, Z = 1, Dm = 1.34 (3), Dx = 1.34 g cm⁻³, R = 0.033 for 1475 observed reflections. Form II of the L-lysine complex: C6H15N2O2⁺ · ¼C4H4O4²⁻ · ¼C4H6O4 · ¼(C4H5O4⋯H⋯C4H4O4)²⁻, Mr = 264.2, P1, a = 10.143 (4), b = 10.256 (2), c = 12.916 (3) Å, α = 105.00 (2), β = 99.09 (3), γ = 92.78 (3)°, Z = 4, Dm = 1.37 (4), Dx = 1.38 g cm⁻³, R = 0.067 for 2809 observed reflections. The succinic acid molecules in the structures exhibit a variety of ionization states. Two of the lysine conformations found in the complexes have been observed for the first time in crystals containing lysine. Form II of the L-lysine complex is highly pseudosymmetric. In all the complexes, unlike molecules aggregate into separate alternating layers. The basic element of aggregation in the lysine layer in the complexes is an S2-type head-to-tail sequence. This element combines in different ways in the three structures. The basic element of aggregation in the succinic acid layer in the complexes is a hydrogen-bonded ribbon. The ribbons are interconnected indirectly through amino groups in the lysine layer.
Abstract:
Site-specific geotechnical data are inherently random and spatially variable. In the present study, a procedure for quantifying the variability in geotechnical characterization and design parameters is discussed using site-specific cone tip resistance data (qc) obtained from the static cone penetration test (SCPT). The parameters for spatial variability modeling of geotechnical parameters, i.e. (i) the existing trend function in the in situ qc data; (ii) second-moment statistics, i.e. the mean, variance, and autocorrelation structure of the soil strength and stiffness parameters; and (iii) inputs from the spatial correlation analysis, are utilized in numerical modeling procedures using the finite difference numerical code FLAC 5.0. The influence of spatially variable soil parameters on reliability-based geotechnical design is studied for two cases: (a) bearing capacity analysis of a shallow foundation resting on a clayey soil, and (b) analysis of the stability and deformation pattern of a cohesive-frictional soil slope. The study highlights the procedure for conducting a site-specific study using field test data such as SCPT in geotechnical analysis and demonstrates that a few additional computations involving soil variability provide better insight into the role of variability in design.
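The three quantification steps can be sketched in a few lines. This is a minimal illustration on a synthetic qc profile: the depth range, trend, and noise level are invented, and `autocorr` is a hypothetical helper, not the study's code.

```python
import numpy as np

# Synthetic qc profile (all numbers illustrative).
rng = np.random.default_rng(0)
depth = np.linspace(0.5, 15.0, 30)                   # depth below ground, m
qc = 1.2 + 0.4 * depth + rng.normal(0.0, 0.8, 30)    # cone tip resistance, MPa

# (i) Remove the depth trend by least squares.
trend = np.polyfit(depth, qc, deg=1)
residual = qc - np.polyval(trend, depth)

# (ii) Second-moment statistics of the detrended record.
mean_res = residual.mean()
var_res = residual.var(ddof=1)

# (iii) Sample autocorrelation at increasing depth lags, to which an
# autocorrelation model (and hence a correlation distance) can be fitted
# before being passed to the numerical model.
def autocorr(x, lag):
    x = x - x.mean()
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

rho = [autocorr(residual, k) for k in range(5)]
```

The fitted trend, variance, and correlation structure are exactly the inputs the abstract lists for the FLAC-based analyses.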
Abstract:
In recent years, spatial variability modeling of soil parameters using random field theory has gained distinct importance in geotechnical analysis. In the present study, the commercially available finite difference numerical code FLAC 5.0 is used to model the permeability parameter as a spatially correlated, log-normally distributed random variable, and its influence on steady-state seepage flow and on slope stability analysis is studied. Considering the case of a 5.0 m high cohesive-frictional soil slope of 30 degrees, a range of coefficients of variation (CoV) from 60 to 90% in the permeability values, and different values of correlation distance in the range of 0.5-15 m, parametric studies using Monte Carlo simulations are performed to study the following three aspects: (i) the effect of stochastic soil permeability on the statistics of seepage flow in comparison to the analytical (Dupuit's) solution available for a uniformly constant permeability; (ii) the strain and deformation pattern; and (iii) the stability of the given slope assessed in terms of the factor of safety (FS). The results obtained in this study are useful for understanding the role of permeability variations in slope stability analysis under different slope conditions and material properties. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Airway inflammation is a key feature of bronchial asthma. In asthma management, according to international guidelines, the gold standard is anti-inflammatory treatment. To date, only conventional procedures (i.e., symptoms, use of rescue medication, PEF variability, and lung function tests) have been used both to diagnose asthma and to evaluate the results of treatment with anti-inflammatory drugs. New methods for evaluating the degree of airway inflammation are required. Nitric oxide (NO) is a gas produced in the airways of healthy subjects and, to a greater extent, in asthmatic airways. Measurement of NO from the airways is possible, and NO can be measured from exhaled air. Fractional exhaled NO (FENO) is increased in asthma, and the highest concentrations are measured in asthmatic patients not treated with inhaled corticosteroids (ICS). Steroid-treated patients with asthma had levels of FENO similar to those of healthy controls. Atopic asthmatics had higher levels of FENO than did nonatopic asthmatics, indicating that the level of atopy affected the FENO level. Associations between FENO and bronchial hyperresponsiveness (BHR) occur in asthma. The present study demonstrated that measurement of FENO had good reproducibility, and that FENO variability was reasonable both short- and long-term in healthy subjects as well as in patients with respiratory symptoms or asthma. We established the upper normal limit for healthy subjects, which was 12 ppb, calculated from two different healthy study populations. We showed that patients with respiratory symptoms who did not fulfil the diagnostic criteria of asthma had FENO values significantly higher than those of healthy subjects, but significantly lower than those of asthma patients. These findings suggest that BHR to histamine is a sensitive indicator of the effect of ICS and a valuable tool for adjustment of corticosteroid treatment in mild asthma.
The findings further suggest that intermittent treatment periods of a few weeks' duration are insufficient to provide long-term control of BHR in patients with mild persistent asthma. Moreover, during treatment with ICS, changes in BHR and changes in FENO were associated. The FENO level was associated with BHR measured by a direct (histamine challenge) or indirect method (exercise challenge) in steroid-naïve symptomatic, non-smoking asthmatics. Although these associations could be found only in atopics, the FENO level was also increased in nonatopic asthma. It can thus be concluded that assessment of airway inflammation by measuring FENO can be useful for clinical purposes. The methodology of FENO measurements is now validated. Especially in those patients with respiratory symptoms who do not fulfil the diagnostic criteria of asthma, FENO measurement can aid in treatment decisions. Serial measurement of FENO during treatment with ICS can be a complementary or an alternative method for evaluation in patients with asthma.
Abstract:
A generalized technique is proposed for modeling the effects of process variations on dynamic power by directly relating the variations in process parameters to variations in the dynamic power of a digital circuit. The dynamic power of a 2-input NAND gate is characterized by mixed-mode simulations, to be used as a library element for 65 nm gate length technology. The proposed methodology is demonstrated on a multiplier circuit built using the NAND gate library, by characterizing its dynamic power through Monte Carlo analysis. The statistical technique of Response Surface Methodology (RSM), using Design of Experiments (DOE) and the Least Squares Method (LSM), is employed to generate a "hybrid model" for gate power that accounts for simultaneous variations in multiple process parameters. We demonstrate that our hybrid-model-based statistical design approach results in considerable savings in the power budget of low-power CMOS designs with an error of less than 1%, and reduces uncertainty by at least 6X on a normalized basis compared with worst-case design.
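The RSM/DOE/LSM step can be illustrated with a toy two-parameter example. Here `simulated_power` is a hypothetical stand-in for the mixed-mode simulation results, and all coefficients are invented.

```python
import numpy as np
from itertools import product

# Stand-in for the mixed-mode simulation: gate power as a function of two
# normalized process parameters (e.g. gate length and threshold voltage
# deviations). Coefficients are invented for illustration.
def simulated_power(x1, x2):
    return 1.0 + 0.30 * x1 - 0.15 * x2 + 0.05 * x1 * x2 + 0.02 * x1**2

# DOE: three-level full-factorial design on normalized parameters in [-1, 1].
design = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))
y = np.array([simulated_power(x1, x2) for x1, x2 in design])

# LSM: fit a quadratic response surface (basis 1, x1, x2, x1*x2, x1^2, x2^2).
X = np.column_stack([np.ones(len(design)), design[:, 0], design[:, 1],
                     design[:, 0] * design[:, 1],
                     design[:, 0]**2, design[:, 1]**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# The fitted surface then replaces per-sample circuit simulation inside the
# Monte Carlo power analysis.
```
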
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic-exhaust-originated particulate matter on the local street scale, studied by combining a dispersion model with an aerosol process model; and 3) analysis of some high particulate matter concentration situations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed a clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate the evolution of total number concentrations, while condensation and coagulation had only a minimal effect on the Aitken mode number concentrations.
The air pollution episodes included were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds, in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
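The dominance of dilution over coagulation in the first tens of seconds can be illustrated with a deliberately simplified number balance. The rate constants and initial concentration below are hypothetical round numbers, not the thesis's model.

```python
import numpy as np

# Simplified balance for total particle number concentration N (cm^-3) after
# tailpipe release: first-order dilution plus second-order coagulation,
# dN/dt = -k_dil*N - k_coag*N**2, integrated by explicit Euler.
def evolve_number_conc(n0, k_dil, k_coag, dt, steps):
    n = n0
    history = [n]
    for _ in range(steps):
        n += dt * (-k_dil * n - k_coag * n * n)
        history.append(n)
    return np.array(history)

# 60 s after emission: with these (hypothetical) values the dilution term
# k_dil*N exceeds the coagulation term k_coag*N**2 throughout, so dilution
# dominates the decay of the total number concentration.
traj = evolve_number_conc(n0=1e6, k_dil=0.2, k_coag=1e-9, dt=0.1, steps=600)
```
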
Abstract:
One of the objectives of general-purpose financial reporting is to provide information about the financial position, financial performance and cash flows of an entity that is useful to a wide range of users in making economic decisions. The current focus on potentially increased relevance of fair value accounting weighed against issues of reliability has failed to consider the potential impact on the predictive ability of accounting. Based on a sample of international (non-U.S.) banks from 24 countries during 2009-2012, we test the usefulness of fair values in improving the predictive ability of earnings. First, we find that the increasing use of fair values on balance-sheet financial instruments enhances the ability of current earnings to predict future earnings and cash flows. Second, we provide evidence that the fair value hierarchy classification choices affect the ability of earnings to predict future cash flows and future earnings. More precisely, we find that the non-discretionary fair value component (Level 1 assets) improves the predictability of current earnings whereas the discretionary fair value components (Level 2 and Level 3 assets) weaken the predictive power of earnings. Third, we find a consistent and strong association between factors reflecting country-wide institutional structures and predictive power of fair values based on discretionary measurement inputs (Level 2 and Level 3 assets and liabilities). Our study is timely and relevant. The findings have important implications for standard setters and contribute to the debate on the use of fair value accounting.
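The type of predictive regression the study runs can be sketched on synthetic data. The variable names, scalings, and coefficients below are invented for illustration, not estimates from the paper.

```python
import numpy as np

# Synthetic one-year-ahead predictive regression: future earnings on current
# earnings and the fair value hierarchy components.
rng = np.random.default_rng(1)
n = 500
earn_t = rng.normal(0.05, 0.02, n)     # current earnings / total assets
lvl1 = rng.normal(0.20, 0.05, n)       # Level 1 (non-discretionary) fair values
lvl23 = rng.normal(0.10, 0.05, n)      # Level 2 + Level 3 (discretionary)
earn_t1 = (0.7 * earn_t + 0.05 * lvl1 - 0.02 * lvl23
           + rng.normal(0.0, 0.01, n))

# OLS via least squares: a positive loading on Level 1 and a weaker (or
# negative) loading on Level 2/3 mirrors the pattern the paper reports.
X = np.column_stack([np.ones(n), earn_t, lvl1, lvl23])
beta, *_ = np.linalg.lstsq(X, earn_t1, rcond=None)
```
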
Abstract:
The poor predictability of the Indian summer monsoon (ISM) appears to be due to the fact that a large fraction of interannual variability (IAV) is governed by unpredictable "internal" low-frequency variations. Mechanisms responsible for the internal IAV of the monsoon have not been clearly identified. Here, an attempt has been made to gain insight into the origin of internal IAV of the seasonal (June-September, JJAS) mean rainfall from the "internal" IAV of the ISM simulated by an atmospheric general circulation model (AGCM) driven by a fixed annual cycle of sea surface temperature (SST). The underlying hypothesis that monsoon intraseasonal oscillations (ISOs) are responsible for internal IAV of the ISM is tested. The spatial and temporal characteristics of the simulated summer ISOs are found to be in good agreement with those observed. A long integration with the AGCM forced with observed SST shows that ISO activity over the Asian monsoon region is not modulated by the observed SST variations. The internal IAV of the ISM, therefore, appears to be decoupled from external IAV. Hence, insight gained from this study may be useful in understanding the observed internal IAV of the ISM. The spatial structure of the ISOs has a significant projection on the spatial structure of the seasonal mean, and a common spatial mode governs both intraseasonal and interannual variability. The statistical average of ISO anomalies over the season (the seasonal ISO bias) strengthens or weakens the seasonal mean. It is shown that interannual anomalies of the seasonal mean are closely related to the seasonal mean of intraseasonal anomalies and explain about 50% of the IAV of the seasonal mean. The seasonal mean ISO bias arises partly from the broad-band nature of the ISO spectrum, which allows the time series to be aperiodic over the season, and partly from a non-linear process whereby the amplitude of ISO activity is proportional to the seasonal bias of ISO anomalies.
The latter relation is a manifestation of the binomial character of the rainfall time series. The remaining 50% of the IAV may arise from land-surface processes, interaction between high-frequency variability and ISOs, etc.
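The mechanics of the "seasonal ISO bias" diagnostic can be illustrated on synthetic daily rainfall. The 11-day running mean below is a crude stand-in for a 10-90 day intraseasonal filter, and none of the numbers come from the study.

```python
import numpy as np

# Synthetic JJAS daily rainfall for 30 seasons (mm/day, invented values).
rng = np.random.default_rng(7)
nyears, ndays = 30, 122
rain = 8.0 + rng.normal(0.0, 3.0, (nyears, ndays))

clim = rain.mean(axis=0)                     # daily climatology
anom = rain - clim                           # daily anomalies

# Crude intraseasonal filter: 11-day running mean of the daily anomalies.
kernel = np.ones(11) / 11.0
iso = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"),
                          1, anom)

iso_bias = iso.mean(axis=1)                  # seasonal mean of ISO anomalies
seasonal_anom = rain.mean(axis=1) - rain.mean()
corr = np.corrcoef(iso_bias, seasonal_anom)[0, 1]
# The square of corr plays the role of the fraction of seasonal-mean IAV
# accounted for by the seasonal ISO bias.
```
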
Abstract:
Two algorithms are outlined, each of which has interesting features for modeling the spatial variability of rock depth. In this paper, the reduced level of rock at Bangalore, India, is arrived at from 652 borehole data in an area covering 220 sq. km. The support vector machine (SVM) and the relevance vector machine (RVM) have been utilized to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. The SVM, which is firmly grounded in statistical learning theory, performs regression by introducing an epsilon-insensitive loss function. The RVM is a probabilistic model similar to the widespread SVM, but its training takes place in a Bayesian framework. Prediction results show the ability of these learning machines to build accurate models for the spatial variability of rock depth, with strong predictive capabilities. The paper also highlights the capability of the RVM over the SVM model.
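The SVM step can be sketched with scikit-learn's epsilon-insensitive support vector regression on synthetic borehole data. The 652 Bangalore boreholes are not reproduced; the coordinates, trend, and hyperparameters below are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic borehole data: easting/northing (km) mapped to reduced level of
# rock (m) via a gentle linear trend plus noise (all values invented).
rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 10.0, (200, 2))
depth = (900.0 - 2.0 * xy[:, 0] + 1.5 * xy[:, 1]
         + rng.normal(0.0, 0.5, 200))

# Epsilon-insensitive support vector regression with an RBF kernel.
model = SVR(kernel="rbf", C=100.0, epsilon=0.1)
model.fit(xy, depth)
pred = model.predict([[5.0, 5.0]])    # reduced level at an unsampled location
```

The RVM step would replace `SVR` with a Bayesian sparse regressor that also returns predictive uncertainty, which is the capability the paper highlights.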
Abstract:
Species-specific LTR retrotransposons were first cloned from five rare relict species of medicinal plants of the Perm' region. Sequences of the LTR retrotransposons were used for PCR analysis based on amplification of repeated sequences from the LTRs or other sites of the retrotransposons (IRAP). Genetic diversity was studied in six populations of the rare relict plant species Adonis vernalis L. by means of the IRAP method; 125 polymorphic IRAP markers were analyzed. Parameters of DNA polymorphism and genetic diversity of the A. vernalis populations were determined.
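Summary statistics of the kind reported for dominant markers can be sketched on a hypothetical 0/1 IRAP band matrix. Treating band frequencies directly as allele frequencies, as done here, is a simplification for dominant markers.

```python
import numpy as np

# Hypothetical presence/absence (0/1) IRAP band matrix: 40 individuals
# scored for 125 markers (the real data set is not reproduced).
rng = np.random.default_rng(5)
bands = rng.integers(0, 2, size=(40, 125))

freq = bands.mean(axis=0)                       # band frequency per marker
polymorphic = (freq > 0.0) & (freq < 1.0)
p_percent = 100.0 * polymorphic.mean()          # % polymorphic markers
he = float((2.0 * freq * (1.0 - freq)).mean())  # Nei's gene diversity (simplified)
```
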
Abstract:
Detecting Earnings Management Using Neural Networks. Trying to balance between relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, company management to use judgment and make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression that can handle non-linear relationships is the neural network. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation, ranging between -5% and 5% of lagged total assets, are simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies.
Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
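A minimal version of the neural-network detection idea, on simulated data rather than the study's sample: a feed-forward network regresses accruals on the Jones (1991) variables plus ROA, and the residual serves as the discretionary accrual estimate. All data-generating numbers are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulated firm-year data (scaled by lagged total assets).
rng = np.random.default_rng(11)
n = 400
d_rev = rng.normal(0.05, 0.10, n)          # change in revenues
ppe = rng.uniform(0.10, 0.80, n)           # gross property, plant & equipment
roa = rng.normal(0.04, 0.08, n)            # return on assets
# Non-linear link between performance and accruals, as prior research suggests.
accruals = (0.2 * d_rev - 0.05 * ppe + 0.3 * np.tanh(2.0 * roa)
            + rng.normal(0.0, 0.02, n))

# Feed-forward network trained by back-propagation on the augmented Jones
# variables; the residual is the discretionary accrual estimate.
X = np.column_stack([d_rev, ppe, roa])
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, accruals)
discretionary = accruals - net.predict(X)
```

Because the network can capture the non-linear performance term, the residuals are less biased for extreme-performance firms than those from a linear Jones regression, which is the pattern the study reports.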
Abstract:
There is much literature developing theories of when and where earnings management occurs. Among the several possible motives driving earnings management behaviour in firms, this thesis focuses on motives that aim to influence the valuation of the firm. Earnings management that makes the firm look better than it really is may result in disappointment for the individual investor and potentially leads to a welfare loss in society when the resource allocation is distorted. More specific knowledge of the occurrence of earnings management supposedly increases the awareness of the investor and thus leads to better investments and increased welfare. This thesis contributes to the literature by increasing the knowledge of where and when earnings management is likely to occur. More specifically, Essay 1 adds to existing research connecting earnings management to IPOs by arguing that the tendency to manage earnings differs between IPOs. Evidence is found that entrepreneur-owned IPOs are more likely to be earnings managers than institutionally owned ones. Essay 2 considers the reliability of quarterly earnings reports that precede insider selling binges. The essay contributes by suggesting that earnings management is likely to occur before high insider selling. Essay 3 examines the widely studied phenomenon of income smoothing and investigates whether income smoothing can be explained with proxies for information asymmetry. The essay argues that smoothing is more pervasive in private and smaller firms.
Abstract:
A growing body of empirical research examines the structure and effectiveness of corporate governance systems around the world. An important insight from this literature is that corporate governance mechanisms address the excessive use of managerial discretionary powers to extract private benefits by expropriating shareholder value. One possible way of expropriation is to reduce the quality of disclosed earnings by manipulating the financial statements. This lower quality of earnings should then be reflected in the firm's stock price, according to the value relevance theorem. Hence, instead of testing the direct effect of corporate governance on the firm's market value, it is important to understand the causes of the lower quality of accounting earnings. This thesis contributes to the literature by increasing knowledge about the extent of earnings management - measured as the extent of discretionary accruals in total disclosed earnings - and its determinants across transitional European countries. The thesis comprises three essays of empirical analysis, of which the first two utilize data on Russian listed firms, whereas the third essay uses data from 10 European economies. More specifically, the first essay adds to existing research connecting earnings management to corporate governance. It tests the impact of the Russian corporate governance reforms of 2002 on the quality of disclosed earnings in all publicly listed firms. This essay provides empirical evidence that the desired impact of the reforms is not fully substantiated in Russia without proper enforcement. Instead, firm-level factors such as long-term capital investments and compliance with International Financial Reporting Standards (IFRS) determine the quality of the earnings. The results presented in the essay support the notion proposed by Leuz et al. (2003) that reforms aimed at bringing transparency do not produce the desired results in economies where investor protection is lower and legal enforcement is weak. The second essay focuses on the relationship between internal control mechanisms, such as the types and levels of ownership, and the quality of disclosed earnings in Russia. The empirical analysis shows that controlling shareholders in Russia use their power to manipulate reported performance in order to obtain private benefits of control. Comparatively, firms owned by the State have significantly better quality of disclosed earnings than those with other controllers, such as oligarchs and foreign corporations. Interestingly, the market performance of firms controlled by either the State or oligarchs is better than that of widely held firms. The third essay provides evidence that both ownership structures and economic characteristics are important factors in determining the quality of disclosed earnings in three groups of countries in Europe. The evidence suggests that ownership structure is a more important determinant in developed and transparent countries, while economic characteristics matter more in developing and transitional countries.
Abstract:
In this paper, we propose a novel and efficient algorithm for modeling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for delay analysis of interconnects considering the impact of Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay probability distribution function (PDF) at all nodes in the interconnect network. The main objective is to obtain the delay PDF at lower computational cost. The approach converges in probability distribution, though not in the mean of the delay. We validate our approach against SPICE-based Monte Carlo simulations; our method entails significantly lower computational cost.
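The variance-propagation idea can be illustrated on a single lumped RC stage. The means, deviations, and correlation below are invented, and the 0.69·RC delay metric is an Elmore-style simplification, not the paper's distributed-line model.

```python
import numpy as np

# R and C as correlated Gaussians (illustrative values).
rng = np.random.default_rng(2)
mu = np.array([100.0, 2e-13])              # mean R (ohm), mean C (farad)
sigma = np.array([10.0, 2e-14])            # standard deviations
rho = 0.5                                  # R-C correlation
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

# Monte Carlo reference: sample (R, C), evaluate the delay metric 0.69*R*C.
samples = rng.multivariate_normal(mu, cov, size=100_000)
delay_mc = 0.69 * samples[:, 0] * samples[:, 1]
std_mc = delay_mc.std()

# First-order (delta-method) variance of R*C around the means: the cheap
# analytic estimate that replaces per-sample simulation.
var_lin = ((mu[1] * sigma[0])**2 + (mu[0] * sigma[1])**2
           + 2.0 * rho * mu[0] * mu[1] * sigma[0] * sigma[1])
std_analytic = 0.69 * np.sqrt(var_lin)
```

For these parameter values the analytic standard deviation agrees with Monte Carlo to within a few percent, while costing a single formula evaluation instead of 100,000 samples.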