938 results for Currency forecast errors
Abstract:
Software simulation models are computer programs that need to be verified and debugged like any other software. In previous work, a method for error isolation in simulation models was proposed. The method relies on a set of feature matrices that can be used to determine which part of the model implementation is responsible for deviations in the output of the model. Currently, these feature matrices have to be generated by hand from the model implementation, which is a tedious and error-prone task. In this paper, a method based on mutation analysis, together with prototype tool support, is presented for verifying the manually generated feature matrices. The application of the method and tool to a model for wastewater treatment shows that the feature matrices can be verified effectively using a minimal number of mutants.
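As a rough sketch of the mutation-analysis idea (toy Python with a made-up three-part model and feature matrix; the paper's actual tool, model, and matrices are not given in this abstract): each mutant perturbs one part of the implementation, and the matrix is confirmed if exactly the features it marks as dependent on that part deviate.

```python
# Hypothetical sketch of mutation-based feature-matrix verification.
# The model parts, gains, and matrix below are illustrative only.

def model(x, gains=(2.0, -1.0, 0.5)):
    """Toy simulation model: three output features driven by three 'parts'."""
    a, b, c = gains
    return [a * x, b * x + c, a * x + b * x]

# feature_matrix[i][j] == 1 if feature i depends on model part j.
feature_matrix = [
    [1, 0, 0],  # feature 0 depends on part 0 only
    [0, 1, 1],  # feature 1 depends on parts 1 and 2
    [1, 1, 0],  # feature 2 depends on parts 0 and 1
]

def verify(x=1.0, eps=0.1):
    base = model(x)
    ok = True
    for part in range(3):            # one mutant per model part
        gains = [2.0, -1.0, 0.5]
        gains[part] += eps           # mutate a single part
        mutant = model(x, tuple(gains))
        for feat, (b, m) in enumerate(zip(base, mutant)):
            deviates = abs(b - m) > 1e-12
            if deviates != bool(feature_matrix[feat][part]):
                ok = False
                print(f"feature {feat} vs part {part}: matrix disagrees")
    return ok

print("feature matrix verified:", verify())
```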
Abstract:
Based on a simple convexity lemma, we develop bounds for different types of Bayesian prediction errors for regression with Gaussian processes. The basic bounds are formulated for a fixed training set. Simpler expressions are obtained for sampling from an input distribution which equals the weight function of the covariance kernel, yielding asymptotically tight results. The results are compared with numerical experiments.
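For orientation, such bounds are built on the standard Gaussian process regression predictor and its posterior variance (textbook formulas, not the paper's own derivation; K is the kernel matrix, k_* the vector of kernel values at a test input x_*, and σ² the noise variance):

```latex
\begin{align}
  \bar f(x_*) &= k_*^\top \left( K + \sigma^2 I \right)^{-1} y, \\
  \operatorname{Var}\!\left[ f(x_*) \right]
    &= k(x_*, x_*) - k_*^\top \left( K + \sigma^2 I \right)^{-1} k_*,
\end{align}
% with K_{ij} = k(x_i, x_j) and (k_*)_i = k(x_i, x_*). The paper's
% convexity lemma bounds Bayesian prediction errors derived from these
% quantities, averaged over the input distribution.
```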
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
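A minimal sketch of the GA-over-architectures loop (hypothetical encoding and a stand-in fitness function; the study's operators and its out-of-sample forecast-error fitness are not reproduced here):

```python
import random
random.seed(0)

# Toy GA searching over ANN hidden-layer sizes. In the real study the
# fitness would be the trained network's forecast error on held-out data.

def fitness(genome):
    """Stand-in for validation error of an ANN with these layer sizes."""
    h1, h2 = genome
    return (h1 - 12) ** 2 + (h2 - 5) ** 2   # pretend optimum is (12, 5)

def mutate(genome):
    i = random.randrange(2)
    g = list(genome)
    g[i] = max(1, g[i] + random.choice((-2, -1, 1, 2)))
    return tuple(g)

def crossover(a, b):
    return (a[0], b[1])                      # one-point crossover

pop = [(random.randint(1, 32), random.randint(1, 32)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness)
    parents = pop[:10]                       # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = min(pop, key=fitness)
print("best architecture (hidden units):", best, "error:", fitness(best))
```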
Abstract:
Purpose: To evaluate the effects of instrument realignment and angular misalignment during the clinical determination of wavefront aberrations by simulation in model eyes. Setting: Aston Academy of Life Sciences, Aston University, Birmingham, United Kingdom. Methods: Six model eyes were examined with wavefront-aberration-supported cornea ablation (WASCA) (Carl Zeiss Meditec) in 4 sessions of 10 measurements each: sessions 1 and 2, consecutive repeated measures without realignment; session 3, realignment of the instrument between readings; session 4, measurements without realignment but with the model eye shifted 6 degrees angularly. Intersession repeatability and the effects of realignment and misalignment were obtained by comparing the measurements in the various sessions for coma, spherical aberration, and higher-order aberrations (HOAs). Results: The mean differences between the 2 sessions without realignment of the instrument were 0.020 ± 0.076 (SD) μm for Z3^-1 (P = .551), 0.009 ± 0.139 μm for Z3^1 (P = .877), 0.004 ± 0.037 μm for Z4^0 (P = .820), and 0.005 ± 0.01 μm for higher-order root mean square (HO RMS) (P = .301). Differences between the nonrealigned and realigned instruments were -0.017 ± 0.026 μm for Z3^-1 (P = .159), 0.009 ± 0.028 μm for Z3^1 (P = .475), 0.007 ± 0.014 μm for Z4^0 (P = .296), and 0.002 ± 0.007 μm for HO RMS (P = .529). Differences between centered and misaligned instruments were -0.355 ± 0.149 μm for Z3^-1 (P = .002), 0.007 ± 0.034 μm for Z3^1 (P = .620), -0.005 ± 0.081 μm for Z4^0 (P = .885), and 0.012 ± 0.020 μm for HO RMS (P = .195). Realignment increased the standard deviation by a factor of 3 compared with the first session without realignment. Conclusions: Repeatability of the WASCA was excellent in all situations tested. Realignment substantially increased the variance of the measurements. Angular misalignment can result in significant errors, particularly in the determination of coma. These findings are important when assessing highly aberrated eyes during follow-up or before surgery. © 2007 ASCRS and ESCRS.
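On the notation: Z_n^m denotes the Zernike coefficient of radial order n and angular frequency m, so Z3^-1 and Z3^1 are the vertical and horizontal coma terms and Z4^0 is primary spherical aberration. The higher-order RMS reported above follows the usual convention (not anything specific to this study) of pooling all terms of radial order three and above:

```latex
% Standard definition: with Zernike coefficients c_n^m in micrometres,
\[
  \mathrm{RMS}_{\mathrm{HO}} = \sqrt{ \sum_{n \ge 3} \sum_{m} \left( c_n^m \right)^2 }
\]
```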
Abstract:
The purpose of this paper is to demonstrate the existence of a strong and significant effect of complexity in aphasia, independent of other variables including length. Complexity was found to be a strong and significant predictor of accurate repetition in a group of 13 Italian aphasic patients when it was entered in a regression equation either simultaneously with or after a large number of other variables. Significant effects were found both when complexity was measured in terms of the number of complex onsets (as in a recent paper by Nickels & Howard, 2004) and when it was measured in a more comprehensive way. Significant complexity effects were also found with matched lists contrasting simple and complex words and in analyses of errors. Effects of complexity, however, were restricted to patients with articulatory difficulties. Reasons for this association, and for the lack of significant results in Nickels and Howard (2004), are discussed. © 2005 Psychology Press Ltd.
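The hierarchical-regression logic can be illustrated as follows (synthetic data and hypothetical predictor names, not the study's materials): complexity earns its place only if it improves the fit after the other variables are already in the equation.

```python
# Illustration of entering complexity after other predictors (synthetic data).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 200
length = rng.integers(2, 6, n)           # word length in syllables
frequency = rng.normal(0, 1, n)          # lexical frequency (z-scored)
complexity = rng.integers(0, 3, n)       # e.g. number of complex onsets
# Synthetic outcome: repetition accuracy depends on complexity too.
logit = 1.5 - 0.4 * length + 0.3 * frequency - 0.8 * complexity
accurate = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base = sm.Logit(accurate,
                sm.add_constant(np.column_stack([length, frequency]))).fit(disp=0)
full = sm.Logit(accurate,
                sm.add_constant(np.column_stack([length, frequency,
                                                 complexity]))).fit(disp=0)

# Likelihood-ratio test: does complexity add predictive power once
# length and frequency are already in the equation?
lr = 2 * (full.llf - base.llf)
print("LR chi2 =", round(lr, 2), "p =", round(stats.chi2.sf(lr, df=1), 4))
```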
Abstract:
We present the case of two aphasic patients: one with fluent speech, MM, and one with dysfluent speech, DB. Both patients make similar proportions of phonological errors in speech production, and the errors have similar characteristics. A closer analysis, however, shows a number of differences. DB's phonological errors involve, for the most part, simplifications of syllabic structure; they affect consonants more than vowels; and, among vowels, they show effects of sonority/complexity. This error pattern may reflect articulatory difficulties. MM's errors, instead, show little effect of syllable structure, affect vowels at least as much as consonants, and affect all vowels to a similar extent. This pattern is consistent with a more central impairment involving the selection of the right phoneme among competing alternatives. We propose that, at this level, vowel selection may be more difficult than consonant selection because vowels belong to a smaller set of repeatedly activated units.
Abstract:
The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression, the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform technique adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the techniques of source coding differ substantially: one involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each particular combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons between the schemes indicate the best one to use for a given channel error rate.
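The zonal-coding variant can be sketched in a few lines (illustrative only; the thesis's bit allocation, quantisation, and channel-protection schemes are not reproduced): a 2-D DCT is applied to a block and only a fixed low-frequency zone of coefficients is retained. Threshold coding would instead keep the largest-magnitude coefficients wherever they fall, at the cost of also transmitting their positions.

```python
# Zonal coding of one 8x8 image block via the DCT (illustrative sketch).
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in image block

coeffs = dctn(block, norm="ortho")                   # 2-D DCT

# Zonal coding: keep only a fixed low-frequency zone (here the
# triangle u + v < 4), discard the rest before transmission.
u, v = np.indices(coeffs.shape)
zone = (u + v) < 4
kept = np.where(zone, coeffs, 0.0)

recon = idctn(kept, norm="ortho")
mse = np.mean((block - recon) ** 2)
print(f"coefficients kept: {zone.sum()}/64, reconstruction MSE: {mse:.1f}")
```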
Phonological–lexical activation: a lexical component or an output buffer? Evidence from aphasic errors
Abstract:
Single word production requires that phoneme activation is maintained while articulatory conversion is taking place. Word serial recall, connected speech and non-word production (repetition and spelling) are all assumed to involve a phonological output buffer. A crucial question is whether the same memory resources are also involved in single word production. We investigate this question by assessing length and positional effects in the single word repetition and reading of six aphasic patients. We expect a damaged buffer to result in error rates per phoneme which increase with word length, and in position effects. Although our patients had trouble with phoneme activation (they made mainly errors of phoneme selection), they did not show the effects expected from a buffer impairment. These results show that phoneme activation cannot be automatically equated with a buffer. We hypothesize that the phonemes of existing words are kept active through permanent links to the word node. Thus, the sustained activation needed for their articulation will come from the lexicon and will have different characteristics from the activation needed for the short-term retention of an unbound set of units. We conclude that there is no need and no evidence for a phonological buffer in single word production.
Abstract:
This thesis focuses on the theoretical examination of exchange rate economic (operating) exposure within the context of the theory of the firm, and proposes some hedging solutions using currency options. The examination of economic exposure is based on such parameters as firms' objectives, industry structure and production cost efficiency. In particular, it examines a hypothetical exporting firm with costs in domestic currency, which faces competition from foreign firms in overseas markets and has a market share expansion objective. Within this framework, the hypothesis is established that economic exposure, portrayed in a diagram connecting export prices and real exchange rates, is asymmetric (i.e. the negative effects of a currency appreciation are greater than the positive effects of a currency depreciation). In this case, export business can be seen as a real option, given by exporting firms to overseas customers. Different scenarios about the asymmetry hypothesis can be derived under different assumptions about the determinants of economic exposure. Having established the asymmetry hypothesis, hedging against this exposure is analysed, and the hypothesis is established that a currency call option should be used in hedging against asymmetric economic exposure. Further, some advanced currency options strategies are discussed, and their use in hedging several scenarios of exposure is indicated, establishing the hypothesis that the optimal options strategy is a function of the determinants of exposure. Some extensions of the theoretical analysis are examined. These include the hedging of multicurrency exposure using options, and the exposure of a purely domestic firm facing import competition. The empirical work addresses two issues: the empirical validity of the asymmetry hypothesis and the examination of the hedging effectiveness of currency options.
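A toy payoff table shows the hedging logic (hypothetical numbers and exposure shape, not the thesis's model): the exporter's operating profit falls steeply when the domestic currency appreciates but gains only mildly from depreciation, and a call on the domestic currency (equivalently, a put on the foreign currency) offsets the steep side.

```python
# Asymmetric exposure hedged with a currency option (illustrative numbers).
import numpy as np

s0 = 1.00                      # current real exchange rate (domestic per foreign)
s = np.linspace(0.80, 1.20, 9)

def exposure(s):
    # Asymmetry hypothesis: appreciation of the domestic currency
    # (s < s0) hurts three times as much as depreciation helps.
    return np.where(s < s0, 3.0 * (s - s0), s - s0)

def dom_call_payoff(s, strike, premium):
    # A long call on the domestic currency is a put on the foreign
    # currency, so in terms of s it pays max(strike - s, 0).
    return np.maximum(strike - s, 0.0) - premium

units = 3.0                    # hedge ratio matched to the downside slope
hedged = exposure(s) + units * dom_call_payoff(s, strike=s0, premium=0.02)
for rate, u, h in zip(s, exposure(s), hedged):
    print(f"s={rate:.2f}  unhedged={u:+.3f}  hedged={h:+.3f}")
```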
Abstract:
This thesis deals with the problems associated with the planning and control of production, with particular reference to a small aluminium die casting company. The main problem areas were identified as: (a) a need to be able to forecast the customers' demands upon the company's facilities; (b) a need to produce a manufacturing programme in which the output of the foundry (or die casting section) was balanced with the available capacity in the machine shop; and (c) the need to ensure that the resultant system gave the company's operating budget a reasonable chance of being achieved. At the commencement of the research work the major customers were members of the automobile industry and had their own systems of forecasting, from which they issued manufacturing schedules to their component suppliers. The errors in these forecasts were analysed and their distributions noted. Using these distributions, the customer's forecast could be modified so that the final demand could be met with a known degree of confidence. Before a manufacturing programme could be developed, the actual manufacturing system had to be reviewed; it was found that, as with many small companies, there was a remarkable lack of formal control and written data. Relevant data on the components and the manufacturing process therefore had to be collected and analysed. The foundry process was fixed, but the secondary machining operations were analysed by a technique similar to Component Flow Analysis and, as a result, the machines were arranged in a series of flow lines. A system of manual production control was proposed; for comparison, a local computer bureau was approached and a computer-based system was proposed that incorporated the production of additional management information. The two systems are compared, their relative merits discussed, and a proposal made for implementation.
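The forecast-modification step amounts to adding a quantile of the historical error distribution to the customer's schedule (a sketch with synthetic errors; the thesis derived the distributions from the customers' actual schedules):

```python
# Adjusting a customer's schedule using the forecast-error distribution.
import numpy as np

rng = np.random.default_rng(2)
# Past errors: actual demand minus the customer's scheduled quantity.
errors = rng.normal(loc=30, scale=120, size=500)

service_level = 0.95
uplift = np.quantile(errors, service_level)   # error exceeded only 5% of the time

scheduled = 2_000                             # customer's stated requirement
plan = scheduled + uplift                     # quantity to plan capacity for
print(f"plan {plan:.0f} units to cover final demand with "
      f"{service_level:.0%} confidence")
```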
Abstract:
Objective - To review and summarise published data on medication errors in older people with mental health problems. Methods - A systematic review was conducted to identify studies that investigated medication errors in older people with mental health problems. MEDLINE, EMBASE, PHARMLINE, COCHRANE COLLABORATION and PsycINFO were searched electronically. Any studies identified were scrutinised for further references. Titles, abstracts or full texts were systematically reviewed for relevance. Results - Data were extracted from eight studies. In total, information about 728 errors (459 administration, 248 prescribing, 7 dispensing, 12 transcribing, 2 unclassified) was available. The dataset related almost exclusively to inpatients, frequently involved non-psychotropics, and the majority of the errors were not serious. Conclusions - Due to methodological issues it was impossible to calculate overall error rates. Future research should concentrate on serious errors within community settings and clarify potential risk factors.
Abstract:
Optical data communication systems are prone to a variety of processes that modify the transmitted signal and introduce errors into the determination of 1s from 0s. This is a difficult, and commercially important, problem to solve. Errors must be detected and corrected at high speed, and the classifier must be very accurate; ideally it should also be tunable to the characteristics of individual communication links. We show that simple single-layer neural networks may be used to address these problems, and examine how different input representations affect the accuracy of bit error correction. Our results lead us to conclude that a system based on these principles can perform at least as well as an existing non-trainable error correction system, whilst being tunable to suit the individual characteristics of different communication links.
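A minimal version of such a classifier (illustrative input representation and noise model, not the paper's): a single logistic unit trained on noisy received samples, with a second input standing in for inter-symbol interference.

```python
# Single-layer (logistic) bit classifier on a synthetic noisy channel.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
bits = rng.integers(0, 2, n)
# Received signal: bit level plus noise; second feature is a crude
# stand-in for inter-symbol interference from the previous bit.
x = np.column_stack([
    bits + 0.4 * rng.standard_normal(n),
    0.3 * np.roll(bits, 1) + 0.1 * rng.standard_normal(n),
])

w, b = np.zeros(2), 0.0
for _ in range(300):                      # batch gradient descent
    p = 1 / (1 + np.exp(-(x @ w + b)))    # sigmoid output of the single layer
    grad = p - bits
    w -= 0.5 * (x.T @ grad) / n
    b -= 0.5 * grad.mean()

pred = (x @ w + b > 0).astype(int)        # hard 1-vs-0 decision
print("bit decision accuracy:", (pred == bits).mean())
```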
Abstract:
Insights from the stream of research on knowledge calibration, which refers to the correspondence between accuracy and confidence in knowledge, enable a better understanding of the consequences of managers' inaccurate perceptions. This paper examines the consequences of inaccurate managerial knowledge through the lens of knowledge calibration. Specifically, it examines the antecedent role of miscalibration of knowledge in strategy formation. It is postulated that miscalibrated managers who overestimate external factors and display a high level of confidence in their estimates are likely to enact strategies that are relatively more evolutionary and incremental in nature, whereas miscalibrated managers who overestimate internal factors and display a high level of confidence in their estimates are likely to enact strategies that are relatively more discontinuous and disruptive in nature. Perspectives from social cognitive theory provide support for the underlying processes. The paper, in part, explains the paradox of the prevalence of inaccurate managerial perceptions alongside efficacious performance. It also advances the literature on strategy formation through the application of the construct of knowledge calibration.