911 results for Cipher and telegraph codes
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis could significantly improve software quality, yet it remains a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at the early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, thereby improving both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. Incorrect sequences of machine code patterns are identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is also proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum allocation of data to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine code patterns, which drastically reduces state-space creation and improves on state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features when developing embedded systems.
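As a rough illustration of the redundant bank-switching detection, here is a minimal sketch over a straight-line PIC-style instruction sequence; the mnemonics, operand encoding and helper name are assumptions, and this is not the dissertation's relation-matrix/control-flow-graph implementation.

    # Minimal sketch: flag bank-select writes that leave the active-bank state unchanged.
    # Only the RP0 bit of STATUS is tracked here; a full tool would also track RP1 and
    # follow every path of the control flow graph rather than one straight-line run.
    def find_redundant_bank_switches(instructions):
        """instructions: list of (mnemonic, operand) pairs, e.g. ('BSF', 'STATUS,RP0')."""
        active_rp0 = None                          # bank state unknown at entry
        redundant = []
        for idx, (mnemonic, operand) in enumerate(instructions):
            if mnemonic in ('BSF', 'BCF') and operand == 'STATUS,RP0':
                new_state = 1 if mnemonic == 'BSF' else 0
                if active_rp0 == new_state:        # the switch is a no-op
                    redundant.append(idx)
                active_rp0 = new_state
        return redundant

    demo = [('BSF', 'STATUS,RP0'),                 # select bank 1
            ('MOVWF', 'TRISB'),
            ('BSF', 'STATUS,RP0'),                 # redundant: bank 1 is already active
            ('BCF', 'STATUS,RP0')]                 # back to bank 0
    print(find_redundant_bank_switches(demo))      # -> [2]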
Abstract:
A new fast stream cipher, MAJE4, is designed and developed with a variable key size of 128 or 256 bits. The randomness properties of the stream cipher are analysed using statistical tests. Its performance is evaluated in comparison with another fast stream cipher, JEROBOAM. The focus is on generating a long, unpredictable key stream with better performance, suitable for cryptographic applications.
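The abstract does not name the statistical tests applied, so the following is only an illustrative sketch of one standard randomness check, the frequency (monobit) test, run on a keystream sample; os.urandom stands in for the MAJE4 generator, whose internals are not described here.

    # Frequency (monobit) test in the style of NIST SP 800-22: for a random keystream
    # the p-value should comfortably exceed a threshold such as 0.01.
    import math
    import os

    def monobit_p_value(keystream: bytes) -> float:
        bits = ''.join(f'{byte:08b}' for byte in keystream)
        s = sum(1 if b == '1' else -1 for b in bits)      # +1/-1 balance of the bits
        s_obs = abs(s) / math.sqrt(len(bits))
        return math.erfc(s_obs / math.sqrt(2))

    sample = os.urandom(1 << 16)                          # 64 KiB stand-in keystream
    print(f"monobit p-value: {monobit_p_value(sample):.4f}")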
Abstract:
Speech is a primary medium for human beings to communicate through language. Automatic speech recognition is widespread today. Recognizing single digits is vital to a number of applications such as voice dialling of telephone numbers, automatic data entry, credit card entry, PIN (personal identification number) entry, and entry of access codes for transactions. In this paper we present a comparative study of SVM (Support Vector Machine) and HMM (Hidden Markov Model) approaches to recognizing and identifying the digits used in Malayalam speech.
Abstract:
In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed.
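For orientation, the maximum-likelihood reading referred to here is usually written roughly as follows; this is a standard reconstruction, with notation of our own choosing rather than quoted from the note:

\[
P(I \mid \Phi) = \int P(I \mid a, \Phi)\,P(a)\,da, \qquad
P(I \mid a, \Phi) \propto \exp\!\left(-\frac{\lVert I - \Phi a\rVert^2}{2\sigma_N^2}\right), \qquad
P(a) \propto \prod_i \exp\bigl(-\beta\,S(a_i)\bigr),
\]

so that maximizing the average log-likelihood over the basis \(\Phi\), with the coefficients \(a\) approximated by their posterior maximum, recovers an energy function of the original sparse-coding form

\[
E(a, \Phi) = \lVert I - \Phi a \rVert^2 + \lambda \sum_i S(a_i), \qquad \lambda = 2\sigma_N^2\beta .
\]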
Abstract:
This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. The study in this first part includes the computation of the distribution of each process, their means and variances, and their moment generating functions, among other properties. Using these properties, the second part studies option pricing models based on telegraph processes with jumps. This part describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model, and finally computes the prices of European call and put options.
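For orientation only (not reproduced from the thesis, which treats more general asymmetric, compensated-Poisson and jump variants): for the classical symmetric Goldstein-Kac telegraph process with speed \(c\), switching rate \(\lambda\) and equiprobable initial direction, the first two moments are

\[
\mathbb{E}\,X(t) = 0, \qquad
\operatorname{Var} X(t) = \frac{c^2}{\lambda}\left(t - \frac{1 - e^{-2\lambda t}}{2\lambda}\right),
\]

which interpolates between ballistic behaviour, \(\operatorname{Var}X(t) \approx c^2 t^2\) for small \(t\), and diffusive behaviour, \(\operatorname{Var}X(t) \approx c^2 t/\lambda\) for large \(t\).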
Abstract:
In this paper we introduce a financial market model based on continuous-time random motions with alternating constant velocities and with jumps occurring when the velocity switches. If the jump directions are in a certain correspondence with the velocity directions of the underlying random motion with respect to the interest rate, the model is free of arbitrage. The replicating strategies for options are constructed in detail. Closed-form formulas for the option prices are obtained.
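Below is a minimal simulation sketch of the kind of driving process described, with velocities alternating at exponential switching times and a jump added at each switch; the parameter values and the function name are illustrative assumptions, and this is not the paper's pricing or replication machinery.

    # One sample path of a jump-telegraph log-price: drift c[state] between switches,
    # exponential sojourn times with rate lam[state], and a jump h[state] at each switch.
    import random

    def jump_telegraph_path(T=1.0, x0=0.0, c=(0.5, -0.3), lam=(2.0, 1.5), h=(-0.05, 0.08)):
        t, x, state = 0.0, x0, 0
        times, values = [t], [x]
        while True:
            dt = random.expovariate(lam[state])        # time until the next velocity switch
            if t + dt >= T:                            # no further switch before maturity T
                times.append(T)
                values.append(x + c[state] * (T - t))
                return times, values
            t += dt
            x += c[state] * dt + h[state]              # drift over the sojourn plus the jump
            state = 1 - state                          # velocity (and jump size) switches
            times.append(t)
            values.append(x)

    ts, xs = jump_telegraph_path()
    print(f"terminal log-value {xs[-1]:.4f} after {len(ts) - 2} switches")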
Abstract:
Abstract taken from the publication. With financial support from the MIDE department of UNED.
Abstract:
Seven groups have participated in an intercomparison study of calculations of radiative forcing (RF) due to stratospheric water vapour (SWV) and contrails. A combination of detailed radiative transfer schemes and codes for global-scale calculations has been used, as well as a combination of idealized simulations and more realistic global-scale changes in stratospheric water vapour and contrails. Detailed line-by-line codes agree within about 15 % for longwave (LW) and shortwave (SW) RF, except in one case where the difference is 30 %. Since the LW and SW RF due to contrails and SWV changes are of opposite sign, the differences between the models seen in the individual LW and SW components can be either compensated or strengthened in the net RF, and thus in relative terms the uncertainties are much larger for the net RF. Some of the models used for global-scale simulations of changes in SWV and contrails differ substantially in RF from the more detailed radiative transfer schemes. For the global-scale calculations we use a method of weighting the results to calculate a best estimate, based on the models' performance relative to the more detailed radiative transfer schemes in the idealized simulations.
Abstract:
Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks the question whether a new generation of models is needed to tackle these problems.
Abstract:
Urban regeneration programmes in the UK over the past 20 years have increasingly focused on attracting investors, middle-class shoppers and visitors by transforming places and creating new consumption spaces. Ensuring that places are safe and are seen to be safe has taken on greater salience as these flows of income are easily disrupted by changing perceptions of fear and the threat of crime. At the same time, new technologies and policing strategies and tactics have been adopted in a number of regeneration areas which seek to establish control over these new urban spaces. Policing space is increasingly about controlling human actions through design, surveillance technologies and codes of conduct and enforcement. Regeneration agencies and the police now work in partnerships to develop their strategies. At its most extreme, this can lead to the creation of zero-tolerance, or what Smith terms 'revanchist', measures aimed at particular social groups in an effort to sanitise space in the interests of capital accumulation. This paper, drawing on an examination of regeneration practices and processes in one of the UK's fastest-growing urban areas, Reading in Berkshire, assesses policing strategies and tactics in the wake of a major regeneration programme. It documents and discusses the discourses of regeneration that have developed in the town and the ways in which new urban spaces have been secured. It argues that, whilst security concerns have become embedded in institutional discourses and practices, the implementation of security measures has been mediated, in part, by the local socio-political relations in and through which they have been developed.
Abstract:
Three experiments examined transfer across form (words/pictures) and modality (visual/auditory) in written word, auditory word, and pictorial implicit memory tests, as well as on a free recall task. Experiment 1 showed no significant transfer across form on any of the three implicit memory tests, and an asymmetric pattern of transfer across modality. In contrast, the free recall results revealed a very different picture. Experiment 2 further investigated the asymmetric modality effects obtained for the implicit memory measures by employing articulatory suppression and picture naming to control the generation of phonological codes. Finally, Experiment 3 examined the effects of overt word naming and covert picture labelling on transfer between study and test form. The results of the experiments are discussed in relation to Tulving and Schacter's (1990) Perceptual Representation Systems framework and Roediger's (1990) Transfer Appropriate Processing theory.
Abstract:
Aerosols and their precursors are emitted abundantly by transport activities. Transportation constitutes one of the fastest growing activities and its growth is predicted to increase significantly in the future. Previous studies have estimated the aerosol direct radiative forcing from one transport sub-sector, but only one study to our knowledge has estimated the range of radiative forcing from the main aerosol components (sulphate, black carbon (BC) and organic carbon) for the whole transportation sector. In this study, we compare results from two different chemical transport models and three radiation codes under different mixing hypotheses, internal and external, using emission inventories for the year 2000. The main results from this study consist of a positive direct radiative forcing for aerosols emitted by road traffic of +20±11 mW m⁻² for an externally mixed aerosol, and of +32±13 mW m⁻² when BC is internally mixed. These direct radiative forcings are much higher than the previously published estimate of +3±11 mW m⁻². For transport activities from shipping, the net direct aerosol radiative forcing is negative and dominated by the contribution of sulphate. For both an external and an internal mixture, the radiative forcing from shipping is estimated at −26±4 mW m⁻². This estimate agrees very well with the previously published range (−46 to −13 mW m⁻²) but has a much narrower uncertainty. By contrast, the direct aerosol forcing from aviation is estimated to be small, in the range −0.9 to +0.3 mW m⁻².
Abstract:
The evolutionary history of P. vulgaris is important to those working on its genetic resources, but is not reflected in its infraspecific taxonomy. Genetic isolation of wild populations between and also within Middle and South America has resulted in morphological and molecular differentiation. Populations from northern and southern ends of the range are assigned to different gene pools, though intermediates occur in intervening areas. Chloroplast haplotypes suggest three distinct lineages of wild beans and several intercontinental dispersals. The species was domesticated independently in both Middle and South America, probably several times in Middle America. This, together with further differentiation under human selection, has produced distinct races among domesticated beans. The informal categories of wild versus domesticated, gene pool, and race convey the evolutionary picture more clearly than the formal categories provided by the Codes of Nomenclature for wild or cultivated plants.