949 results for k-Error linear complexity
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
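The bit-to-spin mapping underlying this family of codes can be sketched in a few lines. The construction below is a hypothetical toy version for illustration only (the function names and the random check placement are not from the thesis): mapping bits {0,1} to Ising spins {+1,-1} via s = 1 - 2b turns the XOR parity of K bits into a product of K spins, which is what lets the decoding problem be written as a many-spin Ising system.

```python
import random

# Map bits {0,1} to Ising spins {+1,-1}; XOR of bits <-> product of spins.
def bits_to_spins(bits):
    return [1 - 2 * b for b in bits]

# Toy Sourlas-style construction (illustrative, not the thesis's code):
# each check is a product of K spins, with C checks per message bit on
# average, so the code rate is K/C.
def parity_checks(bits, K, C):
    N = len(bits)
    spins = bits_to_spins(bits)
    checks = []
    for _ in range(N * C // K):          # N*C/K checks keeps rate = K/C
        idx = random.sample(range(N), K)  # K randomly chosen bit positions
        prod = 1
        for i in idx:
            prod *= spins[i]
        checks.append((idx, prod))
    return checks

bits = [1, 0, 1, 1, 0, 1, 0, 0]
checks = parity_checks(bits, K=3, C=6)   # rate K/C = 1/2
```

Each recorded product equals 1 - 2·(XOR of the selected bits), so transmitting the spin products is equivalent to transmitting parity bits.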
Abstract:
We analyse Gallager codes by employing a simple mean-field approximation that distorts the model geometry while preserving important interactions between sites. The method naturally recovers the probability propagation decoding algorithm as a minimization of a proper free energy. We find a thermodynamic phase transition that coincides with information-theoretic upper bounds, and we explain the practical code performance in terms of the free-energy landscape.
Abstract:
This paper examines the relationship between multinationality and firm performance. The analysis is based on a sample of over 400 UK multinationals, and encompasses both service sector and manufacturing sector multinationals. This paper confirms the non-linear relationship between performance and multinationality that is reported elsewhere in the literature, but offers further analysis of this relationship. Specifically, by correcting for endogeneity in the investment decision, and for shocks in productivity across countries, the paper demonstrates that the returns to multinationality are greater than those that have been reported elsewhere, and persist to higher degrees of international diversification.
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of the inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived, and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models.
To accelerate exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first-order and second-order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point-and-click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross-validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure, manageable, fault-tolerant, open, distributed, agile, Total Quality Managed, ISO 9000+ conformant, Just-in-Time manufacturing systems.
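The k-fold cross-validated calibration mentioned above rests on standard fold splitting. A minimal, non-hierarchical sketch follows; the function name and the interleaved fold assignment are illustrative assumptions, not the thesis's implementation:

```python
# Split indices 0..n_samples-1 into k interleaved folds and return the
# k (train, test) index pairs used for cross-validated calibration.
def k_fold_indices(n_samples, k):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for test in folds:
        # train = every index not in the current test fold
        train = sorted(i for f in folds if f is not test for i in f)
        splits.append((train, sorted(test)))
    return splits

splits = k_fold_indices(10, 5)   # 5 splits, each holding out 2 samples
```

Each sample appears in exactly one test fold, so every calibration point is validated exactly once.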
Abstract:
The tribology of linear tape storage systems, including Linear Tape Open (LTO) and Travan5, was investigated by combining X-ray Photoelectron Spectroscopy (XPS), Auger Electron Spectroscopy (AES), Optical Microscopy and Atomic Force Microscopy (AFM) techniques. The purpose of this study was to understand the tribology mechanisms of linear tape systems so that projected recording densities may be achieved in future systems. Water vapour pressure, or Normalized Water Content (NWC), rather than the Relative Humidity (RH) values used almost universally in this field, determined the extent of pole tip recession (PTR) and stain (if produced) in linear heads. Approximately linear dependencies were found: for the same tape, saturated PTR increased with increasing normalized water content over the range studied. Fe stain (if produced) formed preferentially on the head surfaces at the lower water contents. The stain formation mechanism was identified: adhesive bond formation is a chemical process governed by temperature. Thus the higher the contact pressure, the higher the contact temperature at the head-tape interface, the higher the probability of adhesive bond formation, and the greater the amount of transferred material (stain). Water molecules at the interface saturate the surface bonds and make adhesive junctions less likely. Tape polymeric binder formulation also has a significant role in stain formation, with the latest generation of binders producing less transfer of material, almost certainly due to higher cohesive bonds within the body of the magnetic layer. TiC in the two-phase ceramic tape-bearing surface (AlTiC) was found to oxidise to form TiO2. The oxidation rate of TiC increased with increasing water content. The oxide was less dense than the underlying carbide; hence the interface between the TiO2 oxide and the TiC was stressed.
Removal of the oxide phase results in the formation of three-body abrasive particles that are swept across the tape head, giving rise to three-body abrasive wear, particularly in the pole regions, and hence to PTR and subsequent signal loss and error growth. The lower contact pressure of the LTO system compared with the Travan5 system ensures that fewer and smaller three-body abrasive particles are swept across the poles and insulator regions. Hence lower contact pressure, as well as reducing stain, at the same time significantly reduces PTR in the LTO system.
Abstract:
The matched filter detector is well known as the optimum detector for signals corrupted by Additive White Gaussian Noise (A.W.G.N.), in communication as well as in radar systems. Non-coherent F.S.K. and differentially coherent P.S.K. (D.P.S.K.) detection schemes, which employ a new approach to realizing the matched filter processor, are investigated. The new approach utilizes pulse compression techniques, well known in radar systems, to facilitate the implementation of the matched filter in the form of the Pulse Compressor Matched Filter (P.C.M.F.). Both detection schemes feature a mixer-P.C.M.F. compound as their predetector processor. The compound is utilized to convert F.S.K. modulation into pulse position modulation, and P.S.K. modulation into pulse polarity modulation. The mechanisms of both detection schemes are studied by examining the properties of the autocorrelation function (A.C.F.) at the output of the P.C.M.F. The effects produced by time delay and carrier interference on the output A.C.F. are determined. Work related to the F.S.K. detection scheme is mostly confined to verifying its validity, whereas the D.P.S.K. detection scheme has not been reported before. Consequently, an experimental system was constructed, which utilized combined hardware and software and operated under the supervision of a microprocessor system. The experimental system was used to develop error-rate models for both detection schemes under investigation. Performances of both F.S.K. and D.P.S.K. detection schemes were established in the presence of A.W.G.N., practical imperfections, time delay, and carrier interference. The results highlight the candidacy of both detection schemes for use in the field of digital data communication and, in particular, the D.P.S.K. detection scheme, which performed very close to optimum in a background of A.W.G.N.
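The pulse-compression idea behind the P.C.M.F. can be seen in a toy matched-filter computation. The Barker-13 code used below is a standard radar example chosen purely for illustration; the thesis's actual P.C.M.F. waveforms are not reproduced here:

```python
# Matched filtering = correlation with the time-reversed code.
# Barker-13 compresses to a mainlobe of height 13 with sidelobes of
# magnitude at most 1 -- the sharp A.C.F. the abstract examines.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def matched_filter_output(signal, code):
    n, m = len(signal), len(code)
    out = []
    for lag in range(-(m - 1), n):      # all overlapping shifts
        acc = 0
        for k in range(m):
            i = lag + k
            if 0 <= i < n:
                acc += signal[i] * code[k]
        out.append(acc)
    return out

acf = matched_filter_output(barker13, barker13)  # length 2*13 - 1 = 25
```

The peak-to-sidelobe ratio of 13:1 is what makes the compressed pulse easy to detect in noise, and a time-delayed input simply shifts the position of the mainlobe.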
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
Very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
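The RGA pairing criterion used above can be illustrated with a small numeric example. The 2x2 gain matrix below is hypothetical (it is not the column's identified model): for a 2x2 process the RGA is Lambda = G (elementwise product) transpose(inverse(G)), and loops are paired on relative gains close to 1.

```python
# Relative gain array for a 2x2 steady-state gain matrix G.
def rga_2x2(G):
    (a, b), (c, d) = G
    det = a * d - b * c
    invT = [[d / det, -c / det],   # transpose of the inverse of G
            [-b / det, a / det]]
    return [[G[i][j] * invT[i][j] for j in range(2)] for i in range(2)]

G = [[2.0, 0.5],   # hypothetical gains: u1->y1, u2->y1
     [0.4, 1.5]]   #                     u1->y2, u2->y2
rga = rga_2x2(G)   # diagonal entries near 1 => pair u1-y1 and u2-y2
```

Here the diagonal relative gains come out near 1 (about 1.07), indicating weak interaction for the diagonal pairing, which is the same kind of conclusion the abstract draws for its rotor speed-raffinate and solvent flowrate-extract loops.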
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spins interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most of the cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free-energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free-energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise parameters of the high performance codes proposed by Kanter and Saad.
Abstract:
The purpose of this study is to develop econometric models to better understand the economic factors affecting inbound tourist flows from each of six origin countries that contribute to Hong Kong's international tourism demand. To this end, we test alternative cointegration and error correction approaches to examine the economic determinants of tourist flows to Hong Kong, and to produce accurate econometric forecasts of inbound tourism demand. Our empirical findings show that permanent income is the most significant determinant of tourism demand in all models. The variables of own price, weighted substitute prices, trade volume, the share price index (as an indicator of changes in wealth in origin countries), and a dummy variable representing the Beijing incident (1989) are also found to be important determinants for some origin countries. The average long-run income and own-price elasticities were measured at 2.66 and -1.02, respectively. It was hypothesised that permanent income is a better explanatory variable of long-haul tourism demand than current income. A novel approach (a grid search process) has been used to empirically derive the weights to be attached to the lagged income variable for estimating permanent income. The results indicate that permanent income, estimated with empirically determined, relatively small weighting factors, was capable of producing better results than the current income variable in explaining long-haul tourism demand. This finding suggests that the use of current income in previous empirical tourism demand studies may have produced inaccurate results. The share price index, as a measure of wealth, was also found to be significant in two models. Studies of tourism demand rarely include wealth as an explanatory variable in forecasting long-haul tourism demand. However, finding a satisfactory proxy for wealth common to different countries is problematic.
This study indicates that ECMs (Error Correction Models) based on the Engle-Granger (1987) approach produce more accurate forecasts than ECMs based on the Pesaran and Shin (1998) and Johansen (1988, 1991, 1995) approaches for all of the long-haul markets and Japan. Overall, the ECMs produce better forecasts than the OLS, ARIMA and NAÏVE models, indicating the superiority of the cointegration approach for tourism demand forecasting. The results show that permanent income is the most important explanatory variable for tourism demand from all countries, but there are substantial variations between countries, with the long-run elasticity ranging between 1.1 for the U.S. and 5.3 for the U.K. Price is the next most important variable, with long-run elasticities ranging between -0.8 for Japan and -1.3 for Germany, and short-run elasticities ranging between -0.14 for Germany and -0.7 for Taiwan. The fastest growing market is Mainland China. The findings have implications for policies and strategies on investment, marketing promotion and pricing.
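The Engle-Granger two-step procedure behind these ECMs can be sketched on synthetic data. The simulation and the pure-Python least-squares routine below are illustrative assumptions only; the study's actual demand models, variables and data are not reproduced. Step 1 estimates the long-run (cointegrating) relation; step 2 regresses the differenced series on the lagged step-1 residual, whose coefficient is the error-correction (adjustment) term:

```python
import random

def ols(y, X):
    """Least squares for a small design matrix X (list of rows) via the
    normal equations, solved by Gaussian elimination with pivoting."""
    n, k = len(y), len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(42)
# Simulate a cointegrated pair: x is a random walk, y tracks 2*x.
x, y = [0.0], [0.0]
for _ in range(400):
    x.append(x[-1] + random.gauss(0, 1))
    y.append(2.0 * x[-1] + random.gauss(0, 0.5))

# Step 1: long-run regression y_t = a + b*x_t + u_t.
a, bhat = ols(y, [[1.0, xt] for xt in x])
u = [y[t] - a - bhat * x[t] for t in range(len(y))]

# Step 2: short-run dynamics with the lagged residual u_{t-1} as the
# error-correction term (its coefficient alpha should be negative).
dy = [y[t] - y[t - 1] for t in range(1, len(y))]
dx = [x[t] - x[t - 1] for t in range(1, len(x))]
const, gamma, alpha = ols(dy, [[1.0, dx[t], u[t]] for t in range(len(dy))])
```

A significantly negative alpha is the evidence of adjustment back toward the long-run relation; in forecasting applications such as the study's, the fitted ECM is then iterated forward.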
Abstract:
The main theme of this project is the study of neural networks for the control of uncertain and non-linear systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A large part of this project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. to design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. to design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. to develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models of many non-linear systems for use in adaptive control. Properties of static neural networks that enabled the successful design of stable adaptive control in the state feedback case are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
Abstract:
The thesis is concerned with the electron optical properties of single-polepiece magnetic electron lenses, especially under conditions of extreme polepiece saturation. The electron optical properties are first analysed under conditions of high polepiece permeability. From this analysis, a general idea can be obtained of the important parameters that affect ultimate lens performance. In addition, useful information is obtained concerning the design of improved lenses operating under conditions of extreme polepiece saturation, for example at flux densities of the order of 10 Tesla. It is shown that in a single-polepiece lens, the position and shape of the lens exciting coil play an important role. In particular, the maximum permissible current density in the windings, rather than the properties of the iron, can set a limit to lens performance. This factor was therefore investigated in some detail. The axial field distribution of a single-polepiece lens, unlike that of a conventional lens, is highly asymmetrical. There are therefore two possible physical arrangements of the lens with respect to the incoming electron beam. In general these two orientations will result in different aberration coefficients. This feature has also been investigated in some detail. Single-polepiece lenses are thus considerably more complicated electron-optically than conventional double-polepiece lenses. In particular, the absence of the usual second polepiece causes most of the axial magnetic flux density distribution to lie outside the body of the lens. This can have many advantages in electron microscopy, but it creates problems in calculating the magnetic field distribution. In particular, presently available computer programs are liable to be considerably in error when applied to such structures. It was therefore necessary to find independent ways of checking the field calculations.
Furthermore, if the polepiece is allowed to saturate, much more calculation is involved, since the field distribution becomes a non-linear function of the lens excitation. In searching for optimum lens designs, care was therefore taken to ensure that the coil was placed in the optimum position. If this condition is satisfied, there seems to be no theoretical limit to the maximum flux density that can be attained at the polepiece tip. However, under iron saturation conditions, some broadening of the axial field distribution will take place, thereby changing the lens aberrations. Extensive calculations were therefore made to find the minimum spherical and chromatic aberration coefficients. The focal properties of such lens designs are presented and compared with the best conventional double-polepiece lenses presently available.
Abstract:
Medication errors are associated with significant morbidity, and people with mental health problems may be particularly susceptible to medication errors due to various factors. Primary care has a key role in improving medication safety in this vulnerable population. The complexity of services, involving primary and secondary care and social services, and potential training issues may increase error rates, with physical medicines representing a particular risk. Service users may be cognitively impaired and fail to identify an error, placing additional responsibilities on clinicians. The potential role of carers in error prevention and medication safety requires further elaboration. A potential lack of trust between service users and clinicians may impair honest communication about medication issues, leading to errors. There is a need for detailed research within this field.
Abstract:
Three novel solar thermal collector concepts derived from the Linear Fresnel Reflector (LFR) are developed and evaluated through a multi-criteria decision-making methodology comprising the following techniques: Quality Function Deployment (QFD), the Analytical Hierarchy Process (AHP) and the Pugh selection matrix. Criteria are specified by technical and customer requirements gathered from Gujarat, India. The concepts are compared to a standard LFR for reference, and as a result a novel 'Elevation Linear Fresnel Reflector' (ELFR) concept using elevating mirrors is selected. A detailed version of this concept is proposed and compared against two standard LFR configurations, one using constant and the other using variable horizontal mirror spacing. Annual performance is analysed for a typical meteorological year. Financial assessment is made through the construction of a prototype. The novel LFR has an annual optical efficiency of 49% and increases exergy by 13-23%. Operational hours above a target temperature of 300 °C are increased by 9-24%. A 17% reduction in land usage is also achievable. However, the ELFR suffers from additional complexity and a 16-28% increase in capital cost. It is concluded that this novel design is particularly promising for industrial applications and for locations with restricted land availability or high land costs. The decision analysis methodology adopted is considered to have wider potential for applications in the fields of renewable energy and sustainable design. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
Few-mode fiber transmission systems are typically impaired by mode-dependent loss (MDL). In an MDL-impaired link, maximum-likelihood (ML) detection yields a significant advantage in system performance compared to linear equalizers, such as zero-forcing and minimum mean-square error equalizers. However, the computational effort of ML detection increases exponentially with the number of modes and the cardinality of the constellation. We present two methods that allow for near-ML performance without being afflicted with the enormous computational complexity of ML detection: improved reduced-search ML detection and sphere decoding. Both algorithms are tested regarding their performance and computational complexity in simulations of three and six spatial modes with QPSK and 16QAM constellations.
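The complexity gap the abstract describes can be reproduced in miniature. The sketch below uses a toy 2-mode channel with hypothetical MDL-style gains (not the paper's simulation setup) to contrast exhaustive ML detection with zero-forcing; reduced-search ML and sphere decoding prune exactly this kind of candidate search:

```python
from itertools import product

# Unnormalized QPSK constellation (illustrative).
QPSK = [complex(i, q) for i in (1, -1) for q in (1, -1)]

def ml_detect(y, H):
    """Exhaustive ML: minimize ||y - H s||^2 over all symbol vectors.
    The search size is |QPSK|**modes -- the cost the paper attacks."""
    best, best_cost = None, float("inf")
    for s in product(QPSK, repeat=len(y)):
        r = [y[i] - sum(H[i][j] * s[j] for j in range(len(s)))
             for i in range(len(y))]
        cost = sum(abs(v) ** 2 for v in r)
        if cost < best_cost:
            best, best_cost = s, cost
    return list(best)

def zf_detect(y, H):
    """Zero-forcing for a 2x2 channel: invert H, then slice each entry
    to the nearest constellation point (noise enhancement under MDL
    is what makes this weaker than ML)."""
    (a, b), (c, d) = H
    det = a * d - b * c
    x = [(d * y[0] - b * y[1]) / det, (a * y[1] - c * y[0]) / det]
    return [min(QPSK, key=lambda q: abs(v - q)) for v in x]
```

In the noise-free case both detectors recover the transmitted vector; the difference in error rate, and the exponential growth of the ML search with mode count, only appear once noise and more modes are added.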