958 results for Check-In
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that introduce redundancy. Low-density parity-check codes work along the same principles as the Hamming code, but their parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correcting codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
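To make the isomorphism mentioned above explicit (a standard identity, added here only as an illustration): mapping a bit x ∈ {0,1} under addition modulo 2 to a spin σ = (−1)^x ∈ {+1,−1} under multiplication gives

\sigma_1 \sigma_2 = (-1)^{x_1} (-1)^{x_2} = (-1)^{x_1 \oplus x_2},

so a parity check on a set of bits becomes a product of the corresponding spins, which is what allows a parity-check code to be read as a spin system.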
Abstract:
We review recent theoretical progress on the statistical mechanics of error correcting codes, focusing on low-density parity-check (LDPC) codes in general, and on Gallager and MacKay-Neal codes in particular. By exploiting the relation between LDPC codes and Ising spin systems with multispin interactions, one can carry out a statistical mechanics based analysis that determines the practical and theoretical limitations of various code constructions, corresponding to dynamical and thermodynamical transitions, respectively, as well as the behaviour of error-exponents averaged over the corresponding code ensemble as a function of channel noise. We also contrast the results obtained using methods of statistical mechanics with those derived in the information theory literature, and show how these methods can be generalized to include other channel types and related communication problems.
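As a schematic sketch of the mapping exploited here (the generic form used in this literature; the notation is assumed rather than quoted from the abstract): each parity check μ acting on a set of bits L(μ) contributes a multispin coupling, so decoding is governed by a diluted Ising Hamiltonian of the form

H(\sigma) = -\sum_{\mu} J_{\mu} \prod_{i \in \mathcal{L}(\mu)} \sigma_i ,

where the couplings J_μ carry the received, noise-corrupted information and the sparseness of the parity-check matrix keeps each product short; the dynamical and thermodynamical transitions of this system correspond to the practical and theoretical decoding limits discussed above.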
Abstract:
We obtain phase diagrams of regular and irregular finite-connectivity spin glasses. Contact is first established between properties of the phase diagram and the performance of low-density parity-check (LDPC) codes within the replica symmetric (RS) ansatz. We then study the location of the dynamical and critical transition points of these systems within the one-step replica symmetry breaking (RSB) theory, extending similar calculations that have been performed in the past for the Bethe spin-glass problem. We observe that the location of the dynamical transition line does change within the RSB theory, in comparison with the results obtained in the RS case. For LDPC decoding of messages transmitted over the binary erasure channel we find, at zero temperature and rate R = 1/4, an RS critical transition point at p_c ≈ 0.67, while the critical RSB transition point is located at p_c = 0.7450 ± 0.0050, to be compared with the corresponding Shannon bound 1 − R. For the binary symmetric channel we show that the low-temperature reentrant behaviour of the dynamical transition line, observed within the RS ansatz, changes its location when the RSB ansatz is employed; the dynamical transition point occurs at higher values of the channel noise. Possible practical implications for improving the performance of state-of-the-art error correcting codes are discussed. © 2006 The American Physical Society.
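For context, the Shannon bound quoted here follows from simple arithmetic: the capacity of a binary erasure channel with erasure probability p is C = 1 − p, so a rate-R code can at best tolerate p < 1 − R; at R = 1/4 this gives 1 − R = 0.75, which the RSB estimate p_c = 0.7450 ± 0.0050 approaches from below, while the RS value p_c ≈ 0.67 falls well short of it.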
Abstract:
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
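As a rough illustration of the time-sharing baseline referred to above (a generic textbook construction, not a detail of this paper): if single-user codes achieve rates C_1 and C_2 on the two constituent channels, then devoting a fraction λ of the channel uses to one user and the remainder to the other achieves the rate pair

(R_1, R_2) = (\lambda C_1, (1-\lambda) C_2), \qquad 0 \le \lambda \le 1,

and it is against this straight-line trade-off that the jointly decoded LDPC scheme is compared.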
Abstract:
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases. © 2007 IOP Publishing Ltd.
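For orientation, a generic MIMO Gaussian channel model of the kind analysed here can be written (with notation assumed for this sketch rather than taken from the paper) as

y = H x + n, \qquad x \in \{-1,+1\}^{N},

where H is the M × N matrix of channel gains between the N transmitters and M receivers and n is additive Gaussian noise; the single-transmitter and single-receiver cases correspond to N = 1 and M = 1, and symmetric versus asymmetric interference corresponds to different structures of the off-diagonal entries of H.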
Abstract:
The typical behavior of the relay-without-delay channel under low-density parity-check coding and its multiple-unit generalization, termed the relay array, is studied using methods of statistical mechanics. A demodulate-and-forward strategy is analytically solved using the replica symmetric ansatz, which is exact in the system studied at Nishimori's temperature. In particular, the typical level of improvement in communication performance by relaying messages is shown in the case of a small and a large number of relay units. © 2007 The American Physical Society.
Influence of check and field size on the visual evoked magnetic response to a pattern shift stimulus
Abstract:
A decrease in the check size of a pattern shift stimulus increases the latency and amplitude of the visual evoked potential (VEP) P100. In addition, for a given check size, decreasing the size of the stimulus field increases the latency and amplitude of the P100. These results imply that the central regions of the retina make a significant contribution to the generation of the electrical P100. However, the corresponding magnetic P100m may have a different origin. We have studied the effects of check and field size on the P100m in five normal subjects using a DC-SQUID second-order gradiometer. Magnetic responses were recorded at the positive maximum of the P100m over the occipital scalp to six check sizes (10-100') presented in a large (13 degrees 34') and a small (5 degrees 14') field, and to a large check (100') presented in seven field sizes (1 degree 45' - 15 degrees 10'). No responses were recorded to any check size with the small field. Decreasing the check size presented in a large field increased the latency of the P100m by approx. 30 ms, while the amplitude of the response decreased, with the largest reduction occurring between 70' and 12' checks. Using a large check, latency increased and amplitude decreased as the field size was reduced. The latency changes in response to check and field size were similar to those described for the VEP, although the magnitudes of the magnetic changes were greater. Unlike the VEP, amplitude responses were maximal when large checks were presented in a large stimulus field. This suggests that regions outside the central retina make a more significant contribution to the visual evoked magnetic response than they do to the VEP, and that the P100m may be useful clinically in the study of diseases that affect the more peripheral regions of the retina.
Abstract:
1. Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test. A goodness-of-fit test of the line should always be carried out: r squared estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test whether the slope of the line is significantly different from zero. 2. Always check whether the data collected fit the assumptions for regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary. 3. If the regression line is to be used for prediction, it is important to determine whether the prediction involves an individual y value or a mean. Care should be taken if predictions are made close to the extremities of the data; they are subject to considerable error if x falls beyond the range of the data. Multiple predictions require correction of the P values. 4. If several individual regression lines have been calculated from a number of similar sets of data, consider whether they should be combined to form a single regression line. 5. If the data exhibit a degree of curvature, then fitting a higher-order polynomial curve may provide a better fit than a straight line. In this case, a test of whether the data depart significantly from a linear regression should be carried out.
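The checklist above lends itself to a short worked example; the following is a minimal sketch in Python using NumPy and statsmodels on made-up data (the variable names and numbers are illustrative assumptions, not taken from the abstract):

import numpy as np
import statsmodels.api as sm

# Illustrative data: Y measured over a range of X values (made up for this sketch)
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 30)
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=x.size)

# 1. Fit the linear regression Y = b0 + b1*X and test goodness of fit
X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
print("r squared:", fit.rsquared)               # strength of the relationship
print("ANOVA F-test p-value:", fit.f_pvalue)    # is a statistically significant line present?
print("slope t-test p-value:", fit.pvalues[1])  # is the slope different from zero?

# 3. Prediction: interval for a mean response vs. a wider one for an individual y value
new_x = sm.add_constant(np.array([5.0, 9.5]), has_constant="add")
pred = fit.get_prediction(new_x).summary_frame(alpha=0.05)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",    # confidence interval for the mean
            "obs_ci_lower", "obs_ci_upper"]])            # prediction interval for an individual value

# 5. Curvature check: does a quadratic term improve on the straight line?
X2 = sm.add_constant(np.column_stack([x, x ** 2]))
fit2 = sm.OLS(y, X2).fit()
print("quadratic term p-value:", fit2.pvalues[2])

The confidence interval for the mean response is narrower than the prediction interval for an individual value, and both widen towards the extremities of the data, in line with the caution given in point 3.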
Abstract:
The absence of a definitive approach to the design of manufacturing systems signifies the importance of a control mechanism to ensure the timely application of relevant design techniques. To provide effective control, design development needs to be continually assessed in relation to the required system performance, which can only be achieved analytically through computer simulation: the technique provides the only method of accurately replicating the highly complex and dynamic interrelationships inherent within manufacturing facilities and realistically predicting system behaviour. Owing to these unique capabilities, the application of computer simulation should support and encourage a thorough investigation of all alternative designs, allowing attention to focus specifically on critical design areas and enabling continuous assessment of system evolution. To achieve this, system analysis needs to be efficient in terms of data requirements and both speed and accuracy of evaluation. To provide an effective control mechanism, a hierarchical or multi-level modelling procedure has therefore been developed, specifying the appropriate degree of evaluation support necessary at each phase of design. An underlying assumption of the proposal is that evaluation is quick, easy and allows models to expand in line with design developments. However, current approaches to computer simulation are totally inappropriate for supporting this hierarchical evaluation. Implementation of computer simulation through traditional approaches is typically characterized by a requirement for very specialist expertise, a lengthy model development phase and a correspondingly high expenditure, resulting in very little, and rather inappropriate, use of the technique. Simulation, when used, is generally only applied to check or verify a final design proposal; rarely is the full potential of computer simulation utilized to aid, support or complement the manufacturing system design procedure. To implement the proposed modelling procedure, therefore, the concept of a generic simulator was adopted, as such systems require no specialist expertise, instead facilitating quick and easy model creation, execution and modification through simple data inputs. Previously, generic simulators have tended to be too restricted, lacking the necessary flexibility to be generally applicable to manufacturing systems. Development of the ATOMS manufacturing simulator, however, has proven that such systems can be relevant to a wide range of applications, besides verifying the benefits of multi-level modelling.
Abstract:
This study is a consumer survey conducted with former Marriage Guidance Council clients. The objectives were to identify and examine why they chose the agency, what their expectations and experiences were of marital counselling, and whether anything was achieved. The material was derived from tape-recorded interviews with 51 former M.G. clients (17 men and 34 women) from 42 marriages and with 21 counsellors; data from written material and a card-sort completed by the research sample; and the case record sheets of the research population (174 cases). The results from the written data of clients showed that 49% were satisfied with counselling, 25.5% were satisfied in some ways but not in others, and 25.5% were dissatisfied. Forty-six percent rated that they had benefited from counselling, either a great deal or to some degree, 4% were neutral and 50% recorded that they had not benefited. However, the counsellors' assessments were more optimistic. It was also ascertained that 50% of the research sample eventually separated or divorced subsequent to counselling. A cross-check revealed that the majority who rated they were satisfied with counselling were those who remained married, whilst dissatisfied clients were the ones who unwillingly separated or divorced. The study then describes, discusses and assesses the experiences of clients in the light of these findings on a number of dimensions. From this it was possible to construct a summary profile of a "successful" client, describing the features which would contribute to "success". Two key themes emerged from the data: (1) the discrepancy between clients' expectations and the counselling offered, which included mismatch over the aims and methods of counselling, and problem definition; and (2) the importance of the client/counsellor relationship. The various implications for the agency are then discussed, which include recommendations on policy, the training of counsellors and further research.
Abstract:
Several axi-symmetric EN3B steel components differing in shape and size were forged on a 100 ton joint knuckle press. A load cell fitted under the lower die inserts recorded the total deformation forces. Job parameters were measured off the billets and the forged parts. Slug temperatures were varied and two lubricants - aqueous colloidal graphite and oil - were used. An industrial study was also conducted to check the results of the laboratory experiments. Loads were measured (with calibrated extensometers attached to the press frames) when adequately heated mild steel slugs were being forged in finishing dies. Geometric parameters relating to the jobs and the dies were obtained from works drawings. All the variables considered in the laboratory study could not, however, be investigated without disrupting production. In spite of this obvious limitation, the study confirmed that parting area is the most significant geometric factor influencing the forging load. Multiple regression analyses of the laboratory and industrial results showed that die loads increase significantly with the weights and parting areas of press forged components, and with the width to thickness ratios of the flashes formed, but diminish with increasing slug temperatures and higher billet diameter to height ratios. The analyses also showed that more complicated parts require greater loads to forge them. Die stresses, due to applied axial loads, were investigated by the photoelastic method. The three-dimensional frozen stress technique was employed. Model dies were machined from cast araldite cylinders, and the slug material was simulated with plasticine. Test samples were cut from the centres of the dies after the stress freezing. Examination of the samples, and subsequent calculations, showed that the highest stresses were developed in die outer corners. This observation partly explains why corner cracking occurs frequently in industrial forging dies. Investigation of die contact during the forging operation revealed the development of very high stresses.
Abstract:
The Visually Evoked Subcortical Potential, a far-field signal, was originally defined, in response to flash stimulation, as a triphasic positive-negative-positive complex with mean latencies of P21 N26.2 P33.6 (Harding and Rubinstein 1980). Inconsistent with its subcortical source, however, the signal was found to be tightly localised to the mastoid. This thesis re-examines the earlier protocols using flash stimulation and, with auditory masking, establishes by topographic studies that the VESP has a widespread scalp distribution, consistent with a far-field source of the signal, and is not a volume-conducted electroretinogram (ERG). Furthermore, mastoid localisation indicates auditory contamination from the click on discharge of the photostimulator. The use of flash stimulation could not precisely identify the origin of the response. Possible sources of the VESP are the lateral geniculate body (LGB) and the superior colliculus. The LGB receives 80% of the nerve fibres from the retina, and responds to high-contrast achromatic stimulation in the form of drifting gratings of high spatial frequencies. At low spatial frequencies, it is more sensitive to colour. The superior colliculus is insensitive to colour and suppressed by contrast, responds to transitory rapid movements, and receives about 20% of the optic nerve fibres. A pattern VESP was obtained to black and white checks as a P23.5 N29.2 P34 complex in 93% of normal subjects at an optimal check size of 12'. It was also present as a P23.0 N28.29 P32.23 complex to red and green luminance-balanced checks at a 2° check size in 73% of subjects. These results were not volume-conducted pattern electroretinogram responses. These findings are consistent with the spatial frequency properties of the lateral geniculate body, which is considered the source of the signal. With further work, the VESP may supplement electrodiagnosis of post-chiasmal lesions.
Abstract:
Purpose – In the UK, while fashion apparel purchasing is available to the majority of consumers, the main supermarkets seem – rather against the odds and market conventions – to have created a new, socially acceptable and legitimate apparel market offer for young children. This study aims to explore parental purchasing decisions on apparel for young children (below ten years old), focusing on supermarket diversification into apparel and consumer resistance against other traditional brands. Design/methodology/approach – Data collection adopted a qualitative research mode, using semi-structured interviews in two locations (Cornwall and Glasgow), each with a Tesco and an ASDA located outside town. A total of 59 parents participated in the study. Interviews took place in the stores, with parents seen buying children's fashion apparel. Findings – The findings suggest that decisions are based not only on functionality (e.g. convenience, value for money, refund policy), but also on intuitive factors (e.g. style, image, quality) as well as broader processes of consumption from parental boundary setting (e.g. curbing premature adultness). Positive consumer resistance is leading to a re-drawing of the cultural boundaries of fashion. In some cases, concerns are expressed regarding items that seem too adult-like or otherwise not as children's apparel should be. Practical implications – The paper highlights the increasing importance of browsing as a modern choice practice (e.g. planned impulse buying, sanctuary of social activity). Particular attention is given to explaining why consumers positively resist buying from traditional label providers and voluntarily choose supermarket clothing ranges without any concerns over their children wearing such garments. Originality/value – The paper shows that supermarket shopping for children's apparel is now firmly part of UK consumption habits and choice. The findings provide theoretical insights into the significance of challenging market conventions, parental cultural boundary setting and positive resistance behaviour.
Abstract:
Guest editorial
Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including Journal of Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Management Sector. He is on the editorial board of several international journals and co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston University Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in various international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector
This special issue focuses on holistic, applied research on performance measurement in energy sector management and publishes relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals that, for the sample studied, the non-lignite-fired stations are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and that about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm's private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy. The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process through an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered and classified into four objectives: climate change mitigation, energy effectiveness, socioeconomic, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisition, holding structures, and joint ventures, on a firm's efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds that there were no significant changes in a firm's efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Moreover, the author discovers that parent companies appear not to influence a subsidiary's efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
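For readers less familiar with the DEA models used throughout the issue, the standard input-oriented CCR formulation (a textbook form, not the specification of any particular paper above) evaluates a unit o with inputs x_{io} and outputs y_{ro} by solving

\min \theta \quad \text{s.t.} \quad \sum_j \lambda_j x_{ij} \le \theta \, x_{io} \ \forall i, \qquad \sum_j \lambda_j y_{rj} \ge y_{ro} \ \forall r, \qquad \lambda_j \ge 0,

so θ* = 1 places the unit on the efficient frontier, while θ* < 1 measures how far its inputs could be proportionally contracted; the papers above instantiate the inputs and outputs with quantities such as fuel, capital, labour, electricity generated, and emissions.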