34 results for Type of error
in Aston University Research Archive
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
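As a purely illustrative reading of the construction above, the sketch below assumes the standard Sourlas-type encoding: message bits are ±1 spins, each codeword bit is the product of K randomly selected message bits, and each message bit enters roughly C such products, giving code rate K/C. The names and parameter values are hypothetical and not taken from the paper.

```python
import numpy as np

def sourlas_encode(message, K, C, rng):
    """Each codeword bit is the product of K randomly chosen +/-1 message bits.
    With M = N*C/K codeword bits, each message bit appears in ~C products
    on average, so the code rate is N/M = K/C."""
    N = len(message)
    M = N * C // K
    checks = np.array([rng.choice(N, size=K, replace=False) for _ in range(M)])
    codeword = np.prod(message[checks], axis=1)
    return codeword, checks

rng = np.random.default_rng(0)
N, K, C = 12, 3, 6                        # toy sizes, for illustration only
message = rng.choice([-1, 1], size=N)
codeword, checks = sourlas_encode(message, K, C, rng)
print(len(codeword), "codeword bits; rate =", N / len(codeword))   # rate = K/C = 0.5
```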
Abstract:
The performance of "typical set (pairs) decoding" for ensembles of Gallager's linear code is investigated using statistical physics. In this decoding method, errors occur either when the information transmission is corrupted by atypical noise or when multiple typical sequences satisfy the parity-check equation provided by the received corrupted codeword. We show that the average error rate for the second type of error over a given code ensemble can be accurately evaluated using the replica method, including its sensitivity to message length. Our approach generally improves on the existing analysis known in the information theory community, which was recently reintroduced in IEEE Trans. Inf. Theory 45, 399 (1999), and is believed to be the most accurate to date. © 2002 The American Physical Society.
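To make the "second type of error" concrete, here is a toy enumeration (a sketch under stated assumptions, not the paper's method): for a tiny parity-check matrix it counts how many noise candidates of typical weight share the syndrome of the received word; more than one candidate would mean typical-set decoding cannot decide.

```python
import itertools
import numpy as np

def count_typical_candidates(H, received, w_typ):
    """Count noise vectors of weight <= w_typ whose syndrome H n^T (mod 2)
    matches that of the received word; exhaustive search, toy sizes only."""
    n = H.shape[1]
    syndrome = H @ received % 2
    count = 0
    for w in range(w_typ + 1):
        for support in itertools.combinations(range(n), w):
            noise = np.zeros(n, dtype=int)
            noise[list(support)] = 1
            if np.array_equal(H @ noise % 2, syndrome):
                count += 1
    return count

# toy parity-check matrix (not sparse, purely illustrative)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
received = np.array([1, 0, 0, 0, 0, 0])     # all-zero codeword plus one flipped bit
print(count_typical_candidates(H, received, w_typ=1))   # 1 -> decoding is unambiguous here
```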
Abstract:
Three experiments investigated the effect of consensus information on majority and minority influence. Experiment 1 examined the effect of consensus expressed by descriptive adjectives (large vs. small) on social influence. A large source resulted in more influence than a small source, irrespective of source status (majority vs. minority). Experiment 2 showed that large sources affected attitudes heuristically, whereas only a small minority instigated systematic processing of the message. Experiment 3 manipulated the type of consensus information, either in terms of descriptive adjectives (large, small) or percentages (82%, 18%, 52%, 48%). When consensus was expressed in terms of descriptive adjectives, the findings of Experiments 1 and 2 were replicated (large sources were more influential than small sources), but when consensus was expressed in terms of percentages, the majority was more influential than the minority, irrespective of group consensus.
Abstract:
Problem: The vast majority of research examining the interplay between aggressive emotions, beliefs, behaviors, cognitions, and situational contingencies in competitive athletes has focused on Western populations and only select sports (e.g., ice hockey). Research involving Eastern, particularly Chinese, athletes is surprisingly sparse given the sheer size of these populations. Thus, this study examines the aggressive emotions, beliefs, behaviors, and cognitions of competitive Chinese athletes. Method: Several measures related to aggression were distributed to a large sample (N = 471) of male athletes, representing four sports (basketball, rugby union, association football/soccer, and squash). Results: Higher levels of anger and aggression tended to be associated with higher levels of play for rugby and low levels of play for contact (e.g., football, basketball) and individual sports (e.g., squash). Conclusions: The results suggest that the experience of angry emotions and aggressive behaviors among Chinese athletes is similar to that of Western populations, but that sport psychology practitioners should be aware of some potentially important differences, such as the general tendency of Chinese athletes to disapprove of aggressive behavior.
Abstract:
Dementia with Lewy bodies (DLB) (also known as Lewy body dementia or diffuse Lewy body disease) is now recognised as the second most common type of dementia after Alzheimer's disease and may account for up to a quarter of all cases in elderly people. This article describes the general symptoms of DLB and the visual symptoms that have been reported in the disorder.
Abstract:
In this thesis we use statistical physics techniques to study the typical performance of four families of error-correcting codes based on very sparse linear transformations: Sourlas codes, Gallager codes, MacKay-Neal codes and Kanter-Saad codes. We map the decoding problem onto an Ising spin system with many-spin interactions. We then employ the replica method to calculate averages over the quenched disorder represented by the code constructions, the arbitrary messages and the random noise vectors. We find, as the noise level increases, a phase transition between successful decoding and failure phases. This phase transition coincides with upper bounds derived in the information theory literature in most cases. We connect the practical decoding algorithm known as probability propagation with the task of finding local minima of the related Bethe free energy. We show that the practical decoding thresholds correspond to noise levels where suboptimal minima of the free energy emerge. Simulations of practical decoding scenarios using probability propagation agree with theoretical predictions of the replica symmetric theory. The typical performance predicted by the thermodynamic phase transitions is shown to be attainable in computation times that grow exponentially with the system size. We use the insights obtained to design a method to calculate the performance and optimise the parameters of the high-performance codes proposed by Kanter and Saad.
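The spin mapping mentioned above is, in the standard form used in this literature (the thesis's exact notation may differ), a many-spin Ising Hamiltonian:

\[
\mathcal{H}(\mathbf{S}) \;=\; -\sum_{\langle i_1,\dots,i_K\rangle} \mathcal{A}_{i_1\dots i_K}\, J_{i_1\dots i_K}\, S_{i_1} S_{i_2}\cdots S_{i_K},
\]

where $\mathcal{A}$ is the sparse connectivity tensor fixed by the code construction, the couplings $J_{i_1\dots i_K}$ are built from the received (corrupted) codeword, and the dynamical spins $S_i=\pm 1$ represent candidate message bits; decoding then amounts to computing thermal averages of the $S_i$, with the Nishimori temperature corresponding to marginal-posterior decoding.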
Abstract:
The mechanism behind the immunostimulatory effect of the cationic liposomal vaccine adjuvant dimethyldioctadecylammonium and trehalose 6,6′-dibehenate (DDA:TDB) has been linked to the ability of these cationic vesicles to promote a depot after administration, with the liposomal adjuvant and the antigen both being retained at the injection site. This can be attributed to their cationic nature, since reduction in vesicle size does not influence their distribution profile, yet neutral or anionic liposomes have more rapid clearance rates. Therefore, the aim of this study was to investigate the impact of a combination of reduced vesicle size and surface PEGylation on the biodistribution and adjuvanticity of the formulations, in a bid to further manipulate the pharmacokinetic profiles of these adjuvants. From the biodistribution studies, it was found that with small unilamellar vesicles (SUVs), 10% PEGylation of the formulation could influence liposome retention at the injection site after 4 days, whilst higher levels (25 mol%) of PEG blocked the formation of a depot and promoted clearance to the draining lymph nodes. Interestingly, whilst the use of 10% PEG in the small unilamellar vesicles did not block the formation of a depot at the site of injection, it did result in earlier antibody responses and switched the type of T cell response from a Th1 to a Th2 bias, suggesting that the presence of PEG in the formulation not only controls the biodistribution of the vaccine but also results in different types of interactions with innate immune cells. © 2012 Elsevier B.V.
Abstract:
Introduction: There is a growing public perception that serious medical error is commonplace and largely tolerated by the medical profession. The Government and medical establishment's response to this perceived epidemic of error has included tighter controls over practising doctors and individual stick-and-carrot reforms of medical practice. Discussion: This paper critically reviews the literature on medical error, professional socialization and medical student education, and suggests that common themes such as uncertainty, necessary fallibility, exclusivity of professional judgement and extensive use of medical networks find their genesis, in part, in aspects of medical education and socialization into medicine. The nature and comparative failure of recent reforms of medical practice and the tension between the individualistic nature of the reforms and the collegiate nature of the medical profession are discussed. Conclusion: A more theoretically informed and longitudinal approach to decreasing medical error might be to address the genesis of medical thinking about error through reforms to the aspects of medical education and professional socialization that help to create and perpetuate the existence of avoidable error, and reinforce medical collusion concerning error. Further changes in the curriculum to emphasize team working, communication skills, evidence-based practice and strategies for managing uncertainty are therefore potentially key components in helping tomorrow's doctors to discuss, cope with and commit fewer medical errors.
Abstract:
Electrocardiography (ECG) has recently been proposed as a biometric trait for identification purposes. Intra-individual variations of the ECG might affect identification performance. These variations are mainly due to heart rate variability (HRV). In particular, HRV causes changes in the QT intervals along the ECG waveforms. This work analyses the influence of seven QT interval correction methods (based on population models) on the performance of ECG-fiducial-based identification systems. In addition, we also considered the influence of training set size, classifier, classifier ensemble, and the number of consecutive heartbeats in a majority voting scheme. The ECG signals used in this study were collected from thirty-nine subjects within the PhysioNet open access database. Public domain software was used for fiducial point detection. The results suggested that QT correction is indeed required to improve performance; however, there is no clear choice among the seven explored approaches for QT correction (identification rate between 0.97 and 0.99). Multilayer Perceptron and Support Vector Machine classifiers seemed to have better generalization capabilities, in terms of classification performance, than Decision Tree-based classifiers. No comparably strong influence of the training-set size or of the number of consecutive heartbeats in the majority voting scheme was observed.
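As an illustration of what a population-model QT correction and the majority-voting step look like in practice (the seven methods compared in the paper are not named in this abstract, so the two formulas below, Bazett and Fridericia, are assumptions drawn from common practice):

```python
import numpy as np

def qt_corrections(qt_s, rr_s):
    """Two widely used population-model QT corrections (illustrative only;
    not necessarily among the seven methods compared in the paper)."""
    return {
        "bazett": qt_s / np.sqrt(rr_s),      # QTc = QT / RR^(1/2), times in seconds
        "fridericia": qt_s / np.cbrt(rr_s),  # QTc = QT / RR^(1/3)
    }

def majority_vote(predicted_ids):
    """Identify a subject as the label predicted most often over consecutive heartbeats."""
    values, counts = np.unique(predicted_ids, return_counts=True)
    return values[np.argmax(counts)]

print(qt_corrections(qt_s=0.38, rr_s=0.80))
print(majority_vote(np.array(["subj07", "subj07", "subj21"])))   # -> "subj07"
```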
Abstract:
A spatial object consists of data assigned to points in a space. Spatial objects, such as memory states and three-dimensional graphical scenes, are diverse and ubiquitous in computing. We develop a general theory of spatial objects by modelling abstract data types of spatial objects as topological algebras of functions. One useful algebra is that of continuous functions, with operations derived from operations on space and data, and equipped with the compact-open topology. Terms are used as abstract syntax for defining spatial objects, and conditional equational specifications are used for reasoning. We pose a completeness problem: given a selection of operations on spatial objects, do the terms approximate all the spatial objects to arbitrary accuracy? We give some general methods for solving the problem and consider their application to spatial objects with real number attributes. © 2011 British Computer Society.
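A minimal sketch of the underlying idea, under the simplifying assumption that a spatial object is just a function from points to real-number data and that operations on objects are derived pointwise from operations on data (the paper works with topological algebras of continuous functions and the compact-open topology, which this toy code does not capture; all names are hypothetical):

```python
from typing import Callable

Point = tuple[float, float, float]
SpatialObject = Callable[[Point], float]      # data assigned to points in a space

def lift(op: Callable[[float, float], float],
         f: SpatialObject, g: SpatialObject) -> SpatialObject:
    """Derive an operation on spatial objects pointwise from an operation on data."""
    return lambda p: op(f(p), g(p))

# two simple spatial objects with real-number attributes (hypothetical examples)
density: SpatialObject = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2
offset: SpatialObject = lambda p: 1.0

combined = lift(lambda a, b: a + b, density, offset)
print(combined((1.0, 2.0, 0.0)))    # 6.0
```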
Abstract:
We study novel side-emitting modes in VCSEL microcavities. These modes correspond to π-shaped propagation along the mesa diameter, reflection from angled mesa walls and bottom Bragg reflector. We believe this study of π-modes is important for optimization of VCSEL design for improvement of efficiency.
Abstract:
BACKGROUND: The impact of different levels of depression severity on quality of life (QoL) is not well studied, particularly regarding ICD-10 criteria. The ICD classification of depressive episodes into three levels of severity is also controversial, and the least severe category, mild, has been considered unnecessary and not clearly distinguishable from non-clinical states. The present work aimed to test the relationship between depression severity according to ICD-10 criteria and several dimensions of functioning as assessed by the Medical Outcome Study (MOS) 36-item Short Form general health survey (SF-36) at the population level. METHOD: A sample of 551 participants from the second phase of the Outcome of Depression International Network (ODIN) study (228 controls without depression and 313 persons fulfilling ICD criteria for a depressive episode) was selected for a further assessment of several variables, including QoL related to physical and mental health as measured with the SF-36. RESULTS: Statistically significant differences between controls and the depression group were found in both physical and mental markers of health, regardless of the level of depression severity; however, there were very few differences in QoL between levels of depression as defined by ICD-10. Regardless of the presence of depression, disability, widowed status, being a woman and older age were associated with worse QoL in a structural equation analysis with covariates. Likewise, there were no differences according to the type of depression (single-episode versus recurrent). CONCLUSIONS: These results cast doubt on the adequacy of the current ICD classification of depression into three levels of severity.
Abstract:
Laser trackers have been widely used in many industries to meet increasingly high accuracy requirements. In laser tracker measurement, it is complex and difficult to perform an accurate error analysis and uncertainty evaluation. This paper first reviews the working principle of single-beam laser trackers and the state of the art of the key technologies from both industrial and academic efforts, followed by a comprehensive analysis of uncertainty sources. A generic laser tracker modelling method is formulated and the framework of a virtual laser system (VLS) is proposed. The VLS can be used for measurement planning, measurement accuracy optimization and uncertainty evaluation. The completed virtual laser tracking system should take all the uncertainty sources affecting coordinate measurement into consideration and establish an uncertainty model that behaves in an identical way to the real system. © Springer-Verlag Berlin Heidelberg 2010.
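To illustrate the kind of model a virtual tracker builds on, the sketch below uses a generic idealised single-beam model (range plus two angles converted to Cartesian coordinates) and a Monte Carlo propagation of assumed instrument uncertainties; the model, parameter values and function names are illustrative assumptions, not the paper's VLS.

```python
import numpy as np

def tracker_to_cartesian(d, az, el):
    """Idealised single-beam tracker model: range d and angles az, el (radians)
    converted to Cartesian coordinates."""
    x = d * np.cos(el) * np.cos(az)
    y = d * np.cos(el) * np.sin(az)
    z = d * np.sin(el)
    return np.stack([x, y, z], axis=-1)

def monte_carlo_uncertainty(d, az, el, sd_d, sd_ang, n=100_000, seed=0):
    """Propagate assumed range/angle uncertainties to the measured point by sampling."""
    rng = np.random.default_rng(seed)
    pts = tracker_to_cartesian(d + rng.normal(0.0, sd_d, n),
                               az + rng.normal(0.0, sd_ang, n),
                               el + rng.normal(0.0, sd_ang, n))
    return pts.mean(axis=0), pts.std(axis=0)

mean, std = monte_carlo_uncertainty(d=5.0, az=0.4, el=0.2, sd_d=15e-6, sd_ang=5e-6)
print(mean, std)    # per-axis coordinate estimate and standard uncertainty in metres
```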
Abstract:
Introduction: Since 2005, the workload of community pharmacists in England has increased, with a concomitant increase in stress and work pressure. However, it is unclear how these factors are affecting the ability of community pharmacists to ensure accuracy during the dispensing process. This research seeks to extend our understanding of the nature, outcome, and predictors of dispensing errors. Methodology: A retrospective analysis of a purposive sample of incident report forms (IRFs) from the database of a pharmacist indemnity insurance provider was conducted. Data collected included: type of error, degree of harm caused, pharmacy and pharmacist demographics, and possible contributory factors. Results: In total, 339 files from UK community pharmacies were retrieved from the database. The files dated from June 2006 to November 2011. Incorrect item (45.1%, n = 153/339) followed by incorrect strength (24.5%, n = 83/339) were the most common forms of error. Almost half (41.6%, n = 147/339) of the patients suffered some form of harm, ranging from minor harm (26.7%, n = 87/339) to death (0.3%, n = 1/339). Insufficient staff (51.6%, n = 175/339), similar packaging (40.7%, n = 138/339) and the pharmacy being busier than normal (39.5%, n = 134/339) were identified as key contributory factors. Cross-tabular analysis against the final accuracy check variable revealed significant associations with the pharmacy location (P < 0.024), dispensary layout (P < 0.025), insufficient staff (P < 0.019), and busier than normal (P < 0.005) variables. Conclusion: The results provide an overview of some of the individual, organisational and technical factors at play at the time of a dispensing error and highlight the need to examine further the relationships between these factors and dispensing error occurrence.
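For readers unfamiliar with the cross-tabular analysis mentioned, a minimal sketch of the kind of test involved is shown below; the 2x2 counts are invented purely for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation: rows = insufficient staffing (yes/no),
# columns = error detected at the final accuracy check (yes/no).
# These counts are made up for illustration; they are not the study's data.
table = np.array([[40, 135],
                  [70,  94]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```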