40 results for workload


Relevance: 10.00%

Abstract:

Purpose: The aim of this study is to describe the ocular and demographic features of Caucasian patients newly presenting with primary angle closure glaucoma and the proportion of the workload it represents at a tertiary university hospital glaucoma service. Methods: A retrospective case notes review was conducted for all Caucasian patients newly diagnosed with narrow angles, primary angle closure, acute primary angle closure or primary angle closure glaucoma seen over a period of 2 years. Demographic and ocular variables were compared, and statistical analysis was carried out with the paired t-test and chi-squared test. The numbers of primary angle closure glaucoma and acute angle closure cases were compared with the total number of new referrals to the department, the number of new patients diagnosed with glaucoma and population figures for the North East of Scotland. Results: One hundred and four patients were analysed. Twenty-four (23.1%) had narrow angles, 30 (28.8%) had primary angle closure and 50 (48.1%) had primary angle closure glaucoma. Twelve (11.5%) presented with acute primary angle closure. There was no significant difference in gender, age, hypermetropia or visual acuity between groups. Primary angle closure glaucoma constituted 22.9% (50/128) of newly diagnosed glaucoma cases. Based on the 2001 Scotland census, the crude annual incidence of newly diagnosed primary angle closure glaucoma in the over-45-year-old population was estimated at 14.8 per 100,000, and that of acute primary angle closure at 3.6 per 100,000. Conclusion: Our study confirms that primary angle closure glaucoma is uncommon in Caucasians, but not as rare as once perceived, as it makes up a substantial proportion (22.9%) of the glaucoma workload. © 2009 The Authors. Journal compilation © 2009 Royal Australian and New Zealand College of Ophthalmologists.
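The crude incidence figure is simple arithmetic over a census denominator; a minimal sketch (the at-risk population size below is an assumed round number chosen to land near the reported 14.8 per 100,000, not the study's census figure):

```python
def crude_annual_incidence(new_cases: int, years: float, population: int) -> float:
    """Crude annual incidence per 100,000 population at risk."""
    return new_cases / years / population * 100_000

# 50 PACG cases over 2 years; the over-45 population size here is an
# assumed illustrative figure, not the 2001 census denominator.
rate = crude_annual_incidence(new_cases=50, years=2, population=170_000)
print(f"{rate:.1f} per 100,000")  # → 14.7 per 100,000
```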

Relevance: 10.00%

Abstract:

Objectives: Study objectives were to investigate the prevalence and causes of prescribing errors amongst foundation doctors (i.e. junior doctors in their first (F1) or second (F2) year of post-graduate training), describe their knowledge and experience of prescribing errors, and explore their self-efficacy (i.e. confidence) in prescribing.

Method: A three-part mixed-methods design was used, comprising a prospective observational study, semi-structured interviews and a cross-sectional survey. All doctors prescribing in eight purposively selected hospitals in Scotland participated in the observational study; all foundation doctors throughout Scotland were invited to take part in the survey. The number of prescribing errors per patient, doctor, ward and hospital, the perceived causes of errors and a measure of doctors’ self-efficacy were established.

Results: 4710 patient charts and 44,726 prescribed medicines were reviewed. There were 3364 errors, affecting 1700 (36.1%) charts (overall error rate: 7.5%; F1: 7.4%; F2: 8.6%; consultants: 6.3%). Higher error rates were associated with teaching hospitals (p < 0.001), surgical (p < 0.001) or mixed (p = 0.008) rather than medical wards, higher patient turnover wards (p < 0.001), a greater number of prescribed medicines (p < 0.001) and the months of December and June (p < 0.001). One hundred errors were discussed in 40 interviews. Error causation was multi-factorial; work environment and team factors were particularly noted. Of 548 completed questionnaires (a national response rate of 35.4%), 508 (92.7% of respondents) reported errors, most of which (328; 64.6%) did not reach the patient. Pressure from other staff, workload and interruptions were cited as the main causes of errors. Foundation year 2 doctors reported greater confidence than year 1 doctors in deciding the most appropriate medication regimen.
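The headline rates follow directly from the reported counts; a quick check:

```python
def error_rate(errors: int, opportunities: int) -> float:
    """Errors per 100 opportunities, as a percentage."""
    return errors / opportunities * 100

# Counts reported in the study: 3364 errors among 44,726 prescribed
# medicines, and 1700 of 4710 charts affected.
print(f"{error_rate(3364, 44_726):.1f}%")  # → 7.5%
print(f"{error_rate(1700, 4710):.1f}%")    # → 36.1%
```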

Conclusions: Prescribing errors are frequent and of complex causation. Foundation doctors made more errors than other doctors, but undertook the majority of prescribing, making them a key target for intervention. Contributing causes included work environment, team, task, individual and patient factors. Further work is needed to develop and assess interventions that address these.

Relevance: 10.00%

Abstract:

Modeling dynamical systems represents an important application class covering a wide range of disciplines including but not limited to biology, chemistry, finance, national security, and health care. Such applications typically involve large-scale, irregular graph processing, which makes them difficult to scale due to the evolutionary nature of their workload, irregular communication and load imbalance. EpiSimdemics is such an application simulating epidemic diffusion in extremely large and realistic social contact networks. It implements a graph-based system that captures dynamics among co-evolving entities. This paper presents an implementation of EpiSimdemics in Charm++ that enables future research by social, biological and computational scientists at unprecedented data and system scales. We present new methods for application-specific processing of graph data and demonstrate the effectiveness of these methods on a Cray XE6, specifically NCSA's Blue Waters system.
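EpiSimdemics' semantics are far richer, but the core idea of diffusion over a contact graph can be sketched with a toy synchronous susceptible-infected update (the network, parameters and names below are all illustrative; real inputs have hundreds of millions of edges):

```python
import random

def step_si(graph, infected, p_transmit, rng):
    """One synchronous step of a toy susceptible-infected diffusion.
    graph maps each node to its list of contacts."""
    newly = set()
    for u in infected:
        for v in graph[u]:
            if v not in infected and rng.random() < p_transmit:
                newly.add(v)
    return infected | newly

# A tiny illustrative contact network, nothing like EpiSimdemics' data model.
g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
state = {0}
rng = random.Random(42)
for _ in range(3):
    state = step_si(g, state, p_transmit=0.5, rng=rng)
print(sorted(state))
```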

Relevance: 10.00%

Abstract:

Rationale, aims and objectives: This study aims to examine the public's knowledge and perceptions of connected health (CH).

Methods: A structured questionnaire was administered by face-to-face interview to an opportunistic sample of 1003 members of the public in 11 shopping centres across Northern Ireland (NI). Topics included public knowledge of CH, opinions about who should provide CH and views about the use of computers in health care. Multivariable analyses were conducted to assess respondents' willingness to use CH in the future.

Results: Sixty-seven per cent of respondents were female, 31% were less than 30 years old and 22% were over 60 years. Most respondents had never heard of CH (92%). Following a standard definition, the majority felt CH was a good idea (≈90%) and that general practitioners were in the best position to provide CH; however, respondents were equivocal about reductions in health care professionals' workload and had some concerns about the ease of device use. Factors positively influencing willingness to use CH in the future included knowledge of someone who has a chronic disease, residence in NI since birth and less concern about the use of information technology (IT) in health care. Those over 60 years old or who felt threatened by the use of IT to store personal health information were less willing to use CH in the future.

Conclusion: Increased public awareness and education about CH is required to alleviate concerns and increase the acceptability of this type of care.

Relevance: 10.00%

Abstract:

This programme of research, commissioned by the General Medical Council, aimed to understand the extent to which current UK medical graduates are prepared for practice. We conducted: (1) a Rapid Review of the literature between 2009 and 2013; (2) narrative interviews with a range of stakeholders; and (3) longitudinal audio-diaries with Foundation Year 1 doctors. The Rapid Review (RR) resulted in data from 81 manuscripts being extracted and mapped against a coding framework (including outcomes from Tomorrow's Doctors (2009) (TD09)). A narrative synthesis of the data was undertaken. Narrative interviews were conducted with 185 participants from 8 stakeholder groups: F1 trainees, newly registered trainee doctors, clinical educators, undergraduate and postgraduate deans and foundation programme directors, other healthcare professionals, employers, policy and government, and patient and public representatives. Longitudinal audio-diaries were recorded by 26 F1 trainees over 4 months. The data were analysed thematically and mapped against TD09. Together these data shed light on how preparedness for practice is conceptualised and measured, how prepared UK medical graduates are for practice, the effectiveness of transition interventions, and the currently debated issue of bringing full registration forward to align with medical students’ graduation. Preparedness for practice was conceptualised as both a long- and short-term venture that included personal readiness as well as knowledge, skills and attitudes. It has mainly been researched using self-report measures of generalised incidents, which have been shown to be problematic. In terms of transition interventions: assistantships were found to be valuable and efficacious for proactive students acting as team members; shadowing is effective when undertaken close to the employment/setting of the F1 post; and induction is generally effective but of inconsistent quality.
The August transition was highlighted in our interview and audio-diary data, where F1s felt unprepared, particularly for the step-change in responsibility, workload, degree of multitasking and understanding where to go for help. Evidence of preparedness for specific tasks, skills and knowledge was contradictory: trainees are well prepared for some practical procedures but not others, reasonably well prepared for history taking and full physical examinations, but mostly unprepared for adopting a holistic understanding of the patient, involving patients in their care, safe and legal prescribing, diagnosing and managing complex clinical conditions and providing immediate care in medical emergencies. Evidence for preparedness for interactional and interpersonal aspects of practice was inconsistent, with some studies in the RR suggesting graduates were prepared for team working and communicating with colleagues and patients, but other studies contradicting this. Interview and audio-diary data highlight concerns around F1s' preparedness for communicating with angry or upset patients and relatives, breaking bad news, communicating with the wider team (including interprofessionally) and handover communication. There was some evidence in the RR to suggest that graduates were unprepared for dealing with error and safety incidents and lack an understanding of how the clinical environment works. Interview and audio-diary data back this up, adding that F1s are also unprepared for understanding financial aspects of healthcare. In terms of being personally prepared, RR, interview and audio-diary evidence is mixed around graduates' preparedness for identifying their own limitations, but all data point to graduates' difficulties in the domain of time management.
In terms of personal and situational demographic factors, the RR found that gender did not typically predict perceptions of preparedness, but graduates from more recent cohorts, graduate entry students, graduates from problem based learning courses, UK educated graduates and graduates with an integrated degree reported feeling better prepared. The longitudinal audio-diaries provided insights into the preparedness journey for F1s. There seems to be a general development in the direction of trainees feeling more confident and competent as they gain more experience. However, these developments were not necessarily linear as challenging circumstances (e.g. new specialty, new colleagues, lack of staffing) sometimes made them feel unprepared for situations where they had previously indicated preparedness.

Relevance: 10.00%

Abstract:

Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors. The opportunities and limitations of power capping on the new low-power processor and accelerator ecosystem are less understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves less than 5% estimation bias to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of the cases, while always meeting power constraints.
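A minimal sketch of cap-aware operating-point selection, the kind of decision the infrastructure coordinates (the CMOS-style power model and the voltage/frequency pairs below are invented toy values, not the paper's compiler-assisted model):

```python
def pick_operating_point(points, power_model, cap_watts):
    """Pick the highest-frequency (freq, volt) point whose predicted
    power stays under the cap. points: list of (freq_mhz, volt)."""
    feasible = [p for p in points if power_model(*p) <= cap_watts]
    if not feasible:
        raise ValueError("no operating point satisfies the cap")
    return max(feasible, key=lambda p: p[0])  # higher freq ~ higher perf

# Toy CMOS-style model P = k * f * V^2 + static; constants are illustrative.
model = lambda f, v: 1e-3 * f * v**2 + 0.4
points = [(300, 0.9), (600, 1.0), (800, 1.1), (1000, 1.2)]
print(pick_operating_point(points, model, cap_watts=1.2))  # → (600, 1.0)
```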

Relevance: 10.00%

Abstract:

In many countries formal or informal palliative care networks (PCNs) have evolved to better integrate community-based services for individuals with a life-limiting illness. We conducted a cross-sectional survey using a customized tool to determine the perceptions of the processes of palliative care delivery reflective of horizontal integration from the perspective of nurses, physicians and allied health professionals working in a PCN, as well as to assess the utility of this tool. The process elements examined were part of a conceptual framework for evaluating integration of a system of care and centred on interprofessional collaboration. We used the Index of Interdisciplinary Collaboration (IIC) as a basis of measurement. The 86 respondents (85% response rate) placed high value on working collaboratively and most reported being part of an interprofessional team. The survey tool showed utility in identifying strengths and gaps in integration across the network and in detecting variability in some factors according to respondent agency affiliation and profession. Specifically, support for interprofessional communication and evaluative activities were viewed as insufficient. Impediments to these aspects of horizontal integration may be reflective of workload constraints, differences in agency operations or an absence of key structural features.


Relevance: 10.00%

Abstract:

The motivation for this study was to reduce physics workload relating to patient-specific quality assurance (QA). VMAT plan delivery accuracy was determined from analysis of pre- and on-treatment trajectory log files and phantom-based ionization chamber array measurements. The correlation in this combination of measurements for patient-specific QA was investigated. The relationship between delivery errors and plan complexity was investigated as a potential method to further reduce patient-specific QA workload. Thirty VMAT plans from three treatment sites - prostate only, prostate and pelvic node (PPN), and head and neck (H&N) - were retrospectively analyzed in this work. The 2D fluence delivery reconstructed from pretreatment and on-treatment trajectory log files was compared with the planned fluence using gamma analysis. Pretreatment dose delivery verification was also carried out using gamma analysis of ionization chamber array measurements compared with calculated doses. Pearson correlations were used to explore any relationship between trajectory log file (pretreatment and on-treatment) and ionization chamber array gamma results (pretreatment). Plan complexity was assessed using the MU/arc and the modulation complexity score (MCS), with Pearson correlations used to examine any relationships between complexity metrics and plan delivery accuracy. Trajectory log files were also used to further explore the accuracy of MLC and gantry positions. Pretreatment 1%/1 mm gamma passing rates for trajectory log file analysis were 99.1% (98.7%-99.2%), 99.3% (99.1%-99.5%), and 98.4% (97.3%-98.8%) (median (IQR)) for prostate, PPN, and H&N, respectively, and were significantly correlated to on-treatment trajectory log file gamma results (R = 0.989, p < 0.001). Pretreatment ionization chamber array (2%/2 mm) gamma results were also significantly correlated with on-treatment trajectory log file gamma results (R = 0.623, p < 0.001).
Furthermore, all gamma results displayed a significant correlation with MCS (R > 0.57, p < 0.001), but not with MU/arc. Average MLC position and gantry angle errors were 0.001 ± 0.002 mm and 0.025° ± 0.008° over all treatment sites and were not found to affect delivery accuracy. However, variability in MLC speed was found to be directly related to MLC position accuracy. The accuracy of VMAT plan delivery assessed using pretreatment trajectory log file fluence delivery and ionization chamber array measurements was strongly correlated with on-treatment trajectory log file fluence delivery. The strong correlation between trajectory log file and phantom-based gamma results demonstrates potential to reduce our current patient-specific QA. Additionally, insight into MLC and gantry position accuracy through trajectory log file analysis and the strong correlation between gamma analysis results and the MCS could also provide further methodologies to optimize both the VMAT planning and QA process.
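For readers unfamiliar with gamma analysis, a toy 1D version conveys how dose-difference and distance-to-agreement tolerances combine (real QA tools work on 2D/3D dose grids; the profile values below are purely illustrative):

```python
import math

def gamma_1d(ref, meas, positions, dose_tol, dist_tol):
    """Toy 1D global gamma index: for each reference point, the minimum
    combined dose-difference / distance-to-agreement score against the
    measured profile."""
    gammas = []
    for x_r, d_r in zip(positions, ref):
        best = min(
            math.sqrt(((x_m - x_r) / dist_tol) ** 2 +
                      ((d_m - d_r) / dose_tol) ** 2)
            for x_m, d_m in zip(positions, meas)
        )
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Percentage of points with gamma <= 1."""
    return 100 * sum(g <= 1 for g in gammas) / len(gammas)

# Illustrative profiles on a 1 mm grid with 2-unit dose / 2 mm tolerances
# (roughly a 2%/2 mm criterion for doses near 100).
pos = [0, 1, 2, 3, 4]
ref = [100, 102, 105, 103, 100]
meas = [100, 103, 104, 103, 101]
g = gamma_1d(ref, meas, pos, dose_tol=2.0, dist_tol=2.0)
print(f"{passing_rate(g):.0f}% pass")  # → 100% pass
```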

Relevance: 10.00%

Abstract:

Child protection social work is acknowledged as a very stressful occupation, with high turnover and poor retention of staff being a major concern. This paper highlights themes that emerged from findings of sixty-five articles that were included as part of a systematic literature review. The review focused on the evaluation of research findings, which considered individual and organisational factors associated with resilience or burnout in child protection social work staff. The results identified a range of individual and organisational themes for staff in child protection social work. Nine themes were identified in total. These are categorised under ‘Individual’ and ‘Organisational’ themes. Themes categorised as individual included personal history of maltreatment, training and preparation for child welfare, coping, secondary traumatic stress, compassion fatigue and compassion satisfaction. Those classified as organisational included workload, social support and supervision, organisational culture and climate, organisational and professional commitment, and job satisfaction or dissatisfaction. The range of factors is discussed with recommendations and areas for future research are highlighted.

Relevance: 10.00%

Abstract:

This chapter discusses the opportunities and limitations of height inequality, especially the role of social status and income distribution in determining it. The more unequal the income distribution in a society, the more unequal the corresponding height distribution. At one time, the height gap between rich and poor teenagers in industrializing England was as high as 22 cm (8.7 inches); today, height inequality tends to be much lower (on the order of a few centimeters) because the gap between rich and poor in developed countries tends to be smaller. Results presented here suggest that height inequality is driven by differences in purchasing power, education, physical workload and epidemiological environment. In a modern setting, social safety and the redistribution of income are also relevant. An introduction to the literature helps illustrate the opportunities this methodology offers for better understanding the dynamics of how populations experience economic development.

Relevance: 10.00%

Abstract:

Introduction
The use of video capture of lectures in Higher Education is not a recent occurrence, with web-based learning technologies, including digital recording of live lectures, increasingly commonly offered by universities throughout the world (Holliman and Scanlon, 2004). However, in the past decade the growth in technical infrastructure, including the availability of high-speed broadband, has increased the potential and use of video lecture capture. This has led to a variety of lecture capture formats, including podcasting, live streaming and delayed broadcasting of whole or part lectures.
Additionally, in the past five years there has been a significant increase in the popularity of online learning, specifically via Massive Open Online Courses (MOOCs) (Vardi, 2014). One of the key aspects of MOOCs is the simulated recording of lecture-like activities. There has been, and continues to be, much debate on the consequences of the popularity of MOOCs, especially in relation to their potential uses within established university programmes.
There have been a number of studies dedicated to the effects of videoing lectures.
The clustered areas of research in video lecture capture have the following main themes:
• Staff perceptions including attendance, performance of students and staff workload
• Reinforcement versus replacement of lectures
• Improved flexibility of learning
• Facilitating engaging and effective learning experiences
• Student usage, perception and satisfaction
• Facilitating students learning at their own pace
Most of the body of the research has concentrated on student and faculty perceptions, including academic achievement, student attendance and engagement (Johnston et al., 2012).
Generally, the research has been positive in its review of the benefits of lecture capture for both students and faculty. This perception, coupled with technical infrastructure improvements and student demand, may well mean that the use of video lecture capture will continue to increase in tertiary education over the coming years. However, there is relatively little research on the effects of lecture capture specifically in the area of computer programming, with Watkins et al. (2007) being one of the few studies. Video delivery of programming solutions is particularly useful for enabling a lecturer to illustrate the complex decision-making processes and iterative nature of the actual code development process (Watkins et al., 2007). As such, research in this area would appear to be particularly appropriate to help inform debate and future decisions made by policy makers.
Research questions and objectives
The purpose of the research was to investigate how a series of lecture captures (in which the audio of lectures and video of on-screen projected content were recorded) impacted on the delivery and learning of a programme of study in an MSc Software Development course in Queen’s University, Belfast, Northern Ireland. The MSc is a conversion programme, intended to take graduates from non-computing primary degrees and upskill them in this area. The research specifically targeted the Java programming module within the course. It also analyses and reports on the empirical data from attendances and various video viewing statistics. In addition, qualitative data were collected from staff and student feedback to help contextualise the quantitative results.
Methodology, Methods and Research Instruments Used
The study was conducted with a cohort of 85 post graduate students taking a compulsory module in Java programming in the first semester of a one year MSc in Software Development. A pre-course survey of students found that 58% preferred to have available videos of “key moments” of lectures rather than whole lectures. A large scale study carried out by Guo concluded that “shorter videos are much more engaging” (Guo 2013). Of concern was the potential for low audience retention for videos of whole lectures.
The lecturers recorded snippets of the lecture directly before or after the actual physical delivery of the lecture, in a quiet environment, and then uploaded the video directly to a closed YouTube channel. These snippets generally concentrated on significant parts of the theory, followed by theory-related coding demonstration activities, and were faithful replications of the face-to-face lecture. Generally, each lecture was supported by two to three videos with durations ranging from 20 to 30 minutes.
Attendance
The MSc programme has several attendance-based modules, of which Java Programming was one. In order to assess the effect on attendance for the Programming module, a control was established: a Database module taken by the same students and running in the same semester.
Access engagement
The videos were hosted on a closed YouTube channel made available only to the students in the class. The channel had analytics enabled, which reported the following for the channel as a whole and for each individual video: views (hits), audience retention, viewing devices/operating systems used and minutes watched.
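One figure such analytics support is a rough average audience-retention number per video; a sketch with hypothetical values (this derived quantity is an illustration, not YouTube's own retention metric):

```python
def avg_percent_viewed(minutes_watched: float, views: int,
                       video_minutes: float) -> float:
    """Average share of a video watched per view, as a percentage,
    derived from channel-level totals."""
    return 100 * minutes_watched / (views * video_minutes)

# Hypothetical figures for one 25-minute snippet: 40 views totalling
# 500 minutes watched.
print(f"{avg_percent_viewed(500, views=40, video_minutes=25):.0f}%")  # → 50%
```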
Student attitudes
Three surveys were conducted to investigate student attitudes towards the videoing of lectures. The first was taken before the start of the programming module, the second at the mid-point and the third after the programme was complete.
The questions in the first survey were targeted at eliciting student attitudes towards lecture capture before they had experienced it in the programme. The midpoint survey gathered data on how the students were individually using the system up to that point. This included feedback on how many videos an individual had watched, viewing duration, primary reasons for watching and the effect on attendance, in addition to probing for comments or suggestions. The final survey, on course completion, contained questions similar to the midpoint survey but took a summative view of the whole video programme.
Conclusions and Outcomes
The study confirmed the findings of other such investigations, illustrating that there is little or no effect on attendance at lectures. The use of the videos appears to help promote continual learning, but they are particularly accessed by students during assessment periods. Students respond positively to the ability to access lectures digitally, as a means of reinforcing learning experiences rather than replacing them. Feedback from students was overwhelmingly positive, indicating that the videos benefited their learning. There are also significant benefits to recording parts of lectures rather than whole lectures. The viewing-behaviour analytics suggest that, despite the increase in the popularity of online learning via MOOCs and the promotion of video learning on mobile devices, in this study the vast majority of students accessed the online videos at home on laptops or desktops. However, this is likely due in part to the nature of the taught subject, that being programming.
The research involved pre-recording the lecture in smaller timed units and then uploading them for distribution, to counteract existing quality issues with recording entire live lectures. However, the advancement and consequent improvement in quality of in situ lecture capture equipment may well negate the need to record elsewhere. The research has also highlighted an area of potentially very significant use for performance analysis and improvement that could have major implications for the quality of teaching. A study of the analytics of video viewings could provide a quick-response formative feedback mechanism for the lecturer. If a videoed lecture, whether recorded live or later, is a true reflection of the face-to-face lecture, an analysis of the viewing patterns for the video may well reveal trends that correspond with the live delivery.

Relevance: 10.00%

Abstract:

We present a rigorous methodology and new metrics for fair comparison of server and microserver platforms. Deploying our methodology and metrics, we compare a microserver with ARM cores against two servers with x86 cores running the same real-time financial analytics workload. We define workload-specific but platform-independent performance metrics for platform comparison, targeting both datacenter operators and end users. Our methodology establishes that a server based on the Xeon Phi co-processor delivers the highest performance and energy efficiency. However, by scaling out energy-efficient microservers, we achieve competitive or better energy efficiency than a power-equivalent server with two Sandy Bridge sockets, despite the microserver's slower cores. Using a new iso-QoS metric, we find that the ARM microserver scales enough to meet market throughput demand, that is, a 100% QoS in terms of timely option pricing, with as little as 55% of the energy consumed by the Sandy Bridge server.
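One way to read an iso-QoS comparison is "minimum energy among configurations that meet the QoS target"; a small sketch with invented numbers shaped like the headline finding (these are not the paper's measured data):

```python
def iso_qos_energy(platforms, qos_target):
    """At equal QoS: for each platform, the minimum energy over
    configurations meeting the QoS target (None if none do).
    platforms maps name -> list of (qos_percent, energy_joules)."""
    return {
        name: min((e for q, e in configs if q >= qos_target), default=None)
        for name, configs in platforms.items()
    }

# Invented figures: the scaled-out ARM microserver meets 100% QoS at
# ~55% of the Sandy Bridge server's energy.
platforms = {
    "sandy_bridge": [(100, 1000)],
    "arm_microserver": [(80, 300), (100, 550)],
}
print(iso_qos_energy(platforms, qos_target=100))
# → {'sandy_bridge': 1000, 'arm_microserver': 550}
```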

Relevance: 10.00%

Abstract:

Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low power sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
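The statistical-sampling idea can be sketched as proportional attribution of a measured energy budget to the basic blocks a profiler observes (a deliberate simplification of ALEA's actual estimator; all names below are hypothetical):

```python
from collections import Counter

def attribute_energy(samples, total_energy_j):
    """Split a measured energy budget across basic blocks in proportion
    to how often a timer-driven profiler observed execution in each."""
    counts = Counter(samples)
    n = len(samples)
    return {blk: total_energy_j * c / n for blk, c in counts.items()}

# Hypothetical sample stream of observed basic-block ids.
samples = ["bb_main", "bb_loop", "bb_loop", "bb_loop", "bb_io"]
profile = attribute_energy(samples, total_energy_j=10.0)
print(profile)  # bb_loop is charged 3/5 of the 10 J budget → 6.0 J
```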

Relevance: 10.00%

Abstract:

Realistic Evaluation of EWS and ALERT: factors enabling and constraining implementation

Background: The implementation of EWS and ALERT in practice is essential to the success of Rapid Response Systems but is dependent upon nurses utilising EWS protocols and applying ALERT best practice guidelines. To date there is limited evidence on the effectiveness of EWS or ALERT, as research has primarily focused on measuring patient outcomes (cardiac arrests, ICU admissions) following the implementation of a Rapid Response Team. Complex interventions in healthcare aimed at changing service delivery and the related behaviour of health professionals require a different research approach to evaluate the evidence. To understand how and why EWS and ALERT work, or might not work, research needs to consider the social, cultural and organisational influences that will impact on successful implementation in practice. This requires a research approach that considers both the processes and outcomes of complex interventions, such as EWS and ALERT, implemented in practice. Realistic Evaluation is such an approach and was used to explain the factors that enable and constrain the implementation of EWS and ALERT in practice [1].

Aim: The aim of this study was to evaluate factors that enabled and constrained the implementation and service delivery of early warning systems (EWS) and ALERT in practice, in order to provide direction for enabling their success and sustainability.

Methods: The research design was a multiple case study approach of four wards in two hospitals in Northern Ireland. It followed the principles of realist evaluation research, which allowed empirical data to be gathered to test and refine RRS programme theory. This approach used a variety of mixed methods to test the programme theories, including individual and focus group interviews, observation and documentary analysis, in a two-stage process. A purposive sample of 75 key informants participated in individual and focus group interviews. Observation and documentary analysis of EWS compliance data and ALERT training records provided further evidence to support or refute the interview findings. Data were analysed using NVivo 8 to categorise interview findings and SPSS for ALERT documentary data. These findings were further synthesised by undertaking a within- and cross-case comparison to explain the factors enabling and constraining EWS and ALERT.

Results: A cross-case analysis highlighted similarities, differences and factors enabling or constraining successful implementation across the case study sites. Findings showed that personal (confidence; clinical judgement; personality), social (ward leadership; communication), organisational (workload and staffing issues; pressure from managers to complete EWS audits and targets), educational (constraints on training; no clinical educator on the ward) and cultural (routine task delegated) influences impact on EWS and acute care training outcomes. There were also differences noted between medical and surgical wards across both case sites.

Conclusions: Realist Evaluation allows refinement and development of the RRS programme theory to explain the realities of practice. These refined RRS programme theories are capable of informing the planning of future service provision and provide direction for enabling their success and sustainability.

References: 1. McGaughey J., Blackwood B., O'Halloran P., Trinder T. J. & Porter S. (2010) A realistic evaluation of Track and Trigger systems and acute care training for early recognition and management of deteriorating ward-based patients. Journal of Advanced Nursing 66(4), 923-932.

Type of submission: Concurrent session. Source of funding: Sandra Ryan Fellowship funded by the School of Nursing & Midwifery, Queen's University of Belfast.

Relevance: 10.00%

Abstract:

Pre-processing (PP) of the received symbol vector and channel matrices is an essential prerequisite operation for Sphere Decoder (SD)-based detection in Multiple-Input Multiple-Output (MIMO) wireless systems. PP is a highly complex operation, but relative to the total SD workload it represents a relatively small fraction of the overall computational cost of detecting an OFDM MIMO frame in standards such as 802.11n. Despite this, real-time PP architectures are highly inefficient, dominating the resource cost of real-time SD architectures. This paper resolves this issue. By reorganising the ordering and QR decomposition sub-operations of PP, we describe a Field Programmable Gate Array (FPGA)-based PP architecture for the Fixed Complexity Sphere Decoder (FSD) applied to 4 × 4 802.11n MIMO, which reduces resource cost by 50% compared with state-of-the-art solutions whilst maintaining real-time performance.
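A toy version of the combined ordering and QR sub-operations conveys the structure being reorganised (assuming NumPy; the column-norm ordering rule here is purely illustrative, as FSD's actual ordering criterion differs):

```python
import numpy as np

def ordered_qr(H):
    """Toy pre-processing: permute channel-matrix columns by norm, then
    QR-decompose the permuted matrix. Illustrates the ordering + QR
    structure only; FSD's real ordering rule is different."""
    order = np.argsort(np.linalg.norm(H, axis=0))
    Q, R = np.linalg.qr(H[:, order])
    return Q, R, order

# Random 4x4 complex channel, as in a 4x4 802.11n configuration.
rng = np.random.default_rng(7)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, R, order = ordered_qr(H)
assert np.allclose(Q @ R, H[:, order])  # QR reproduces the permuted channel
assert np.allclose(R, np.triu(R))       # R is upper triangular
```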