860 results for implementation analysis


Relevance: 30.00%

Abstract:

Background: Improving the transparency of information about the quality of health care providers is one way to improve health care quality. It is assumed that Internet information steers patients toward better-performing health care providers and will motivate providers to improve quality. However, the effect of public reporting on hospital quality is still small. One of the reasons is that users find it difficult to understand the formats in which information is presented. Objective: We analyzed the presentation of the risk-adjusted mortality rate (RAMR) for coronary angiography in the 10 most commonly used German public report cards to assess the impact of information presentation features on their comprehensibility. We wanted to determine which information presentation features were utilized, were preferred by users, led to better comprehension, and had similar effects to those reported in evidence-based recommendations described in the literature. Methods: The study consisted of 5 steps: (1) identification of best-practice evidence about the presentation of information on hospital report cards; (2) selection of a single risk-adjusted quality indicator; (3) selection of a sample of designs adopted by German public report cards; (4) identification of the information presentation elements used in public reporting initiatives in Germany; and (5) an online panel completed a questionnaire designed to determine whether respondents were able to identify the hospital with the lowest RAMR and whether respondents’ hospital choices were associated with particular information design elements. Results: Evidence-based recommendations were made relating to the following information presentation features relevant to report cards: evaluative table with symbols, tables without symbols, bar charts, bar charts without symbols, bar charts with symbols, symbols, evaluative word labels, highlighting, order of providers, high values to indicate good performance, explicit statements of whether high or low values indicate good performance, and incomplete data (“N/A” as a value). When investigating the RAMR in a sample of 10 hospitals’ report cards, 7 of these information presentation features were identified. Of these, 5 information presentation features improved comprehensibility in a manner reported previously in the literature. Conclusions: To our knowledge, this is the first study to systematically analyze the most commonly used public reporting card designs used in Germany. Best-practice evidence identified in the international literature was in agreement with 5 findings about German report card designs: (1) avoid tables without symbols, (2) include bar charts with symbols, (3) state explicitly whether high or low values indicate good performance or provide a “good quality” range, (4) avoid incomplete data (N/A given as a value), and (5) rank hospitals by performance. However, these findings are preliminary and should be the subject of further evaluation. The implementation of 4 of these recommendations should not present insurmountable obstacles. However, ranking hospitals by performance may present substantial difficulties.

Relevance: 30.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance: 30.00%

Abstract:

Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of the literature on the use of these techniques in other fields. The hypothesis of this research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study by the researcher is conducted on these cases, with the strengths and weaknesses of each being carried forward into the next case study. The culmination of these case studies is a prototype tool, developed to present a timeline visualisation of the user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others were given a common open-source tool. The assessed metrics included: how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task. Results: The results showed that there was a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups. There were no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research show that there is evidence to suggest the potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios led to the investigators being at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool over the textual tool.
It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
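As an illustration of the kind of timeline view described above (a sketch only, not the Insight prototype; the CSV file name and column names are assumptions), one might plot timestamped user-activity events extracted from a disk image, grouped by event category:

```python
# Minimal timeline sketch of user activity events from a forensic extraction.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export: one row per artefact (browser visit, file open, USB
# connection, ...), with a timestamp and an event category.
events = pd.read_csv("case01_events.csv", parse_dates=["timestamp"])

categories = sorted(events["category"].unique())
fig, ax = plt.subplots(figsize=(10, 4))
for i, cat in enumerate(categories):
    subset = events[events["category"] == cat]
    # One horizontal band per category; each tick mark is one event.
    ax.plot(subset["timestamp"], [i] * len(subset), "|", markersize=12)

ax.set_yticks(range(len(categories)))
ax.set_yticklabels(categories)
ax.set_xlabel("Time")
ax.set_title("User activity timeline")
plt.tight_layout()
plt.show()
```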

Relevance: 30.00%

Abstract:

Image processing offers unparalleled potential for traffic monitoring and control. For many years engineers have attempted to perfect the art of automatic data abstraction from sequences of video images. This paper outlines a research project undertaken at Napier University by the authors in the field of image processing for automatic traffic analysis. A software-based system implementing TRIP algorithms to count cars and measure vehicle speed has been developed by members of the Transport Engineering Research Unit (TERU) at the University. The TRIP algorithm has been ported to and evaluated on an IBM PC platform with a view to hardware implementation of the pre-processing routines required for vehicle detection. Results show that a software-based traffic counting system is realisable for single-window processing. Because of the high volume of data that must be processed for full frames or multiple lanes, real-time operation is limited; dedicated hardware therefore needs to be designed. The paper outlines a hardware design for the implementation of inter-frame and background differencing, background updating and shadow removal techniques. Preliminary results showing the processing time and counting accuracy for the routines implemented in software are presented, and a real-time hardware pre-processing architecture is described.
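A minimal sketch of the pre-processing steps targeted for hardware (inter-frame differencing, background differencing and background updating) is given below; it is illustrative only, the video file, threshold and learning rate are assumptions, and shadow removal is omitted:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")       # hypothetical video source
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = prev.astype("float32")          # running-average background model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    inter_frame = cv2.absdiff(gray, prev)                         # inter-frame differencing
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))  # background differencing
    cv2.accumulateWeighted(gray, background, 0.02)                # slow background update

    # Combine both cues and threshold to obtain a crude vehicle mask.
    mask = cv2.threshold(cv2.max(inter_frame, bg_diff), 30, 255, cv2.THRESH_BINARY)[1]
    vehicle_pixels = int(np.count_nonzero(mask))

    prev = gray
    print(vehicle_pixels)

cap.release()
```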

Relevance: 30.00%

Abstract:

The number of Open Access (OA) policies that have been adopted by universities, research institutes and research funders has been increasing at a fast pace. The Registry of Open Access Repository Mandates and Policies (ROARMAP) records the existence of 724 OA policies across the world, of which 512 have been adopted by universities and research institutions. The UK is one of the leading countries in terms of OA policy development and implementation, with a total of 85 institutional and an estimated 35 funder OA policies. In order to understand and contextualise how OA policies are developed and how they can be effectively implemented and aligned, this brief looks at two areas. The first section provides an overview of the processes evolving around policy making, policy effectiveness and policy alignment. In particular, it summarises the criteria and elements generally specified in OA policies, points out some of the relevant steps informing the development, monitoring and revision of OA policies, outlines which OA policy elements contribute to policy effectiveness, and highlights the benefits of aligning OA policies. The second section revisits the issues previously discussed within the context of the UK institutional (universities) OA policy landscape.

Relevance: 30.00%

Abstract:

There is a long history of debate around mathematics standards, reform efforts, and accountability. This research identified ways that national expectations and context drive local implementation of mathematics reform efforts and identified the external and internal factors that impact teachers’ acceptance of or resistance to policy implementation at the local level. This research also adds to the body of knowledge about acceptance and resistance to policy implementation efforts. This case study involved the analysis of documents to provide a chronological perspective, assess the current state of the District’s mathematics reform, and determine the District’s readiness to implement the Common Core Curriculum. The school system in question has continued to struggle with meeting the needs of all students in Algebra 1. Therefore, the results of this case study, which include the compilation and analysis of a decade’s worth of data specific to Algebra 1, will be useful to the District’s leaders.

Relevance: 30.00%

Abstract:

Scientific applications rely heavily on floating point data types. Floating point operations are complex and require complicated hardware that is both area and power intensive. The emergence of massively parallel architectures like Rigel creates new challenges and poses new questions with respect to floating point support. The massively parallel aspect of Rigel places great emphasis on area efficient, low power designs. At the same time, Rigel is a general purpose accelerator and must provide high performance for a wide class of applications. This thesis presents an analysis of various floating point unit (FPU) components with respect to Rigel, and attempts to present a candidate design of an FPU that balances performance, area, and power and is suitable for massively parallel architectures like Rigel.
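As background to why floating point operations require complicated hardware (this is illustration only, not part of the Rigel FPU design), a short sketch decoding the IEEE-754 single-precision fields that an FPU datapath must manipulate:

```python
import struct

def decode_float32(x):
    """Split a Python float (as IEEE-754 single precision) into its fields."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    mantissa = bits & 0x7FFFFF       # implicit leading 1 for normal numbers
    return sign, exponent, mantissa

# Even a simple addition must align exponents, add mantissas, renormalise and
# round; the field split below is the starting point for all of that logic.
print(decode_float32(6.5))  # (0, 129, 5242880), i.e. 1.625 * 2**2
```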

Relevance: 30.00%

Abstract:

Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks and the increasing feasibility of attacks on these services represent a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is our use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable, and, in some cases, more efficient than generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these bias methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide 3-5x improvement in bandwidth for bulk transfers and 2.5-9.5x speedup for Web browsing over tunneling without biasing.
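As a rough illustration of the cover traffic idea (not the TrafficMimic design itself; the size and timing distributions are invented for the example), the sketch below schedules packets according to an assumed protocol behaviour model and embeds queued real bytes into each cover packet, padding the remainder:

```python
import os
import random
import time
from collections import deque

# Hypothetical empirical model of the cover protocol: packet sizes (bytes)
# with weights, and exponentially distributed inter-arrival times.
SIZES = [64, 128, 512, 1460]
SIZE_WEIGHTS = [0.35, 0.25, 0.20, 0.20]
MEAN_GAP_S = 0.05

real_queue = deque()  # bytes of real traffic waiting to be tunnelled

def next_cover_packet():
    """Build one cover packet, embedding queued real bytes if available."""
    size = random.choices(SIZES, weights=SIZE_WEIGHTS, k=1)[0]
    payload = bytearray()
    while real_queue and len(payload) < size:
        payload.append(real_queue.popleft())
    payload.extend(os.urandom(size - len(payload)))  # stand-in for encrypted filler
    return bytes(payload)

def run(n_packets=10):
    for _ in range(n_packets):
        time.sleep(random.expovariate(1.0 / MEAN_GAP_S))  # model-driven timing
        pkt = next_cover_packet()
        # A real tunnel would encrypt and send pkt; here we just report its size.
        print(len(pkt))

real_queue.extend(b"GET /index.html HTTP/1.1\r\n\r\n")
run()
```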

Relevance: 30.00%

Abstract:

Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. To optimise the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done with a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialised versions of basic blocks while propagating contextualised typing information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimisation techniques. Three extensions are made to basic block versioning to give it interprocedural optimisation capabilities. A first extension gives it the ability to attach typing information to object properties and global variables. Entry point specialisation then allows typing information to be passed from calling functions to callees. Finally, call continuation specialisation allows the types of the return values of callees to be passed back to callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
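As a rough illustration of the core idea (this is not the thesis's implementation; the toy IR, block names and type tags are hypothetical), the sketch below lazily generates and caches specialised versions of a basic block keyed by the incoming type context, dropping type tests whose outcome is already known in that context:

```python
# Hypothetical toy IR: a block is a list of (op, args) tuples.
BLOCKS = {
    "add_xy": [("is_int", "x"), ("is_int", "y"), ("int_add", ("x", "y"))],
}

versions = {}  # (block name, frozen type context) -> specialised instruction list

def specialise(block_name, type_ctx):
    """Return (and cache) a version of the block specialised for type_ctx."""
    key = (block_name, frozenset(type_ctx.items()))
    if key in versions:
        return versions[key]
    specialised = []
    ctx = dict(type_ctx)
    for op, args in BLOCKS[block_name]:
        if op == "is_int":
            var = args
            if ctx.get(var) == "int":
                continue             # type already known here: test eliminated
            specialised.append((op, var))
            ctx[var] = "int"         # after a passing test, the type is known
        else:
            specialised.append((op, args))
    versions[key] = specialised
    return specialised

# A branch where only y's type is known keeps one test; a branch where both
# types are known yields a test-free version of the same block.
print(specialise("add_xy", {"y": "int"}))
print(specialise("add_xy", {"x": "int", "y": "int"}))
```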

Relevance: 30.00%

Abstract:

Objectives. Recent literature indicates variance in psychosocial treatment preferences for negative symptoms of schizophrenia. Attempts at defining therapeutic aims and outcomes for negative symptoms to date have not included major stakeholder groups. The aim of the present study was to address this gap through qualitative methods. Design. Thematic Analysis was applied to qualitative semi-structured interview data to gather the opinions of people who experience negative symptoms, carers, and healthcare professionals. Participants were recruited from two mental health sites (inpatient/community) to increase generalisability of results. Ten people participated in the research. Methods. Semi-structured interview scripts were designed utilising evidence from the review in Chapter 1 of effective psychosocial intervention components for specific negative symptoms. Interviews were audio recorded and transcribed verbatim. Thematic analysis was employed to analyse data. Results. A common theme across groups was the need for a personalised approach to intervention for negative symptoms. Other themes indicated different opinions in relation to treatment targets and the need for a sensitive and graded approach to all aspects of therapy. This approach needs to be supported across systemic levels of organisation with specific training needs for staff addressed. Conclusions. There is disparity in treatment preferences for negative symptoms across major stakeholders. The findings suggest an individualised approach to intervention of negative symptoms that is consistent with recovery. Implementation barriers and facilitators were identified and discussed. There remains a need to develop a better understanding of treatment preferences for patients.

Relevance: 30.00%

Abstract:

Background: Understanding transcriptional regulation by genome-wide microarray studies can contribute to unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. The existing software systems for microarray data analysis implement the mentioned standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve shortcomings with respect to full MAGE-ML and ontology support and makes use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays, and for the most common synthesized oligo arrays such as Agilent, Affymetrix and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach for automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web-services is advantageous in a distributed client-server environment as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 software design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extensions towards future transcriptomics methods based on high-throughput sequencing approaches which have much higher computational requirements than microarrays.

Relevance: 30.00%

Abstract:

In this paper we present the development and implementation of a content analysis model for observing aspects of the social mission of the public library on Facebook pages and websites. The model is unique and was developed from the literature. Four categories of analysis were designed: Generate social capital and social cohesion, Consolidate democracy and citizenship, Social and digital inclusion, and Fighting illiteracies. The model enabled the collection and analysis of data in a case study of 99 Portuguese public libraries with a Facebook page. With this content analysis model we observed the facets of the social mission and examined the actions with social facets on the Facebook pages and websites of the public libraries. We conclude by discussing, in parallel, the results of observing the libraries' Facebook pages and their websites. From the descriptions of the social mission actions, the most immediate general conclusion is that the 99 public libraries rarely publish actions of a social character on Facebook or on their websites, and the results are not very satisfying. The Portuguese public libraries emphasise mainly actions in the category Generate social capital and social cohesion.
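As a small, hypothetical illustration of applying the four categories (the coded observations and data layout are invented for the example, not the authors' instrument), one could tally coded actions per category and per source:

```python
from collections import Counter

# The four analysis categories from the model.
CATEGORIES = [
    "Generate social capital and social cohesion",
    "Consolidate democracy and citizenship",
    "Social and digital inclusion",
    "Fighting illiteracies",
]

# Hypothetical coded observations: (library, source, category assigned by the coder).
coded_actions = [
    ("Biblioteca Municipal A", "facebook", "Generate social capital and social cohesion"),
    ("Biblioteca Municipal B", "website", "Fighting illiteracies"),
    ("Biblioteca Municipal A", "facebook", "Generate social capital and social cohesion"),
]

def tally(observations, source):
    """Count coded actions per category for one source (facebook or website)."""
    counts = Counter(cat for _, src, cat in observations if src == source)
    return {cat: counts.get(cat, 0) for cat in CATEGORIES}

print(tally(coded_actions, "facebook"))
print(tally(coded_actions, "website"))
```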

Relevance: 30.00%

Abstract:

Aiming to introduce a multiresidue analysis for the trace detection of pesticide residues belonging to the organophosphorus and triazine classes in olive oil samples, a new sample preparation methodology has been attempted, comprising a dual layer of “tailor-made” molecularly imprinted polymers (MIPs) for solid-phase extraction (SPE) of both pesticide classes simultaneously in a single procedure. This work has focused on the implementation of a dual MIP-layer SPE procedure (DL-MISPE) encompassing the use of two MIP layers as specific sorbents. In order to achieve higher recovery rates, the amount of MIP in the layers was optimized, as was the influence of the MIP packaging order. The optimized DL-MISPE approach was used for the preconcentration of spiked organic olive oil samples containing dimethoate and terbuthylazine at concentrations similar to the maximum residue limits, followed by quantification by HPLC. High recovery rates for dimethoate (95%) and terbuthylazine (94%) were achieved with good accuracy and precision. Overall, this work constitutes the first attempt at the development of a dual pesticide residue methodology for the trace analysis of pesticide residues based on molecular imprinting technology. Thus, DL-MISPE constitutes a reliable, robust, and sensitive sample preparation methodology that enables preconcentration of the target pesticides in complex olive oil samples, even at levels similar to the maximum residue limits enforced by the legislation.

Relevance: 30.00%

Abstract:

Wireless sensor networks (WSNs) are the key enablers of the internet of things (IoT) paradigm. Traditionally, sensor networks have been designed to be unlike the internet, motivated by power and device constraints. The IETF 6LoWPAN draft standard changes this, defining how IPv6 packets can be transmitted efficiently over IEEE 802.15.4 radio links. Thanks to 6LoWPAN, low-power, low-cost microcontrollers can be connected to the internet, forming what is known as the wireless embedded internet. Another IETF recommendation, CoAP, allows these devices to communicate interactively over the internet. The integration of such tiny, ubiquitous electronic devices with the internet enables interesting real-time applications. This thesis work evaluates the performance of a stack consisting of CoAP and 6LoWPAN over the IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator, along with the CoAP framework Californium (Cf). Ultimately, the stack is implemented on real hardware using a Raspberry Pi as a border router, with Tmote Sky sensors acting as SLIP radios and as CoAP servers relaying temperature and humidity data. The reliability of the stack was also demonstrated in a scalability analysis conducted on the physical deployment. Interoperability is ensured by connecting the WSN to the global internet using different hardware platforms supported by Contiki, without the specialized gateways commonly found in non-IP-based networks. This work therefore developed and demonstrated a heterogeneous, IP-based wireless sensor network stack and conducted a performance analysis of the stack, both in simulation and on real hardware.
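The host-side CoAP framework in the thesis is Californium; purely as an illustration of how a client might poll one of the sensor resources, a minimal Python client using the aiocoap library is sketched below (the node address and resource path are assumptions):

```python
# Minimal CoAP GET against a hypothetical sensor resource behind the border router.
import asyncio
from aiocoap import Context, Message, GET

async def read_temperature():
    protocol = await Context.create_client_context()
    # Hypothetical global IPv6 address of a Tmote Sky node and resource path.
    request = Message(code=GET, uri="coap://[aaaa::212:7400:13a2:5f54]/sensors/temperature")
    response = await protocol.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_temperature())
```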

Relevance: 30.00%

Abstract:

The main aim of this study was to determine the impact of innovation on productivity in service sector companies, especially those in the hospitality sector, that regard reducing environmental impact as relevant to the innovation process. We used a structural analysis model based on the one developed by Crépon, Duguet, and Mairesse (1998). This model is known as the CDM model (an acronym of the authors’ surnames). Their work follows seminal studies of the relationship between innovation and productivity (see Griliches 1979; Pakes and Griliches 1980). The main advantage of the CDM model is its ability to integrate the innovation process and business productivity from an empirical perspective.
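As an illustration of the CDM logic only (the original model has additional equations and corrections for selectivity and endogeneity, and the variable names and data file here are hypothetical), a two-stage sketch with a probit innovation equation feeding a productivity equation:

```python
# Illustrative two-stage CDM-style estimation (deliberately simplified).
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-level data: innovation dummy, R&D intensity, size,
# environmental-impact orientation, and (log) labour productivity.
df = pd.read_csv("hotel_firms.csv")  # hypothetical file

# Stage 1: probability that a firm innovates.
X1 = sm.add_constant(df[["rd_intensity", "log_size", "env_orientation"]])
probit = sm.Probit(df["innovates"], X1).fit()
df["innov_propensity"] = probit.predict(X1)

# Stage 2: productivity as a function of the predicted innovation propensity.
X2 = sm.add_constant(df[["innov_propensity", "log_size"]])
ols = sm.OLS(df["log_productivity"], X2).fit()
print(ols.summary())
```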