898 results for sisäinen benchmarking


Relevance: 10.00%

Abstract:

Healthcare in Italy and worldwide is characterized by high and continuously growing needs, which are set against the current shortage of economic resources, forcing the system to consider choices such as reducing or reshaping the public healthcare offering. The idea behind this work, which originated within the Istituto Scientifico Romagnolo per lo Studio e la Cura dei Tumori (IRST) IRCCS, is to approach this problem from the perspective of improving performance rather than cutting services, in the conviction that there is substantial room for improvement. For these reasons, the need was identified to develop a method and a software application for identifying diagnostic-therapeutic care pathways (PDTA), collecting data from the facilities involved, and analyzing costs and outcomes, with the aim of a cost-effectiveness and benchmarking analysis focused on meeting health needs. The thesis describes the requirements gathering and analysis phase, including user profiling and a description of some salient dynamic aspects, the conceptual design phase (Entity/Relationship schema, glossary, and data volumes), the logical design phase, and the prototyping of the user interface. It also reports an estimate of development time obtained with the use case points method. The designed application is undergoing a feasibility assessment at IRST, which has used some of the methodologies described in the thesis to model the breast cancer pathway and to present its first results within a research project in collaboration with the Agenzia Nazionale per i Servizi Sanitari Regionali (Agenas).
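
The development-time estimate mentioned above is based on the use case points method. As a rough illustration of how such an estimate is computed, here is a minimal Python sketch using the commonly cited weights and formulas; the counts, factor values, and the 20 hours-per-UCP productivity rate are illustrative assumptions, not figures from the thesis.

```python
# Minimal sketch of a use case points (UCP) effort estimate; weights and the
# 20 hours/UCP productivity factor are the commonly used defaults, not values
# taken from the thesis itself.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor, efactor, hours_per_ucp=20):
    """actors/use_cases: dicts mapping complexity class -> count."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    uucp = uaw + uucw                      # unadjusted use case points
    tcf = 0.6 + 0.01 * tfactor             # technical complexity factor
    ecf = 1.4 - 0.03 * efactor             # environmental complexity factor
    ucp = uucp * tcf * ecf
    return ucp, ucp * hours_per_ucp        # points and estimated hours

# Example: 4 actor profiles and 12 use cases of mixed complexity (made-up numbers).
ucp, hours = use_case_points(
    actors={"simple": 1, "average": 2, "complex": 1},
    use_cases={"simple": 3, "average": 6, "complex": 3},
    tfactor=30, efactor=18,
)
print(f"UCP = {ucp:.1f}, estimated effort = {hours:.0f} h")
```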

Relevance: 10.00%

Abstract:

Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow the qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the IMS variants HDMSE and UDMSE are particularly well suited for label-free protein quantification. Because of their high complexity, the data acquired in this way place special demands on the analysis software, and quantitative analysis of MSE/HDMSE/UDMSE data has so far been restricted to a few commercial solutions.

In this work, a strategy and a set of new methods for cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The commercial software PLGS is used for the first steps of the data analysis (feature detection, peptide and protein identification). The independent PLGS results of all runs of an experiment are then merged in a relational database and reprocessed with dedicated algorithms (retention time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analyzed with Progenesis QIP, synapter and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both high reproducibility of protein identification and high precision and accuracy of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With the software ISOQuant, a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data was developed. With the hybrid proteome samples and the evaluation metrics, a comprehensive system for evaluating quantitative acquisition and data analysis systems was presented.
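
One of the post-processing steps listed above is the normalization of feature intensities across runs. The following Python sketch shows a generic median-ratio normalization of that kind as an illustrative stand-in under simplified assumptions; it is not the ISOQuant algorithm itself.

```python
import numpy as np

# Generic cross-run intensity normalization for label-free data: shift each
# run so its median log2 deviation from a per-feature reference is zero.
# Illustrative only, not the actual ISOQuant normalization.

def normalize_runs(intensities):
    """intensities: 2D array (features x runs) of raw feature intensities;
    values <= 0 mark features missing in a run."""
    log_i = np.log2(np.where(intensities > 0, intensities, np.nan))
    reference = np.nanmedian(log_i, axis=1, keepdims=True)   # per-feature reference
    shifts = np.nanmedian(log_i - reference, axis=0)          # per-run median offset
    return intensities / (2.0 ** shifts)                      # rescale each run

runs = np.array([[1.0e6, 2.1e6, 0.9e6],
                 [4.0e5, 8.3e5, 3.7e5],
                 [2.2e7, 4.5e7, 2.0e7]])
print(normalize_runs(runs))
```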

Relevance: 10.00%

Abstract:

This is the third paper Kresl and Singh have published on this subject. The first was prepared for an OECD conference and published in 1995; the second was published in Urban Studies in 1999. Hence, in this most recent study they can examine urban competitiveness in the US over a period of three decades. Their methodology is distinctive in that it is statistical rather than subjective, as is the case with studies that use a benchmarking or a structural methodology. Their results can be used by city planners in the design of a strategic-economic plan, and they also capture the major changes in broad regional competitiveness.

Relevance: 10.00%

Abstract:

Spine Tango is currently the only international spine registry. It was developed under the auspices of Eurospine, the Spine Society of Europe, and is hosted at the University of Bern, Switzerland. The HJD Spine Center successfully tested Spine Tango during a 3-month pilot study and has since expanded documentation activities to more surgeons. Workflow integration and dedicated research staff are key factors for such an endeavor. Participation enables benchmarking against national and international peers, as well as outcome research and quality assurance of surgical and non-surgical treatments.

Relevance: 10.00%

Abstract:

Software repositories have been getting a lot of attention from researchers in recent years. In order to analyze software repositories, it is necessary to first extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that will simplify the extraction process. Additionally, the collection can be used for benchmarking. As a first step, we propose an exchange language that makes sharing and reusing the extracted data as simple as possible.
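
To make the idea of an exchange language concrete, here is a hypothetical Python sketch of how extracted commit data could be serialized for sharing; the field names and the JSON encoding are assumptions for illustration, not the actual TA-RE format.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical exchange format for data extracted from a version control and
# problem tracking system; the schema below is illustrative, not TA-RE's.

@dataclass
class Commit:
    revision: str
    author: str
    timestamp: str            # ISO 8601
    message: str
    changed_files: List[str]
    linked_bugs: List[str]    # IDs from the problem tracking system

def export_commits(commits: List[Commit], path: str) -> None:
    """Serialize extracted commits so other teams can reuse them directly."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump([asdict(c) for c in commits], fh, indent=2)

export_commits(
    [Commit("r1042", "alice", "2006-05-17T09:31:00Z",
            "Fix NPE in parser (bug 311)", ["src/Parser.java"], ["311"])],
    "ta_re_export.json",
)
```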

Relevance: 10.00%

Abstract:

Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed with a cumbersome multi-step procedure requiring many user interactions, so automation is needed for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework in which appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. To keep the environment flexible, the MC particle transport is divided into different parts: the source, the beam modifiers and the patient. The source part includes the phase-space source, source models and full MC transport through the treatment head. The beam modifier part consists of one module per beam modifier; for each modifier, one of three full MC transport codes can be selected independently, and either a simple or an exact geometry can be chosen, so that different levels of complexity of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A dedicated Eclipse plug-in, which provides all necessary information via DICOM streams, is used to start the developed MC GUI. The implementation separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for the 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework: apart from applications dealing with the beam modifiers, two patient cases are shown, in which MC-calculated dose distributions are compared with those calculated by a pencil beam algorithm or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies, and further modules can be added, keeping the system highly flexible and efficient.
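
A minimal Python sketch of the chaining idea described above follows: source, beam modifier, and patient modules hand particles over in memory rather than through files. All class and method names are illustrative assumptions, not the interfaces of the actual framework.

```python
from dataclasses import dataclass
from typing import Iterable, List

# Toy illustration of in-memory module chaining: source -> beam modifier ->
# patient dose scoring, with no phase-space files as interfaces.

@dataclass
class Particle:
    energy_mev: float
    x: float
    y: float
    z: float
    weight: float = 1.0

class PhaseSpaceSource:
    def __init__(self, particles: List[Particle]):
        self._particles = particles
    def emit(self) -> Iterable[Particle]:
        yield from self._particles

class SimpleJaw:
    """Beam modifier with a 'simple' geometry: stops particles outside the field."""
    def __init__(self, half_width_cm: float):
        self.half_width_cm = half_width_cm
    def transport(self, particles: Iterable[Particle]) -> Iterable[Particle]:
        for p in particles:
            if abs(p.x) <= self.half_width_cm and abs(p.y) <= self.half_width_cm:
                yield p

class PatientDoseScorer:
    """Stand-in for the patient dose calculation: just accumulates energy."""
    def score(self, particles: Iterable[Particle]) -> float:
        return sum(p.energy_mev * p.weight for p in particles)

source = PhaseSpaceSource([Particle(6.0, 0.5, 0.2, 0.0), Particle(6.0, 12.0, 0.0, 0.0)])
beam = SimpleJaw(half_width_cm=5.0).transport(source.emit())
print("Scored energy [MeV]:", PatientDoseScorer().score(beam))
```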

Relevance: 10.00%

Abstract:

Detailed knowledge of the characteristics of the radiation field shaped by a multileaf collimator (MLC) is essential in intensity modulated radiotherapy (IMRT). A previously developed multiple source model (MSM) for a 6 MV beam was extended to a 15 MV beam and supplemented with an accurate model of an 80-leaf dynamic MLC. Using the supplemented MSM and the MC code GEANT, lateral dose distributions were calculated in a water phantom and a portal water phantom. A field normally used for validating the step-and-shoot technique and a field from a realistic IMRT treatment plan delivered with a dynamic MLC are investigated. To assess possible spectral changes caused by the modulation of beam intensity by an MLC, the energy spectra in five portal planes were calculated for moving slits of different widths. The extension of the MSM to 15 MV was validated by analysing energy fluences, depth doses and dose profiles; in addition, the MC-calculated primary energy spectrum was verified against an energy spectrum reconstructed from transmission measurements. MC-calculated dose profiles using the MSM for the step-and-shoot case and for the dynamic MLC case are in very good agreement with measured film dosimetry data. The investigation of a 13 cm wide field shows an increase in mean photon energy, for the 0.25 cm slit compared to the open beam, of up to 16% for 6 MV and up to 6% for 15 MV. In conclusion, the MSM supplemented with the dynamic MLC has proven to be a powerful tool for investigational and benchmarking purposes, and even for dose calculations in IMRT.
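
The spectral comparison quoted above boils down to computing the mean energy of a fluence spectrum and its relative shift between beam configurations. The short Python sketch below illustrates that calculation with made-up toy spectra, not the calculated 6 MV or 15 MV data.

```python
import numpy as np

# Mean photon energy of a binned fluence spectrum and its relative shift
# between an open beam and a narrow slit; spectra are toy numbers.

def mean_energy(bin_centers_mev, fluence):
    fluence = np.asarray(fluence, dtype=float)
    return float(np.sum(bin_centers_mev * fluence) / np.sum(fluence))

bins = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])   # MeV
open_beam = np.array([40, 30, 15, 8, 5, 2])        # toy fluence per bin
slit_025cm = np.array([30, 28, 17, 11, 9, 5])      # hardened toy spectrum

e_open, e_slit = mean_energy(bins, open_beam), mean_energy(bins, slit_025cm)
print(f"Mean energy open: {e_open:.2f} MeV, slit: {e_slit:.2f} MeV, "
      f"shift: {100 * (e_slit / e_open - 1):+.1f}%")
```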

Relevance: 10.00%

Abstract:

PURPOSE OF REVIEW: Intensive care medicine consumes a high share of healthcare costs, and there is growing pressure to use the scarce resources efficiently. Accordingly, organizational issues and quality management have become an important focus of interest in recent years. Here, we review current concepts of how outcome data can be used to identify areas requiring action. RECENT FINDINGS: Recently established models of outcome assessment reveal wide variability between individual ICUs, both with respect to outcome and to resource use. Such variability implies that there are large differences in patient care processes, not only within the ICU but also in pre-ICU and post-ICU care. Indeed, measures to improve the patient process in the ICU (including care of the critically ill, patient safety, and management of the ICU) have been presented in a number of recently published papers. SUMMARY: Outcome assessment models provide an important framework for benchmarking. They may help the individual ICU to spot appropriate fields of action, plan and initiate quality improvement projects, and monitor the consequences of such activity.

Relevance: 10.00%

Abstract:

In Panama, one of the Environmental Health (EH) Sector's primary goals is to improve the health of rural Panamanians by helping them to adopt behaviors and practices that improve access to and use of sanitation systems. In pursuit of this goal, the EH sector has used participatory development models to improve hygiene and increase access to latrines through volunteer-managed latrine construction projects. Unfortunately, there is little understanding of the long-term sustainability of these interventions after the volunteers have completed their service. With the Peace Corps adapting its Monitoring, Reporting, and Evaluation procedures, it is an appropriate time to evaluate the sustainability of sanitation interventions and to offer recommendations for adapting the EH training program, project management, and evaluation procedures. Recognizing the need for evaluation of past latrine projects, the author performed a post-project assessment of 19 pit latrine projects using participatory analysis methodologies. First, the author reviewed volunteers' perspectives on pit latrine projects in a survey. Then, for comparison, the author surveyed latrine projects using a benchmarking scoring system to rate solid waste management, drainage, latrine siting, latrine condition, and hygiene. The Sanitation WASH matrix created by the author proved an effective tool for evaluating the efficacy of sanitation interventions. Overall, more than 75% of the latrines constructed were in use. However, there were areas where improvements could be made in both latrine construction and health and hygiene. The latrines scored poorly on the indicators related to the privacy structure and seat covers; interestingly, these are the two items least likely to be included in project subsidies. Furthermore, scores for hygiene-related indicators were low, particularly those related to hand washing and cleanliness of the kitchen, indicating potential for improvement in hygiene education. Based on these outcomes, the EH sector should consider including subsidies and standardized designs for privacy structures and seat covers for latrines. In addition, the universal adoption of contracts and/or deposits for project beneficiaries is expected to improve the completion of latrines. To address the low scores on the health and hygiene indicators, the EH sector should adapt volunteer training and standardize health and hygiene intervention procedures. In doing so, the sector should mimic the Community Health Club model, which has shown success in improving health and hygiene indicators, and use a training session plan format similar to those in the Water Committee Seminar manual. Finally, the sector should have an experienced volunteer dedicated to program oversight and post-project monitoring and evaluation.
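
As an illustration of how such a benchmarking score sheet can be aggregated, here is a small Python sketch in the spirit of the Sanitation WASH matrix described above; the indicator names follow the abstract, but the 0-3 scale and equal weighting are assumptions, not the author's actual rubric.

```python
# Toy aggregation of a latrine benchmarking score sheet; scale and weights are
# illustrative assumptions, not the Sanitation WASH matrix itself.

INDICATORS = ["solid waste management", "drainage", "latrine siting",
              "latrine condition", "hygiene"]

def latrine_score(ratings: dict) -> float:
    """ratings: indicator -> score on a 0 (worst) to 3 (best) scale; returns 0-100."""
    missing = set(INDICATORS) - ratings.keys()
    if missing:
        raise ValueError(f"missing indicators: {missing}")
    return 100.0 * sum(ratings[i] for i in INDICATORS) / (3 * len(INDICATORS))

print(latrine_score({"solid waste management": 2, "drainage": 3,
                     "latrine siting": 3, "latrine condition": 1, "hygiene": 1}))
```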

Relevance: 10.00%

Abstract:

Managing the complex, inter-organizational structures and processes, linked by a multitude of different relationships, that underlie large logistics networks is a demanding task for the actors responsible. The following contribution presents a systematic approach intended to support network managers in fulfilling this task by providing a better information basis on available methods and design options. The way this approach works is demonstrated using the extensive toolset of network controlling, which is available for creating transparency in cross-company processes.

Relevance: 10.00%

Abstract:

The subject of this contribution is a method for assessing the costs and performance of container ships as the means of transport for the main leg of intermodal transport chains for ISO containers. The motivation is the continuous growth in container ship size and the infrastructure and transport chain development in the pre- and on-carriage legs that is geared toward it, which cannot be judged free of risk. The presented method makes clear that the success or failure factors of very large container ships are now to be found almost exclusively in the ports and their hinterland connections.

Relevance: 10.00%

Abstract:

Optimizing intra-company logistics processes requires a holistic process representation that takes into account the material flow, the information flow and the resources employed. In this article, several frequently used process modeling methods are compared and evaluated in this respect. Their strengths and weaknesses are summarized in the form of a benchmark, which serves as the basis for a new method developed within the IGF research project 16187 N/1.

Relevance: 10.00%

Abstract:

Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs for controlling the scaling of distributed services by combining data analysis mechanisms with application benchmarking using multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used to control the management of distributed service scaling.
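
A hedged Python sketch of the core idea follows: learn a predictor from benchmark-derived monitoring metrics to an SLA parameter, then turn it into a scale-out rule. The chosen metric, the quadratic fit, and the thresholds are illustrative assumptions rather than the paper's actual models.

```python
import numpy as np

# Fit a response-time predictor from per-instance load (a benchmark-derived
# monitoring metric) and use it to pick the smallest instance count that
# still meets an SLA limit. All numbers are illustrative.

# Benchmark observations: requests/s per service instance vs. measured
# 95th-percentile response time in ms.
req_per_instance = np.array([20, 40, 60, 80, 100, 120], dtype=float)
resp_ms = np.array([110, 130, 165, 210, 280, 380], dtype=float)

coeffs = np.polyfit(req_per_instance, resp_ms, deg=2)   # degree chosen for illustration
predict = np.poly1d(coeffs)

SLA_LIMIT_MS = 250.0

def instances_needed(total_req_per_s: float, max_per_instance: float = 200.0) -> int:
    """Smallest instance count whose predicted response time meets the SLA."""
    for n in range(1, 64):
        load = total_req_per_s / n
        if load <= max_per_instance and predict(load) <= SLA_LIMIT_MS:
            return n
    raise RuntimeError("no feasible scale-out within 64 instances")

print(instances_needed(400.0))   # scale-out decision for an offered load of 400 req/s
```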

Relevance: 10.00%

Abstract:

Paper 1: Pilot study of Swiss firms. Using a fixed effects approach, we investigate whether the presence of specific individuals on Swiss firms' boards affects firm performance and the policy choices the firms make. We find evidence for a substantial impact of these directors' presence on their firms. Moreover, the director effects are correlated across policies and performance measures but uncorrelated with the directors' backgrounds. We find these results interesting but conclude that they should be substantiated on a dataset that is larger and better understood by researchers. Further tests are also required to rule out methodological concerns.

Paper 2: Evidence from the S&P 1,500. We ask whether directors on corporate boards contribute to firm performance as individuals. From the universe of the S&P 1,500 firms since 1996 we track 2,062 directors who serve on multiple boards over extended periods of time. Our initial findings suggest that the presence of these directors is associated with substantial performance shifts (director fixed effects). Closer examination shows that these effects are statistical artifacts, and we conclude that directors are largely fungible. Moreover, we contribute to the discussion of the fixed effects method; in particular, we highlight that the selection of the randomization method is pivotal when generating placebo benchmarks.

Paper 3: Robustness, statistical power, and important directors. This article provides a better understanding of Senn's (2014) findings: the result that individual directors are unrelated to firm performance proves robust across different estimation models and testing strategies. The statistical power of the placebo benchmarking test is evaluated by looking at CEOs. We find that only the stronger tests are able to detect CEO fixed effects; however, these tests are not suitable for analyzing directors. The suitable tests would detect director effects if the interquartile range of the true effects amounted to 3 percentage points of ROA. As Senn (2014) finds no such effects for outside directors in general, we focus on groups of particularly important directors (e.g., COBs, non-busy directors, successful directors). Overall, our evidence suggests that the members of these groups are not individually associated with firm performance either. Thus, we confirm that individual directors are largely fungible; if an individual has an effect on performance, it is of small magnitude.
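
To make the placebo benchmarking logic concrete, the following Python sketch compares the spread of crude director "fixed effects" with a placebo distribution obtained by randomly reassigning directors to firm-years; the data are simulated and the estimator is deliberately simplified relative to the papers' two-way fixed effects models.

```python
import numpy as np

# Simulated firm-year panel: compare the interquartile range of estimated
# director effects with a placebo benchmark built by shuffling the
# director-to-firm-year assignment. Illustrative only.

rng = np.random.default_rng(0)
n_firms, n_years, n_directors = 200, 15, 60

roa = rng.normal(0.05, 0.03, size=(n_firms, n_years))                 # firm-year ROA
director_of = rng.integers(0, n_directors, size=(n_firms, n_years))   # who serves where

def fe_spread(perf, assignment):
    """Interquartile range of mean performance per director (a crude 'fixed effect')."""
    effects = [perf[assignment == d].mean() for d in range(n_directors)]
    q75, q25 = np.percentile(effects, [75, 25])
    return q75 - q25

observed = fe_spread(roa, director_of)
placebos = [fe_spread(roa, rng.permutation(director_of.ravel()).reshape(director_of.shape))
            for _ in range(200)]

print(f"observed IQR of director effects: {observed:.4f}")
print(f"placebo 95th percentile:          {np.percentile(placebos, 95):.4f}")
```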

Relevance: 10.00%

Abstract:

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked among the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvement. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
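
Two ingredients of the benchmark above are the Dice overlap used for scoring and the fusion of several algorithms' segmentations by majority vote. The Python sketch below illustrates both on toy binary masks; BRATS itself scores multiple tumor sub-regions and uses a hierarchical voting scheme.

```python
import numpy as np

# Dice overlap between two binary masks, and a flat (non-hierarchical)
# majority-vote fusion of several label maps. Toy data only.

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks) -> np.ndarray:
    """Voxel is labeled tumor if more than half of the raters/algorithms agree."""
    return (np.mean(np.stack(masks), axis=0) > 0.5).astype(np.uint8)

truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
algos = [np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]]),
         np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]]),
         np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])]

fused = majority_vote(algos)
print("Dice of fused vs. truth:", round(dice(fused, truth), 3))
```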