Abstract:
Purpose – The purpose of this paper is to develop an integrated patient-focused analytical framework to improve quality of care in the accident and emergency (A&E) unit of a Maltese hospital. Design/methodology/approach – The study adopts a case study approach. First, a thorough literature review was undertaken to study the various methods of healthcare quality management. Second, a healthcare quality management framework is developed using a combined quality function deployment (QFD) and logical framework approach (LFA). Third, the proposed framework is applied to a Maltese hospital to demonstrate its effectiveness. The proposed framework has six steps, commencing with identifying patients’ requirements and concluding with implementing improvement projects. All the steps have been undertaken with the involvement of the concerned stakeholders in the A&E unit of the hospital. Findings – The major and related problems being faced by the hospital under study were overcrowding at A&E and a shortage of beds, respectively. The combined framework ensures better A&E services and patient flow. QFD identifies and analyses the issues and challenges of A&E, and LFA helps develop project plans for healthcare quality improvement. The important outcomes of implementing the proposed quality improvement programme are fewer hospital admissions, faster patient flow, expert triage and shorter waiting times at the A&E unit. Increased emergency consultant cover and a faster first significant medical encounter were required to start addressing the problems effectively. Overall, the combined QFD and LFA method is effective in addressing quality of care in the A&E unit. Practical implications – The proposed framework can be easily integrated within any healthcare unit, as well as within entire healthcare systems, due to its flexible and user-friendly approach. It could be part of Six Sigma and other quality initiatives. Originality/value – Although QFD has been extensively deployed in healthcare settings to improve quality of care, very little has been researched on combining QFD and LFA in order to identify issues, prioritise them, derive improvement measures and implement improvement projects. Additionally, there is no research on QFD application in A&E. This paper bridges these gaps. Moreover, very little has been written on the Maltese healthcare system; this study therefore contributes a demonstration of the quality of emergency care in Malta.
Abstract:
In this paper, we construct a composite indicator to estimate the potential of four Central and Eastern European countries (the Czech Republic, Hungary, Poland and Slovakia) to benefit from productivity spillovers from foreign direct investment (FDI) in the manufacturing sector. Such transfers of technology are one of the main benefits of FDI for the host country, and should also be one of the main determinants of FDI incentives offered to investing multinationals by governments, but they are difficult to assess ex ante. For our composite index, we use six components to proxy the main channels and determinants of these spillovers. We have tried several weighting and aggregation methods, and we consider our results robust. According to the analysis of our results, between 2003 and 2007 all four countries were able to increase their potential to benefit from such spillovers, although there are large differences between them. The Czech Republic clearly has the most potential to benefit from productivity spillovers, while Poland has the least. The relative positions of Hungary and Slovakia depend to some extent on the exact weighting and aggregation method of the individual components of the index, but the differences are not large. These conclusions have important implications both for the investment strategies of multinationals and for government FDI policies.
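The construction of such a composite index can be illustrated with a minimal sketch, assuming min-max normalisation of the components followed by two common aggregation rules (weighted arithmetic and geometric means). The component names, values, minima/maxima and equal weights below are hypothetical placeholders, not the authors' data or their exact method.

    import numpy as np

    # Hypothetical component scores for one country; the paper's six components are not listed in the abstract.
    components = {"absorptive_capacity": 0.42, "human_capital": 0.55, "fdi_stock": 0.61,
                  "trade_openness": 0.70, "rnd_intensity": 0.35, "infrastructure": 0.58}
    x = np.array(list(components.values()))

    # Hypothetical per-component minima/maxima across the country sample, used for min-max normalisation.
    x_min = np.array([0.10, 0.20, 0.15, 0.30, 0.05, 0.25])
    x_max = np.array([0.80, 0.90, 0.95, 1.00, 0.60, 0.85])
    z = (x - x_min) / (x_max - x_min)

    # Equal weights as one of several possible weighting schemes.
    weights = np.full(len(z), 1.0 / len(z))

    arithmetic_index = float(np.dot(weights, z))     # linear aggregation
    geometric_index = float(np.prod(z ** weights))   # geometric aggregation penalises imbalanced components
    print(f"arithmetic: {arithmetic_index:.3f}, geometric: {geometric_index:.3f}")

Re-ranking the countries under several such weighting and aggregation choices is one way to probe the kind of robustness the authors report.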
Abstract:
This study examined the effects of computer-assisted instruction (CAI) for 1 hour per week over 18 weeks on changes in computational scores and attitudes of developmental mathematics students at schools with predominantly Black enrollment. Comparisons were made between students using CAI with differing software (PLATO, CSR, or both together) and students using traditional instruction (TI) only. The study was conducted in the Dade County Public School System from February through June 1991, at two senior high schools. The dependent variables, the State Student Assessment Test (SSAT) and the School Subjects Attitude Scales (SSAS), measured students' computational scores and attitudes toward mathematics in three categories (interest, usefulness, and difficulty), respectively. Univariate analyses of variance were performed on the least-squares mean differences from pretest to posttest for testing main effects and interactions. A t-test measured significant main effects and interactions. Results were interpreted at the .01 level of significance. Null hypotheses 1, 2, and 3 compared versions of CAI with the control group for changes in mathematical computation scores measured with the SSAT. It could not be concluded that changes in standardized mathematics test scores of students using CAI with differing software for 1 hour per week for 18 class hours combined with TI were significantly higher than changes in test scores for students receiving TI only. Null hypotheses 4, 5, and 6 tested the effects of CAI on attitudes toward mathematics for experimental groups against control groups, measured with the SSAS. Changes in attitudes toward mathematics of students using CAI with differing software for 1 hour per week for 18 class hours combined with TI were not significantly higher than attitude changes for students receiving TI only. Teacher effect on students' computational scores was a more influential variable than CAI. No interaction was found between gender and learning method on standardized mathematics test scores (null hypothesis 7).
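The core comparison described here, pretest-to-posttest gains for a CAI group versus a TI-only control interpreted at the .01 level, can be sketched as below. The scores are fabricated for illustration, and the study's actual analysis used univariate ANOVA on least-squares mean differences rather than this simplified two-sample gain-score test.

    import numpy as np
    from scipy import stats

    # Hypothetical pretest/posttest SSAT computation scores (not the study's data).
    cai_pre  = np.array([12, 15, 11, 14, 13, 16, 10, 12])
    cai_post = np.array([14, 16, 13, 15, 15, 18, 11, 13])
    ti_pre   = np.array([13, 14, 12, 15, 11, 13, 12, 14])
    ti_post  = np.array([14, 15, 13, 16, 12, 14, 13, 15])

    cai_gain = cai_post - cai_pre
    ti_gain  = ti_post - ti_pre

    # One-sided two-sample t-test on the gain scores, judged at the .01 level.
    t, p_two_sided = stats.ttest_ind(cai_gain, ti_gain)
    p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
    print(f"t = {t:.2f}, one-sided p = {p_one_sided:.3f}, reject at .01: {p_one_sided < 0.01}")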
Abstract:
The potential of solid phase microextraction (SPME) in the analysis of explosives is demonstrated. A sensitive, rapid, solventless and inexpensive method for the analysis of explosives and explosive odors from solid and liquid samples has been optimized using SPME followed by HPLC and GC/ECD. SPME involves the extraction of the organic components in debris samples into sorbent-coated silica fibers, which can be transferred directly to the injector of a gas chromatograph. SPME/HPLC requires a special desorption apparatus to elute the extracted analyte onto the column at high pressure. Results for the use of GC/ECD are presented and compared with those obtained by HPLC analysis. The relative effects of controllable variables, including fiber chemistry, adsorption and desorption temperature, extraction time, and desorption time, have been optimized for various high explosives.
Abstract:
Mediation techniques provide interoperability and support integrated query processing among heterogeneous databases. While such techniques help data sharing among different sources, they increase the risk to data security, such as violating access control rules. Successful protection of information by an effective access control mechanism is a basic requirement for interoperation among heterogeneous data sources. This dissertation first identified the challenges a mediation system must meet in order to achieve both interoperability and security in an interconnected and collaborative computing environment: (1) context-awareness, (2) semantic heterogeneity, and (3) multiple security policy specification. Currently, few existing approaches address all three security challenges in mediation systems. This dissertation provides a modeling and architectural solution to the problem of mediation security that addresses the aforementioned security challenges. A context-aware flexible authorization framework was developed in the dissertation to deal with the security challenges faced by mediation systems. The authorization framework consists of two major tasks: specifying security policies and enforcing security policies. First, the security policy specification provides a generic and extensible method to model the security policies with respect to the challenges posed by the mediation system. The security policies in this study are specified by 5-tuples followed by a series of authorization constraints, which are identified based on the relationships among the different security components in the mediation system. Two essential features of mediation systems, i.e., the relationship among authorization components and interoperability among heterogeneous data sources, are the focus of this investigation. Second, this dissertation supports effective access control on mediation systems while providing uniform access to heterogeneous data sources. The dynamic security constraints are handled in the authorization phase instead of the authentication phase, so the maintenance cost of the security specification can be reduced compared with related solutions.
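The abstract does not spell out the fields of the 5-tuple, so the sketch below assumes one plausible composition (subject, object, action, context, sign) plus a separate list of authorization constraints evaluated at authorization time. It is purely illustrative and not the dissertation's specification; all names are hypothetical.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass(frozen=True)
    class PolicyRule:
        subject: str   # role or user requesting access
        obj: str       # mediated resource, e.g. an element of a global schema
        action: str    # e.g. "read", "write"
        context: str   # contextual condition label, e.g. "working-hours"
        sign: str      # "permit" or "deny"

    # Authorization constraints modelled as predicates over the request context.
    Constraint = Callable[[Dict], bool]

    def authorize(rule: PolicyRule, request: Dict, constraints: List[Constraint]) -> bool:
        """Permit the request only if the rule matches, its sign is 'permit',
        and every authorization constraint holds for the request context."""
        matches = (request["subject"] == rule.subject and
                   request["object"] == rule.obj and
                   request["action"] == rule.action and
                   request.get("context") == rule.context)
        return matches and rule.sign == "permit" and all(c(request) for c in constraints)

    # Example: a context-aware constraint restricting access to office hours.
    rule = PolicyRule("physician", "mediated.patient_record", "read", "working-hours", "permit")
    constraints = [lambda req: 8 <= req.get("hour", 0) < 18]
    print(authorize(rule, {"subject": "physician", "object": "mediated.patient_record",
                           "action": "read", "context": "working-hours", "hour": 10}, constraints))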
Abstract:
This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and in carrying out feasibility studies were provided by Beckman-Coulter. Although the data used are flow cytometric in nature, the mathematics are general in their derivation and extend to other types of data as long as their statistical behavior approximates Gaussian distributions. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data are accumulated into a frequency histogram, the data are inherently smoothed in a linear fashion, since an averaging effect takes place as the histogram is generated. This new filtering scheme addresses data that are accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. The method developed here offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
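The dissertation's figure of merit itself is not reproduced in the abstract, but the quantity it is tied to, the percentage of overlap between two Gaussian distributions, can be sketched for the one-dimensional case together with a standardized distance between the means. The parameter values below are arbitrary and the numerical overlap integral is only an illustration, not the dissertation's derivation.

    import numpy as np
    from scipy.stats import norm

    # Two hypothetical 1-D Gaussian populations.
    mu1, sigma1 = 0.0, 1.0
    mu2, sigma2 = 2.0, 1.5

    # Standardized (Fisher-style) distance between the means.
    distance = abs(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2)

    # Percentage overlap: area under the pointwise minimum of the two densities.
    lo = min(mu1, mu2) - 6 * max(sigma1, sigma2)
    hi = max(mu1, mu2) + 6 * max(sigma1, sigma2)
    x = np.linspace(lo, hi, 20001)
    p1, p2 = norm.pdf(x, mu1, sigma1), norm.pdf(x, mu2, sigma2)
    overlap = float(np.sum(np.minimum(p1, p2)) * (x[1] - x[0]))

    print(f"standardized distance = {distance:.3f}, overlap = {100 * overlap:.1f}%")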
Abstract:
A comprehensive investigation of sensitive ecosystems in South Florida, with the main goal of determining the identity, spatial distribution, and sources of both organic biocides and trace elements in different environmental compartments, is reported. This study presents the development and validation of a fractionation and isolation method for twelve polar acidic herbicides commonly applied in the vicinity of the study areas, including 2,4-D, MCPA, dichlorprop, mecoprop and picloram, in surface water. Solid phase extraction (SPE) was used to isolate the analytes from abiotic matrices containing large amounts of dissolved organic material. Atmospheric-pressure ionization (API) with electrospray ionization in negative mode (ESP-) in a quadrupole ion trap mass spectrometer was used to perform the characterization of the herbicides of interest. The application of laser ablation ICP-MS (LA-ICP-MS) methodology to the analysis of soils and sediments is also reported in this study. The analytical performance of the method was evaluated on certified standards and real soil and sediment samples. Residential soils were analyzed to evaluate the feasibility of using this powerful technique as a routine, rapid method to monitor potentially contaminated sites. Forty-eight sediments were also collected from semi-pristine areas in South Florida to conduct a screening of baseline levels of bioavailable elements in support of risk evaluation. The LA-ICP-MS data were used to perform a statistical evaluation of the elemental composition as a tool for environmental forensics. A LA-ICP-MS protocol was also developed and optimized for the elemental analysis of a wide range of elements in polymeric filters containing atmospheric dust. A quantitative strategy based on internal and external standards allowed for a rapid determination of airborne trace elements in filters containing both contemporary African dust and local dust emissions. These distributions were used to assess, qualitatively and quantitatively, differences of composition and to establish provenance and fluxes to protected regional ecosystems such as coral reefs and national parks.
Abstract:
In this study, an Atomic Force Microscopy (AFM) roughness analysis was performed on non-commercial Nitinol alloys with Electropolished (EP) and Magneto-Electropolished (MEP) surface treatments and on commercially available stents, by measuring Root-Mean-Square (RMS) roughness, Average Roughness (Ra), and Surface Area (SA) values at various scan areas on the alloy surfaces, ranging from (800 x 800 nm) to (115 x 115 µm), and from (800 x 800 nm) to (40 x 40 µm) on the commercial stents. Results showed that NiTi-Ta 10 wt% with an EP surface treatment yielded the highest overall roughness, while the NiTi-Cu 10 wt% alloy had the lowest roughness when analyzed over (115 x 115 µm). Scanning Electron Microscopy (SEM) and Energy Dispersive Spectroscopy (EDS) analysis revealed unique surface morphologies for the surface-treated alloys, as well as an aggregation of the ternary elements Cr and Cu at grain boundaries in MEP- and EP-treated alloys and in non-surface-treated alloys. Such surface micro-patterning on ternary Nitinol alloys could increase cellular adhesion and accelerate surface endothelialization of endovascular stents, thus reducing the likelihood of in-stent restenosis, and it provides insight into the hemodynamic flow regimes and corrosion behavior of an implantable device influenced by such surface micro-patterns.
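For reference, the two roughness parameters reported here have standard definitions over an AFM height map, sketched below for a synthetic height array. The scan data and any instrument-specific corrections (plane fit, line flattening) used in the study are not reproduced; the array below is a placeholder.

    import numpy as np

    # Synthetic AFM height map (nm); a real analysis would load the instrument's scan data.
    rng = np.random.default_rng(0)
    z = rng.normal(loc=0.0, scale=2.0, size=(256, 256))

    dz = z - z.mean()                 # heights relative to the mean plane
    Ra = np.mean(np.abs(dz))          # average roughness
    Rq = np.sqrt(np.mean(dz**2))      # root-mean-square (RMS) roughness

    print(f"Ra = {Ra:.2f} nm, RMS (Rq) = {Rq:.2f} nm")

The surface-area (SA) value additionally depends on the lateral pixel spacing and is usually obtained by triangulating the height map, which is omitted here.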
Abstract:
Weakly electric fish produce a dual-function electric signal that makes them ideal models for the study of sensory computation and signal evolution. This signal, the electric organ discharge (EOD), is used for communication and navigation. In some families of gymnotiform electric fish, the EOD is a dynamic signal that increases in amplitude during social interactions. Amplitude increase could facilitate communication by increasing the likelihood of being sensed by others or by impressing prospective mates or rivals. Conversely, by increasing its signal amplitude a fish might increase its sensitivity to objects by lowering its electrolocation detection threshold. To determine how EOD modulations elicited in the social context affect electrolocation, I developed an automated and fast method for measuring electroreception thresholds using a classical conditioning paradigm. This method employs a moving shelter tube, which these fish occupy at rest during the day, paired with an electrical stimulus. A custom-built and programmed robotic system presents the electrical stimulus to the fish, slides the shelter tube requiring them to follow, and records video of their movements. Electric fish of the genus Sternopygus were trained to respond to a resistive stimulus on this apparatus within 2 days. The motion detection algorithm correctly identifies the responses 91% of the time, with a false positive rate of only 4%. This system allows for a large number of trials, decreasing the amount of time needed to determine behavioral electroreception thresholds. This novel method enables the evaluation of the evolutionary interplay between two conflicting sensory forces, social communication and navigation.
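The abstract does not describe the motion detection algorithm itself, so the sketch below shows one common approach, frame differencing within a region of interest around the shelter tube, purely as an illustration of how following responses might be scored from recorded video. The threshold values, frame sizes and region of interest are all assumptions.

    import numpy as np

    def response_detected(prev_frame, frame, roi, diff_threshold=25, pixel_fraction=0.02):
        """Flag motion inside a region of interest by counting pixels whose
        grayscale intensity changed by more than diff_threshold between frames."""
        y0, y1, x0, x1 = roi
        a = prev_frame[y0:y1, x0:x1].astype(np.int16)
        b = frame[y0:y1, x0:x1].astype(np.int16)
        changed = np.abs(b - a) > diff_threshold
        return changed.mean() > pixel_fraction

    # Hypothetical 8-bit grayscale frames; a real pipeline would read them from the video stream.
    prev_frame = np.zeros((480, 640), dtype=np.uint8)
    frame = prev_frame.copy()
    frame[200:240, 300:360] = 180    # simulated fish movement inside the region of interest
    print(response_detected(prev_frame, frame, roi=(180, 260, 280, 380)))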
Abstract:
Inverters play key roles in connecting sustainable energy (SE) sources to local loads and the ac grid. Although there has been a rapid expansion in the use of renewable sources in recent years, fundamental research on the design of inverters that are specialized for use in these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to line ac in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, turn the so-called single-stage boost inverter (SSBI) into a viable competitor to existing SE-based power conversion technologies. A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to achieve satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to the SSBI to develop the state-space-averaged model of the SSBI under stand-alone and grid-connected modes of operation. Then, a small-signal model is derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
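As background for the modelling approach named here, state-space averaging for a converter that alternates between two switched configurations (A_1, B_1) and (A_2, B_2) with duty cycle d, followed by perturbation and linearization about the operating point (X, U, D), takes the standard form below. The SSBI's particular matrices and switching pattern are not given in the abstract, so this is the generic textbook result rather than the dissertation's model.

    \dot{x} = \big[d A_1 + (1-d) A_2\big] x + \big[d B_1 + (1-d) B_2\big] u
    x = X + \hat{x}, \qquad u = U + \hat{u}, \qquad d = D + \hat{d}
    \dot{\hat{x}} = A \hat{x} + B \hat{u} + \big[(A_1 - A_2) X + (B_1 - B_2) U\big] \hat{d}
    A = D A_1 + (1-D) A_2, \qquad B = D B_1 + (1-D) B_2

Neglecting second-order perturbation products yields the small-signal model whose eigenvalues are then examined in the sensitivity analysis mentioned above.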
Abstract:
Elemental analysis can become an important piece of evidence to assist in the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjet inks and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental composition within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profiles of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents, and they provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analysis using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating different sources, as well as the performance of different match criteria (confidence interval (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T2 test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability between analytical measurements, and the number of elements that were measured. The study provides recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best-performing match criteria for both analytical techniques, which can now be applied by forensic glass examiners.
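One of the simpler match criteria listed, overlap of mean ± k·s intervals computed from replicate measurements of the known and questioned fragments, can be sketched as follows. The element list, concentration values and the choice k = 4 are placeholders, not the interlaboratory data, and the exact decision rules evaluated in the study may differ in detail.

    import numpy as np

    def interval_match(known, questioned, k=4.0):
        """Return True if the mean +/- k*s intervals of the two replicate sets overlap."""
        m1, s1 = np.mean(known), np.std(known, ddof=1)
        m2, s2 = np.mean(questioned), np.std(questioned, ddof=1)
        return (m1 - k * s1) <= (m2 + k * s2) and (m2 - k * s2) <= (m1 + k * s1)

    # Hypothetical replicate concentrations (ppm) for two elements in two glass fragments.
    known      = {"Sr": [52.1, 51.8, 52.4], "Zr": [88.0, 87.5, 88.6]}
    questioned = {"Sr": [52.6, 52.0, 52.3], "Zr": [95.2, 94.8, 95.9]}

    # The fragments are associated only if every measured element satisfies the criterion.
    decision = all(interval_match(known[e], questioned[e]) for e in known)
    print("associated" if decision else "excluded")

Widening the interval (larger k) lowers the Type 1 (false exclusion) rate at the cost of a higher Type 2 (false association) rate, which is the trade-off the reported error rates quantify.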
Abstract:
In the article "Menu Analysis: Review and Evaluation" by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: “Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give.” There is more than one way to look at the word menu. The culinary selections decided upon by the head chef or owner of a restaurant, which ultimately define the type of restaurant, are one way. The physical outline of the food, which a patron actually holds in his or her hand, is another. These are the most common meanings of the word menu. The author concentrates primarily on the latter, and uses the act of counting the number of items sold on a menu to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach: how does one qualify and quantify a menu? It seems such a subjective exercise. The author offers methods and outlines for approaching menu analysis from empirical perspectives. “Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not,” says Distinguished Professor Kotschevar. “The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment,” he further offers. The author wants you to know that assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria from which to gauge a particular menu item. In the final analysis, menu evaluation provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar presents at least three different matrix evaluation methods, defined as the Miller method, the Smith and Kasavana method, and the Pavesic method, and offers illustrated examples of each in a table format. These are helpful tools, since trying to explain the theories behind the tables would be difficult at best. Kotschevar also references examples of analysis methods which are not matrix based; the Hayes and Huffman goal value analysis is one such method. The author sees no one method as better than another, and suggests that combining two or more of the methods can be a benefit.
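Of the matrix methods compared, the Smith and Kasavana approach classifies each item by whether its popularity and its contribution margin fall above or below benchmark values. The sketch below uses the commonly cited 70%-of-equal-share popularity benchmark and the sales-weighted average contribution margin; the menu data are made up, and the exact benchmarks used in Kotschevar's tables may differ.

    # Hypothetical menu mix: item -> (units sold, contribution margin per unit in dollars).
    menu = {"Steak": (80, 9.50), "Pasta": (150, 4.00), "Salmon": (40, 8.00), "Burger": (130, 3.50)}

    total_sold = sum(units for units, _ in menu.values())
    avg_cm = sum(units * cm for units, cm in menu.values()) / total_sold   # sales-weighted average margin
    popularity_benchmark = 0.70 * (1.0 / len(menu))                        # 70% of an equal menu-mix share

    labels = {(True, True): "Star", (True, False): "Plowhorse",
              (False, True): "Puzzle", (False, False): "Dog"}

    for item, (units, cm) in menu.items():
        popular = (units / total_sold) >= popularity_benchmark
        profitable = cm >= avg_cm
        print(f"{item}: {labels[(popular, profitable)]}")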
Abstract:
Catering to society's demand for high performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption directly translates to high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases the leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, “M-Oscillations”, to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from the single-core platform to multi-core platforms. We investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption for a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with elaborated discussions of future extensions of our research.
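The energy-estimation problem mentioned here can be illustrated with a simple model in which each interval of a piecewise-constant voltage/frequency schedule contributes dynamic energy proportional to C·V²·f plus a leakage term that grows with temperature. The constants, the linearized leakage model and the schedule below are illustrative assumptions, not the dissertation's method.

    # Piecewise-constant schedule per interval: (duration s, supply voltage V, frequency Hz, core temperature C).
    schedule = [(0.010, 1.0, 1.0e9, 55.0),
                (0.005, 0.8, 0.6e9, 48.0),
                (0.015, 1.1, 1.2e9, 65.0)]

    C_EFF = 1.0e-9                 # effective switched capacitance (F), assumed
    LEAK_A, LEAK_B = 0.05, 0.002   # linearized leakage current model I_leak = A + B*T, assumed

    def schedule_energy(intervals):
        """Sum dynamic and temperature-dependent leakage energy over all intervals."""
        total = 0.0
        for dt, v, f, temp in intervals:
            p_dyn = C_EFF * v**2 * f                 # dynamic power ~ C * V^2 * f
            p_leak = v * (LEAK_A + LEAK_B * temp)    # leakage power grows with temperature
            total += (p_dyn + p_leak) * dt
        return total

    print(f"estimated energy: {schedule_energy(schedule):.4f} J")

Because the leakage term rises with temperature, lowering voltage early in a schedule can reduce both the peak temperature and the total leakage energy, which is the coupling the leakage/temperature-aware analysis exploits.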
Abstract:
This study examines the effect of edible coatings, type of oil used, and cooking method on the fat content of commercially available French fries. In contrast to earlier studies that examined laboratory-prepared French fries, this study assesses commercially available French fries and cooking oils. This study also measured the fat content of oven-baked French fries, comparing the two cooking methods in addition to comparing the oil uptake of the different coatings. The findings of this study were that the type of oil used did have a significant impact on the final oil content of the uncoated and seasoned fries. The fries coated in modified food starch and fried in peanut and soy oils appeared to have higher oil content than those fried in corn oil or baked, but the difference was not statistically significant. Additionally, the fat content of French fries with hydrocolloid coatings that were prepared in corn oil was not significantly different from that of French fries with the same coating that were baked.