846 results for Post-Operating Performance
Abstract:
There is a national need to increase the STEM-related workforce. Factors leading toward STEM careers include the number of advanced high school mathematics and science courses students complete. Florida's enrollment patterns in STEM-related Advanced Placement (AP) courses, however, reveal that only a small percentage of students enroll in these classes. Screening tools are therefore needed to find additional students who are academically ready for these courses but have not yet been identified. The purpose of this study was to investigate the extent to which scores from a national standardized test, the Preliminary Scholastic Assessment Test/National Merit Scholarship Qualifying Test (PSAT/NMSQT), in conjunction with and compared to a state-mandated standardized test, the Florida Comprehensive Assessment Test (FCAT), are related to selected AP exam performance in Seminole County Public Schools. An ex post facto correlational study was conducted using 6,189 student records from the 2010–2012 academic years. Multiple regression analyses using simultaneous Full Model testing showed differential moderate to strong relationships between test scores and performance in eight of the nine AP courses examined (Biology, Environmental Science, Chemistry, Physics B, Physics C Electrical, Physics C Mechanical, Statistics, and Calculus AB and BC). For Biology and Environmental Science, for example, the significant unique contribution to overall variance in AP scores came from a linear combination of PSAT Math (M), Critical Reading (CR), and FCAT Reading (R). Moderate relationships for Chemistry involved a linear combination of PSAT M, PSAT Writing (W), and FCAT M; a combination of FCAT M and PSAT M was most significantly associated with Calculus AB performance. These findings have implications for both research and practice. FCAT scores, in conjunction with PSAT scores, can potentially be used for specific STEM-related AP courses as part of a systematic approach to AP course identification and placement. For courses with moderate to strong relationships, validation studies and the development of expectancy tables, which estimate the probability of successful performance on these AP exams, are recommended. The findings also establish a need to examine other related research issues, including, but not limited to, extensive longitudinal studies and analyses of other available or prospective standardized test scores.
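To illustrate the kind of simultaneous full-model regression described in this abstract, the sketch below fits an AP exam score on a linear combination of PSAT and FCAT subscores. The file name and column names are hypothetical placeholders, and the model is a simplified stand-in for the study's actual analysis.

```python
# Minimal sketch (not the study's actual analysis): regress AP Biology exam score
# on PSAT Math, PSAT Critical Reading, and FCAT Reading scores.
# The file name and column names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ap_records.csv")  # hypothetical extract of student records

# Simultaneous (full) model: all predictors entered at once.
model = smf.ols(
    "ap_biology_score ~ psat_math + psat_critical_reading + fcat_reading",
    data=df,
).fit()

print(model.summary())           # coefficients and t-tests for the full model
print("R^2 =", model.rsquared)   # proportion of variance in AP scores explained
```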
Abstract:
This study examined the interaction of age, attitude, and performance within the context of an interactive computer testing experience. Subjects were 13 males and 47 females between the ages of 55 and 82, with a minimum of a high school education. Initial attitudes toward computers, as measured by the Cybernetics Attitude Scale (CAS), demonstrated overall equivalence between these older subjects and previously tested younger subjects. Post-intervention scores on the CAS indicated that attitudes toward computers were unaffected by either a "fun" or a "challenging" computer interaction experience. The differential effects of a computerized vs. a paper-and-pencil presentation format of a 20-item, multiple-choice vocabulary test were examined. Results indicated no significant differences in the performance of subjects in the two conditions, and no interaction effect between attitude and performance. These findings suggest that the attitudes of older adults toward computers do not affect their computerized testing performance, at least for short-term testing of verbal abilities. A further implication is that, under the conditions presented here, older subjects appear to be unaffected by mode of testing. The impact of recent advances in technology on older adults is discussed.
Abstract:
Existing instrumental techniques must be adaptable to the analysis of novel explosives if science is to keep up with the practices of terrorists and criminals. The focus of this work has been the development of analytical techniques for the analysis of two types of novel explosives: ascorbic acid-based propellants, and improvised mixtures of concentrated hydrogen peroxide/fuel. In recent years, the use of these explosives in improvised explosive devices (IEDs) has increased. It is therefore important to develop methods which permit the identification of the nature of the original explosive from post-blast residues. Ascorbic acid-based propellants are low explosives which employ an ascorbic acid fuel source with a nitrate/perchlorate oxidizer. A method which utilized ion chromatography with indirect photometric detection was optimized for the analysis of intact propellants. Post-burn and post-blast residues of these propellants were analyzed. It was determined that the ascorbic acid fuel and nitrate oxidizer could be detected in intact propellants, as well as in the post-burn and post-blast residues. Degradation products of the nitrate and perchlorate oxidizers were also detected. With a quadrupole time-of-flight mass spectrometer (QToFMS), exact mass measurements are possible. When an HPLC instrument is coupled to a QToFMS, the combination of retention time with accurate mass measurements, mass spectral fragmentation information, and isotopic abundance patterns allows for the unequivocal identification of a target analyte. An optimized HPLC-ESI-QToFMS method was applied to the analysis of ascorbic acid-based propellants. Exact mass measurements were collected for the fuel and oxidizer anions and their degradation products. Ascorbic acid was detected in the intact samples and in half of the propellants subjected to open burning; the intact fuel molecule was not detected in any of the post-blast residues. Two methods were optimized for the analysis of trace levels of hydrogen peroxide: HPLC with fluorescence detection (HPLC-FD), and HPLC with electrochemical detection (HPLC-ED). Both techniques were extremely selective for hydrogen peroxide. Both methods were applied to the analysis of post-blast debris from improvised mixtures of concentrated hydrogen peroxide/fuel; hydrogen peroxide was detected on a variety of substrates. Hydrogen peroxide was detected in the post-blast residues of the improvised explosives TATP and HMTD.
Abstract:
Unequal improvements in processor and I/O speeds have made many applications, such as databases and operating systems, increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure-mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors were proposed to address the write problem and interleaved declustering to address the load-balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition, we also study traditional mirroring to provide a common basis for comparison.
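The two drawbacks of plain mirroring can be seen in a few lines of pseudocode-style Python. This is only an illustrative toy (the class, its load model, and routing policy are invented here), not any of the schemes studied in the thesis.

```python
# Toy sketch (illustrative only) of why mirroring doubles write cost and why
# failure-mode load balancing suffers: writes go to both disks, reads go to the
# less-loaded surviving disk. The class and load model are hypothetical.
class MirroredPair:
    def __init__(self):
        self.pending = [0, 0]        # outstanding requests per physical disk
        self.failed = [False, False]

    def write(self, block):
        # Both copies must be updated, so one logical write costs two disk writes.
        for d in range(2):
            if not self.failed[d]:
                self.pending[d] += 1

    def read(self, block):
        # Route the read to the least-loaded surviving disk.
        candidates = [d for d in range(2) if not self.failed[d]]
        target = min(candidates, key=lambda d: self.pending[d])
        self.pending[target] += 1
        return target

pair = MirroredPair()
pair.write("b1")            # one logical write -> two physical writes
print(pair.read("b2"))      # balanced across both disks in normal mode
pair.failed[0] = True
print(pair.read("b3"))      # in failure mode every read hits the surviving disk
```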
Abstract:
Circadian rhythms, patterns that repeat approximately every twenty-four hours, are found in most bodily functions. These biological cycles of between 20 and 28 hours have a profound effect on an individual's mood, level of performance, and physical well-being. Loss of synchrony of these biological rhythms occurs with hospitalization, surgery, and anesthesia. The purpose of this comparative, correlational study was to determine the effects of circadian rhythm disruption on post-surgical recovery. Data were collected during the pre-operative and post-operative periods on the following indices: body temperature, blood pressure, heart rate, urine cortisol level, and locomotor activity. The data were analyzed by cosinor analysis for evidence of circadian rhythmicity and disruptions throughout the six-day study period, which encompassed two days pre-operatively, two days post-operatively, and two days after hospital discharge. The sample consisted of five men and five women who served as their own pre-surgical controls. The surgical procedures were varied. Findings showed evidence of circadian disruptions in all subjects post-operatively, lending support for the hypotheses.
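A single-component cosinor fit of the kind used to test for 24-hour rhythmicity can be sketched as follows. The sampling times and temperature values are hypothetical; the fit estimates the mesor, amplitude, and acrophase of an assumed 24-hour cycle.

```python
# Minimal single-cosinor sketch (illustrative only): fit
#   y(t) = M + A*cos(2*pi*t/24 + phi)
# to body-temperature readings taken at known clock times.
import numpy as np
from scipy.optimize import curve_fit

def cosinor(t, mesor, amplitude, acrophase):
    # t in hours; a 24-hour period is assumed, as in circadian analysis
    return mesor + amplitude * np.cos(2 * np.pi * t / 24.0 + acrophase)

# Hypothetical data: temperature sampled every 2 hours over one day
t = np.arange(0, 24, 2.0)
temp = 36.8 + 0.4 * np.cos(2 * np.pi * t / 24.0 + 2.0) + np.random.normal(0, 0.05, t.size)

params, cov = curve_fit(cosinor, t, temp, p0=[36.8, 0.3, 0.0])
mesor, amplitude, acrophase = params
print(f"mesor={mesor:.2f} C, amplitude={amplitude:.2f} C, acrophase={acrophase:.2f} rad")
```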
Abstract:
This work is the result of master's research developed in the graduate program in music at the Music School of UFRN (Federal University of Rio Grande do Norte), under the supervision of Dr. Andre Luiz Muniz de Oliveira. It aimed to reflect on conducting gestures and their implications for the objective and subjective elements of performance, in connection with a number of conducting tools conceived in accordance with the tradition of historical music. As a tool for gestural analysis we used the Harold Farbermann PatternCube method. We used videos of the conductors Pierre Boulez and Valery Gergiev, both conducting Igor Stravinsky's Rite of Spring. In analyzing the videos, we observed the technical use of the gestural apparatus rather than the use of musical gestures as grounded in Hatten. The research showed us the importance of analytical tools in helping to inform interpretive decisions in conducting performance.
Abstract:
Despite the numerous advantages of membrane filtration technology, intrinsic limitations such as the fouling process constrain its applicability. Controlling the operating conditions is an important tool to mitigate fouling and achieve good levels of efficiency. In this sense, the objective of this study was to investigate the effect of transmembrane pressure and concentrate flow on the performance of ultrafiltration applied to the post-treatment of domestic sewage. The process was evaluated and optimized by varying the pressure (0.5 and 1.5 bar) and the concentrate flow (300 and 600 L/h), using a 2² factorial design, in order to investigate the effects on the permeate flux and on the quality of the effluent generated at each operating condition. We evaluated the following quality indicators for the permeate: pH, electrical conductivity, total suspended solids, turbidity, calcium, and Chemical Oxygen Demand (COD). In all tests, we observed a marked reduction in permeate flux at the early stages, followed by a slow decline that continued until reaching a relatively constant level at around 120 minutes of filtration. Increased pressure resulted in a higher initial permeate flux, but the decrease of the flux with time was greater in tests at higher pressure, indicating a more pronounced fouling process. On the other hand, increasing the concentrate flow resulted in a slower decline in permeate flux over the filtration time. Regarding permeate quality, the transmembrane pressure of 0.5 bar yielded the better results; a two-way repeated-measures ANOVA confirmed a statistically significant effect of pressure on permeate turbidity. The concentrate flow, in turn, showed no significant influence on any of the quality parameters. Thus, we conclude that, from an economic and environmental point of view, it is more interesting to operate the ultrafiltration membrane system with a lower concentrate flow associated with a low transmembrane pressure, since under these conditions the system produces less waste and the permeate presents lower concentrations of the analyzed constituents, especially lower turbidity.
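For a 2² factorial design like the one described, the main and interaction effects can be computed directly from the mean response at each of the four operating conditions. The flux values below are hypothetical and only illustrate the calculation.

```python
# Minimal 2^2 factorial-effects sketch (hypothetical numbers, not the study's data):
# factors are transmembrane pressure (0.5 / 1.5 bar) and concentrate flow (300 / 600 L/h);
# the response could be, e.g., quasi-steady permeate flux. Coded levels: -1 = low, +1 = high.
runs = {
    # (pressure_level, flow_level): mean response
    (-1, -1): 32.0,
    (+1, -1): 45.0,
    (-1, +1): 35.0,
    (+1, +1): 50.0,
}

# Effect of a factor = mean response at its high level minus mean at its low level.
press_effect = (runs[(+1, -1)] + runs[(+1, +1)]) / 2 - (runs[(-1, -1)] + runs[(-1, +1)]) / 2
flow_effect  = (runs[(-1, +1)] + runs[(+1, +1)]) / 2 - (runs[(-1, -1)] + runs[(+1, -1)]) / 2
interaction  = (runs[(-1, -1)] + runs[(+1, +1)]) / 2 - (runs[(+1, -1)] + runs[(-1, +1)]) / 2

print(f"pressure main effect:   {press_effect:+.1f}")
print(f"flow main effect:       {flow_effect:+.1f}")
print(f"pressure x flow effect: {interaction:+.1f}")
```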
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for preventing quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation in the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources – commits and issues – of performance variation in scenarios during software system evolution. The approach defines four phases: (i) preparation – choosing the scenarios and preparing the target releases; (ii) dynamic analysis – determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis – processing and comparing the dynamic analysis results across different releases; and (iv) repository mining – identifying issues and commits associated with the detected performance variation. Empirical studies were carried out to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source-code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA – a web system for academic management; (ii) ArgoUML – a UML modeling tool; and (iii) Netty – a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In that study, 21 releases (seven from each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online questionnaire. Finally, in the last study, a performance regression model was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were recovered from degraded source-code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
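A model of this general shape, predicting whether a commit degrades performance from simple commit properties and evaluated by the area under the ROC curve, can be sketched as below. The feature names, data file, and classifier are hypothetical stand-ins for the thesis's actual model.

```python
# Minimal sketch (illustrative, not the thesis's actual model): classify commits as
# performance-degrading or not from simple commit properties and report ROC AUC.
# Feature names and the CSV file are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

commits = pd.read_csv("mined_commits.csv")   # one row per mined commit
X = commits[["days_before_release", "day_of_week", "lines_changed", "files_touched"]]
y = commits["degrades_performance"]          # 1 = degrading commit, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC = {auc:.2f}")  # 0.5 is random; the thesis's model reached about 0.60
```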
Abstract:
The expression paradigm, introduced into the philosophy of science by Thomas Kuhn for the way a research trend views the subject examined, denotes a case where researchers pursue similar questions by similar methods with similar concepts. The author introduced the expression "system paradigm" in a 1999 article centred on the systems operating in society. This paper takes those theoretical ideas further, based on experience in the post-socialist transformation. The first part compares the socialist and capitalist systems and their main features, establishing that all former socialist countries but North Korea and Cuba have embraced the capitalist system. The second adds a typology of the varieties of capitalism by politico-governmental form, marking three types: democracy, autocracy and dictatorship. Huntington writes of a third wave of democratization, which this study concludes has ceased. Democracy reigns in only 10 per cent of the 47 post-socialist countries, autocracy or dictatorship in the others. The third part applies this conceptual and analytical framework to Hungary, where capitalism prevails, with autocracy as its politico-governmental form. It shows strongly similar features to other capitalist countries and other autocracies. This is compatible with recognizing that some features of less than fundamental importance are specific to Hungary and differ from those elsewhere.
Abstract:
Purpose: The purpose of this paper is to ascertain how today’s international marketers can perform better on the global scene by harnessing spontaneity. Design/methodology/approach: The authors draw on contingency theory to develop a model of the spontaneity – international marketing performance relationship, and identify three potential moderators, namely, strategic planning, centralization, and market dynamism. The authors test the model via structural equation modeling with survey data from 197 UK exporters. Findings: The results indicate that spontaneity is beneficial to exporters in terms of enhancing profit performance. In addition, greater centralization and strategic planning strengthen the positive effects of spontaneity. However, market dynamism mitigates the positive effect of spontaneity on export performance (when customer needs are volatile, spontaneous decisions do not function as well in terms of ensuring success). Practical implications: Learning to be spontaneous when making export decisions appears to result in favorable outcomes for the export function. To harness spontaneity, export managers should look to develop company heuristics (increase centralization and strategic planning). Finally, if operating in dynamic export market environments, the role of spontaneity is weaker, so more conventional decision-making approaches should be adopted. Originality/value: The international marketing environment typically requires decisions to be flexible and fast. In this context, spontaneity could enable accelerated and responsive decision-making, allowing international marketers to realize superior performance. Yet, there is a lack of research on decision-making spontaneity and its potential for international marketing performance enhancement.
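A simplified way to probe the moderation effects described here, short of the authors' full structural equation model, is a regression with interaction terms. The variable names and data file below are hypothetical, and the sketch is only an approximation of the paper's SEM analysis.

```python
# Minimal moderated-regression sketch (a simplification of the paper's SEM):
# export performance regressed on spontaneity and its interactions with the
# three proposed moderators. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("exporter_survey.csv")  # hypothetical survey extract

model = smf.ols(
    "export_profit ~ spontaneity * strategic_planning"
    " + spontaneity * centralization"
    " + spontaneity * market_dynamism",
    data=survey,
).fit()

# Positive spontaneity:strategic_planning and spontaneity:centralization terms and a
# negative spontaneity:market_dynamism term would mirror the reported moderation pattern.
print(model.summary())
```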
Abstract:
Carbon Capture and Storage (CCS) technologies provide a means to significantly reduce carbon emissions from the existing fleet of fossil-fired plants, and hence can facilitate a gradual transition from conventional to more sustainable sources of electric power. This is especially relevant for coal plants, which have a CO2 emission rate roughly two times higher than that of natural gas plants. Of the different kinds of CCS technology available, post-combustion amine-based CCS is the best developed and hence the most suitable for retrofitting an existing coal plant. The high cost of operating CCS could be reduced by enabling flexible operation through amine storage or by allowing partial capture of CO2 during periods of high electricity prices. This flexibility is also found to improve the power plant's ramp capability, enabling it to offset the intermittency of renewable power sources. This thesis proposes a solution to problems associated with two promising technologies for decarbonizing the electric power system: the high cost of the energy penalty of CCS, and the intermittency and non-dispatchability of wind power. It explores the economic and technical feasibility of a hybrid system consisting of a coal plant retrofitted with a post-combustion amine-based CCS system, equipped with the option to perform partial capture or amine storage, and a co-located wind farm. A techno-economic assessment of the performance of the hybrid system is carried out both from the perspective of the stakeholders (utility owners, investors, etc.) and from that of the power system operator.
In order to perform the assessment from the perspective of the facility owners (e.g., electric power utilities, independent power producers), an optimal design and operating strategy of the hybrid system is determined for both the amine storage and partial capture configurations. A linear optimization model is developed to determine the optimal component sizes for the hybrid system and capture rates while meeting constraints on annual average emission targets of CO2, and variability of the combined power output. Results indicate that there are economic benefits of flexible operation relative to conventional CCS, and demonstrate that the hybrid system could operate as an energy storage system: providing an effective pathway for wind power integration as well as a mechanism to mute the variability of intermittent wind power.
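The flavor of the linear optimization used for component sizing can be conveyed with a toy linear program. All costs, coefficients, capacities, and limits below are invented for illustration and are not the thesis's model.

```python
# Toy linear program (illustrative only; all numbers invented) in the spirit of the
# sizing model: choose wind capacity (MW) and amine-storage capacity (MWh-equivalent)
# to minimize annualized cost subject to an emissions cap and an output-variability limit.
import numpy as np
from scipy.optimize import linprog

# x = [wind_MW, storage_MWh]
cost = np.array([1.3, 0.05])           # annualized M$/MW and M$/MWh (hypothetical)

# Emissions cap: 3.0 Mt/yr baseline; each MW of wind avoids 0.003 Mt/yr and each
# MWh of storage enables 0.0005 Mt/yr of extra capture; the cap is 1.8 Mt/yr:
#   3.0 - 0.003*wind - 0.0005*storage <= 1.8
# Variability cap: wind adds 0.02*wind of output variability, storage absorbs 0.01*storage:
#   0.02*wind - 0.01*storage <= 5
A_ub = np.array([[-0.003, -0.0005],
                 [ 0.02,  -0.01  ]])
b_ub = np.array([-1.2, 5.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 500), (0, 2000)])
wind_mw, storage_mwh = res.x
print(f"wind = {wind_mw:.0f} MW, storage = {storage_mwh:.0f} MWh, cost = {res.fun:.1f} M$/yr")
```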
In order to assess the performance of the hybrid system from the perspective of the system operator, a modified Unit Commitment/Economic Dispatch model is built to represent the techno-economic aspects of operating the hybrid system within a power grid. The hybrid system is found to be effective in helping the power system meet an average CO2 emissions limit equivalent to the CO2 emission rate of a state-of-the-art natural gas plant, and in reducing power system operation costs as well as the number and magnitude of energy and reserve scarcity events.
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the community's responsibility to optimize the radiation dose of CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
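The convolution-based element of this prediction can be illustrated with a short sketch: the longitudinal dose field is approximated by convolving the tube-current-modulation profile with a spread kernel and applying a size-dependent coefficient over the organ's extent. The kernel, modulation profile, and coefficient below are hypothetical placeholders, not the thesis's validated values.

```python
# Minimal sketch of a convolution-based dose-field estimate under tube current
# modulation (TCM). The modulation profile, spread kernel, and conversion
# coefficient are hypothetical, for illustration only.
import numpy as np

z = np.arange(0, 40, 1.0)                                # table positions along the scan (cm)
tcm_profile = 200 + 100 * np.sin(2 * np.pi * z / 40.0)   # tube current (mA) vs. z

# Normalized kernel describing how dose deposited at one position spreads to its
# neighbors (primary beam plus scatter tails).
kernel = np.exp(-np.abs(np.arange(-5, 6)) / 2.0)
kernel /= kernel.sum()

dose_field = np.convolve(tcm_profile, kernel, mode="same")  # relative dose vs. z

# Organ dose ~ size-dependent coefficient x mean dose field over the organ's extent.
organ_slice = (z >= 15) & (z <= 25)          # hypothetical organ location
organ_dose_coeff = 0.012                     # mGy per mA-equivalent (hypothetical)
organ_dose = organ_dose_coeff * dose_field[organ_slice].mean()
print(f"estimated organ dose approx. {organ_dose:.2f} mGy (illustrative)")
```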
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on a commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
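The image-based noise addition in step (1) can be sketched as follows: to emulate a scan at a fraction of the original dose, zero-mean noise is added so that total image noise scales roughly as 1/sqrt(dose fraction). This stationary Gaussian model is a simplified stand-in for the thesis's technique, which accounts for spatially varying, correlated CT noise.

```python
# Simplified sketch of image-based noise addition (illustrative only): emulate a
# reduced-dose CT image by injecting zero-mean Gaussian noise so that total image
# noise scales approximately as 1/sqrt(dose fraction).
import numpy as np

def simulate_low_dose(image_hu, sigma_full_dose, dose_fraction, rng=None):
    """image_hu: 2-D array in HU; sigma_full_dose: noise std at full dose (HU)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_target = sigma_full_dose / np.sqrt(dose_fraction)          # noise at reduced dose
    sigma_added = np.sqrt(max(sigma_target**2 - sigma_full_dose**2, 0.0))
    return image_hu + rng.normal(0.0, sigma_added, size=image_hu.shape)

# Hypothetical usage: emulate a 50%-dose image from a full-dose slice.
full_dose_slice = np.random.normal(40.0, 12.0, size=(512, 512))      # placeholder image
half_dose_slice = simulate_low_dose(full_dose_slice, sigma_full_dose=12.0, dose_fraction=0.5)
```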
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
The purpose of this study was to assess the effect of performance feedback on Athletic Trainers' (ATs) perceived knowledge (PK) and likelihood to pursue continuing education (CE). The investigation was grounded in the theories of "the definition of the situation" (Thomas & Thomas, 1928) and the "illusion of knowing" (Glenberg, Wilkinson, & Epstein, 1982), which suggest that PK drives behavior. This investigation measured the degree to which knowledge gap predicted CE-seeking behavior by providing performance feedback designed to change PK. A pre-test post-test control-group design was used to measure PK and likelihood to pursue CE before and after assessing actual knowledge. ATs (n = 103) were randomly sampled and assigned to two groups, with and without performance feedback. Two independent-samples t-tests were used to compare groups on the difference scores of the dependent variables. Likelihood to pursue CE was predicted by three variables using multiple linear regression: perceived knowledge, pre-test likelihood to pursue CE, and knowledge gap. There was a 68.4% significant difference (t(101) = 2.72, p = 0.01, ES = 0.45) between groups in the change scores for likelihood to pursue CE as a result of the performance feedback (experimental group: 13.7% increase; control group: 4.3% increase). The strongest relationship among the dependent variables was between the pre-test and post-test measures of likelihood to pursue CE (F(2,102) = 56.80, p < 0.01, r = 0.73, R² = 0.53). The pre- and post-test predictive relationship was enhanced when group was included in the model. In this model [Y_CEpost = 0.76·X_CEpre − 0.34·X_group + 2.24 + E], group accounted for a significant amount of unique variance in predicting CE while the pre-test likelihood to pursue CE variable was held constant (F(3,102) = 40.28, p < 0.01, r = 0.74, R² = 0.55). Pre-test knowledge gap, regardless of group allocation, was a linear predictor of the likelihood to pursue CE (F(1,102) = 10.90, p = .01, r = .31, R² = .10). In this investigation, performance feedback significantly increased participants' likelihood to pursue CE. Pre-test knowledge gap was a significant predictor of likelihood to pursue CE regardless of whether performance feedback was provided. ATs may have self-assessed and engaged in internal feedback as a result of their test-taking experience. These findings indicate that feedback, both internal and external, may be necessary to trigger CE-seeking behavior.
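Using the reported model, a point prediction for the post-test likelihood can be computed directly from the equation above (Y_CEpost = 0.76·X_CEpre − 0.34·X_group + 2.24, with the error term omitted). The group coding and the example input below are assumptions for illustration; the abstract does not state how group was coded.

```python
# Point prediction from the reported regression model
#   Y_CEpost = 0.76 * X_CEpre - 0.34 * X_group + 2.24 + E
# (error term E omitted). The 0/1 group coding and the example input are
# assumptions; the abstract does not specify them.
def predict_ce_post(ce_pre: float, group: int) -> float:
    return 0.76 * ce_pre - 0.34 * group + 2.24

# Hypothetical example: pre-test likelihood of 5.0 on the study's scale.
print(predict_ce_post(5.0, group=0))  # 6.04
print(predict_ce_post(5.0, group=1))  # 5.70
```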
Abstract:
This study looks at the impact of the recent financial crisis on the short-term performance of European acquisitions. We use institutional theory and transaction cost economic theory to study whether bidders derive lower or higher returns from acquisitions announced after 2008. We investigate shareholders’ stock price reaction to 2245 deals which occurred during 2004–12 across 22 European Union countries. Our results from both univariate and multivariate analysis show that the deals announced in the post-crisis period, corresponding to the period of economic recession, generate higher returns to shareholders as compared to acquisitions announced in the pre-crisis period. We also test the relevance of the Economic and Monetary Union (EMU), that is, the Eurozone, to this value accrual during the recessionary period. We observe that non-EMU transactions obtain significantly higher gains vis-à-vis EMU transactions in the post-crisis years. Overall, announcement returns of European acquisitions have been affected by the financial crisis and the global recession; and companies that target countries with different currency regimes are likely to generate better returns from their acquisitions.
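A standard way to measure the short-window stock price reaction used in this kind of analysis is the market-model cumulative abnormal return. The sketch below uses hypothetical return series and an assumed textbook specification; the paper's exact benchmark and windows are not given in the abstract.

```python
# Market-model cumulative abnormal return (CAR) sketch with hypothetical data.
# The estimation/event windows and the market-model benchmark are assumptions.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0003, 0.01, 260)                       # daily market returns
stock = 0.0001 + 1.1 * market + rng.normal(0, 0.012, 260)    # hypothetical bidder returns

est, event = slice(0, 250), slice(250, 255)                  # estimation and event windows

# Fit the market model r_i = alpha + beta * r_m on the estimation window.
beta, alpha = np.polyfit(market[est], stock[est], 1)

abnormal = stock[event] - (alpha + beta * market[event])     # event-window abnormal returns
car = abnormal.sum()
print(f"alpha={alpha:.5f}, beta={beta:.2f}, CAR over event window = {car:.4f}")
```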