907 results for Multifactor performance measurement


Relevance: 30.00%

Abstract:

The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with reversed-phase high-performance liquid chromatography (RP-HPLC). The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide dibimane (SDB). The resultant fluorescent SDB is analyzed by RP-HPLC using fluorescence detection, with a limit of detection of 2 nM for SDB. Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels.
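As a rough illustration of how such a fluorescence assay is quantified, the sketch below back-calculates a sulfide concentration from a linear SDB calibration curve. All standard concentrations and peak areas are hypothetical, not values from the reported method.

```python
import numpy as np

# Hypothetical SDB calibration standards: concentration (nM) vs. HPLC
# fluorescence peak area (arbitrary units). Values are illustrative only.
conc_nM = np.array([2, 10, 50, 100, 500])
peak_area = np.array([4.1, 20.3, 99.8, 201.0, 998.5])

# Fit a linear calibration curve: area = slope * conc + intercept
slope, intercept = np.polyfit(conc_nM, peak_area, 1)

def sulfide_conc(area):
    """Back-calculate sulfide concentration (nM) from an SDB peak area."""
    return (area - intercept) / slope

# A sample with a peak area of 150 a.u. maps to roughly 75 nM sulfide
print(round(sulfide_conc(150.0), 1))
```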

Relevance: 30.00%

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
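The attenuation-based projection measurement underlying CT can be sketched with the Beer-Lambert law: each detector reading encodes a line integral of the attenuation coefficient. The phantom, geometry, and attenuation values below are illustrative only.

```python
import numpy as np

# Toy 2D phantom of linear attenuation coefficients (1/cm):
# a water-like disk with a denser circular insert.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
mu = np.where(x**2 + y**2 < 0.8**2, 0.2, 0.0)  # water disk
mu[(x - 0.3)**2 + y**2 < 0.1**2] = 0.5         # dense insert

dl = 2.0 / n  # ray step length (cm), assuming a 2 x 2 cm field of view

# A parallel-beam projection at 0 degrees: line integrals along columns
proj0 = mu.sum(axis=0) * dl

# Detector intensity follows Beer-Lambert: I = I0 * exp(-integral of mu dl);
# CT reconstructs mu from -ln(I/I0) measured at many angles.
I0 = 1.0
I = I0 * np.exp(-proj0)
recovered = -np.log(I / I0)
print(np.allclose(recovered, proj0))  # True
```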

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus the community's responsibility to optimize radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.
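One way to make the "minimum dose for a target image quality" idea concrete is the familiar approximation that quantum noise scales as the inverse square root of dose. The numbers below are illustrative, not clinical recommendations.

```python
# In CT, quantum noise scales roughly as sigma ∝ 1 / sqrt(dose).
# Given a reference scan, the minimum dose that still meets a target
# noise level can be estimated as follows (illustrative values only).

def min_dose_for_target_noise(ref_dose_mGy, ref_noise_HU, target_noise_HU):
    """Dose at which noise = ref_noise * sqrt(ref_dose / dose) hits the target."""
    return ref_dose_mGy * (ref_noise_HU / target_noise_HU) ** 2

# Reference: a 10 mGy scan with 12 HU noise; target: 15 HU is acceptable
dose = min_dose_for_target_noise(10.0, 12.0, 15.0)
print(round(dose, 2))  # 6.4 mGy, i.e. a 36% dose reduction
```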

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies used the largest number of patient models to date, with a representative range of age, weight percentile, and body mass index (BMI).
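Organ dose coefficients of this kind are commonly modeled in the literature as an exponential function of patient effective diameter. The sketch below assumes hypothetical fit coefficients and is not the thesis's actual model.

```python
import math

# Organ dose coefficients (organ dose normalized by CTDIvol) are often
# modeled as an exponential in patient size:
#     h(d) = a * exp(-b * d)
# where d is the effective diameter. The coefficients below are
# hypothetical, purely for illustration.
a, b = 3.0, 0.04  # hypothetical fit coefficients

def organ_dose(ctdi_vol_mGy, eff_diameter_cm):
    """Estimate organ dose as h(d) * CTDIvol."""
    h = a * math.exp(-b * eff_diameter_cm)
    return h * ctdi_vol_mGy

# Larger patients receive a lower organ dose for the same CTDIvol
small = organ_dose(10.0, 20.0)  # ~13.5 mGy
large = organ_dose(10.0, 35.0)  # ~7.4 mGy
print(small > large)  # True
```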

With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model was validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function was explicitly modeled.
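A caricature of the convolution-based estimation step: the longitudinal dose profile is approximated as the tube-current profile convolved with a scatter kernel, since each rotation deposits dose beyond its own slab. The modulation profile and kernel below are hypothetical, not the thesis's fitted quantities.

```python
import numpy as np

z = np.arange(0, 40.0, 0.5)                  # table positions (cm)
mA = 150 + 100 * np.sin(2 * np.pi * z / 40)  # a TCM-like modulation profile

kz = np.arange(-5.0, 5.5, 0.5)
kernel = np.exp(-np.abs(kz) / 2.0)  # exponential scatter tails (hypothetical)
kernel /= kernel.sum()              # normalize the kernel weights

dose_profile = np.convolve(mA, kernel, mode="same")

# The convolved profile is a local weighted average of the tube current,
# so it is smoother than, and never exceeds, the raw modulation profile.
print(dose_profile.shape == mA.shape)  # True
print(dose_profile.max() <= mA.max())  # True
```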

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
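The extracted dose descriptors relate through simple bookkeeping: DLP is CTDIvol times scan length, and effective dose is often approximated as a region-specific coefficient times DLP. The 0.014 mSv/(mGy·cm) used below is a commonly cited adult-chest value; treat it, and all inputs, as illustrative.

```python
# Standard dose descriptors extractable from a CT dose report:
#   DLP (mGy*cm) = CTDIvol (mGy) * scan length (cm)
#   effective dose (mSv) ~= k * DLP, with k a region-specific coefficient

def dlp(ctdi_vol_mGy, scan_length_cm):
    return ctdi_vol_mGy * scan_length_cm

def effective_dose(dlp_mGy_cm, k_mSv_per_mGy_cm=0.014):
    return k_mSv_per_mGy_cm * dlp_mGy_cm

d = dlp(8.0, 30.0)                  # 240 mGy*cm
print(round(effective_dose(d), 2))  # 3.36 mSv
```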

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
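A minimal, deliberately simplified version of image-domain noise assessment is the standard deviation of HU values in a region assumed to be homogeneous; the thesis's clinical method is more sophisticated (anatomy biases this simple estimate), but the sketch conveys the idea on simulated data.

```python
import numpy as np

# Simulate a uniform region of interest (e.g., a homogeneous phantom
# section) with a known quantum noise level of 12 HU.
rng = np.random.default_rng(0)
roi = 50.0 + rng.normal(0.0, 12.0, size=(32, 32))

# Noise estimate: sample standard deviation of the HU values in the ROI
noise_HU = roi.std(ddof=1)
print(abs(noise_HU - 12.0) < 1.5)  # True: estimate recovers ~12 HU
```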

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
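The image-based noise addition in step (1) is often implemented by injecting zero-mean noise scaled to the desired dose fraction, so that total noise grows as dose falls. The sketch below uses uncorrelated (white) noise and hypothetical values, ignoring the correlated noise texture a full implementation must reproduce.

```python
import numpy as np

# To simulate a scan at a fraction r of the original dose, add noise with
#     sigma_add = sigma_full * sqrt(1/r - 1)
# so that the combined noise becomes sigma_full / sqrt(r).

def simulate_low_dose(image, sigma_full, dose_fraction, rng):
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)

rng = np.random.default_rng(1)
full = 40.0 + rng.normal(0.0, 10.0, size=(256, 256))  # sigma_full = 10 HU
half = simulate_low_dose(full, 10.0, 0.5, rng)

# Half-dose noise should be ~ 10 * sqrt(2) ≈ 14.1 HU
print(round(half.std(ddof=1), 1))
```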

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance: 30.00%

Abstract:

Backscatter communication is an emerging wireless technology that has recently gained increasing attention from both academic and industry circles. The key innovation of the technology is the ability of ultra-low-power devices to utilize nearby existing radio signals to communicate. As there is no need to generate their own energetic radio signal, the devices benefit from a simple design, are very inexpensive and are extremely energy efficient compared with traditional wireless communication. These benefits have made backscatter communication a desirable candidate for distributed wireless sensor network applications with energy constraints.

The backscatter channel presents a unique set of challenges. Unlike conventional one-way communication (in which the information source is also the energy source), the backscatter channel experiences strong self-interference and spread-Doppler clutter that mask the information-bearing (modulated) signal scattered from the device. Both of these sources of interference arise from the scattering of the transmitted signal off of objects, both stationary and moving, in the environment. Additionally, the measurement of the location of the backscatter device is negatively affected by both the clutter and the modulation of the signal return.

This work proposes a channel coding framework for the backscatter channel consisting of a bi-static transmitter/receiver pair and a quasi-cooperative transponder. It proposes to use run-length limited coding to mitigate the background self-interference and spread-Doppler clutter with only a small decrease in communication rate. The proposed method applies to both binary phase-shift keying (BPSK) and quadrature amplitude modulation (QAM) schemes and provides an increase in rate by up to a factor of two compared with previous methods.
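A run-length limited code constrains how many identical symbols may appear in a row, which shapes the modulated spectrum away from low-Doppler clutter at some cost in rate. The checker below illustrates the standard (d, k) constraint (at least d and at most k zeros between consecutive ones); it is not the specific code proposed in this work.

```python
def satisfies_rll(bits, d, k):
    """Check the (d, k) run-length constraint on a binary sequence."""
    run = k  # allow a leading one
    for b in bits:
        if b == 1:
            if run < d:          # too few zeros since the last one
                return False
            run = 0
        else:
            run += 1
            if run > k:          # too many zeros in a row
                return False
    return True

print(satisfies_rll([1, 0, 0, 1, 0, 1], d=1, k=2))  # True
print(satisfies_rll([1, 1, 0, 1], d=1, k=2))        # False: adjacent ones
```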

Additionally, this work analyzes the use of frequency modulation and bi-phase waveform coding for the transmitted (interrogating) waveform for high-precision range estimation of the transponder location. Compared with previous methods, lower (optimal) range sidelobes are achieved. Moreover, since both the transmitted (interrogating) waveform coding and the transponder communication coding result in instantaneous phase modulation of the signal, cross-interference exists between the localization and communication tasks. A phase-discrimination algorithm is proposed to separate the waveform coding from the communication coding upon reception and to achieve localization with signal energy increased by up to 3 dB compared with previously reported results.
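Barker codes are the textbook example of bi-phase waveform coding with optimal (unit-magnitude) autocorrelation sidelobes, which is what makes low range sidelobes achievable. The sketch below is illustrative and does not reproduce this work's waveforms.

```python
import numpy as np

# Barker-13: peak autocorrelation of N = 13, all sidelobes of magnitude <= 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode="full")
peak = acf.max()
sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # drop the main peak

print(peak)      # 13
print(sidelobe)  # 1  -> peak-to-sidelobe ratio of ~22 dB
```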

The joint communication-localization framework also enables a low-complexity receiver design because the same radio is used both for localization and communication.

Simulations comparing the performance of different codes corroborate the theoretical results and illustrate the trade-off between information rate and clutter mitigation, as well as the trade-offs among choices of waveform-channel coding pairs. Experimental results from a brass-board microwave system in an indoor environment are also presented and discussed.

Relevance: 30.00%

Abstract:

Research into the dynamicity of job performance criteria has found evidence suggesting the presence of rank-order changes to job performance scores across time as well as intraindividual trajectories in job performance scores across time. These findings have influenced a large body of research into (a) the dynamicity of validities of individual differences predictors of job performance and (b) the relationship between individual differences predictors of job performance and intraindividual trajectories of job performance. In the present dissertation, I addressed these issues within the context of the Five Factor Model of personality. The Five Factor Model is arranged hierarchically, with five broad higher-order factors subsuming a number of more narrowly tailored personality facets. Research has debated the relative merits of broad versus narrow traits for predicting job performance, but the entire body of research has addressed the issue from a static perspective -- by examining the relative magnitude of validities of global factors versus their facets. While research along these lines has been enlightening, theoretical perspectives suggest that the validities of global factors versus their facets may differ in their stability across time. Thus, research is needed to not only compare the relative magnitude of validities of global factors versus their facets at a single point in time, but also to compare the relative stability of validities of global factors versus their facets across time. Also necessary to advance cumulative knowledge concerning intraindividual performance trajectories is research into broad vs. narrow traits for predicting such trajectories. In the present dissertation, I addressed these issues using a four-year longitudinal design. The results indicated that the validities of global conscientiousness were stable across time, while the validities of conscientiousness facets were more likely to fluctuate. 
However, the validities of emotional stability and extraversion facets were no more likely to fluctuate across time than those of the factors. Finally, while some personality factors and facets predicted performance intercepts (i.e., performance at the first measurement occasion), my results failed to indicate a significant effect of any personality variable on performance growth. Implications for research and practice are discussed.

Relevance: 30.00%

Abstract:

During the 14th expedition of the research vessel "Meteor", from 2 July to 7 August 1968, continuously recording instruments for measuring the CO2 partial pressure of seawater and the atmospheric CO2 content, developed by the Meteorological Institute, University of Frankfurt/M., were operated. During the Faroe expedition, instrumental constants such as relative and absolute accuracy, inertia and solvent power were tested. The procedure for discontinuous analyses of water samples was adapted to shipboard conditions, and correction factors depending on water volume, depth of sampling and water temperature were measured. After computing average values of the continuous records (atmospheric CO2 content, CO2 partial pressure, water temperature), their geographical distribution, diurnal variation and the dependence of the diurnal averages were examined. At four different locations the CO2 partial pressure was measured at various depths. During the voyage from the Faroe Islands to Helgoland, the measured atmospheric CO2 content and CO2 partial pressure were examined for a correlation with geographical latitude.

Relevance: 30.00%

Abstract:

Dissolved CO2 measurements are usually made using a Severinghaus electrode, which is bulky and can suffer from electrical interference. In contrast, optical sensors for CO2, whilst not suffering these problems, are mainly used for making gaseous (not dissolved) CO2 measurements, due to dye leaching and protonation, especially at high ionic strengths (>0.01 M) and acidity (<pH 4). This is usually prevented by coating the sensor with a gas-permeable, but ion-impermeable, membrane (GPM). Herein, we introduce a highly sensitive, colourimetric, plastic film sensor for the measurement of both gaseous and dissolved CO2, in which a pH-sensitive dye, thymol blue (TB), is coated onto particles of hydrophilic silica to create a CO2-sensitive, TB-based pigment, which is then extruded into low-density polyethylene (LDPE) to create a GPM-free, i.e. naked, TB plastic sensor film for gaseous and dissolved CO2 measurements. When used for making dissolved CO2 measurements, the hydrophobic nature of the LDPE renders the film: (i) indifferent to ionic strength, (ii) highly resistant to acid attack and (iii) stable when stored under ambient (dark) conditions for >8 months, with no loss of colour or function. Here, the performance of the TB plastic film is primarily assessed as a dissolved CO2 sensor in highly saline (3.5 wt%) water. The TB film is blue in the absence of CO2 and yellow in its presence, exhibiting a 50% transition in its colour at ca. 0.18% CO2. This new type of CO2 sensor has great potential for monitoring CO2 levels in the hydrosphere, as well as elsewhere, e.g. in food packaging and possibly patient monitoring.
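The film's colour transition can be caricatured with a one-site saturation model anchored at the reported 50% transition point (ca. 0.18% CO2). The functional form is an assumption for illustration, not the paper's calibration.

```python
# Fraction of protonated (yellow) dye modeled as f = c / (c + c50),
# where c50 is the CO2 level giving a 50% colour transition.
# This one-site model is an illustrative assumption.
C50 = 0.18  # % CO2 at the 50% colour transition (reported value)

def protonated_fraction(co2_percent):
    return co2_percent / (co2_percent + C50)

print(round(protonated_fraction(0.18), 2))  # 0.5 at the transition point
print(protonated_fraction(1.0) > 0.8)       # strongly yellow at 1% CO2
```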

Relevance: 30.00%

Abstract:

Innovation is a strategic necessity for the survival of today's organizations. The wide recognition of innovation as a competitive necessity, particularly in dynamic market environments, makes it an evergreen domain for research. This dissertation deals with innovation in small Information Technology (IT) firms in India. The IT industry in India has been a phenomenal success story of the last three decades, and is today facing a crucial phase in its history characterized by the need for fundamental changes in strategies, driven by innovation. This study, while motivated by the dynamics of changing times, importantly addresses the research gap on small-firm innovation in Indian IT. This study addresses three main objectives: (a) the drivers of innovation in small IT firms in India, (b) the impact of innovation on firm performance, and (c) variation in the extent of innovation adoption in small firms. Product and process innovation were identified as the two most contextually relevant types of innovation for small IT firms. The antecedents of innovation were identified as Intellectual Capital, Creative Capability, Top Management Support, Organization Learning Capability, Customer Involvement, External Networking and Employee Involvement. The survey method was adopted for data collection, and the study unit was the firm. Surveys were conducted in 2014 across five South Indian cities. A small firm was defined as one with 10-499 employees. Responses from 205 firms were chosen for analysis. Rigorous statistical analysis was performed to generate meaningful insights. The set of drivers of product innovation (Intellectual Capital, Creative Capability, Top Management Support, Customer Involvement, External Networking, and Employee Involvement) was different from that of process innovation (Creative Capability, Organization Learning Capability, External Networking, and Employee Involvement). Both product and process innovation had a strong impact on firm performance.
It was found that firms that adopted a combination of product innovation and process innovation had the highest levels of firm performance. Product innovation and process innovation fully mediated the relationship between all seven antecedents and firm performance. The results of this study have several important theoretical and practical implications. To the best of the researcher's knowledge, this is the first time that an empirical study of firm-level innovation of this kind has been undertaken in India. A measurement model for product and process innovation was developed, and the drivers of innovation were established statistically. Customer Involvement, External Networking and Employee Involvement are elements of Open Innovation; all three had strong associations with product innovation, and the latter two had strong associations with process innovation. The results showed that the proclivity for Open Innovation is healthy in the Indian context. Practical implications are outlined concerning how firms can organize themselves for innovation, the human talent needed for innovation, and the right culture for innovation and open innovation. While some specific examples of possible future studies have been recommended, the researcher believes that the study provides numerous opportunities to further this line of enquiry.

Relevance: 30.00%

Abstract:

Physical exercise programmes are routinely prescribed in clinical practice to treat impairments and improve activity and participation in daily life, because of their known physiological, health and psychological benefits (RCP, 2009). Progressive resistance exercise is a type of exercise prescribed specifically to improve skeletal muscle strength (Latham et al., 2004). The effectiveness of progressive resistance exercise varies considerably between studies and populations. This thesis focuses on how training parameters influence the delivery of progressive resistance exercise. In order to appropriately evaluate the influence of training parameters, this thesis argues the need to record training performance and the total work completed by participants as prescribed by training protocols. In the first study, participants were taken through a series of protocols differentiated by the intensity and volume of training. Training intensity was defined as a proportion of the mean peak torque achieved during maximal voluntary contractions (MVC) and was set at 80% and 40%, respectively, of the MVC mean peak torque. Training volume was defined as the total external work achieved over the training period. Measures of training performance were developed to accurately report the intensity, repetitions and work completed during the training period. A second study evaluated training performance of the training protocols over repeated sessions. These protocols were then applied to three stroke survivors. Study 1 found that sedentary participants could achieve a differentiated training intensity: participants completing the high- and low-intensity protocols trained at 80% and 40%, respectively, of the MVC mean peak torque. The total work achieved in the high-intensity, low-repetition protocol was lower than the total work achieved in the low-intensity, high-repetition protocol.
With repeated practice, Study 2 found that participants improved their ability to perform the manoeuvres, as shown by reduced variation in the mean training intensity and by achieving the total work specified by the protocol to within a smaller margin of error. When these protocols were applied to three stroke survivors, they were able to achieve the specified training intensity but not the total work expected for the protocol. This is likely due to an inability to achieve a consistent force throughout the contraction. These results demonstrate the value of evaluating training characteristics and support the need to record and report training performance characteristics during progressive resistance exercise, including the total work achieved, in order to elucidate the influence of training parameters on progressive resistance exercise. The lack of accurate training performance reporting may partly explain the inconsistencies between studies on optimal training parameters for progressive resistance exercise.
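The training-dose bookkeeping described above can be sketched as torque times angular displacement summed over repetitions (an isokinetic simplification). All torques, ranges of motion, and repetition counts below are hypothetical.

```python
import math

# Total external work for a protocol: torque * angular displacement * reps.
def work_J(torque_Nm, rom_deg, reps):
    return torque_Nm * math.radians(rom_deg) * reps

mvc_peak = 100.0  # N*m, hypothetical MVC mean peak torque

high = work_J(0.8 * mvc_peak, 90, 10)  # high intensity, low repetition
low = work_J(0.4 * mvc_peak, 90, 25)   # low intensity, high repetition

print(round(high, 1))  # ~1256.6 J
print(low > high)      # True: low-intensity protocol accrues more total work
```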

Relevance: 30.00%

Abstract:

In cognitive tests, animals are often given a choice between two options and obtain a reward if they choose correctly. We investigated whether task format affects subjects' performance in a physical cognition test. In experiment 1, a two-choice memory test, 15 marmosets, Callithrix jacchus, had to remember the location of a food reward over time delays of increasing duration. We predicted that their performance would decline with increasing delay, but this was not found. One possible explanation was that the subjects were not sufficiently motivated to choose correctly when presented with only two options because in each trial they had a 50% chance of being rewarded. In experiment 2, we explored this possibility by testing eight naïve marmosets and seven squirrel monkeys, Saimiri sciureus, with both the traditional two-choice and a new nine-choice version of the memory test that increased the cost of a wrong choice. We found that task format affected the monkeys' performance. When choosing between nine options, both species performed better and their performance declined as delays became longer. Our results suggest that the two-choice format compromises the assessment of physical cognition, at least in memory tests with these New World monkeys, whereas providing more options, which decreases the probability of obtaining a reward when making a random guess, improves both performance and measurement validity of memory. Our findings suggest that two-choice tasks should be used with caution in comparisons within and across species because they are prone to motivational biases.
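The motivational account rests on simple chance arithmetic: under random guessing, the probability of a reward is 1/n for an n-option task, so widening the array raises the cost of a wrong choice.

```python
# Chance reward rates under random guessing in the two task formats
two_choice = 1 / 2   # 50% chance of reward per trial with two options
nine_choice = 1 / 9  # ~11% chance of reward per trial with nine options

# Guessing is rewarded 4.5 times more often in the two-choice format,
# which plausibly weakens the incentive to remember the baited location.
print(round(two_choice / nine_choice, 1))  # 4.5
```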

Relevance: 30.00%

Abstract:

The aim of this thesis was threefold: first, to compare current player tracking technology in a single game of soccer; second, to investigate the running requirements of elite women's soccer, in particular the use and application of athlete tracking devices; and finally, to examine how game style can be quantified and defined. Study One compared four different match analysis systems commonly used in both research and applied settings: video-based time-motion analysis, a semi-automated multiple-camera-based system, and two commercially available Global Positioning System (GPS) based player tracking systems at 1 Hertz (Hz) and 5 Hz respectively. A comparison was made between each of the systems when recording the same game. Total distance covered during the match for the four systems ranged from 10 830 ± 770 m (semi-automated multiple-camera-based system) to 9 510 ± 740 m (video-based time-motion analysis). At running speeds categorised as high-intensity running (>15 km⋅h-1), the semi-automated multiple-camera-based system reported the highest distance of 2 650 ± 530 m, with video-based time-motion analysis reporting the least distance covered, 1 610 ± 370 m. At speeds considered to be sprinting (>20 km⋅h-1), the video-based time-motion analysis reported the highest value (420 ± 170 m) and the 1 Hz GPS units the lowest value (230 ± 160 m). These results demonstrate that there are differences in the determination of absolute distances, and that comparison of results between match analysis systems should be made with caution. Currently, there is no criterion measure for these match analysis methods and as such it was not possible to determine if one system was more accurate than another. Study Two provided an opportunity to apply player-tracking technology (GPS) to measure activity profiles and determine the physical demands of Australian international-level women soccer players.
In four international women's soccer games, data were collected on a total of 15 Australian women soccer players using a 5 Hz GPS-based athlete tracking device. Results indicated that Australian women soccer players covered 9 140 ± 1 030 m during 90 min of play. The total distance covered by Australian women was less than the 10 300 m reportedly covered by female soccer players in the Danish First Division. However, there was no apparent difference between these studies in estimated maximal aerobic capacity, as measured by multi-stage shuttle tests. This study suggests that contextual information, including the "game style" of both the team and the opposition, may influence physical performance in games. Study Three examined the effect the level of the opposition had on the physical output of Australian women soccer players. In total, 58 game files from 5 Hz athlete-tracking devices from 13 international matches were collected. These files were analysed to examine relationships between physical demands, represented by total distance covered, high-intensity running (HIR) and distances covered sprinting, and the level of the opposition, as represented by the Fédération Internationale de Football Association (FIFA) ranking at the time of the match. Higher-ranking opponents elicited less high-speed running and greater low-speed activity compared with playing teams of similar or lower ranking. The results are important to coaches and practitioners in the preparation of players for international competition, and showed that the differing physical demands required were dependent on the level of the opponents. The results also highlighted the need for continued research in the area of integrating contextual information in team sports and demonstrated that soccer can be described as having dynamic and interactive systems. The influence of playing strategy, tactics and, subsequently, the overall game style was highlighted as playing a significant part in the physical demands of the players.
Study Four explored the concept of game style in field sports such as soccer. The aim of this study was to provide an applied framework with suggested metrics for use by coaches, media, practitioners and sports scientists. Based on the findings of Studies 1-3 and a systematic review of the relevant literature, a theoretical framework was developed to better understand how a team's game style could be quantified. Soccer games can be broken into key moments of play, and for each of these moments we categorised metrics that provide insight into success or otherwise, to help quantify and measure different playing styles. This study highlights that, to date, there had been no clear definition of game style in team sports, and as such a novel definition of game style is proposed that can be used by coaches, sport scientists, performance analysts, the media and the general public. Studies 1-3 outline four common methods of measuring the physical demands in soccer: video-based time-motion analysis, GPS at 1 Hz and at 5 Hz, and semi-automated multiple-camera-based systems. As there are no semi-automated multiple-camera-based systems available in Australia, primarily for cost and logistical reasons, GPS is widely accepted for use in team sports in tracking player movements in training and competition environments. This research identified that, although there are some limitations, GPS player-tracking technology may be a valuable tool in assessing running demands in soccer players and subsequently contribute to our understanding of game style. The results of the research undertaken also reinforce the differences between methods used to analyse player movement patterns in field sports such as soccer and demonstrate that the results from different systems, such as GPS-based athlete tracking devices and semi-automated multiple-camera-based systems, cannot be used interchangeably.
Indeed, the magnitude of measurement differences between methods suggests that significant measurement error is evident. This was apparent even when the same technology was used at different sampling rates, such as GPS systems measuring at either 1 Hz or 5 Hz. It was also recognised that other factors influence how team sport athletes behave within an interactive system. These factors included the strength of the opposition and their style of play. In turn, these can impact the physical demands of players, which change from game to game, and even within games, depending on these contextual features. Finally, the concept of game style and how it might be measured was examined. Game style was defined as "the characteristic playing pattern demonstrated by a team during games. It will be regularly repeated in specific situational contexts such that measurement of variables reflecting game style will be relatively stable. Variables of importance are player and ball movements, interaction of players, and will generally involve elements of speed, time and space (location)".
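The speed-zone distances reported throughout these studies reduce to simple binning of sampled speeds. The sketch below assumes 5 Hz samples of instantaneous speed and purely hypothetical values.

```python
# Bin GPS-derived distance into the speed zones used above:
# >15 km/h "high-intensity running" and >20 km/h "sprinting".
HZ = 5  # samples per second (5 Hz device)

def distance_in_zone(speeds_kmh, lower_kmh, upper_kmh=float("inf")):
    """Total distance (m) covered while speed lies within [lower, upper)."""
    metres = 0.0
    for v in speeds_kmh:
        if lower_kmh <= v < upper_kmh:
            metres += (v / 3.6) / HZ  # km/h -> m/s, over one 0.2 s sample
    return metres

speeds = [4, 12, 16, 18, 21, 25, 22, 14, 6, 3]  # hypothetical 2 s of play
print(round(distance_in_zone(speeds, 15), 2))   # HIR distance (>15 km/h)
print(round(distance_in_zone(speeds, 20), 2))   # sprint distance (>20 km/h)
```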

Relevance: 30.00%

Abstract:

Flapping Wing Aerial Vehicles (FWAVs) have the capability to combine the benefits of both fixed-wing vehicles and rotary vehicles. However, flight time is limited by limited on-board energy storage capacity. For most Unmanned Aerial Vehicle (UAV) operators, frequent recharging of the batteries is not ideal due to the lack of nearby electrical outlets. This imposes serious limitations on FWAV flights. The approach taken to extend the flight time of UAVs was to integrate photovoltaic solar cells onto different structures of the vehicle to harvest and use energy from the sun. Integration of solar cells can greatly improve the energy capacity of a UAV; however, this integration does affect the performance of the UAV, and especially of FWAVs, because it alters the vehicle's ability to produce the aerodynamic forces necessary to maintain flight. This PhD dissertation characterizes the effects of solar cell integration on the performance of a FWAV. Robo Raven, a recently developed FWAV, is used as the platform for this work. An additive manufacturing technique was developed to integrate photovoltaic solar cells into the wing and tail structures of the vehicle. An approach to characterizing the effects of solar cell integration on the wings, tail, and body of the UAV is also described. This approach includes measurement of the aerodynamic forces generated by the vehicle and measurement of the wing shape during the flapping cycle using Digital Image Correlation. Various changes to wing, body, and tail design are investigated, and the change in performance for each design is measured. The electrical performance of the solar cells is also characterized. A new multifunctional performance model was formulated that describes how the integration of solar cells influences flight performance. Aerodynamic models were developed to describe the effects of solar cell integration on force production and performance of the FWAV.
Thus, performance changes can be predicted from changes in design. Sensing capabilities of the solar cells were also discovered and correlated to the deformation of the wing. This demonstrated that the solar-cell-integrated wings are capable of: (1) serving as a lightweight and flexible structure that generates aerodynamic forces, (2) harvesting energy to extend operational time and autonomy, and (3) sensing the aerodynamic force associated with wing deformation. Finally, different flexible photovoltaic materials with higher efficiencies are investigated, which enable the multifunctional wings to provide enough solar power to keep the FWAV aloft without batteries, as long as there is enough sunlight to power the vehicle.
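The energy-budget trade-off behind the multifunctional performance model, harvested solar power offsetting the power demand of flapping flight, can be illustrated with a simple endurance estimate (all numbers hypothetical, not the dissertation's measurements):

```python
def flight_time_hours(battery_wh, flight_power_w, solar_power_w):
    """Endurance of a vehicle drawing flight_power_w from a
    battery_wh pack while harvesting solar_power_w from its cells."""
    net_draw_w = flight_power_w - solar_power_w
    if net_draw_w <= 0:
        return float("inf")  # harvested power alone sustains flight
    return battery_wh / net_draw_w

# Hypothetical numbers: 10 Wh battery, 30 W average flapping-flight
# power, 12 W harvested by the wing- and tail-mounted cells.
baseline = flight_time_hours(10.0, 30.0, 0.0)    # ~0.33 h
augmented = flight_time_hours(10.0, 30.0, 12.0)  # ~0.56 h
print(baseline, augmented)
```

The limiting case, where harvested power meets or exceeds the flight power demand, corresponds to the battery-free flight described at the end of the abstract.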

Relevance:

30.00%

Publisher:

Abstract:

An experimental and numerical study of turbulent fire suppression is presented. For this work, a novel and canonical facility has been developed, featuring a buoyant, turbulent, methane- or propane-fueled diffusion flame suppressed via either nitrogen dilution of the oxidizer or application of a fine water mist. Flames are stabilized on a slot burner surrounded by a co-flowing oxidizer, which allows controlled delivery of either suppressant to achieve a range of conditions from complete combustion through partial and total flame quenching. A minimal supply of pure oxygen is optionally applied along the burner to provide a strengthened flame base that resists liftoff extinction and permits the study of substantially weakened turbulent flames. The carefully designed facility features well-characterized inlet and boundary conditions that are especially amenable to numerical simulation. Non-intrusive diagnostics provide detailed measurements of suppression behavior, yielding insight into the governing suppression processes, and aiding the development and validation of advanced suppression models. Diagnostics include oxidizer composition analysis to determine suppression potential, flame imaging to quantify visible flame structure, luminous and radiative emissions measurements to assess sooting propensity and heat losses, and species-based calorimetry to evaluate global heat release and combustion efficiency. The studied flames experience notable suppression effects, including transition in color from bright yellow to dim blue, expansion in flame height and structural intermittency, and reduction in radiative heat emissions. Still, measurements indicate that the combustion efficiency remains close to unity, and only near the extinction limit do the flames experience an abrupt transition from nearly complete combustion to total extinguishment.
Measurements are compared with large eddy simulation results obtained using the Fire Dynamics Simulator, an open-source computational fluid dynamics software package. Comparisons of experimental and simulated results are used to evaluate the performance of available models in predicting fire suppression. Simulations in the present configuration highlight the issue of spurious reignition that is permitted by the classical eddy-dissipation concept for modeling turbulent combustion. To address this issue, simple treatments to prevent spurious reignition are developed and implemented. Simulations incorporating these treatments are shown to produce excellent agreement with the experimentally measured data, including the global combustion efficiency.
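The species-based calorimetry used to evaluate combustion efficiency can be reduced, in its simplest form, to a carbon balance; the sketch below (hypothetical molar flows, omitting the CO and soot corrections a real analysis would include) illustrates the idea:

```python
def combustion_efficiency(fuel_mol, co2_mol, carbons_per_fuel=1):
    """Carbon-balance estimate: fraction of fuel carbon fully
    oxidized to CO2 (CO and soot corrections omitted here)."""
    ideal_co2 = fuel_mol * carbons_per_fuel
    return co2_mol / ideal_co2

# Hypothetical molar flows for a methane flame (1 C per molecule)
# and a propane flame (3 C per molecule):
print(combustion_efficiency(1.00, 0.97))                      # 0.97
print(combustion_efficiency(1.00, 2.91, carbons_per_fuel=3))  # ~0.97
```

An efficiency near unity, as measured here, means nearly all fuel carbon reaches full oxidation; the abrupt drop at the extinction limit shows up as this ratio collapsing toward zero.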

Relevance:

30.00%

Publisher:

Abstract:

The ongoing depletion of fossil fuels and the severe consequences of the greenhouse effect make the development of alternative energy systems crucially important. Hydrogen is, in principle, a promising alternative, releasing nothing but energy and pure water; however, hydrogen storage is complicated, and no completely viable technique has been proposed so far. This work is concerned with the study of one potential alternative to pure hydrogen: ammonia, and more specifically its storage in solids. Ammonia, NH3, can be regarded as a chemical hydrogen carrier with the advantages of strongly reduced flammability and explosiveness compared to hydrogen. Furthermore, the metal ammine salts presented here as promising ammonia stores readily hold up to 50 wt.-% ammonia, giving them a volumetric energy density comparable to natural gas. The model system NiX2–NH3 (X = Cl, Br, I) is studied thoroughly with respect to ammine salt formation, thermal decomposition, air stability, and structural effects. The system CuX2–NH3 (X = Cl, Br) exhibits adverse thermal decomposition behaviour, making it impractical for use as an ammonia store. This system is, however, most interesting from a structural point of view, and some work on the structural behaviour of this system is presented. Finally, close chemical relatives of the metal ammine halides, the metal ammine nitrates, are studied. They exhibit interesting anion arrangements, the elucidation of which is an impressive showcase for the combination of diffraction and spectroscopic information. The characterisation techniques in this thesis range from powder and single crystal diffraction, spectroscopy, computational modelling, and thermal analyses to gravimetric uptake experiments.
Further highlights are the structure solutions and refinements from powder data of (NH4)2[NiCl4(H2O)(NH3)] and Ni(NH3)2(NO3)2, the combination of crystallographic and chemical information for the elucidation of the (NH4)2[NiCl4(H2O)(NH3)] formation reaction and the growth of single crystals under ammonia flow, a technique allowing the first documented successful growth and single crystal diffraction measurement for [Cu(NH3)6]Cl2.
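The gravimetric capacities quoted above follow directly from molar masses; the sketch below computes the ammonia weight fraction of the hexaammine nickel chloride studied here (standard atomic masses; note that the roughly 50 wt.-% figure is reached only by the lightest ammine salts, and the nickel system comes out somewhat lower):

```python
# Standard atomic masses in g/mol
M = {"Ni": 58.69, "Cl": 35.45, "N": 14.01, "H": 1.008}
M_NH3 = M["N"] + 3 * M["H"]

def nh3_weight_fraction(n_nh3, m_host_salt):
    """NH3 weight fraction of an ammine salt host(NH3)n."""
    m_ammonia = n_nh3 * M_NH3
    return m_ammonia / (m_host_salt + m_ammonia)

m_nicl2 = M["Ni"] + 2 * M["Cl"]       # anhydrous NiCl2
wt = nh3_weight_fraction(6, m_nicl2)  # hexaammine [Ni(NH3)6]Cl2
print(f"{100 * wt:.1f} wt.-% NH3")    # → 44.1 wt.-% NH3
```

The same function applies to the bromide and iodide members of the NiX2–NH3 series by substituting the host salt mass.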

Relevance:

30.00%

Publisher:

Abstract:

Microfluidic technologies have great potential to help create automated, cost-effective, portable devices for rapid point of care (POC) diagnostics in diverse patient settings. Unfortunately, commercialization is currently constrained by the required materials, reagents, and instrumentation, as well as by the performance of the detection elements. While most microfluidic studies utilize planar detection elements, this dissertation demonstrates the utility of porous volumetric detection elements to improve detection sensitivity and reduce assay times. Impedemetric immunoassays were performed utilizing silver-enhanced gold nanoparticle immunoconjugates (AuIgGs) and porous polymer monolith or silica bead bed detection elements within a thermoplastic microchannel. For a direct assay with 10 µm spaced electrodes, the detection limit was 0.13 fM AuIgG with a 3 log dynamic range. The same assay was performed with electrode spacings of 15, 40, and 100 µm with no significant difference between configurations. For a sandwich assay, the detection limit was 10 ng/mL with a 4 log dynamic range. While most impedemetric assays rely on expensive high-resolution electrodes to enhance planar sensor performance, this study demonstrates the use of porous volumetric detection elements to achieve similar performance using lower-resolution electrodes and shorter incubation times. Optical immunoassays were performed using porous volumetric capture elements perfused with refractive index matching solutions to limit light scattering and enhance signal. First, fluorescence signal enhancement was demonstrated with a porous polymer monolith within a silica capillary. Next, transmission enhancement of a direct assay was demonstrated by infusing aqueous sucrose solutions through silica bead beds with captured silver-enhanced AuIgGs, yielding a detection limit of 0.1 ng/mL and a 5 log dynamic range.
Finally, ex situ functionalized porous silica monolith segments were integrated into thermoplastic channels for a reflectance-based sandwich assay yielding a detection limit of 1 ng/mL and a 5 log dynamic range. The simple techniques for optical signal enhancement and ex situ element integration enable development of sensitive, multiplexed microfluidic sensors. Collectively, the demonstrated experiments validate the use of porous volumetric detection elements to enhance impedemetric and optical microfluidic assays. The techniques rely on commercial reagents, materials compatible with manufacturing, and measurement instrumentation adaptable to POC diagnostics.
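Detection limits such as those quoted above are conventionally estimated from the spread of blank measurements and the calibration slope; a minimal sketch of the common 3-sigma criterion (hypothetical readings; the dissertation's exact criterion is not stated here):

```python
import statistics

def limit_of_detection(blank_signals, slope):
    """Common 3-sigma criterion: the concentration whose signal
    exceeds the blank mean by 3 standard deviations, using the
    slope of the calibration curve (signal per unit concentration)."""
    return 3 * statistics.stdev(blank_signals) / slope

# Hypothetical blank impedance readings and calibration slope:
lod = limit_of_detection([1.02, 0.98, 1.01, 0.99, 1.00], slope=12.0)
print(lod)
```

A steeper calibration slope or tighter blank distribution lowers the limit of detection, which is how the porous volumetric elements improve on planar sensors here.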

Relevance:

30.00%

Publisher:

Abstract:

Background: Some studies have reported a ceiling effect in the EQ-5D-3L, especially in healthy and/or young individuals. Recently, two further levels have been added to its measurement model (EQ-5D-5L). The purposes of this study were (1) to assess the properties of the EQ-5D-5L in comparison with the standard EQ-5D-3L in a sample of young adults, (2) to foreground the importance of collecting qualitative data to confirm, validate, or refine the EQ-5D questionnaire items, and (3) to raise questions pertaining to the wording of these questionnaire items.
Methods: The data used came from a sample of respondents aged 30 or under (n = 624). They completed both versions of the EQ-5D, which were compared in terms of feasibility, level of inconsistency, and ceiling effect. Agreement between the instruments was assessed using correlation coefficients and Bland-Altman plots. Known-groups validity of the EQ-5D-5L was also assessed using non-parametric tests. The discriminative properties were compared using receiver operating characteristic curves. Finally, four interviews were conducted for retrospective reports to elicit respondents' understanding and perceptions of the format, instructions, items, and responses.
Results: Quantitative results show a ceiling effect reduction of 25.3 % and a high level of agreement between the two indices. Known-groups validity was confirmed for the EQ-5D-5L. Explorative interviews indicated ambiguity and a low degree of certainty in conceptualizing the difference between the moderate and slight levels across three dimensions.
Conclusions: The EQ-5D-5L performed better than the EQ-5D-3L. However, the explorative interviews revealed several limitations in the wording of the EQ-5D questionnaire, and highly context-dependent answers point to a lack of illness experience among young adults.
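The ceiling-effect and Bland-Altman agreement analyses described in the Methods can be sketched in a few lines of Python (hypothetical index values, not the study's data):

```python
import statistics

def ceiling_effect(scores, ceiling=1.0):
    """Proportion of respondents reporting the best possible state."""
    return sum(s == ceiling for s in scores) / len(scores)

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement between two
    paired measurements (e.g. 3L vs 5L index values)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

# Hypothetical paired index values for eight respondents:
three_l = [1.0, 1.0, 1.0, 0.8, 1.0, 1.0, 0.7, 1.0]
five_l  = [1.0, 0.95, 1.0, 0.80, 0.90, 1.0, 0.72, 0.95]
reduction = ceiling_effect(three_l) - ceiling_effect(five_l)
print(f"ceiling effect reduced by {100 * reduction:.1f} percentage points")
# → ceiling effect reduced by 37.5 percentage points
```

The finer 5L response scale moves respondents off the ceiling, which is what the reported 25.3 % reduction captures at the scale of the full sample.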