Abstract:
The original solution to the high failure rate of software development projects was the imposition of an engineering approach to software development, with processes aimed at providing a repeatable structure to maintain consistency in the 'production process'. Despite these attempts at addressing the crisis in software development, others have argued that the rigid processes of an engineering approach did not provide the solution. The Agile approach to software development strives to change how software is developed. It does this primarily by relying on empowered teams of developers who are trusted to manage the necessary tasks, and who accept that change is a necessary part of a development project. The use of, and interest in, Agile methods in software development projects has expanded greatly, yet this has been predominantly practitioner driven. There is a paucity of scientific research on Agile methods and how they are adopted and managed. This study aims to address this paucity by examining the adoption of Agile through a theoretical lens. The lens used in this research is that of double loop learning theory. The behaviours required in an Agile team are the same behaviours required in double loop learning; therefore, a transition to double loop learning is required for a successful Agile adoption. The theory of triple loop learning highlights that power factors (or power mechanisms, in this research) can inhibit the attainment of double loop learning. This study identifies the negative behaviours (potential power mechanisms) that can inhibit the double loop learning inherent in an Agile adoption, determines how the Agile processes and behaviours can create these power mechanisms, and examines how these power mechanisms impact on double loop learning and the Agile adoption. This is a critical realist study, which acknowledges that the real world is a complex one, hierarchically structured into layers. An a priori framework is created to represent these layers, which are categorised as: the Agile context, the power mechanisms, and double loop learning. The aim of the framework is to explain how the Agile processes and behaviours, through the teams of developers and project managers, can ultimately impact on the double loop learning behaviours required in an Agile adoption. Four case studies provide further refinement to the framework, with changes required due to observations that were often different from what the existing literature would have predicted. The study concludes by explaining how the teams of developers, the individual developers, and the project managers, working with the Agile processes and required behaviours, can inhibit the double loop learning required in an Agile adoption. A solution is then proposed to mitigate these negative impacts. Additionally, two new research processes are introduced to add to the Information Systems research toolkit.
Abstract:
The outcomes of both (i) radiation therapy and (ii) preclinical small animal radiobiology studies depend on the delivery of a known quantity of radiation to a specific and intentional location. Adverse effects can result from these procedures if the dose to the target is too high or too low, and can also result from an incorrect spatial distribution, in which nearby normal healthy tissue is undesirably damaged by poor radiation delivery techniques. Thus, in mice and humans alike, the spatial dose distributions from radiation sources should be well characterized in terms of the absolute dose quantity, and with pin-point accuracy. When dealing with the steep spatial dose gradients consequential to either (i) high dose rate (HDR) brachytherapy or (ii) the small organs and tissue inhomogeneities of mice, obtaining accurate and highly precise dose results can be very challenging, given that commercially available radiation detection tools, such as ion chambers, are often too large for in-vivo use.
In this dissertation two tools are developed and applied for both clinical and preclinical radiation measurement. The first tool is a novel radiation detector for acquiring physical measurements, fabricated from an inorganic nano-crystalline scintillator fixed on an optical fiber terminus. This dosimeter allows the measurement of point doses at sub-millimeter resolution, and can be placed in-vivo in humans and small animals. Real-time data are displayed to the user to provide instant quality assurance and dose-rate information. The second tool utilizes an open source Monte Carlo particle transport code, and was applied in small animal dosimetry studies to calculate organ doses and recommend new techniques of dose prescription in mice, as well as to characterize dose to the murine bone marrow compartment with micron-scale resolution.
Hardware design changes reduced the overall fiber diameter of the nano-crystalline scintillator based fiber optic detector (NanoFOD) system to <0.9 mm. The lower limit of device sensitivity was found to be approximately 0.05 cGy/s. The detector was demonstrated to perform quality assurance of clinical 192Ir HDR brachytherapy procedures, providing dose measurements comparable to thermoluminescent dosimeters and accuracy within 20% of the treatment planning software (TPS) for the 27 treatments conducted, with an interquartile range of the measured-to-TPS dose ratio of 0.08 (0.94-1.02). After removing contaminant signals (Cerenkov light and diode background), calibration of the detector enabled accurate dose measurements for vaginal applicator brachytherapy procedures. For 192Ir use, the energy response changed by a factor of 2.25 over source-to-detector distances (SDDs) of 3 to 9 cm; a cap of 0.2 mm thick silver reduced this energy dependence to a factor of 1.25 over the same SDD range, at the cost of reducing overall sensitivity by 33%.
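As a rough illustration of the signal-processing step described above (contaminant-signal removal followed by calibration), the following minimal Python sketch uses hypothetical names and units; it is not the dissertation's actual code.

def nanofod_dose(raw_signal, cerenkov_signal, diode_background,
                 cal_cgy_per_unit, energy_correction=1.0):
    """Convert NanoFOD light output to dose (illustrative only).

    Contaminant signals (Cerenkov light generated in the fiber and the
    photodiode dark background) are subtracted before a calibration
    factor, determined against a reference dosimeter, is applied. The
    optional energy-correction factor stands in for the silver-cap
    response flattening described above."""
    net = raw_signal - cerenkov_signal - diode_background
    return net * cal_cgy_per_unit * energy_correction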
For preclinical measurements, dose accuracy of the NanoFOD was within 1.3% of MOSFET-measured dose values in a cylindrical mouse phantom at 225 kV for x-ray irradiation at angles of 0, 90, 180, and 270°. The NanoFOD exhibited small changes in angular sensitivity, with a coefficient of variation (COV) of 3.6% at 120 kV and 1% at 225 kV. When the NanoFOD was placed alongside a MOSFET in the liver of a sacrificed mouse and treatment was delivered at 225 kV with a 0.3 mm Cu filter, the dose difference was only 1.09% with the 4 × 4 cm collimator, and -0.03% with no collimation. Additionally, the NanoFOD utilized a scintillator of 11 µm thickness to measure small x-ray fields for microbeam radiation therapy (MRT) applications, and achieved 2.7% dose accuracy at the microbeam peak in comparison to radiochromic film. Modest differences in the measured full-width at half maximum lateral dimension of the MRT beam were observed between the NanoFOD (420 µm) and radiochromic film (320 µm), but these differences are explained mostly as an artifact of the geometry used and volumetric effects in the scintillator material. Characterization of the energy dependence of the yttrium-oxide based scintillator material was performed in the range of 40-320 kV (2 mm Al filtration), and the maximum device sensitivity was achieved at 100 kV. Tissue maximum ratio measurements were carried out on a small animal x-ray irradiator system at 320 kV and demonstrated an average difference of 0.9% compared to a MOSFET dosimeter in the range of 2.5 to 33 cm depth in tissue equivalent plastic blocks. Irradiation of the NanoFOD fiber and scintillator material on a 137Cs gamma irradiator to 1600 Gy did not produce any measurable change in light output, suggesting that the NanoFOD system may be re-used without the need for replacement or recalibration over its lifetime.
For small animal irradiator systems, researchers deliver a given dose to a target organ by controlling the exposure time. Currently, this exposure time is calculated by dividing the total dose to be delivered by a single provided dose rate value, a method that is independent of the target organ. Studies conducted here used Monte Carlo particle transport codes to justify a new method of dose prescription in mice that considers organ-specific doses. Monte Carlo simulations were performed in the Geant4 Application for Tomographic Emission (GATE) toolkit using a MOBY mouse whole-body phantom. The non-homogeneous phantom comprised 256 × 256 × 800 voxels of size 0.145 × 0.145 × 0.145 mm³. Differences of up to 20-30% in dose to soft-tissue target organs were demonstrated, and methods for alleviating these errors during whole-body irradiation of mice were suggested, utilizing organ-specific and x-ray tube filter-specific dose rates for all irradiations.
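A minimal sketch of the proposed prescription change, with hypothetical dose-rate values (the real organ- and filter-specific rates would come from the GATE/MOBY simulations):

# Hypothetical dose rates (cGy/s) for one x-ray tube filter setting.
ORGAN_DOSE_RATES = {"liver": 0.92, "lung": 1.05, "bone_marrow": 1.18}
SINGLE_DOSE_RATE = 1.00  # the single machine-level value used currently

def exposure_time_naive(target_dose_cgy):
    """Current practice: one dose rate, independent of target organ."""
    return target_dose_cgy / SINGLE_DOSE_RATE

def exposure_time_organ_specific(target_dose_cgy, organ):
    """Proposed practice: divide by an organ- and filter-specific rate."""
    return target_dose_cgy / ORGAN_DOSE_RATES[organ]

# With the values above, a 400 cGy liver prescription differs by ~9%
# between the two methods, illustrating how organ-independent dose
# rates produce the kind of soft-tissue dose errors reported above.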
Monte Carlo analysis was used on 1 µm resolution CT images of a mouse femur and a mouse vertebra to calculate the dose gradients within the bone marrow (BM) compartment of mice for different radiation beam qualities relevant to x-ray and isotope type irradiators. Results indicated that soft x-ray beams (160 kV at 0.62 mm Cu HVL and 320 kV at 1 mm Cu HVL) lead to a substantially higher dose to BM in close proximity to mineral bone (within about 60 µm) as compared to hard x-ray beams (320 kV at 4 mm Cu HVL) and isotope-based gamma irradiators (137Cs). The average dose increases to the BM in the vertebra for these four radiation beam qualities were 31%, 17%, 8%, and 1%, respectively. Both in-vitro and in-vivo experimental studies confirmed these simulation results, demonstrating that the 320 kV, 1 mm Cu HVL beam caused statistically significant increased killing of BM cells at 6 Gy dose levels in comparison to both the 320 kV, 4 mm Cu HVL and the 662 keV 137Cs beams.
Abstract:
This paper presents the challenges encountered in modelling biofluids in microchannels. In particular, blood separation implemented in a T-microchannel device is analysed. Fluids at the microscale behave differently from their macroscale counterparts, and a different modelling approach has been adopted here, one which emphasizes the roles of viscous forces, high shear rates and particle interactions at the microscale. A T-microchannel design is numerically analysed by means of computational fluid dynamics (CFD) to investigate the effectiveness of blood separation based on the bifurcation law and other biophysical effects. The simulation shows that the device can separate blood cells from plasma.
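For context, the passive flow split that drives such devices can be estimated from channel hydraulic resistances before any full CFD run; a minimal Python sketch using the standard low-aspect-ratio approximation for rectangular microchannels (all dimensions and the viscosity value are illustrative, not taken from the paper):

def rect_channel_resistance(mu, L, w, h):
    """Hydraulic resistance of a rectangular microchannel (h < w),
    standard approximation: R = 12*mu*L / (w*h**3*(1 - 0.63*h/w))."""
    return 12.0 * mu * L / (w * h**3 * (1.0 - 0.63 * h / w))

# Flow in a T-junction divides in inverse proportion to the branch
# resistances; red blood cells preferentially follow the faster branch
# (the bifurcation law), leaving the slower branch enriched in plasma.
mu = 3.5e-3                                   # whole-blood viscosity, Pa*s
R_main = rect_channel_resistance(mu, 10e-3, 100e-6, 50e-6)
R_side = rect_channel_resistance(mu, 10e-3, 60e-6, 50e-6)
main_fraction = R_side / (R_main + R_side)    # fraction of flow into main branch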
Abstract:
The stencil printing process is an important step in the assembly of Surface Mount Technology (SMT) devices. There is wide agreement in the industry that the paste printing process accounts for the majority of assembly defects. Experience with this process has shown that typically over 60% of all soldering defects are due to problems associated with the flow properties of solder pastes. Rheological measurements can therefore be used as a tool to study the deformation and flow experienced by the pastes during the stencil printing process. This paper presents results on the thixotropic behaviour of three pastes: a lead-based solder paste, a lead-free solder paste and an isotropic conductive adhesive (ICA). These materials are widely used as interconnect media in the electronics industry. Solder pastes are metal alloys suspended in a flux medium, while ICAs consist of silver flakes dispersed in an epoxy resin. The thixotropic behaviour was investigated through two rheological tests: (i) a hysteresis loop test and (ii) a steady shear rate test. In the hysteresis loop test, the shear rate was increased from 0.001 to 100 s⁻¹ and then decreased from 100 to 0.001 s⁻¹. In the steady shear rate test, the materials were subjected to constant shear rates of 0.100, 100 and 0.001 s⁻¹, each for a period of 240 seconds. All the pastes showed a high degree of shear-thinning behaviour with time. This may be due to the agglomeration of particles in the flux or epoxy resin, which inhibits paste flow at low shear rates; high shear breaks the agglomerates into smaller pieces, facilitating flow, so viscosity is reduced at high shear rate. The solder pastes exhibited a higher degree of structural breakdown than the ICA. The area between the up curve and the down curve of the hysteresis loop is an indication of the thixotropy of a paste. Among the three pastes, the lead-free solder paste showed the largest area between the curves, indicating the largest structural breakdown, followed by the lead-based solder paste and the ICA. In the steady shear rate test, the viscosity of the ICA showed the best recovery, with the steepest return towards its original value after the removal of shear, indicating good dispersion quality: high shear has little effect on the microstructure of the ICA. In contrast, the lead-based paste showed the poorest recovery, meaning it undergoes larger structural breakdown and its dispersion quality is poor, because the microstructure of the paste is easily disrupted by high shear. Structural breakdown during the application of shear and recovery after its removal are important characteristics in the paste printing process: if the paste's viscosity drops low enough it can aid aperture filling, while quick recovery may prevent slumping.
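The loop-area measure of thixotropy mentioned above is straightforward to compute from ramp data; a minimal Python sketch (array names and sampling are illustrative):

def thixotropy_loop_area(rate_up, stress_up, rate_down, stress_down):
    """Area between the up and down curves of a hysteresis loop test.

    A larger area indicates greater structural breakdown (thixotropy).
    Inputs are lists of shear rate (1/s) and shear stress (Pa) sampled
    along the increasing and decreasing ramps."""
    def trapz(y, x):  # trapezoidal integration over increasing x
        return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
                   for i in range(len(x) - 1))
    area_up = trapz(stress_up, rate_up)
    # reverse the down ramp so it too is integrated over increasing x
    area_down = trapz(stress_down[::-1], rate_down[::-1])
    return area_up - area_down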
Abstract:
Thermosetting polymer materials are widely utilised in modern microelectronics packaging technology. These materials serve a number of functions, such as device bonding, structural support and physical protection of semiconductor dies. Typically, convection heating systems are used to raise the temperature of the materials to expedite the polymerisation process. The convection cure process has a number of drawbacks, including process durations generally in excess of 1 hour and the requirement to heat the entire printed circuit board assembly, inducing thermomechanical stresses which affect device reliability. Microwave energy is able to raise the temperature of materials in a rapid, controlled manner. As the microwave energy penetrates into the polymer materials, the heating can be considered volumetric, i.e. the rate of heating is approximately constant throughout the material. This enables a maximal heating rate far greater than is available with convection oven systems, which only raise the surface temperature of the polymer material and rely on thermal conduction to transfer heat energy into the bulk. The high heating rate, combined with the ability to vary the operating power of the microwave system, enables extremely rapid cure processes. Microwave curing of a commercially available encapsulation material has been studied experimentally and through the use of numerical modelling techniques. The material assessed is Henkel EO-1080, a single-component thermosetting epoxy. The producer suggests three typical convection oven cure options for EO-1080: 20 min at 150°C, 90 min at 140°C or 120 min at 110°C. Rapid curing of materials of this type using advanced microwave systems, such as the FAMOBS system [1], is of great interest to microelectronics manufacturers as it has the potential to reduce manufacturing costs, increase device reliability and enable new device designs. Experimental analysis has demonstrated that, in a realistic chip-on-board encapsulation scenario, the polymer material can be fully cured in approximately one minute, corresponding to a reduction in cure time of approximately 95 percent relative to the convection oven process. Numerical assessment of the process [2] also suggests that cure times of approximately 70 seconds are feasible, whilst indicating that the decrease in process duration comes at the expense of variation in degree of cure within the polymer.
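As an illustration of how such cure times can be estimated numerically, a minimal nth-order Arrhenius cure-kinetics sketch in Python follows; the kinetic constants are placeholders, not measured EO-1080 parameters or the models of [2]:

import math

A, Ea, R_GAS, n = 1.0e7, 7.0e4, 8.314, 1.5   # placeholder kinetic constants

def degree_of_cure(temp_c, t_end, dt=0.01):
    """Integrate d(alpha)/dt = A*exp(-Ea/(R*T))*(1 - alpha)**n at a
    constant hold temperature (deg C) using forward Euler."""
    T = temp_c + 273.15
    alpha, t = 0.0, 0.0
    while t < t_end and alpha < 0.999:
        alpha += A * math.exp(-Ea / (R_GAS * T)) * (1.0 - alpha) ** n * dt
        t += dt
    return alpha

# The volumetric microwave heating rate lets the bulk material reach and
# hold a high cure temperature almost immediately, which is why a cure
# of about a minute can match a much longer convection oven profile.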
Abstract:
The purpose of this article is to report on research into the conditions of inclusion of students with disabilities at a Chilean university. The research is a quantitative, descriptive and cross-sectional study. To collect the required data, a survey was developed and applied to 38 students with disabilities. The main results reveal a high retention rate among these students, who exhibit a positive perception of their inclusion in university life and a high level of satisfaction with most of the services provided. Seven out of ten students surveyed acknowledge having received some sort of educational support from their programs to pursue their studies. However, there is still a lack of connection among the current initiatives developed at the university to support the enrollment and permanence of these students. In addition, there is a lack of protocols and of training for teachers and staff. This study proposes that the university establish a management system that defines objectives, strategies and actions to improve the inclusion of people with disabilities.
Abstract:
We argue that the results published by Bao-Quan Ai et al [Phys. Rev. E 67, 022903 (2003)] on "correlated noise in a logistic growth model" are not correct. Their conclusion that for larger values of the correlation parameter, lambda, the cell population is peaked at x=0, denoting a high extinction rate, is also incorrect. We find the reverse behaviour to their results: increasing lambda promotes the stable growth of tumour cells. In particular, their results for the steady-state probability as a function of cell number at different correlation strengths, presented in their figures 1 and 2, show different behaviour than one would expect from the simple mathematical expression for the steady-state probability. Additionally, their interpretation that, at small values of cell number, the steady-state probability increases as the correlation parameter is increased is also questionable. Another striking feature of their figures 1 and 3 is that, for the same values of the parameters lambda and alpha, their simulation produces two different curves, differing both qualitatively and quantitatively.
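For context, the standard construction behind such steady-state probabilities is sketched below in our own notation; this is an illustration of the generic method, not the exact expressions of either paper. A logistic growth model subject to correlated multiplicative and additive Gaussian white noises is commonly written as

\begin{align}
  \frac{dx}{dt} &= ax - bx^{2} + x\,\varepsilon(t) + \Gamma(t), \\
  \langle \varepsilon(t)\,\varepsilon(t') \rangle &= 2D\,\delta(t-t'), \qquad
  \langle \Gamma(t)\,\Gamma(t') \rangle = 2\alpha\,\delta(t-t'), \\
  \langle \varepsilon(t)\,\Gamma(t') \rangle &= 2\lambda\sqrt{D\alpha}\,\delta(t-t'),
\end{align}

and the stationary solution of the associated Fokker-Planck equation, $\partial_t P = -\partial_x[A(x)P] + \partial_x^2[B(x)P]$, takes the form

\begin{equation}
  P_{\mathrm{st}}(x) = \frac{N}{B(x)}\,
  \exp\!\left( \int^{x} \frac{A(x')}{B(x')}\,dx' \right),
  \qquad
  B(x) = D x^{2} + 2\lambda\sqrt{D\alpha}\,x + \alpha,
\end{equation}

where $A(x)$ is the effective drift (including any noise-induced term) and $N$ a normalization constant. The behaviour of $P_{\mathrm{st}}(x)$ near $x=0$, and hence any claim about extinction, follows directly from the signs of $A$ and $B$ there, which is why the disputed figures can be checked against the analytic expression.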
Abstract:
Neutral gas depletion mechanisms are investigated in a dense low-temperature argon plasma, an inductively coupled magnetic neutral loop (NL) discharge. Gas temperatures are deduced from the Doppler profile of the 772.38 nm line absorbed by argon metastable atoms. Electron density and temperature measurements reveal that at pressures below 0.1 Pa, relatively high degrees of ionization (exceeding 1%) result in electron pressures, p_e = n_e k T_e, exceeding the neutral gas pressure. In this regime, neutral dynamics has to be taken into account, and depletion through comparatively high ionization rates becomes important. This additional depletion mechanism can be spatially separated due to non-uniform electron temperature and density profiles (a non-uniform ionization rate), while the gas temperature is rather uniform within the discharge region. Spatial profiles of the depletion of metastable argon atoms in the NL region are observed by laser induced fluorescence spectroscopy. In this region, the depletion of ground state argon atoms is expected to be even more pronounced, since in the investigated high electron density regime the ratio of metastable to ground state argon atom densities is governed by the electron temperature, which peaks in the NL region. This neutral gas depletion is attributed to a high ionization rate in the NL zone and fast ion loss through ambipolar diffusion along the magnetic field lines. This is totally different from what is observed at pressures above 10 Pa, where the degree of ionization is relatively low (
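An order-of-magnitude check of the regime described above, as a short Python sketch (the numbers are illustrative, not the paper's measurements):

k_B = 1.380649e-23        # Boltzmann constant, J/K
n_e = 5.0e17              # electron density, m^-3 (illustrative)
T_e = 3.0 * 11604.5       # electron temperature: 3 eV expressed in kelvin
p_e = n_e * k_B * T_e     # electron pressure, ~0.24 Pa
p_neutral = 0.1           # neutral gas pressure, Pa
# p_e > p_neutral: the electron pressure exceeds the neutral pressure,
# so neutral depletion through ionization becomes significant.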
Abstract:
This article examines the interaction between development control and economic development in the countryside within the context of contemporary debates on shifts in the agricultural sector from productivism to multi-functionality. Using planning application decisions from the case of Northern Ireland for the period 1994–95 to 2005–06, together with insights from high-level key informants with planning, economic development and environmental management expertise, the article critiques a perception that regulatory planning is in line with rural development ambitions to foster a multi-functional countryside. While the quantitative data indicate a high approval rate for economic development projects, the qualitative evidence points to limitations within the policy content and operational practices of the planning system. The article argues that regulatory planning must engage more deeply with rural development objectives.
Abstract:
Polymer nanocomposites offer the potential of enhanced properties, such as increased modulus and barrier properties, to the end user. Much work has been carried out on the effects of extrusion conditions on melt-processed nanocomposites, but very little research has been conducted on the use of polymer nanocomposites in semi-solid forming processes such as thermoforming and injection blow molding. These processes are used to make much of today's packaging, and any improvement in performance, such as possible lightweighting due to increased modulus, would bring significant benefits both economically and environmentally. The work described here looks at the biaxial deformation of polypropylene-clay nanocomposites under industrial forming conditions in order to determine whether the presence of clay affects the processability, structure and mechanical properties of the stretched material. Melt-compounded polypropylene/clay composites in sheet form were biaxially stretched at a variety of processing conditions to examine the effect of high temperature, high strain and high strain rate processing on sheet structure and properties.
A biaxial test rig was used to carry out the testing, imposing conditions on the sheet representative of those applied in injection blow molding and thermoforming. Results show that the presence of clay increases the yield stress relative to the unfilled material at typical processing temperatures and that the sensitivity of the yield stress to temperature is greater for the filled material. The stretching process is found to have a significant effect on the delamination and alignment of clay particles (as observed by TEM) and on the yield stress and elongation at break of the stretched sheet.
Abstract:
A new reconfigurable subpixel interpolation architecture for multistandard (e.g., MPEG-2, MPEG-4, H.264 and AVS) video motion estimation (ME) is presented. It exploits the mixed use of parallel and serial-input FIR filters to achieve a high throughput rate and efficient silicon utilization. Silicon design studies show that it can be implemented using 34.8 × 10³ gates, with area and performance that compare very favorably with fixed solutions specific to, e.g., the H.264 standard alone. The design can support SDTV and HDTV applications when implemented in 0.18 µm CMOS technology, with further performance enhancements achievable at 0.13 µm and below. © 2009 IEEE.
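For context, this is the kind of filter such an architecture implements: H.264's luma half-pel interpolation is a 6-tap FIR with coefficients (1, -5, 20, 20, -5, 1)/32. A minimal software sketch in Python (the paper's reconfigurable hardware datapath is not reproduced here):

def h264_half_pel(p):
    """Half-pel luma sample from six neighbouring integer-pel samples
    p[0..5], per the H.264 6-tap filter, with rounding and clipping
    to the 8-bit range."""
    acc = p[0] - 5 * p[1] + 20 * p[2] + 20 * p[3] - 5 * p[4] + p[5]
    return min(255, max(0, (acc + 16) >> 5))

# Example: a flat region of value 100 interpolates back to 100.
assert h264_half_pel([100] * 6) == 100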
Abstract:
Secretory leucoprotease inhibitor (SLPI) is a neutrophil serine protease inhibitor constitutively expressed at many mucosal surfaces, including that of the lung. Although originally identified as a serine protease inhibitor, SLPI is now known to have antimicrobial and anti-inflammatory functions as well, and therefore plays an important role in host defense. Previous work has shown that some host defense proteins such as SLPI and elafin are susceptible to proteolytic degradation. Consequently, we investigated the status of SLPI in the cystic fibrosis (CF) lung. A major factor that contributes to the high mortality rate among CF patients is Pseudomonas aeruginosa infection. In this study, we report that P. aeruginosa-positive CF bronchoalveolar lavage fluid, which contains lower SLPI levels and higher neutrophil elastase (NE) activity than P. aeruginosa-negative samples, was particularly effective at cleaving recombinant human SLPI. Additionally, we found that only NE inhibitors were able to prevent SLPI cleavage, thereby implicating NE in this process. NE in excess was found to cleave recombinant SLPI at two novel sites in the NH(2)-terminal region and abrogate its ability to bind LPS and NF-kappaB consensus binding sites, but not its ability to inhibit the activity of the serine protease cathepsin G. In conclusion, this study provides evidence that SLPI is cleaved and inactivated by NE present in P. aeruginosa-positive CF lung secretions and that P. aeruginosa infection contributes to inactivation of the host defense screen in the CF lung.
Abstract:
BACKGROUND: In 2005, the European Commission recommended that all member states should establish or strengthen surveillance systems for monitoring the use of antimicrobial agents. There is no evidence in the literature of any surveillance studies having been specifically conducted in nursing homes (NHs) in Northern Ireland (NI).
OBJECTIVE: The aim of this study was to determine the prevalence of antimicrobial prescribing and its relationship with certain factors (e.g. indwelling urinary catheterization, urinary incontinence, disorientation, etc.) in NH residents in NI.
METHODS: This project was carried out in NI as part of a wider European study under the protocols of the European Surveillance of Antimicrobial Consumption group. Two point-prevalence surveys (PPSs) were conducted in 30 NHs in April and November 2009. Data were obtained from nursing notes, medication administration records and staff in relation to antimicrobial prescribing, facility and resident characteristics and were analysed descriptively.
RESULTS: The point prevalence of antimicrobial prescribing was 13.2% in April 2009 and 10.7% in November 2009, with a 10-fold difference between the NHs with the highest and lowest antimicrobial prescribing prevalence during both PPSs. The same NH had the highest rate of antimicrobial prescribing in both April (30.6%) and November (26.0%). The group of antimicrobials most commonly prescribed was the penicillins (April 28.6%, November 27.5%), whilst the most prevalent individual antimicrobial prescribed was trimethoprim (April 21.3%, November 24.3%). The majority of antimicrobials were prescribed for the purpose of preventing urinary tract infections (UTIs) in both April (37.8%) and November (46.7%), with 5% of all participating residents being prescribed an antimicrobial for this reason. Some (20%) antimicrobials were prescribed at inappropriate doses, particularly those used for the prevention of UTIs. Indwelling urinary catheterization and wounds were significant risk factors for antimicrobial use in April [odds ratio {OR} (95% CI) 2.0 (1.1, 3.5) and 1.8 (1.1, 3.0), respectively] but not in November 2009 [OR (95% CI) 1.6 (0.8, 3.2) and 1.2 (0.7, 2.2), respectively]. Other resident factors, e.g. disorientation, immobility and incontinence, were not associated with antimicrobial use. Furthermore, none of the NH characteristics investigated (e.g. number of beds, hospitalization episodes, number of general practitioners, etc.) were found to be associated with antimicrobial use in either April or November 2009.
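The odds ratios quoted above follow the standard 2 × 2-table construction; a minimal Python sketch using the Woolf (log-OR) normal approximation for the confidence interval (the cell counts would come from the survey data, which are not reproduced here):

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, (math.exp(math.log(or_) - z * se),
                 math.exp(math.log(or_) + z * se))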
CONCLUSIONS: This study has identified a high overall rate of antimicrobial use in NHs in NI, with variability evident both within and between homes. More research is needed to understand which factors influence antimicrobial use and to determine the appropriateness of antimicrobial prescribing in this population in general and more specifically in the management of recurrent UTIs.
Abstract:
As natural disasters continue to escalate in frequency and magnitude, NGOs are faced with numerous barriers as they attempt to implement post-disaster reconstruction (PDR) projects. In many cases, a lack of competency in key areas leads to a reduction in overall project success. This paper utilizes the competency-based framework of von Meding et al. (2010) as the starting point of its inquiry. In this context, a leading NGO responsible for the implementation of reconstruction and rehabilitation in Sri Lanka following the Asian Tsunami has been investigated in depth using a causal mapping interview procedure with key project staff. The combined barriers within this organization’s PDR operations have been identified and measured and solutions articulated. The study found that within this organization key objectives were to achieve the ‘build back better’ mantra and to effectively plan interventions in advance. The primary barriers to successful reconstruction were identified as the high turnover rate of humanitarian staff and a poor level of communication and co-operation between agencies. An essential strategy employed to combat these barriers is the consideration of staff capabilities, which links us back to competence-based theory. The results are highly valuable in the context of an ongoing wider research study on competence within humanitarian organizations.
Abstract:
Social environments, like neighbourhoods, are increasingly recognised as determinants of health. While several studies have reported an association of low neighbourhood socio-economic status with morbidity, mortality and health risk behaviour, little is known of the health effects of neighbourhood crime rates. Using the ongoing 10-Town study in Finland, we examined the relations of average household income and crime rate, measured at the local area level, with smoking status and intensity, by linking census data on local area characteristics from 181 postal zip codes to survey responses on smoking behaviour in a cohort of 23,008 municipal employees. Gender-stratified multilevel analyses adjusted for age and individual occupational status revealed an association between low local area income and current smoking. A high local area crime rate was also associated with current smoking. Both local area characteristics were strongly associated with smoking intensity. Among ever-smokers, being an ex-smoker was less likely among residents of areas with low average household income and a high crime rate. In the fully adjusted model, the association between local area income and smoking behaviour among women was substantially explained by the area-level crime rate. This study extends our knowledge of potential pathways through which social environmental factors may affect health. (c) 2007 Elsevier Ltd. All rights reserved.