908 results for Capability Maturity Model for Software
Abstract:
Salmonella enterica serovar Typhimurium has long been recognised as a zoonotic pathogen of economic significance in animals and humans. Attempts to protect humans and livestock may be based on immunization with vaccines aimed at inducing a protective response. We recently demonstrated that the oral administration of a Salmonella enterica serovar Typhimurium strain unable to synthesize the zinc transporter ZnuABC is able to protect mice against systemic salmonellosis induced by a virulent homologous challenge. This finding suggested that this mutant strain could represent an interesting candidate vaccine for mucosal delivery. In this study, the protective effect of this Salmonella strain was tested in a streptomycin-pretreated mouse model of salmonellosis that is distinguished by the capability of evoking typhlitis and colitis. The results reported here demonstrate that mice immunized with Salmonella enterica serovar Typhimurium (S. Typhimurium) SA186 survive the intestinal challenge and, compared to control mice, show a reduced number of virulent bacteria in the gut, with milder signs of inflammation. This study demonstrates that the oral administration of an S. Typhimurium strain lacking ZnuABC is able to elicit an effective immune response which protects mice against intestinal S. Typhimurium infection. These results, collectively, suggest that the streptomycin-pretreated mouse model of S. Typhimurium infection can represent a valuable tool to screen S. Typhimurium attenuated mutant strains and potentially help to assess their protective efficacy as potential live vaccines.
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
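The batch-adjusted estimation described above is implemented in the CRLMM package itself; purely as a hedged illustration of the idea (and not the authors' actual algorithm), the Python sketch below estimates batch- and locus-specific background and signal parameters from diallelic genotype calls and then places intensities on the copy-number scale. All names and the simple per-batch linear regression are assumptions introduced for illustration.

    import numpy as np

    def estimate_batch_params(intensity_a, genotypes, batch):
        """For one locus, regress A-allele intensity on the A-allele copy
        number implied by the diallelic genotype call (AA=2, AB=1, BB=0),
        separately within each batch (array of batch labels). Returns
        per-batch background (nu) and signal (phi) estimates."""
        copies_a = {"AA": 2.0, "AB": 1.0, "BB": 0.0}
        x = np.array([copies_a[g] for g in genotypes])
        batch = np.asarray(batch)
        params = {}
        for b in np.unique(batch):
            mask = (batch == b)
            phi, nu = np.polyfit(x[mask], np.asarray(intensity_a)[mask], deg=1)
            params[b] = (nu, phi)
        return params

    def copy_number(intensity_a, batch, params):
        """Convert observed intensities to the copy-number scale using the
        batch-specific background (nu) and signal (phi) estimates."""
        nu = np.array([params[b][0] for b in batch])
        phi = np.array([params[b][1] for b in batch])
        return (np.asarray(intensity_a) - nu) / phi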
Abstract:
Amplifications and deletions of chromosomal DNA, as well as copy-neutral loss of heterozygosity, have been associated with disease processes. High-throughput single nucleotide polymorphism (SNP) arrays are useful for making genome-wide estimates of copy number and genotype calls. Because neighboring SNPs in high-throughput SNP arrays are likely to have dependent copy number and genotype due to the underlying haplotype structure and linkage disequilibrium, hidden Markov models (HMMs) may be useful for improving genotype calls and copy number estimates that do not incorporate information from nearby SNPs. We improve previous approaches that utilize an HMM framework for inference in high-throughput SNP arrays by integrating copy number, genotype calls, and the corresponding confidence scores when available. Using simulated data, we demonstrate how confidence scores control smoothing in a probabilistic framework. Software for fitting HMMs to SNP array data is available in the R package ICE.
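As a hedged sketch of how confidence scores can control smoothing in an HMM (an illustration of the general idea, not the ICE package's actual weighting scheme), the forward recursion below scales each locus's emission log-likelihood by a confidence weight in [0, 1], so that low-confidence calls contribute little evidence and the chain relies more on neighboring SNPs.

    import numpy as np

    def forward(log_trans, log_emit, confidence):
        """Forward recursion for an HMM over SNP loci.
        log_trans: (S, S) log transition matrix; log_emit: (T, S) per-locus
        emission log-likelihoods; confidence: (T,) weights in [0, 1]."""
        n_loci, n_states = log_emit.shape
        log_alpha = np.empty((n_loci, n_states))
        log_alpha[0] = confidence[0] * log_emit[0] - np.log(n_states)
        for t in range(1, n_loci):
            for j in range(n_states):
                # down-weighted emission evidence plus transitions from all states
                log_alpha[t, j] = (confidence[t] * log_emit[t, j]
                                   + np.logaddexp.reduce(log_alpha[t - 1] + log_trans[:, j]))
        return log_alpha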
Abstract:
Drug-induced respiratory depression is a common side effect of the agents used in anesthesia practice to provide analgesia and sedation. Depression of the ventilatory drive in the spontaneously breathing patient can lead to severe cardiorespiratory events, and it is considered a primary cause of morbidity. Reliable predictions of respiratory inhibition in the clinical setting would therefore provide a valuable means to improve the safety of drug delivery. Although multiple studies have investigated the regulation of breathing in man, both in the presence and absence of ventilatory depressant drugs, a unified description of respiratory pharmacodynamics is not available. This study proposes a mathematical model of human metabolism and cardiorespiratory regulation integrating several isolated physiological and pharmacological aspects of acute drug-induced ventilatory depression into a single theoretical framework. The description of respiratory regulation has a parsimonious yet comprehensive structure with substantial predictive capability. Simulations of the synergistic interaction of the hypercarbic and hypoxic respiratory drives and of the global effect of drugs on the control of breathing are in good agreement with published experimental data. Besides providing clinically relevant predictions of respiratory depression, the model can also serve as a test bed to investigate issues of drug tolerability and dose finding/control under non-steady-state conditions.
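The published model is not reproduced here; the toy Python sketch below only illustrates the kind of closed-loop coupling described, with a sigmoid drug effect attenuating the hypercarbic ventilatory drive and PaCO2 rising as ventilation falls. All parameter values and functional forms are illustrative assumptions, not those of the study's model.

    def simulate(c_drug, minutes=60.0, dt=0.01):
        """Toy closed loop: ventilation relaxes toward a CO2-driven drive
        that is attenuated by a sigmoid drug effect, while PaCO2 evolves
        from a fixed production term minus ventilation-dependent clearance.
        All constants are illustrative, not physiological estimates."""
        paco2, va = 40.0, 5.0            # mmHg, L/min (initial values)
        k_prod, k_clear = 0.8, 0.02      # assumed CO2 production / clearance constants
        gain, ec50, hill = 2.0, 1.0, 2.0
        drive_left = 1.0 / (1.0 + (c_drug / ec50) ** hill)  # fraction of drive remaining
        for _ in range(int(minutes / dt)):
            drive = max(0.0, gain * (paco2 - 35.0)) * drive_left
            va += dt * (drive - va)                      # ventilation follows the drive
            paco2 += dt * (k_prod - k_clear * va * paco2)
        return paco2, va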
Abstract:
OBJECTIVES: Implementation of an experimental model to compare cartilage MR imaging with histological analysis. MATERIAL AND METHODS: MRI was obtained from 4 patients scheduled for total knee replacement at 1.5 and/or 3T prior to surgery. The timeframe between pre-operative MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi®-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30 degrees, BW = 110 kHz, 120x120 mm FOV, 256x256 matrix, 0.65 mm slice thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40 degrees, BW = 130 kHz, 120x120 mm FOV, 256x256 matrix, 0.65 mm slice thickness). At 3T, scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10 degrees, BW = 130 kHz, 170x170 mm FOV, 320x320 matrix, 0.5 mm slice thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25 degrees, BW = 200 kHz, 135x150 mm FOV, 288x320 matrix, 0.5 mm slice thickness). Imaging of the specimens was performed the same day at 1.5T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability was determined using kappa analysis. RESULTS: Low correlation (sensitivity, specificity) was found for both sequences in normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field strength was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. CONCLUSIONS: Correlating MR images with continuous histological slices was feasible by using three-dimensional imaging, multi-planar reformatting and marker pins. The capability of diagnosing early cartilage changes with high accuracy could not be proven for either FLASH or DESS.
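For reference, the agreement and accuracy statistics named above (Cohen's kappa, sensitivity, specificity) can be computed as in the short Python sketch below; this is a generic illustration of the standard definitions, not the statistical code used in the study.

    import numpy as np

    def cohens_kappa(reader1, reader2):
        """Cohen's kappa for inter-reader agreement on categorical grades
        (e.g., Noyes scores assigned by two readers)."""
        reader1, reader2 = np.asarray(reader1), np.asarray(reader2)
        categories = np.union1d(reader1, reader2)
        p_observed = np.mean(reader1 == reader2)
        p_expected = sum(np.mean(reader1 == c) * np.mean(reader2 == c)
                         for c in categories)
        return (p_observed - p_expected) / (1.0 - p_expected)

    def sensitivity_specificity(mri_positive, histology_positive):
        """Sensitivity and specificity of MRI grading against the
        histological (Mankin) reference; both inputs are boolean arrays."""
        mri = np.asarray(mri_positive)
        ref = np.asarray(histology_positive)
        sens = np.mean(mri[ref])     # true positives / all reference positives
        spec = np.mean(~mri[~ref])   # true negatives / all reference negatives
        return sens, spec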
Abstract:
The demand for power generation from non-renewable resources, and its associated costs, are increasing at an alarming rate. Solar energy is one of the renewable resources with the potential to minimize this increase. Utilization of solar energy has been concentrated mainly on heating applications. Using solar energy in building cooling systems would contribute greatly to the goal of minimizing non-renewable energy use. The approaches of solar heating system research conducted by institutions such as the University of Wisconsin-Madison, and the building heat flow model research conducted by Oklahoma State University, can be used to develop and optimize solar cooling building systems. This research uses these two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, which is capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source to drive the cycle. The software was then put through a number of verification tests to check its integrity. The tests were conducted on various building cooling system data sets from similar applications around the world. The output obtained from the developed software was identical to established experimental results from the data sets used. Software developed by other research efforts caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, is also suitable for new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
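As a rough, hedged illustration of the kind of energy balance such a simulation rests on (not the model implemented in the developed GUI software), the Python sketch below estimates the cooling delivered by a solar-driven absorption chiller from the collector heat gain and an assumed chiller COP; the efficiency and COP values are illustrative assumptions.

    def solar_cooling_balance(irradiance_w_m2, collector_area_m2,
                              collector_eff=0.55, chiller_cop=0.7):
        """Rough energy balance for a solar-driven absorption cooling
        system: collector heat gain drives the chiller generator, and the
        cooling delivered is the generator heat times the chiller COP."""
        q_generator = irradiance_w_m2 * collector_area_m2 * collector_eff  # W thermal
        q_cooling = q_generator * chiller_cop                              # W cooling
        return q_cooling

    # Example: 800 W/m2 of irradiance on 50 m2 of collectors
    cooling_watts = solar_cooling_balance(800.0, 50.0)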
Abstract:
Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability, low optical absorption, and require common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach utilizes the beam propagation method to generate varied modal field distribution structures which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB and it is shown that in an input/output pair of vias, average losses per via may be lower than the targeted 3 dB.
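The angular spectrum propagation step mentioned above is a standard FFT-based algorithm; the Python sketch below shows a generic implementation (not the multi-software tool chain used in the study), with the refractive index and sampling parameters chosen arbitrarily for illustration.

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, dz, n_index=1.5):
        """Propagate a sampled complex field a distance dz using the angular
        spectrum method: FFT to spatial frequencies, multiply by the
        propagation transfer function, and inverse FFT back. Evanescent
        components acquire an imaginary kz and decay."""
        ny, nx = field.shape
        k = 2.0 * np.pi * n_index / wavelength
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        fxx, fyy = np.meshgrid(fx, fy)
        kz = np.sqrt((k**2 - (2*np.pi*fxx)**2 - (2*np.pi*fyy)**2).astype(complex))
        spectrum = np.fft.fft2(field)
        return np.fft.ifft2(spectrum * np.exp(1j * kz * dz))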
Abstract:
There is a need by engine manufacturers for computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for the assessment of combustion system hardware designs and the early development of engine calibrations. This thesis discusses the process for developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injected, spark-ignited engine with variable valve timing, lift and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was utilized to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate the 3-D combustion system, port and piston geometry and to correlate it to the turbulent flow development within the cylinder, in order to properly predict the experimentally measured turbulent flow parameters through the intake, compression and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool against experimental data to determine its accuracy. The results of the combustion modeling tool show accurate trends, capturing the combustion sensitivities to turbulent flow, thermodynamic and internal residual effects with changes in intake and exhaust valve timing, lift and duration.
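The thesis's regression structure is not reproduced here; as a simplified, hedged stand-in, the Python sketch below fits a Wiebe mass-fraction-burned curve to measured combustion data with non-linear least squares, which illustrates the general idea of regressing a burn-rate profile from experimental data.

    import numpy as np
    from scipy.optimize import curve_fit

    def wiebe(theta, theta_soc, duration, a, m):
        """Wiebe function: mass fraction burned versus crank angle (deg)."""
        x = np.clip((theta - theta_soc) / duration, 0.0, None)
        return 1.0 - np.exp(-a * x ** (m + 1.0))

    def fit_burn_profile(theta_deg, mfb_measured):
        """Non-linear least-squares fit of the Wiebe parameters (start of
        combustion, burn duration, efficiency factor a, shape factor m)
        to a measured mass-fraction-burned curve."""
        p0 = [-10.0, 40.0, 5.0, 2.0]  # illustrative initial guesses
        params, _ = curve_fit(wiebe, theta_deg, mfb_measured, p0=p0)
        return params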
Abstract:
BACKGROUND: Gene therapy has recently been introduced as a novel approach to treat ischemic tissues by using the angiogenic potential of certain growth factors. We investigated the effect of adenovirus-mediated gene therapy with transforming growth factor-beta (TGF-beta) delivered into the subdermal space to treat ischemically challenged epigastric skin flaps in a rat model. MATERIAL AND METHODS: A pilot study was conducted in a group of 5 animals pretreated with Ad-GFP, and expression of green fluorescent protein in the skin flap sections was demonstrated under fluorescence microscopy at 2, 4, and 7 days after the treatment, indicating successful transfection of the skin flaps following subdermal gene therapy. Next, 30 male Sprague Dawley rats were divided into 3 groups of 10 rats each. An epigastric skin flap model, based solely on the right inferior epigastric vessels, was used in this study. Rats received subdermal injections of adenovirus encoding TGF-beta (Ad-TGF-beta) or green fluorescent protein (Ad-GFP) as treatment control. The third group (n = 10) received saline and served as a control group. A flap measuring 8 x 8 cm was outlined on the abdominal skin, extending from the xiphoid process proximally and the pubic region distally, to the anterior axillary lines bilaterally. Just prior to flap elevation, the injections were given subdermally in the left upper corner of the flap. The flap was then sutured back to its bed. Flap viability was evaluated seven days after the initial operation. Digital images of the epigastric flaps were taken, and the areas of necrotic zones relative to total flap surface area were measured and expressed as percentages using a software program. RESULTS: There was a significant increase in mean percent surviving area in the Ad-TGF-beta group compared with the two control groups (P < 0.05) (Ad-TGF-beta: 90.3 +/- 4.0% versus Ad-GFP: 82.2 +/- 8.7% and saline: 82.6 +/- 4.3%). CONCLUSIONS: In this study, the authors were able to demonstrate that adenovirus-mediated gene therapy using TGF-beta ameliorated ischemic necrosis in an epigastric skin flap model, as confirmed by a significant reduction in the necrotic zones of the flap. The results of this study raise the possibility of using adenovirus-mediated TGF-beta gene therapy to promote perfusion in random portions of skin flaps, especially in high-risk patients.
Abstract:
In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students gain experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when being introduced to testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.
Abstract:
Electrospinning (ES) can readily produce polymer fibers with cross-sectional dimensions ranging from tens of nanometers to tens of microns. Qualitative estimates of surface area coverage are rather intuitive. However, quantitative analytical and numerical methods for predicting surface coverage during ES have not been covered in sufficient depth to be applied in the design of novel materials, surfaces, and devices from ES fibers. This article presents a modeling approach to ES surface coverage where an analytical model is derived for use in quantitative prediction of surface coverage of ES fibers. The analytical model is used to predict the diameter of circular deposition areas of constant field strength and constant electrostatic force. Experimental results of polyvinyl alcohol fibers are reported and compared to numerical models to supplement the analytical model derived. The analytical model provides scientists and engineers a method for estimating surface area coverage. Both applied voltage and capillary-to-collection-plate separation are treated as independent variables for the analysis. The electric field produced by the ES process was modeled using COMSOL Multiphysics software to determine a correlation between the applied field strength and the size of the deposition area of the ES fibers. MATLAB scripts were utilized to combine the numerical COMSOL results with derived analytical equations. Experimental results reinforce the parametric trends produced via modeling and lend credibility to the use of modeling techniques for the qualitative prediction of surface area coverage from ES. (Copyright: 2014 American Vacuum Society.)
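The article's analytical model is not reproduced here; purely as a hedged illustration of one common way to estimate surface coverage of randomly deposited fibers (a Poisson coverage assumption, not the article's derivation), the Python sketch below computes the covered fraction of a circular deposition area from the fiber diameter, the total deposited fiber length, and the deposition-area diameter.

    import numpy as np

    def coverage_fraction(fiber_diameter_m, deposited_length_m, deposition_diameter_m):
        """Estimated fraction of a circular deposition area covered by
        fibers, assuming fiber segments land uniformly at random so that
        overlaps follow a Poisson coverage model: 1 - exp(-d*L/A)."""
        area = np.pi * (deposition_diameter_m / 2.0) ** 2
        return 1.0 - np.exp(-fiber_diameter_m * deposited_length_m / area)

    # Example: 500 nm fibers, 10 m of fiber deposited over a 5 cm spot
    frac = coverage_fraction(500e-9, 10.0, 0.05)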
Abstract:
OBJECTIVES: To analyze computer-assisted diagnostics and virtual implant planning and to evaluate the indication for template-guided flapless surgery and immediate loading in the rehabilitation of the edentulous maxilla. MATERIALS AND METHODS: Forty patients with an edentulous maxilla were selected for this study. The three-dimensional analysis and virtual implant planning were performed with the NobelGuide software program (Nobel Biocare, Göteborg, Sweden). Prior to the computed tomography, aesthetic and functional aspects were checked clinically. Either a well-fitting denture or an optimized prosthetic setup was used and then converted to a radiographic template. This allowed for a computer-guided analysis of the jaw together with the prosthesis. Accordingly, the best implant position was determined in relation to the bone structure and prospective tooth position. For all jaws, the hypothetical indications for (a) four implants with a bar overdenture and (b) six implants with a simple fixed prosthesis were planned. The planning of the optimized implant position was then analyzed as follows: the number of implants that could be placed in a sufficient quantity of bone was calculated. Additional surgical procedures (guided bone regeneration, sinus floor elevation) that would be necessary due to the reduced bone quality and quantity were identified. The indication for template-guided, flapless surgery or an immediate loading protocol was evaluated. RESULTS: Model (a) - bar overdentures: for 28 patients (70%), all four implants could be placed in sufficient bone (total 112 implants). Thus, a full, flapless procedure could be suggested. For six patients (15%), sufficient bone was not available for any of their planned implants. The remaining six patients exhibited a combination of sufficient and insufficient bone. Model (b) - simple fixed prosthesis: for 12 patients (30%), all six implants could be placed in sufficient bone (total 72 implants). Thus, a full, flapless procedure could be suggested. For seven patients (17%), sufficient bone was not available for any of their planned implants. The remaining 21 patients exhibited a combination of sufficient and insufficient bone. DISCUSSION: In the maxilla, advanced atrophy is often observed, and implant placement becomes difficult or impossible. Thus, flapless surgery or an immediate loading protocol can be performed only in a selected number of patients. Nevertheless, the use of a computer program for prosthetically driven implant planning is highly efficient and safe. The three-dimensional view of the maxilla allows the determination of the best implant position, the optimization of the implant axis, and the definition of the best surgical and prosthetic solution for the patient. Thus, a protocol that combines a computer-guided technique with conventional surgical procedures becomes a promising option, which needs to be further evaluated and improved.
Abstract:
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. Adaptation, reconfiguration and evolution are normal, ongoing processes throughout the lifecycle of a software system. Nevertheless, the platforms, tools and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that in order to enable effective and graceful evolution of modern software systems, we must make these systems more amenable to change by (i) providing explicit, first-class models of software artifacts, change, and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of dynamic, evolving software systems and identify the research challenges involved in realizing this vision.
Abstract:
As more and more open-source software components become available on the internet, we need automatic ways to label and compare them. For example, a developer who searches for reusable software must be able to quickly gain an understanding of retrieved components. This understanding cannot be gained at the level of source code due to the semantic gap between source code and the domain model. In this paper we present a lexical approach that uses the log-likelihood ratios of word frequencies to automatically provide labels for software components. We present a prototype implementation of our labeling/comparison algorithm and provide examples of its application. In particular, we apply the approach to detect trends in the evolution of a software system.
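As a hedged sketch of the log-likelihood-ratio idea (a common formulation of Dunning's statistic; the paper's exact tokenization, background corpus, and thresholds are not shown here), the Python code below ranks the words of one component against a background corpus and returns the top-scoring words as candidate labels.

    import math
    from collections import Counter

    def log_likelihood_labels(component_tokens, corpus_tokens, top_n=10):
        """Rank the words of one software component by the log-likelihood
        ratio of their frequency in the component versus a background
        corpus, returning the top-scoring words as candidate labels."""
        comp, corp = Counter(component_tokens), Counter(corpus_tokens)
        c, d = sum(comp.values()), sum(corp.values())

        def g2(word):
            a, b = comp[word], corp.get(word, 0)
            e1 = c * (a + b) / (c + d)   # expected count in the component
            e2 = d * (a + b) / (c + d)   # expected count in the corpus
            score = 0.0
            if a:
                score += a * math.log(a / e1)
            if b:
                score += b * math.log(b / e2)
            return 2.0 * score

        return sorted(comp, key=g2, reverse=True)[:top_n]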
Abstract:
For popular software systems, the number of daily submitted bug reports is high. Triaging these incoming reports is a time-consuming task. Part of bug triage is the assignment of a report to a developer with the appropriate expertise. In this paper, we present an approach to automatically suggest developers who have the appropriate expertise for handling a bug report. We model developer expertise using the vocabulary found in their source code contributions and compare this vocabulary to the vocabulary of bug reports. We evaluate our approach by comparing the suggested experts to the persons who eventually worked on the bug. Using eight years of Eclipse development as a case study, we achieve 33.6% top-1 precision and 71.0% top-10 recall.
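The paper's exact expertise model is not reproduced here; one plausible instantiation of the idea, shown below as a hedged Python sketch, builds a tf-idf vector from each developer's source-code vocabulary and ranks developers by cosine similarity to the text of an incoming bug report. The vectorization choices are assumptions made for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def suggest_developers(dev_contributions, bug_report_text, top_k=10):
        """Rank developers by the similarity of the vocabulary in their
        source-code contributions to the vocabulary of a new bug report.
        dev_contributions: dict mapping developer name -> concatenated
        text of their contributions (identifiers, comments, etc.)."""
        names = list(dev_contributions)
        vectorizer = TfidfVectorizer()
        dev_vectors = vectorizer.fit_transform([dev_contributions[n] for n in names])
        bug_vector = vectorizer.transform([bug_report_text])
        scores = cosine_similarity(bug_vector, dev_vectors).ravel()
        ranked = sorted(zip(names, scores), key=lambda p: p[1], reverse=True)
        return ranked[:top_k]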