921 results for Automated instrumentation
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of such systems' architectures is an essential prerequisite to avoid quality deterioration during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources – commits and issues – of performance variation in scenarios during software system evolution. The approach defines four phases: (i) preparation – choosing the scenarios and preparing the target releases; (ii) dynamic analysis – determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis – processing and comparing the dynamic analysis results of different releases; and (iv) repository mining – identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. That study analyzed three systems: (i) SIGAA – a web system for academic management; (ii) ArgoUML – a UML modeling tool; and (iii) Netty – a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In that study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate which properties of commits are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
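As a rough illustration of the dynamic-analysis and variation-analysis phases described above, the sketch below times the same scenario on two releases and classifies the change as degraded, optimized, or stable. It is not the thesis's framework: the commands, the scenario name, and the fixed 10% threshold are hypothetical stand-ins for the instrumentation and statistical comparison the approach actually uses.

```python
import statistics
import subprocess
import time

def time_scenario(command, runs=30):
    """Execute a scenario several times and collect wall-clock execution times."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(command, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return samples

def classify_variation(old_samples, new_samples, threshold=0.10):
    """Classify a scenario whose mean execution time changed by more than `threshold`."""
    old_mean = statistics.mean(old_samples)
    new_mean = statistics.mean(new_samples)
    delta = (new_mean - old_mean) / old_mean
    if delta > threshold:
        return "degraded", delta
    if delta < -threshold:
        return "optimized", delta
    return "stable", delta

# Hypothetical commands: run the same scenario in two consecutive releases.
old = time_scenario(["java", "-jar", "app-1.0.jar", "--scenario", "login"])
new = time_scenario(["java", "-jar", "app-2.0.jar", "--scenario", "login"])
print(classify_variation(old, new))
```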
Abstract:
This paper presents a methodology to emulate Single Event Upsets (SEUs) in FPGA flip-flops (FFs). Since the content of a FF is not modifiable through the FPGA configuration memory bits, a dedicated design is required for fault injection in the FFs. The method proposed in this paper is a hybrid approach that combines FPGA partial reconfiguration and extra logic added to the circuit under test, without modifying its operation. This approach has been integrated into a fault-injection platform, named NESSY (Non intrusive ErrorS injection SYstem), developed by our research group. Finally, this paper includes results on a Virtex-5 FPGA demonstrating the validity of the method on the ITC’99 benchmark set and a Feed-Forward Equalization (FFE) filter. In comparison with other approaches in the literature, this methodology reduces the resource consumption introduced to carry out the fault injection in FFs, at the cost of adding very little time overhead (1.6 μs per fault).
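To give the flavor of the general SEU-emulation flow (golden run, single bit flip in a flip-flop at a chosen instant, output comparison), here is a minimal sketch. It uses a hypothetical `circuit` object with `reset`, `schedule_bit_flip`, and `run` methods; it does not reflect NESSY's partial-reconfiguration mechanism or the extra logic described in the paper.

```python
import random

def golden_run(circuit, stimuli):
    """Reference execution with no fault injected."""
    return circuit.run(stimuli)

def seu_campaign(circuit, stimuli, flip_flops, faults_per_ff=10):
    """Generic SEU-emulation loop: flip one FF per run and compare outputs."""
    reference = golden_run(circuit, stimuli)
    failures = []
    for ff in flip_flops:
        for _ in range(faults_per_ff):
            cycle = random.randrange(len(stimuli))  # random injection instant
            circuit.reset()
            circuit.schedule_bit_flip(ff, cycle)    # hypothetical injection hook
            observed = circuit.run(stimuli)
            if observed != reference:               # fault propagated to the outputs
                failures.append((ff, cycle))
    return failures
```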
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
As the world's synchrotrons and X-FELs endeavour to meet the need to analyse ever-smaller protein crystals, there is a growing requirement for a new technique to present nano-dimensional samples to the beam for X-ray diffraction experiments. The work presented here details developmental work to reconfigure the nano tweezer technology developed by Optofluidics (PA, USA) for the trapping of nano-dimensional protein crystals for X-ray crystallography experiments. The system in its standard configuration is used to trap nanoparticles for optical microscopy. It uses silicon nitride laser waveguides that bridge a microfluidic channel. These waveguides contain 180 nm apertures, enabling the system to use biologically compatible 1.6 micron wavelength laser light to trap nano-dimensional biological samples. Using conventional laser tweezers, the wavelength required to trap such nano-dimensional samples would destroy them. The system in its optical configuration has trapped protein molecules as small as 10 nanometres.
Abstract:
Background. The optimal approach to infectious complication surveillance for cardiac implantable electronic device (CIED) procedures is unclear. We created an automated surveillance tool for infectious complications after CIED procedures.
Methods. Adults having CIED procedures between January 1, 2005 and December 31, 2011 at Duke University Hospital were identified retrospectively using International Classification of Diseases, 9th revision (ICD-9) procedure codes. Potential infections were identified with combinations of ICD-9 diagnosis codes and microbiology data for 365 days postprocedure. All microbiology-identified cases and a subset of ICD-9 code-identified possible cases, as well as a subset of procedures without microbiology or ICD-9 codes, were reviewed. Test performance characteristics for specific queries were calculated.
Results. Overall, 6097 patients had 7137 procedures. Of these, 1686 procedures with potential infectious complications were identified: 174 by both ICD-9 code and microbiology, 14 only by microbiology, and 1498 only by ICD-9 criteria. We reviewed 558 potential cases, including all 188 microbiology-identified cases, 250 randomly selected ICD-9 cases, and 120 with neither. Overall, 65 unique infections were identified, including 5 of the 250 reviewed cases identified only by ICD-9 codes. Queries that included microbiology data and ICD-9 code 996.61 had good overall test performance, with sensitivities of approximately 90% and specificities of approximately 80%. Queries with ICD-9 codes alone had poor specificity. Extrapolating the infection rate observed in reviewed cases to nonreviewed cases yields an estimated infection rate of 1.3%.
Conclusions. Electronic queries combining ICD-9 codes and microbiologic data can be created and have good test performance characteristics for identifying likely infectious complications of CIED procedures.
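As an illustration of how such an electronic query can combine coded diagnoses with microbiology data, and of how sensitivity and specificity are computed against chart review, here is a minimal sketch. The record layout, field names, and the single-code rule are assumptions for illustration, not the study's actual implementation.

```python
def flag_potential_infection(procedure, diagnoses, cultures, window_days=365):
    """
    Flag a CIED procedure as a potential infectious complication when, within
    the follow-up window, it has the device-infection ICD-9 code (996.61) or a
    positive culture. Dates are assumed to be datetime.date objects.
    """
    in_window = lambda d: 0 <= (d - procedure["date"]).days <= window_days
    has_code = any(dx["code"] == "996.61" and in_window(dx["date"]) for dx in diagnoses)
    has_positive_culture = any(c["positive"] and in_window(c["date"]) for c in cultures)
    return has_code or has_positive_culture

def test_performance(flags, truth):
    """Sensitivity and specificity of a query against chart-review results."""
    tp = sum(f and t for f, t in zip(flags, truth))
    fn = sum((not f) and t for f, t in zip(flags, truth))
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))
    fp = sum(f and (not t) for f, t in zip(flags, truth))
    return tp / (tp + fn), tn / (tn + fp)
```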
Abstract:
Computed tomography (CT) is a valuable technology to the healthcare enterprise as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.
The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in clinical images; that is the only characterization of image quality that clinically matters, as it is most directly related to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
Currently, several software packages track the CTDI and SSDE associated with individual CT examinations. This is primarily driven by two factors. The first is regulatory and governmental pressure on clinics and hospitals to monitor the radiation exposure of individuals in our society. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation exposure they receive as a result of their diagnostic procedures.
An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally, if not more, important to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.
The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.
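For instance, purpose (2) amounts to an ROI-based contrast measurement of the kind sketched below; the array and mask inputs are hypothetical, and the thesis's automated technique may define and measure contrast differently.

```python
import numpy as np

def roi_contrast(image_hu, roi_mask, background_mask):
    """
    Image contrast as the difference between the mean HU value inside a region
    of interest and the mean HU value in a nearby background region.
    """
    return float(image_hu[roi_mask].mean() - image_hu[background_mask].mean())

def contrast_to_noise(image_hu, roi_mask, background_mask):
    """CNR: contrast normalized by the background noise (HU standard deviation)."""
    noise = float(image_hu[background_mask].std())
    return roi_contrast(image_hu, roi_mask, background_mask) / noise
```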
Abstract:
Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.
Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
Results: At a significance level of 5%, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.
Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.
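The per-metric comparison described in the Methods above can be reproduced in outline with a paired Wilcoxon signed-rank test; the conformity-index values below are made up for illustration and are not the study's data.

```python
from scipy.stats import wilcoxon

def significantly_improved(initial_values, replanned_values, alpha=0.05):
    """
    Paired Wilcoxon signed-rank test on a plan-quality metric (e.g. 90%
    conformity index) across the fifteen cases, comparing one value of the
    beam-spreading parameter k against the initially treated plans.
    """
    statistic, p_value = wilcoxon(initial_values, replanned_values)
    return p_value < alpha, p_value

# Hypothetical 90% conformity indices for the initial and re-planned plans.
initial   = [0.62, 0.70, 0.66, 0.71, 0.68, 0.64, 0.73, 0.69, 0.65, 0.67, 0.72, 0.63, 0.70, 0.66, 0.68]
replanned = [0.71, 0.76, 0.74, 0.78, 0.75, 0.70, 0.80, 0.77, 0.72, 0.74, 0.79, 0.69, 0.78, 0.73, 0.75]
print(significantly_improved(initial, replanned))
```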
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic email exchanges. Thanks to its diagnostic value, the proposed framework can support firms’ decision-making and guide compliance monitoring system development.
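A single-level ordinal regression on message-level cue counts gives the flavor of this analysis; the features and outcome below are invented, and the sketch omits the multilevel (dyad-nested) structure of the paper's model.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical message-level cue counts; `deception_severity` is an ordered
# outcome (0 = none, 1 = mild, 2 = severe). All values are invented.
emails = pd.DataFrame({
    "flattery_terms":          [0, 1, 2, 1, 2, 3, 2, 4, 3, 5, 4, 6],
    "superfluous_descriptors": [1, 0, 2, 3, 2, 4, 5, 3, 6, 5, 7, 6],
    "deception_severity":      [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
})

# Single-level ordinal logit; the paper's model is multilevel (messages nested
# in dyadic exchanges), which this simplified sketch does not capture.
model = OrderedModel(
    emails["deception_severity"],
    emails[["flattery_terms", "superfluous_descriptors"]],
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```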
Abstract:
Research in biosensing approaches as alternative techniques for food diagnostics for the detection of chemical contaminants and foodborne pathogens has increased over the last twenty years. The key component of such tests is the biorecognition element, for which polyclonal or monoclonal antibodies still dominate the market. Traditionally, the screening of sera or cell culture media for the selection of polyclonal or monoclonal candidate antibodies, respectively, has been performed by enzyme immunoassays. For niche toxin compounds, enzyme immunoassays can be expensive and/or prohibitive methodologies for antibody production due to limitations in toxin supply for conjugate production. Automated, self-regenerating, chip-based biosensors proven in food diagnostics may be utilised as rapid screening tools for antibody candidate selection. This work describes the use of both single-channel and multi-channel surface plasmon resonance (SPR) biosensors for the selection and characterisation of antibodies, and their evaluation in shellfish tissue as standard techniques for the detection of domoic acid as a model toxin compound. The key advantages of these biosensor techniques for screening hybridomas in monoclonal antibody production were the real-time observation of molecular interaction and rapid turnaround time in analysis compared to enzyme immunoassays. The multichannel prototype instrument was superior, with 96 analyses completed in 2 h compared to 12 h for the single channel and over 24 h for the ELISA immunoassay. Highly sensitive antibodies for the detection of domoic acid in a 1 min analysis time were selected, with IC50 values ranging from 4.8 to 6.9 ng/mL for monoclonal and 2.3 to 6.0 ng/mL for polyclonal antibodies. Although biosensor technology is progressing towards low-cost, multiplexed portable diagnostics for the food industry, there remains a place for laboratory-based SPR instrumentation for antibody development for food diagnostics, as shown herein.
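A typical way to derive IC50 values of this kind from calibration data is a four-parameter logistic fit; the concentrations and normalized responses below are illustrative only and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, slope):
    """Four-parameter logistic curve used for competitive immunoassay calibration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical normalized SPR responses for a domoic acid calibration series (ng/mL).
concentrations = np.array([0.1, 0.5, 1, 2, 5, 10, 20, 50])
responses = np.array([0.98, 0.95, 0.88, 0.75, 0.52, 0.33, 0.18, 0.08])

params, _ = curve_fit(four_pl, concentrations, responses, p0=[1.0, 0.0, 5.0, 1.0])
top, bottom, ic50, slope = params
print(f"Estimated IC50: {ic50:.1f} ng/mL")
```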
Abstract:
The popularity of Computing degrees in the UK has been increasing significantly over the past several years. In Northern Ireland, from 2007 to 2015, there was a 40% increase in acceptances to Computer Science degrees, with England seeing a 60% increase over the same period (UCAS, 2016). However, this growth is tempered by the fact that Computer Science degrees also continue to have the highest dropout rates.
In Queen’s University Belfast we currently have a Level 1 intake of over 400 students across a number of computing pathways. Our drive as staff is to empower and motivate the students to fully engage with the course content. All students take a Java programming module, the aim of which is to provide an understanding of the basic principles of object-oriented design. In order to assess these skills, we have developed Jigsaw Java as an innovative assessment tool offering intelligent, semi-supervised automated marking of code.
Jigsaw Java allows students to answer programming questions using a drag-and-drop interface to place code fragments into position. Their answer is compared to the sample solution and, if it matches, marks are allocated accordingly. However, if a match is not found, the corresponding code is executed using sample data to determine whether its logic is acceptable. If it is, the solution is flagged to be checked by staff and, if satisfactory, is saved as an alternative solution. This means that appropriate marks can be allocated and, should another student submit the same placement of code fragments, it does not need to be executed or checked again; the system now knows how to assess it.
Jigsaw Java is also able to consider partial marks dependent on code placement and will “learn” over time. Given the number of students, Jigsaw Java will improve the consistency and timeliness of marking.
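The marking flow described above can be summarized schematically as follows. This is a sketch in Python for brevity, not the Jigsaw Java implementation; `run_with_sample_data`, the `known_solutions` cache, and the review queue are hypothetical stand-ins, and partial marking is omitted.

```python
def mark_submission(fragments, known_solutions, run_with_sample_data, review_queue):
    """
    Schematic marking flow: exact match against known solutions first; otherwise
    execute the assembled answer on sample data, and if its logic is acceptable,
    queue it for staff review so it can be saved as an alternative solution.
    """
    answer = tuple(fragments)                  # ordered placement of code fragments
    if answer in known_solutions:
        return known_solutions[answer]         # marks already known for this placement
    if run_with_sample_data(answer):           # logic acceptable on sample inputs
        review_queue.append(answer)            # flag for staff confirmation
        return None                            # marks assigned once staff approve
    return 0                                   # no match and logic not acceptable
```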
Abstract:
Android OS supports multiple communication methods between apps. This opens up the possibility of carrying out threats in a collaborative fashion, cf. the Soundcomber example from 2011. In this paper we provide a concise definition of collusion and report on a number of automated detection approaches, developed in co-operation with Intel Security.
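A minimal permission-based heuristic conveys the idea of collusion between communicating apps; it is not one of the detection approaches developed with Intel Security. The permission sets, channel model, and example apps below are assumptions for illustration.

```python
# Sensitive data sources and exfiltration sinks, expressed as Android permissions.
SENSITIVE_SOURCES = {"RECORD_AUDIO", "READ_SMS", "READ_CONTACTS"}
SINKS = {"INTERNET", "SEND_SMS"}

def collusion_candidates(apps, channels):
    """
    apps: {app_name: set of permissions held by the app}
    channels: set of (sender, receiver) pairs known to exchange data
              (intents, shared preferences, covert channels, ...).
    Flags pairs where one app can collect sensitive data and the other can
    leak it, as in the Soundcomber scenario.
    """
    candidates = []
    for sender, receiver in channels:
        if apps[sender] & SENSITIVE_SOURCES and apps[receiver] & SINKS:
            candidates.append((sender, receiver))
    return candidates

# Hypothetical example: a recorder app passing data to a network-capable app.
apps = {"weather": {"RECORD_AUDIO"}, "wallpaper": {"INTERNET"}}
print(collusion_candidates(apps, {("weather", "wallpaper")}))
```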