884 results for test data generation
Abstract:
Complex systems, from environmental behaviour to electronics reliability, can now be monitored with Wireless Sensor Networks (WSNs), in which multiple environmental sensors are deployed in remote locations. This enables data aggregation and readout at lower cost and lower power consumption. Because miniaturisation of the sensing system is hampered by the fact that discrete sensors and electronics consume board area, the development of MEMS sensors offers a promising solution. At Tyndall, the fabrication flow of multiple sensors has been made compatible with CMOS circuitry to further reduce size and cost. An ideal platform on which to host these MEMS environmental sensors is the Tyndall modular wireless mote. This paper describes the development and testing of the latest sensors, incorporating temperature, humidity, corrosion, and gas sensing. It demonstrates their deployment on the Tyndall platform, allowing real-time readings, data aggregation and cross-correlation capabilities. It also presents the design of the next-generation sensing platform using the novel 10 mm wireless cube developed by Tyndall.
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using pulses with a pulse width of 3 ps from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics.
The principal aim was to identify the optimum operation conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
Abstract:
The aim of this research, which focused on the Irish adult population, was to generate information for policymakers by applying statistical analyses and current technologies to oral health administrative and survey databases. Objectives included identifying socio-demographic influences on oral health and utilisation of dental services, comparing epidemiologically-estimated dental treatment need with treatment provided, and investigating the potential of a dental administrative database to provide information on utilisation of services and the volume and types of treatment provided over time. Information was extracted from the claims databases for the Dental Treatment Benefit Scheme (DTBS) for employed adults and the Dental Treatment Services Scheme (DTSS) for less-well-off adults, the National Surveys of Adult Oral Health, and the 2007 Survey of Lifestyle Attitudes and Nutrition in Ireland. Factors associated with utilisation and retention of natural teeth were analysed using count data models and logistic regression. The chi-square test and Student's t-test were used to compare epidemiologically-estimated need in a representative sample of adults with treatment provided. Differences were found in dental care utilisation and tooth retention by socio-economic status. An analysis of the five-year utilisation behaviour of a 2003 cohort of DTBS dental attendees revealed that age and being female were positively associated with visiting annually and with number of treatments. The number of adults using the DTBS increased, and the mean number of treatments per patient decreased, between 1997 and 2008. As a percentage of overall treatments, restorations, dentures, and extractions decreased, while prophylaxis increased. Differences were found between epidemiologically-estimated treatment need and treatment provided for those using the DTBS and DTSS. This research confirms the utility of survey and administrative data to generate knowledge for policymakers.
Public administrative databases have not been designed for research purposes, but they have the potential to provide a wealth of knowledge on treatments provided and utilisation patterns.
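As a rough illustration of the chi-square comparison the abstract describes (epidemiologically-estimated need versus treatment provided), the statistic can be computed by hand from a 2x2 table. This is an illustrative sketch with hypothetical counts, not code or data from the study:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    table = [[a, b], [c, d]], e.g. rows = need estimated yes/no,
    columns = treatment provided yes/no (hypothetical labels)."""
    a, b = table[0]
    c, d = table[1]
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts purely for demonstration:
stat = chi_square_2x2([[10, 20], [20, 10]])
```

The statistic is then compared against a chi-square distribution with one degree of freedom; in practice a library routine such as `scipy.stats.chi2_contingency` would be used instead.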
Abstract:
One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that accompanies the 3D scalar data; they therefore cannot faithfully present the data and risk misleading users' interpretations, conclusions or even decisions. This thesis therefore focuses on uncertainty visualization for 3D scalar data: we seek to create better uncertainty visualization techniques, and to establish the advantages and disadvantages of state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search max/min scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials appearing in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested within a unified framework consisting of three main steps: the first main step is uncertainty data modeling, which defines and generates the types of uncertainty associated with the given 3D scalar data.
The second main step is uncertainty visualization, which transforms the 3D scalar data and their associated uncertainty generated from the first main step into two-dimensional (2D) images for insight, interpretation or communication. The third main step is evaluation, which transforms the 2D images generated from the second main step into quantitative scores according to specific user tasks, and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
Abstract:
The demand for optical bandwidth continues to increase year on year and is being driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and by increasing the number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components together on a single device to provide robust, power-efficient and cost-effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZMs) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large, with typical device lengths of 4 to 7 mm, while requiring a travelling wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate Quadrature Phase Shift Keying (QPSK) modulation. EAM-based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM-based modulators. However, such modulator designs suffer from losses in excess of 40 dB, which limits their use in practical applications.
The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered as an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement was to ensure coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on-chip optical power splitter, before then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase locking region was found between the integrated lasers. It was then shown that in this stable phase locked region the optical outputs of each laser were coherent with each other and phase locked to a common master laser.
Abstract:
This thesis details an experimental and simulation investigation of some novel all-optical signal processing techniques for future optical communication networks. These all-optical techniques include modulation format conversion, phase discrimination and clock recovery. The methods detailed in this thesis use the nonlinearities associated with semiconductor optical amplifiers (SOA) to manipulate signals in the optical domain. Chapter 1 provides an introduction into the work detailed in this thesis, discusses the increased demand for capacity in today’s optical fibre networks and finally explains why all-optical signal processing may be of interest for future optical networks. Chapter 2 discusses the relevant background information required to fully understand the all-optical techniques demonstrated in this thesis. Chapter 3 details some pump-probe measurement techniques used to calculate the gain and phase recovery times of a long SOA. A remarkably fast gain recovery is observed and the wavelength dependent nature of this recovery is investigated. Chapter 4 discusses the experimental demonstration of an all-optical modulation conversion technique which can convert on-off- keyed data into either duobinary or alternative mark inversion. In Chapter 5 a novel phase sensitive frequency conversion scheme capable of extracting the two orthogonal components of a quadrature phase modulated signal into two separate frequencies is demonstrated. Chapter 6 investigates a novel all-optical clock recovery technique for phase modulated optical orthogonal frequency division multiplexing superchannels and finally Chapter 7 provides a brief conclusion.
Abstract:
BACKGROUND: In a time-course microarray experiment, the expression level for each gene is observed across a number of time-points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results from applying our method to a case study and discuss potential extensions.
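The abstract does not specify which multiplicity adjustment the authors propose. As a generic illustration only, the standard Benjamini-Hochberg step-up procedure for controlling the false discovery rate across many gene-level tests can be sketched as:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the (sorted) indices of hypotheses rejected while
    controlling the false discovery rate at level alpha."""
    m = len(p_values)
    # Rank hypotheses by ascending p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value falls under the BH line
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * alpha / m:
            k = rank
    # Reject the k smallest p-values.
    return sorted(order[:k])
```

For example, `benjamini_hochberg([0.01, 0.02, 0.03, 0.5])` rejects the first three hypotheses at alpha = 0.05; in practice a library routine such as `statsmodels.stats.multitest.multipletests` would typically be used.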
Abstract:
Real decision makers exhibit significant shortcomings in the generation of objectives for decisions that they face. Prior research has illustrated the magnitude of this shortcoming but not its causes. In this paper, we identify two distinct impediments to the generation of decision objectives: not thinking broadly enough about the range of relevant objectives, and not thinking deeply enough to articulate every objective within the range that is considered. To test these explanations and explore ways of stimulating a more comprehensive set of objectives, we present three experiments involving a variety of interventions: the provision of sample objectives, organization of objectives by category, and direct challenges to do better, with or without a warning that important objectives are missing. The use of category names and direct challenges with a warning both led to improvements in the quantity of objectives generated without impacting their quality; other interventions yielded less improvement. We conclude by discussing the relevance of our findings to decision analysis and offering prescriptive implications for the elicitation of decision objectives. © 2010 INFORMS.
Abstract:
Background: Acute febrile respiratory illnesses, including influenza, account for a large proportion of ambulatory care visits worldwide. In the developed world, these encounters commonly result in unwarranted antibiotic prescriptions; data from more resource-limited settings are lacking. The purpose of this study was to describe the epidemiology of influenza among outpatients in southern Sri Lanka and to determine if access to rapid influenza test results was associated with decreased antibiotic prescriptions.
Methods: In this pretest-posttest study, consecutive patients presenting from March 2013-April 2014 to the Outpatient Department of the largest tertiary care hospital in southern Sri Lanka were surveyed for influenza-like illness (ILI). Patients meeting World Health Organization criteria for ILI (acute onset of fever ≥38.0°C and cough in the prior 7 days) were enrolled. Consenting patients were administered a structured questionnaire, physical examination, and nasal/nasopharyngeal sampling. Rapid influenza A/B testing (Veritor System, Becton Dickinson) was performed on all patients, but test results were only released to patients and clinicians during the second phase of the study (December 2013-April 2014).
Results: We enrolled 397 patients with ILI, with 217 (54.7%) adults ≥12 years and 188 (47.4%) females. A total of 179 (45.8%) tested positive for influenza by rapid testing, with April-July 2013 and September-November 2013 being the periods with the highest proportion of ILI due to influenza. A total of 310 (78.1%) patients with ILI received a prescription for an antibiotic from their outpatient provider. The proportion of patients prescribed antibiotics decreased from 81.4% in the first phase to 66.3% in the second phase (p=.005); among rapid influenza-positive patients, antibiotic prescriptions decreased from 83.7% in the first phase to 56.3% in the second phase (p=.001). On multivariable analysis, having a positive rapid influenza test available to clinicians was associated with decreased antibiotic use (OR 0.20, 95% CI 0.05-0.82).
Conclusions: Influenza virus accounted for almost 50% of acute febrile respiratory illness in this study, but most patients were prescribed antibiotics. Providing rapid influenza test results to clinicians was associated with fewer antibiotic prescriptions, but overall prescription of antibiotics remained high. In this developing country setting, a multi-faceted approach that includes improved access to rapid diagnostic tests may help decrease antibiotic use and combat antimicrobial resistance.
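An odds ratio with a Wald 95% confidence interval, of the kind reported above (OR 0.20, 95% CI 0.05-0.82), can be computed from a 2x2 table as follows. This is a generic sketch with hypothetical counts, not a reconstruction of the study's multivariable model (which would also adjust for covariates):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
        exposed:   a events, b non-events
        unexposed: c events, d non-events
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf/Wald method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts purely for demonstration:
or_, lower, upper = odds_ratio_ci(10, 90, 30, 70)
```

An unadjusted odds ratio below 1 with an upper confidence limit below 1, as in the study, indicates a statistically significant reduction in the odds of antibiotic prescription.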
Abstract:
BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which less than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1-29% for upward classification, resulting in under-treatment, and 7-68% for downward classification resulting in overtreatment. Less than half of these studies reported within laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percent misclassification around treatment thresholds were reported on the CD4 enumeration technologies included in this review, with few studies reporting assay precision. 
The lack of standardised methodology on test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
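The "bias and limits of agreement" outcome used in this review is the standard Bland-Altman analysis of a candidate assay against a reference. A minimal sketch (illustrative only, with made-up measurements, not data from the review):

```python
import math

def bland_altman(test_vals, ref_vals):
    """Mean bias and 95% limits of agreement between a candidate
    assay and a reference method (Bland-Altman analysis)."""
    diffs = [t - r for t, r in zip(test_vals, ref_vals)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical CD4 counts (cells/ul) purely for demonstration:
bias, lower_loa, upper_loa = bland_altman(
    [100, 200, 300, 400],   # candidate assay
    [110, 190, 310, 390],   # reference (e.g. a flow-cytometry standard)
)
```

A bias near zero with narrow limits of agreement indicates close agreement with the reference; misclassification around a treatment threshold such as 350 cells/ul is then the fraction of paired measurements that fall on opposite sides of the threshold.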
Abstract:
BACKGROUND: The National Comprehensive Cancer Network and the American Society of Clinical Oncology have established guidelines for the treatment and surveillance of colorectal cancer (CRC), respectively. Considering these guidelines, an accurate and efficient method is needed to measure receipt of care. METHODS: The accuracy and completeness of Veterans Health Administration (VA) administrative data were assessed by comparing them with data manually abstracted during the Colorectal Cancer Care Collaborative (C4) quality improvement initiative for 618 patients with stage I-III CRC. RESULTS: The VA administrative data contained gender, marital, and birth information for all patients but race information was missing for 62.1% of patients. The percent agreement for demographic variables ranged from 98.1-100%. The kappa statistic for receipt of treatments ranged from 0.21 to 0.60 and there was a 96.9% agreement for the date of surgical resection. The percentage of post-diagnosis surveillance events in C4 also in VA administrative data were 76.0% for colonoscopy, 84.6% for physician visit, and 26.3% for carcinoembryonic antigen (CEA) test. CONCLUSIONS: VA administrative data are accurate and complete for non-race demographic variables, receipt of CRC treatment, colonoscopy, and physician visits; but alternative data sources may be necessary to capture patient race and receipt of CEA tests.
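The kappa statistic reported above measures chance-corrected agreement between the administrative data and the manually abstracted C4 data. A minimal sketch of Cohen's kappa for a square agreement table (illustrative counts, not the study's data):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table;
    table[i][j] = count rated category i by source A and j by source B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion on the diagonal.
    p_obs = sum(table[i][i] for i in range(k)) / n
    # Expected agreement under independence of the two sources.
    row_m = [sum(row) / n for row in table]
    col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_exp = sum(row_m[i] * col_m[i] for i in range(k))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical yes/no agreement table purely for demonstration:
kappa = cohens_kappa([[20, 5], [10, 15]])
```

Values of 0.21-0.60, as found for receipt of treatments, are conventionally read as fair to moderate agreement.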
Abstract:
Current methods for large-scale wind collection are unviable in urban areas. In order to investigate the feasibility of generating power from winds in these environments, we sought to optimize placements of small vertical-axis wind turbines in areas of artificially-generated winds. We explored both vehicular transportation and architecture as sources of artificial wind, using a combination of anemometer arrays, global positioning system (GPS), and weather report data. We determined that transportation-generated winds were not significant enough for turbine implementation. In addition, safety and administrative concerns restricted the implementation of said wind turbines along roadways for transportation-generated wind collection. Wind measurements from our architecture collection were applied in models that can help predict other similar areas with artificial wind, as well as the optimal placement of a wind turbine in those areas.
Abstract:
Context. This paper is the last in a series devoted to the analysis of the binary content of the Hipparcos Catalogue. Aims. The comparison of the proper motions constructed from positions spanning a short (Hipparcos) or long time (Tycho-2) makes it possible to uncover binaries with periods of the order of or somewhat larger than the short time span (in this case, the 3 yr duration of the Hipparcos mission), since the unrecognised orbital motion will then add to the proper motion. Methods. A list of candidate proper motion binaries is constructed from a carefully designed χ2 test evaluating the statistical significance of the difference between the Tycho-2 and Hipparcos proper motions for 103 134 stars in common between the two catalogues (excluding components of visual systems). Since similar lists of proper-motion binaries have already been constructed, the present paper focuses on the evaluation of the detection efficiency of proper-motion binaries, using different kinds of control data (mostly radial velocities). The detection rate for entries from the Ninth Catalogue of Spectroscopic Binary Orbits (SB9) is evaluated, as well as for stars like barium stars, which are known to be all binaries, and finally for spectroscopic binaries identified from radial velocity data in the Geneva-Copenhagen survey of F and G dwarfs in the solar neighbourhood. Results. Proper motion binaries are efficiently detected for systems with parallaxes in excess of ∼20 mas, and periods in the range 1000-30 000 d. The shortest periods in this range (1000-2000 d, i.e. once to twice the duration of the Hipparcos mission) may appear only as DMSA/G binaries (accelerated proper motion in the Hipparcos Double and Multiple System Annex). Proper motion binaries detected among SB9 systems having periods shorter than about 400 d hint at triple systems, the proper-motion binary involving a component with a longer orbital period. A list of 19 candidate triple systems is provided. 
Binaries suspected of having low-mass (brown-dwarf-like) companions are listed as well. Among the 37 barium stars with parallaxes larger than 5 mas, only 7 exhibit no evidence for duplicity whatsoever (be it spectroscopic or astrometric). Finally, the fraction of proper-motion binaries shows no significant variation among the various (regular) spectral classes, when due account is taken for the detection biases. © ESO 2007.
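The χ2 test described above evaluates whether the Tycho-2 minus Hipparcos proper-motion difference is significant given its uncertainties. A sketch of the two-degree-of-freedom statistic is shown below; the assumed covariance structure (per-component errors plus a correlation coefficient for the difference vector) is an illustration, not the paper's exact error model:

```python
def proper_motion_chi2(dmu_ra, dmu_dec, sigma_ra, sigma_dec, rho=0.0):
    """Chi-square (2 degrees of freedom) of a proper-motion
    difference vector (dmu_ra, dmu_dec), given the standard errors
    of the difference and their correlation rho.

    Computes dmu^T C^{-1} dmu with C the 2x2 covariance matrix."""
    det = (sigma_ra ** 2) * (sigma_dec ** 2) * (1 - rho ** 2)
    return ((dmu_ra ** 2) * sigma_dec ** 2
            - 2 * rho * sigma_ra * sigma_dec * dmu_ra * dmu_dec
            + (dmu_dec ** 2) * sigma_ra ** 2) / det
```

A star whose χ2 exceeds the chosen significance threshold of the chi-square distribution with 2 degrees of freedom is flagged as a candidate proper-motion binary.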
Abstract:
The availability of a very accurate dependence graph for a scalar code is the basis for the automatic generation of an efficient parallel implementation. The strategy for this task, which is encapsulated in a comprehensive data partitioning and code generation algorithm, is described. This algorithm involves the data partitioning, calculation of assignment ranges for partitioned arrays, addition of a comprehensive set of execution control masks, altering of loop limits, and addition and optimisation of communications for all data. In this context, the development and implementation of strategies to merge communications wherever possible has proved an important feature in producing efficient parallel implementations for numerical mesh-based codes. The code generation strategies described here are embedded within the Computer Aided Parallelisation Tools (CAPTools) software as a key part of a toolkit for automating as much as possible of the parallelisation process for mesh-based numerical codes. The algorithms used enable the parallelisation of real computational mechanics codes with only minor user interaction and without any prior manual customisation of the serial code to suit the parallelisation tool.
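Two of the ingredients named above, assignment ranges for partitioned arrays and execution control masks, can be illustrated with a simple one-dimensional block partition. This is a generic sketch of the technique, not CAPTools code:

```python
def block_range(n, nprocs, p):
    """Assignment range (inclusive lo, exclusive hi) of a length-n
    array dimension owned by processor p under a block partition.

    The first (n % nprocs) processors receive one extra element."""
    base, extra = divmod(n, nprocs)
    lo = p * base + min(p, extra)
    hi = lo + base + (1 if p < extra else 0)
    return lo, hi

def owns(n, nprocs, p, i):
    """Execution control mask: should processor p execute the
    loop iteration that assigns to index i?"""
    lo, hi = block_range(n, nprocs, p)
    return lo <= i < hi

# Example: a loop over a 10-element array on 3 processors; each
# processor guards its iterations with the ownership mask.
ranges = [block_range(10, 3, p) for p in range(3)]
```

Loop limits are then altered so each processor iterates only over its own range, and communications are added (and merged where possible) for any off-processor data the dependence graph shows a loop body reading.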
Abstract:
FUELCON is an expert system in nuclear engineering. Its task is optimized refueling design, which is crucial for keeping down operation costs at a plant. FUELCON proposes sets of alternative fuel-allocation configurations; the fuel is positioned in a grid representing the core of a reactor. The practitioner of in-core fuel management uses FUELCON to generate a reasonably good configuration for the situation at hand. The domain expert, on the other hand, resorts to the system to test heuristics and discover new ones for the task described above. Expert use involves a manual phase of revising the ruleset, based on performance during previous iterations in the same session. This paper is concerned with a new phase: the design of a neural component to carry out the revision automatically. Such automated revision considers previous performance of the system and uses it for adaptation and for learning better rules. The neural component is based on a particular schema for a symbolic-to-recurrent-analogue bridge, called NIPPL, and on reinforcement learning of neural networks for the adaptation.