915 results for Databases, Bibliographic
Abstract:
The QUT-NOISE-SRE protocol is designed to mix the large QUT-NOISE database, consisting of over 10 hours of background noise, collected across 10 unique locations covering 5 common noise scenarios, with commonly used speaker recognition datasets such as Switchboard, Mixer and the speaker recognition evaluation (SRE) datasets provided by NIST. By allowing common, clean, speech corpora to be mixed with a wide variety of noise conditions, environmental reverberant responses, and signal-to-noise ratios, this protocol provides a solid basis for the development, evaluation and benchmarking of robust speaker recognition algorithms, and is freely available to download alongside the QUT-NOISE database. In this work, we use the QUT-NOISE-SRE protocol to evaluate a state-of-the-art PLDA i-vector speaker recognition system, demonstrating the importance of designing voice-activity-detection front-ends specifically for speaker recognition, rather than aiming for perfect coherence with the true speech/non-speech boundaries.
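A minimal sketch of the core operation such a protocol standardises, mixing a clean speech signal with a noise segment at a chosen signal-to-noise ratio, is shown below; the function name and the use of NumPy are illustrative assumptions and not part of the released QUT-NOISE-SRE code.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (hypothetical helper; both inputs are 1-D float arrays at the same
    sample rate), then add it to `speech`."""
    if len(noise) < len(speech):
        # Tile the noise so it covers the whole utterance.
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Noise power required to reach the target SNR in dB.
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_noise_power / noise_power)

# Toy usage: a synthetic tone mixed with white noise at 5 dB SNR.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
noisy = mix_at_snr(speech, rng.normal(size=8000), snr_db=5.0)
```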
Abstract:
Background After being discharged from hospital following the acute management of a fragility fracture, older adults may re-present to hospital emergency departments in the post-discharge period. Early re-presentation to hospital, which includes hospital readmissions, and emergency department presentations without admission, may be considered undesirable for individuals, hospital institutions and society. The identification of modifiable risk factors for hospital re-presentation following initial fracture management may prove useful for informing policy or practice initiatives that seek to minimise the need for older adults to re-present to hospital early after they have been discharged from their initial inpatient care. The purpose of this systematic review is to identify correlates of hospital re-presentation in older patients who have been discharged from hospital following clinical management of fragility fractures. Methods/Design The review will follow the PRISMA-P reporting guidelines for systematic reviews. Four electronic databases (PubMed, CINAHL, Embase, and Scopus) will be searched. A suite of search terms will identify peer-reviewed articles that have examined the correlates of hospital re-presentation in older adults (mean age of 65 years or older) who have been discharged from hospital following treatment for fragility fractures. The Effective Public Health Practice Project Quality Assessment Tool for Quantitative Studies will be used to assess the quality of the studies. The strength of evidence will be assessed through best evidence synthesis. Clinical and methodological heterogeneity across studies is likely to impede meta-analyses. Discussion The best evidence synthesis will outline correlates of hospital re-presentations in this clinical group. This synthesis will take into account potential risks of bias for each study, while permitting inclusion of findings from a range of quantitative study designs. It is anticipated that findings from the review will be useful in identifying potentially modifiable risk factors that have relevance in policy, practice and research priorities to improve the management of patients with fragility fractures. Systematic Review Registration PROSPERO CRD42015019379
Abstract:
This article considers the challenges posed to intellectual property law by the emerging field of bioinformatics. It examines the intellectual property strategies of established biotechnology companies, such as Celera Genomics, and information technology firms entering into the marketplace, such as IBM. First, this paper argues that copyright law is not irrelevant to biotechnology, as some commentators would suggest. It claims that the use of copyright law and contract law is fundamental to the protection of biomedical and genomic databases. Second, this article questions whether biotechnology companies are exclusively interested in patenting genes and genetic sequences. Recent evidence suggests that biotechnology companies and IT firms are patenting bioinformatics software and Internet business methods, as well as underlying instrumentation such as microarrays and genechips. Finally, this paper evaluates what impact the privatisation of bioinformatics will have on public research and scientific communication. It raises important questions about integration, interoperability, and the risks of monopoly. It also considers whether open-source software such as the Ensembl Project and peer-to-peer technology like DSAS will be able to counter this trend of privatisation.
Abstract:
This article examines a series of controversies within the life sciences over data sharing. Part 1 focuses upon the agricultural biotechnology firm Syngenta publishing data on the rice genome in the journal Science, and considers proposals to reform scientific publishing and funding to encourage data sharing. Part 2 examines the relationship between intellectual property rights and scientific publishing, in particular copyright protection of databases, and evaluates the declaration of the Human Genome Organisation that genomic databases should be global public goods. Part 3 looks at varying opinions on the information function of patent law, and then considers the proposals of Patrinos and Drell to provide incentives for private corporations to release data into the public domain.
Abstract:
Background An important potential clinical benefit of using capnography monitoring during procedural sedation and analgesia (PSA) is that this technology could improve patient safety by reducing serious sedation-related adverse events, such as death or permanent neurological disability, which are caused by inadequate oxygenation. The hypothesis is that earlier identification of respiratory depression using capnography leads to a change in clinical management that prevents hypoxaemia. As inadequate oxygenation/ventilation is the most common reason for injury associated with PSA, reducing episodes of hypoxaemia would indicate that using capnography would be safer than relying on standard monitoring alone. Methods/design The primary objective of this review is to determine whether using capnography during PSA in the hospital setting improves patient safety by reducing the risk of hypoxaemia (defined as an arterial partial pressure of oxygen below 60 mmHg or percentage of haemoglobin that is saturated with oxygen [SpO2] less than 90 %). A secondary objective of this review is to determine whether changes in the clinical management of sedated patients are the mediating factor for any observed impact of capnography monitoring on the rate of hypoxaemia. The potential adverse effect of capnography monitoring that will be examined in this review is the rate of inadequate sedation. Electronic databases will be searched for parallel, crossover and cluster randomised controlled trials comparing the use of capnography with standard monitoring alone during PSA that is administered in the hospital setting. Studies that included patients who received general or regional anaesthesia will be excluded from the review. Non-randomised studies will be excluded. Screening, study selection and data extraction will be performed by two reviewers. The Cochrane risk of bias tool will be used to assign a judgment about the degree of risk. Meta-analyses will be performed if suitable. Discussion This review will synthesise the evidence on an important potential clinical benefit of capnography monitoring during PSA within hospital settings. Systematic review registration: PROSPERO CRD42015023740
Abstract:
Despite substantial progress in measuring the 3D profile of anatomical variations in the human brain, their genetic and environmental causes remain enigmatic. We developed an automated system to identify and map genetic and environmental effects on brain structure in large brain MRI databases. We applied our multi-template segmentation approach ("Multi-Atlas Fluid Image Alignment") to fluidly propagate hand-labeled parameterized surface meshes into 116 scans of twins (60 identical, 56 fraternal), labeling the lateral ventricles. Mesh surfaces were averaged within subjects to minimize segmentation error. We fitted quantitative genetic models at each of 30,000 surface points to measure the proportion of shape variance attributable to (1) genetic differences among subjects, (2) environmental influences unique to each individual, and (3) shared environmental effects. Surface-based statistical maps revealed 3D heritability patterns, and their significance, with and without adjustments for global brain scale. These maps visualized detailed profiles of environmental versus genetic influences on the brain, extending genetic models to spatially detailed, automatically computed, 3D maps.
Abstract:
Despite substantial progress in measuring the anatomical and functional variability of the human brain, little is known about the genetic and environmental causes of these variations. Here we developed an automated system to visualize genetic and environmental effects on brain structure in large brain MRI databases. We applied our multi-template segmentation approach termed "Multi-Atlas Fluid Image Alignment" to fluidly propagate hand-labeled parameterized surface meshes, labeling the lateral ventricles, in 3D volumetric MRI scans of 76 identical (monozygotic, MZ) twins (38 pairs; mean age = 24.6 (SD = 1.7)); and 56 same-sex fraternal (dizygotic, DZ) twins (28 pairs; mean age = 23.0 (SD = 1.8)), scanned as part of a 5-year research study that will eventually study over 1000 subjects. Mesh surfaces were averaged within subjects to minimize segmentation error. We fitted quantitative genetic models at each of 30,000 surface points to measure the proportion of shape variance attributable to (1) genetic differences among subjects, (2) environmental influences unique to each individual, and (3) shared environmental effects. Surface-based statistical maps, derived from path analysis, revealed patterns of heritability, and their significance, in 3D. Path coefficients for the 'ACE' model that best fitted the data indicated significant contributions from genetic factors (A = 7.3%), common environment (C = 38.9%) and unique environment (E = 53.8%) to lateral ventricular volume. Earlier-maturing occipital horn regions may also be more genetically influenced than later-maturing frontal regions. Maps visualized spatially-varying profiles of environmental versus genetic influences. The approach shows promise for automatically measuring gene-environment effects in large image databases.
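For readers unfamiliar with ACE variance decomposition, the classical Falconer estimates derived from MZ and DZ twin-pair correlations give a rough, closed-form approximation of the A, C and E components fitted by path analysis above; the correlations below are hypothetical values chosen only to land near the reported ballpark, not figures from the study.

```python
def falconer_ace(r_mz, r_dz):
    """Crude ACE variance components from MZ and DZ intraclass correlations:
    A = 2*(r_mz - r_dz), C = 2*r_dz - r_mz, E = 1 - r_mz (Falconer's formulas)."""
    return 2.0 * (r_mz - r_dz), 2.0 * r_dz - r_mz, 1.0 - r_mz

# Hypothetical correlations at a single surface point.
a, c, e = falconer_ace(r_mz=0.46, r_dz=0.42)
print(f"A = {a:.1%}, C = {c:.1%}, E = {e:.1%}")  # A = 8.0%, C = 38.0%, E = 54.0%
```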
Abstract:
Modal flexibility is a widely accepted technique to detect structural damage using vibration characteristics. Its application to detect damage in long span large diameter cables such as those used in suspension bridge main cables has not received much attention. This paper uses the modal flexibility method incorporating two damage indices (DIs) based on lateral and vertical modes to localize damage in such cables. The competency of these DIs in damage detection is tested using the numerically obtained vibration characteristics of a suspended cable in both intact and damaged states. Three single damage cases and one multiple damage case are considered. The impact of random measurement noise in the modal data on the damage localization capability of these two DIs is next examined. Long span large diameter cables are characterized by two critical cable parameters, namely bending stiffness and sag-extensibility. The influence of these parameters on the damage localization capability of the two DIs is evaluated by a parametric study with two single damage cases. Results confirm that the damage index based on lateral vibration modes has the ability to successfully detect and locate damage in suspended cables with 5% noise in modal data for a range of cable parameters. This simple approach can therefore be extended to timely damage detection in cables of suspension bridges, thereby enhancing their service during their life spans.
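As a sketch of the modal flexibility concept behind such damage indices: the flexibility matrix is assembled from natural frequencies and mass-normalised mode shapes, and damage is localised by the change in diagonal flexibility between intact and damaged states. The code below is a generic illustration under those assumptions, not the paper's exact DI formulation.

```python
import numpy as np

def modal_flexibility(frequencies_hz, mode_shapes):
    """F = sum_i (1 / omega_i^2) * phi_i * phi_i^T, using the columns of
    `mode_shapes` as mass-normalised mode shapes."""
    F = np.zeros((mode_shapes.shape[0],) * 2)
    for f, phi in zip(frequencies_hz, mode_shapes.T):
        F += np.outer(phi, phi) / (2.0 * np.pi * f) ** 2
    return F

def flexibility_damage_index(freqs_intact, shapes_intact, freqs_damaged, shapes_damaged):
    """Change in diagonal flexibility at each measurement point;
    peaks suggest the likely damage location."""
    return np.diag(modal_flexibility(freqs_damaged, shapes_damaged)) - \
           np.diag(modal_flexibility(freqs_intact, shapes_intact))

# Toy example: 10 measurement points, 3 sinusoidal mode shapes,
# with a small frequency drop after damage.
x = np.linspace(0.0, 1.0, 10)
shapes = np.column_stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
di = flexibility_damage_index([1.0, 2.1, 3.3], shapes, [0.95, 2.0, 3.2], shapes)
```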
Abstract:
We are currently facing an overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, causing a phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most common obstacles raised is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect all important content and analyze it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction or information extraction from the Web) and toward understanding natural language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results, but proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information, which requires the use of common-sense knowledge, discourse analysis and disambiguation techniques.
Abstract:
Visual information in the form of lip movements of the speaker has been shown to improve the performance of speech recognition and search applications. In our previous work, we proposed cross-database training of synchronous hidden Markov models (SHMMs) to make use of large, publicly available external audio databases in addition to the relatively small given audio-visual database. In this work, the cross-database training approach is improved by performing an additional audio adaptation step, which enables audio-visual SHMMs to benefit from audio observations of the external audio models before the visual modality is added to them. The proposed approach outperforms the baseline cross-database training approach in clean and noisy environments in terms of phone recognition accuracy as well as spoken term detection (STD) accuracy.
Abstract:
Speech recognition can be improved by using visual information in the form of lip movements of the speaker in addition to audio information. To date, state-of-the-art techniques for audio-visual speech recognition continue to use audio and visual data of the same database for training their models. In this paper, we present a new approach to make use of one modality of an external dataset in addition to a given audio-visual dataset. By doing so, it is possible to create more powerful models from other extensive audio-only databases and adapt them to our comparatively smaller multi-stream databases. Results show that the presented approach outperforms the widely adopted synchronous hidden Markov models (HMMs) trained jointly on audio and visual data of a given audio-visual database for phone recognition by 29% relative. It also outperforms the external audio models trained on extensive external audio datasets, as well as internal audio models, by 5.5% and 46% relative, respectively. We also show that the proposed approach is beneficial in noisy environments where the audio source is affected by environmental noise.
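To make the reported gains concrete: improvements of "X% relative" are typically computed as a relative reduction in error rate; the sketch below uses hypothetical phone error rates (not figures from the paper) purely to show the arithmetic.

```python
def relative_improvement(baseline_error, new_error):
    """Relative error reduction: (baseline - new) / baseline."""
    return (baseline_error - new_error) / baseline_error

# Hypothetical phone error rates, chosen only to illustrate a 29% relative gain.
print(f"{relative_improvement(0.40, 0.284):.0%} relative improvement")
```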
Abstract:
Skin temperature is an important physiological measure that can reflect the presence of illness and injury as well as provide insight into the localised interactions between the body and the environment. The aim of this systematic review was to analyse the agreement between conductive and infrared means of assessing skin temperature, which are commonly employed in clinical, occupational, sports medicine, public health and research settings. Full-text eligibility was determined independently by two reviewers. Studies meeting the following criteria were included in the review: 1) the literature was written in English, 2) participants were human (in vivo), 3) skin surface temperature was assessed at the same site, 4) with at least two commercially available devices employed (one conductive and one infrared), and 5) had skin temperature data reported in the study. A computerised search of four electronic databases, using a combination of 21 keywords, and citation tracking was performed in January 2015. A total of 8,602 records were returned. Methodological quality was assessed by two authors independently, using the Cochrane risk of bias tool. A total of 16 articles (n = 245) met the inclusion criteria. Devices were classified as being in agreement if they met the clinically meaningful recommendations of mean differences within ±0.5 °C and limits of agreement of ±1.0 °C. Twelve of the included studies found mean differences greater than ±0.5 °C between conductive and infrared devices. In the presence of an external stimulus (e.g. exercise and/or heat), five studies found exacerbated measurement differences between conductive and infrared devices. This is the first review that has attempted to investigate the presence of any systematic bias between infrared and conductive measures by collectively evaluating the current evidence base. There was also a consistently high risk of bias across the studies, in terms of sample size, random sequence generation, allocation concealment, blinding and incomplete outcome data. This systematic review questions the suitability of using infrared cameras in stable, resting, laboratory conditions. Furthermore, both infrared cameras and thermometers demonstrate poor agreement with conductive devices in the presence of sweat and environmental heat. These findings have implications for clinical, occupational, public health, sports science and research fields.
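The ±0.5 °C / ±1.0 °C criteria above correspond to a Bland-Altman style agreement analysis (mean difference and 95% limits of agreement). A minimal sketch with hypothetical paired readings follows; it is not the review's own analysis code.

```python
import numpy as np

def bland_altman(conductive, infrared):
    """Bias (mean difference) and 95% limits of agreement between two
    paired skin-temperature methods, both in degrees Celsius."""
    diffs = np.asarray(infrared, dtype=float) - np.asarray(conductive, dtype=float)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired readings taken at the same skin site.
bias, (lo, hi) = bland_altman([33.1, 34.0, 32.5, 33.8, 34.2],
                              [33.4, 34.5, 32.3, 34.1, 34.8])
print(f"bias = {bias:+.2f} °C, limits of agreement = ({lo:+.2f}, {hi:+.2f}) °C")
```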
Abstract:
Background International standard practice for confirming correct placement of a central venous access device is the chest X-ray. The intracavitary electrocardiogram-based insertion method is radiation-free and allows real-time placement verification, providing immediate treatment and a reduced requirement for post-procedural repositioning. Methods Relevant databases were searched for prospective randomised controlled trials (RCTs) or quasi-RCTs that compared the effectiveness of electrocardiogram-guided catheter tip positioning with placement using surface-anatomy-guided insertion plus chest X-ray confirmation. The primary outcome was accurate catheter tip placement. Secondary outcomes included complications, patient satisfaction and costs. Results Five studies involving 729 participants were included. Electrocardiogram-guided insertion was more accurate than surface-anatomy-guided insertion (odds ratio 8.3; 95% confidence interval (CI) 1.38 to 50.07; p = 0.02). There was a lack of reporting on complications, patient satisfaction and costs. Conclusion The evidence suggests that intracavitary electrocardiogram-based positioning is superior to surface-anatomy-guided positioning of central venous access devices, leading to significantly more successful placements. This technique could potentially remove the requirement for a post-procedural chest X-ray, especially during peripherally inserted central catheter (PICC) line insertion.
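For context, an odds ratio and Wald-type 95% confidence interval of the kind reported above can be computed from a 2x2 table of accurate versus inaccurate placements; the counts below are hypothetical and this sketch is not the review's pooled meta-analysis model.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a/b = accurate/inaccurate with ECG guidance,
    c/d = accurate/inaccurate with surface-anatomy guidance."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, (lo, hi)

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(a=350, b=15, c=300, d=60)
print(f"OR = {or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```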
Abstract:
AIMS: The Framework Convention on Tobacco Control (FCTC) requires nations that have ratified the convention to ban all tobacco advertising and promotion. In the face of these restrictions, tobacco packaging has become the key promotional vehicle for the tobacco industry to interest smokers and potential smokers in tobacco products. This paper reviews available research into the probable impact of mandatory plain packaging and internal tobacco industry statements about the importance of packs as promotional vehicles. It critiques legal objections raised by the industry about plain packaging violating laws and international trade agreements. METHODS: Searches for available evidence were conducted within the internal tobacco industry documents through the online document archives; tobacco industry trade publications; research literature through the Medline and Business Source Premier databases; and grey literature including government documents, research reports and non-governmental organization papers via the Google internet search engine. RESULTS: Plain packaging of all tobacco products would remove a key remaining means for the industry to promote its products to billions of the world's smokers and future smokers. Governments have required large surface areas of tobacco packs to be used exclusively for health warnings without legal impediment or need to compensate tobacco companies. CONCLUSIONS: Requiring plain packaging is consistent with the intention to ban all tobacco promotions. There is no impediment in the FCTC to interpreting tobacco advertising and promotion to include tobacco packs.
Abstract:
Techniques to align spatio-temporal data for large-scale analysis of human group behaviour have been developed. Applying these techniques to sports databases enables a team's characteristic style of play to be discovered and compared for tactical analysis. Applications in surveillance have also been developed to recognise group activities in real time and to re-identify persons from low-resolution video footage.