45 results for data acquisition system
Abstract:
Training a system to recognize handwritten words is a task that requires a large amount of data with their correct transcription. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive, handwritten word recognition. The special focus of the paper is on retraining rules that define what data are actually being used in the retraining phase. In a series of experiments it is shown that the performance of a neural network based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
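As a rough illustration of the self-training loop and the kind of retraining rule discussed above, here is a minimal sketch in Python; the confidence threshold is only one possible retraining rule, and a generic scikit-learn classifier stands in for the paper's neural network word recognizer:

```python
# Minimal self-training sketch with a confidence-threshold retraining rule.
# Inputs are NumPy arrays; a generic classifier stands in for the paper's
# handwritten-word recognizer (illustrative, not the authors' implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    X_pool = np.asarray(X_unlab, dtype=float)
    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        proba = model.predict_proba(X_pool)
        conf = proba.max(axis=1)
        keep = conf >= threshold              # retraining rule: accept only
        if not keep.any():                    # confidently recognized samples
            break
        X_lab = np.vstack([X_lab, X_pool[keep]])
        y_new = model.classes_[proba[keep].argmax(axis=1)]  # pseudo-labels
        y_lab = np.concatenate([y_lab, y_new])
        X_pool = X_pool[~keep]                # remove accepted samples from pool
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    return model
```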
Abstract:
Historical, i.e. pre-1957, upper-air data are a valuable source of information on the state of the atmosphere, in some parts of the world dating back to the early 20th century. However, to date, reanalyses have only partially made use of these data, and only of observations made after 1948. Even for the period between 1948 (the starting year of the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis) and the International Geophysical Year in 1957 (the starting year of the ERA-40 reanalysis), when the global upper-air coverage reached more or less its current status, many observations have not yet been digitised. The Comprehensive Historical Upper-Air Network (CHUAN) already compiled a large collection of pre-1957 upper-air data. In the framework of the European project ERA-CLIM (European Reanalysis of Global Climate Observations), significant amounts of additional upper-air data have been catalogued (> 1.3 million station days), imaged (> 200 000 images) and digitised (> 700 000 station days) in order to prepare a new input data set for upcoming reanalyses. The records cover large parts of the globe, focussing on regions that have so far been less well covered, such as the tropics, the polar regions and the oceans, and on very early upper-air data from Europe and the US. The total number of digitised/inventoried records is 61/101 for moving upper-air data, i.e. data from ships, etc., and 735/1783 for fixed upper-air stations. Here, we give a detailed description of the resulting data set, including the metadata and the quality checking procedures applied. The data will be included in the next version of CHUAN. The data are available at doi:10.1594/PANGAEA.821222.
Abstract:
The paper showcases the field- and lab-documentation system developed for Kinneret Regional Project, an international archaeological expedition to the Northwestern shore of the Sea of Galilee (Israel) under the auspices of the University of Bern, the University of Helsinki, Leiden University and Wofford College. The core of the data management system is a fully relational, server-based database framework, which also includes time-based and static GIS services, stratigraphic analysis tools and fully indexed document/digital image archives. Data collection in the field is based on mobile, hand-held devices equipped with a custom-tailored stand-alone application. Comprehensive three-dimensional documentation of all finds and findings is achieved by means of total stations and/or high-precision GPS devices. All archaeological information retrieved in the field – including tachymetric data – is synched with the core system on the fly and thus immediately available for further processing in the field lab (within the local network) or for post-excavation analysis at remote institutions (via the WWW). Besides a short demonstration of the main functionalities, the paper also presents some of the key technologies used and illustrates usability aspects of the system’s individual components.
Abstract:
Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users via their carried mobile devices (MDs) to support added services, such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services. However, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. To better understand the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capturing of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate devices' locations in real environments. Given the challenging effects of the indoor environment on radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization algorithm, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, prior knowledge of the indoor layout, or offline calibration of the system.
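A hedged sketch of the proximity-based idea (the weighting scheme, the median-deviation filter, and all numbers are illustrative assumptions, not the thesis' actual algorithms): the device position is estimated as an RSSI-weighted centroid of the anchors that hear it, after discarding readings that deviate strongly from the median as a crude stand-in for the described contaminated-signal filtering.

```python
# Proximity-style localization sketch: RSSI-weighted centroid of anchor
# positions, preceded by a simple median-deviation outlier filter (a crude
# stand-in for NLOS/multipath detection; not the thesis' algorithm).
import numpy as np

def weighted_centroid(anchors, rssi_dbm, outlier_db=15.0):
    anchors = np.asarray(anchors, dtype=float)   # (n, 2) anchor coordinates
    rssi = np.asarray(rssi_dbm, dtype=float)     # received power per anchor, dBm
    med = np.median(rssi)
    ok = np.abs(rssi - med) <= outlier_db        # drop suspect readings
    w = 10 ** (rssi[ok] / 10.0)                  # dBm -> linear power weights
    return (anchors[ok] * w[:, None]).sum(axis=0) / w.sum()

# Example: three anchors; the strongest signal pulls the estimate toward it.
print(weighted_centroid([(0, 0), (10, 0), (0, 10)], [-50, -60, -62]))
```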
Abstract:
Project justification is regarded as one of the major methodological deficits in Data Warehousing practice. The special nature of Data Warehousing benefits and the large portion of infrastructure-related activities are cited as reasons for applying inappropriate methods, performing incomplete evaluations, or even omitting justification entirely. In this paper, the economic justification of Data Warehousing projects is analyzed, and first results from a large academia-industry collaboration project in the field of non-technical issues of Data Warehousing are presented. As conceptual foundations, the role of the Data Warehouse system in corporate application architectures is analyzed, and the specific properties of Data Warehousing projects are discussed. Based on an applicability analysis of traditional approaches to economic IT project justification, basic steps and responsibilities for the justification of Data Warehousing projects are derived.
Abstract:
Since the late 1990s and early 2000s, derivatives of well-known designer drugs as well as new psychoactive compounds have been sold on the illicit drug market and have led to intoxications and fatalities. The LC-MS/MS screening method presented covers 31 new designer drugs as well as cathinone, methcathinone, phencyclidine, and ketamine, which were included to complete the screening spectrum. All but the last two are modified molecular structures of amphetamine, tryptamine, or piperazine. Among the amphetamine derivatives are cathinone, methcathinone, 3,4-DMA, 2,5-DMA, DOB, DOET, DOM, ethylamphetamine, MDDMA, 4-MTA, PMA, PMMA, 3,4,5-TMA, TMA-6 and members of the 2C group: 2C-B, 2C-D, 2C-H, 2C-I, 2C-P, 2C-T-2, 2C-T-4, and 2C-T-7. AMT, DPT, DiPT, MiPT, DMT, and 5MeO-DMT are contained in the tryptamine group; BZP, MDBP, TFMPP, mCPP, and MeOPP in the piperazine group. Using an Applied Biosystems LC-MS/MS API 365 TurboIonSpray, it is possible to identify all 35 substances. After addition of internal standards and mixed-mode solid-phase extraction, the analytes are separated using a Synergi Polar RP column and gradient elution with 1 mM ammonium formate and methanol/0.1% formic acid as mobile phases A and B. Data acquisition is performed in MRM mode with positive electrospray ionization. The assay is selective for all tested substances. Limits of detection were determined by analyzing signal-to-noise (S/N) ratios and are between 1.0 and 5.0 ng/mL. Matrix effects lie between 65% and 118%; extraction efficiencies range from 72% to 90%.
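The abstract does not spell out the LOD criterion; assuming the common S/N = 3 convention, a limit of detection can be extrapolated from the S/N measured at a known low concentration, as in this sketch:

```python
# Hypothetical LOD estimation from a measured S/N ratio, assuming the
# common S/N = 3 criterion and linear scaling near the detection limit.
def lod_from_sn(conc_ng_ml, sn_measured, sn_required=3.0):
    return conc_ng_ml * sn_required / sn_measured

print(lod_from_sn(5.0, 15.0))  # S/N of 15 at 5 ng/mL -> LOD = 1.0 ng/mL
```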
Abstract:
SMARTDIAB is a platform designed to support the monitoring, management, and treatment of patients with type 1 diabetes mellitus (T1DM), by combining state-of-the-art approaches in the fields of database (DB) technologies, communications, simulation algorithms, and data mining. SMARTDIAB consists mainly of two units: 1) the patient unit (PU); and 2) the patient management unit (PMU), which communicate with each other for data exchange. The PMU can be accessed by the PU through the internet using devices such as PCs/laptops with direct internet access or mobile phones via a Wi-Fi/General Packet Radio Service access network. The PU consists of an insulin pump for subcutaneous insulin infusion to the patient and a continuous glucose measurement system. The aforementioned devices, running a user-friendly application, gather patient-related information and transmit it to the PMU. The PMU consists of a diabetes data management system (DDMS), a decision support system (DSS) that provides risk assessment for long-term diabetes complications, and an insulin infusion advisory system (IIAS), which reside on a Web server. The DDMS can be accessed by both medical personnel and patients, with appropriate security access rights and front-end interfaces. The DDMS, apart from being used for data storage/retrieval, also provides advanced tools for the intelligent processing of the patient's data, supporting the physician in decision making regarding the patient's treatment. The IIAS is used to close the loop between the insulin pump and the continuous glucose monitoring system, by providing the pump with the appropriate insulin infusion rate in order to keep the patient's glucose levels within predefined limits. The pilot version of SMARTDIAB has already been implemented, and evaluation of the platform in a clinical environment is in progress.
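A minimal sketch of the closed-loop idea behind the IIAS (glucose reading in, infusion rate out); the plain proportional controller and all constants below are illustrative assumptions, not the actual IIAS algorithm:

```python
# Closed-loop sketch: read the CGM, compute an insulin infusion rate, send
# it to the pump. A simple proportional controller is shown purely for
# illustration; all constants are assumed values.
TARGET_MG_DL = 110.0     # assumed glucose target
BASAL_U_PER_H = 1.0      # assumed basal insulin rate
GAIN = 0.01              # assumed proportional gain, U/h per mg/dL

def infusion_rate(glucose_mg_dl):
    rate = BASAL_U_PER_H + GAIN * (glucose_mg_dl - TARGET_MG_DL)
    return max(0.0, rate)            # a pump cannot infuse negative insulin

print(infusion_rate(180))  # elevated glucose -> 1.7 U/h, above basal
```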
Abstract:
Background: Abstractor training is a key element in creating valid and reliable data collection procedures. The choice between in-person vs. remote or simultaneous vs. sequential abstractor training has considerable consequences for time and resource utilization. We conducted a web-based (webinar) abstractor training session to standardize training across six individual Cancer Research Network (CRN) sites for a study of breast cancer treatment effects in older women (BOWII). The goals of this manuscript are to describe the training session, its participants and participants' evaluation of webinar technology for abstraction training. Findings: A webinar was held for all six sites with the primary purpose of simultaneously training staff and ensuring consistent abstraction across sites. The training session involved sequential review of over 600 data elements outlined in the coding manual in conjunction with the display of data entry fields in the study's electronic data collection system. Post-training evaluation was conducted via Survey Monkey©. Inter-rater reliability measures for abstractors within each site were conducted three months after the commencement of data collection. Ten of the 16 people who participated in the training completed the online survey. Almost all (90%) of the 10 trainees had previous medical record abstraction experience and nearly two-thirds reported over 10 years of experience. Half of the respondents had previously participated in a webinar, of whom three had used one for training purposes. All rated the knowledge and information delivered through the webinar as useful and reported that it adequately prepared them for data collection. Moreover, all participants would recommend this platform for multi-site abstraction training. Consistent with participant-reported training effectiveness, inter-rater agreement within sites ranged from 89% to 98%, with a weighted average of 95% agreement across sites. Conclusions: Conducting training via web-based technology was an acceptable and effective approach to standardizing medical record review across multiple sites for this group of experienced abstractors. Given the substantial time and cost savings achieved with the webinar, coupled with participants' positive evaluation of the training session, researchers should consider this instructional method as part of training efforts to ensure high quality data collection in multi-site studies.
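For concreteness, a weighted average agreement like the reported 95% is computed by weighting each site's percent agreement by its number of abstracted records; the site names and record counts below are hypothetical:

```python
# Weighted average of per-site percent agreement, weighted by record
# counts. Sites and counts are hypothetical; only the method is shown.
sites = {           # site: (percent agreement, records abstracted)
    "A": (89, 40),
    "B": (93, 60),
    "C": (98, 120),
}
total = sum(n for _, n in sites.values())
weighted = sum(p * n for p, n in sites.values()) / total
print(round(weighted, 1))  # -> 95.0 with these illustrative numbers
```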
Abstract:
Magnetic resonance spectroscopy enables insight into the chemical composition of spinal cord tissue. However, spinal cord magnetic resonance spectroscopy has rarely been applied in clinical work due to technical challenges, including strong susceptibility changes in the region and the small cord diameter, which distort the lineshape and limit the attainable signal-to-noise ratio. Hence, extensive signal averaging is required, which increases the likelihood of static magnetic field changes caused by subject motion (respiration, swallowing), cord motion, and scanner-induced frequency drift. To avoid incoherent signal averaging, it would be ideal to perform frequency alignment of individual free induction decays before averaging. Unfortunately, this is not possible due to the low signal-to-noise ratio of the metabolite peaks. In this article, frequency alignment of individual free induction decays is demonstrated to improve spectral quality by using the water peak, with its high signal-to-noise ratio, from non-water-suppressed proton magnetic resonance spectroscopy via the metabolite cycling technique. Electrocardiography (ECG)-triggered point resolved spectroscopy (PRESS) localization was used for data acquisition with metabolite cycling or water suppression for comparison. A significant improvement in the signal-to-noise ratio and a decrease of the Cramér-Rao lower bounds of all metabolites are attained by using metabolite cycling together with frequency alignment, as compared to water-suppressed spectra, in 13 healthy volunteers.
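A sketch of the water-peak alignment step, assuming non-water-suppressed FIDs in which the water resonance dominates the spectrum: each FID's frequency offset is estimated from the position of its largest spectral peak and removed with a complex phase ramp before coherent averaging (an illustrative simplification, not the authors' exact pipeline).

```python
# Frequency-align complex FIDs on their dominant (water) peak, then average.
import numpy as np

def align_and_average(fids, dwell_s):
    n = fids.shape[1]
    freqs = np.fft.fftfreq(n, d=dwell_s)         # Hz axis of the spectrum
    t = np.arange(n) * dwell_s
    aligned = []
    for fid in fids:
        spec = np.fft.fft(fid)
        f_off = freqs[np.argmax(np.abs(spec))]   # water peak ~ largest peak
        aligned.append(fid * np.exp(-2j * np.pi * f_off * t))  # shift to 0 Hz
    return np.mean(aligned, axis=0)

# Toy check: decaying sinusoids at different offsets now average coherently.
dwell = 1e-3
t = np.arange(1024) * dwell
fids = np.stack([np.exp((2j * np.pi * f - 5) * t) for f in (40.0, 43.0, 37.0)])
avg = align_and_average(fids, dwell)
```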
Abstract:
We report on the wind radiometer WIRA, a new ground-based microwave Doppler-spectro-radiometer specifically designed for the measurement of middle-atmospheric horizontal wind by observing ozone emission spectra at 142.17504 GHz. Currently, wind speeds at five levels between 30 and 79 km can be retrieved, which makes WIRA the first instrument able to continuously measure horizontal wind in this altitude range. For an integration time of one day, the measurement error at each level is around 25 m s−1. With a planned upgrade this value is expected to be reduced by a factor of 2 in the near future. At the altitude levels where our measurement can be compared to wind data from the European Centre for Medium-Range Weather Forecasts (ECMWF), very good agreement has been found in the long-term statistics as well as in short-term structures with a duration of a few days. WIRA uses a passive double-sideband heterodyne receiver together with a digital Fourier transform spectrometer for data acquisition. A major advantage of the radiometric approach is that such instruments can also operate under adverse weather conditions and thus provide a continuous time series for a given location. The optics enables the instrument to scan a wide range of azimuth angles including the directions east, west, north, and south for zonal and meridional wind measurements. The design of the radiometer is fairly compact and its calibration does not rely on liquid nitrogen, which makes it transportable and suitable for campaign use. WIRA is designed to be operated remotely and requires hardly any maintenance. In the present paper, a description of the instrument is given, and the techniques used for the wind retrieval based on the determination of the Doppler shift of the measured atmospheric ozone emission spectra are outlined. Their reliability was tested using Monte Carlo simulations. Finally, a time series of 11 months of zonal wind measurements over Bern (46°57′ N, 7°26′ E) is presented and compared to ECMWF wind data.
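The Doppler relation underlying such a retrieval is df = v * f0 / c, so the quoted 25 m s−1 error corresponds to a line shift of roughly 12 kHz at 142.17504 GHz; a short check of the arithmetic:

```python
# Doppler shift of the 142.17504 GHz ozone line for a given line-of-sight
# wind speed: df = v * f0 / c.
C = 299_792_458.0          # speed of light, m/s
F0 = 142.17504e9           # observed ozone line frequency, Hz

def doppler_shift_hz(v_mps):
    return v_mps * F0 / C

print(doppler_shift_hz(25.0))  # ~1.19e4 Hz, i.e. about 12 kHz
```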
Abstract:
Precise intraoperative assessment of the architecture of the biliary tree could reduce lesions to intra- or extrahepatic bile ducts. The aim of this study was to test the feasibility of intraoperative three-dimensional imaging during liver resections. Isocentric C-arm fluoroscopy acquires three-dimensional images during a 190 degrees orbital rotation. The bile ducts were displayed three-dimensionally by real-time rotational projections or multiplanar reconstructions. The technique was established ex vivo in a preserved cadaveric human liver. Intraoperative three-dimensional cholangiography was performed in five patients with centrally located liver malignancies. Complete data acquisition in three patients depicted precise anatomical details of the architecture of the biliary tree up to third-order divisions. Biliary imaging can be improved by the application of real-time intraoperative three-dimensional cholangiography. For the development of computer-aided navigation in hepatobiliary procedures, this technique could be an important prerequisite for defining landmarks of the liver in three-dimensional space.
Abstract:
This article reports on the internet-based, second multicenter study (MCS II) of the spine study group (AG WS) of the German trauma association (DGU). It represents a continuation of the first study conducted between 1994 and 1996 (MCS I). To provide a common, centralised data capture methodology, a newly developed internet-based data collection system ( http://www.memdoc.org ) of the Institute for Evaluative Research in Orthopaedic Surgery of the University of Bern was used. The aim of this first publication on MCS II is to describe in detail the new method of data collection via the internet and the structure of the developed database system. The goal of the study was to assess the current state of treatment for fresh traumatic injuries of the thoracolumbar spine in the German-speaking part of Europe. For that reason, we intended to collect a large number of cases and representative, valid information about the radiographic, clinical and subjective treatment outcomes. Thanks to the new study design of MCS II, not only the common surgical treatment concepts but also the new and constantly broadening spectrum of spine surgery, i.e. vertebro-/kyphoplasty, computer-assisted surgery and navigation, and minimally invasive and endoscopic techniques, were documented and evaluated. We present a first statistical overview and preliminary analysis of 18 centers from Germany and Austria that participated in MCS II. Real-time data capture at source was made possible by the constant availability of the data collection system via internet access. Following the principle of an application service provider, software, questionnaires and validation routines are located on a central server, which is accessed from the periphery (hospitals) by means of standard internet browsers. Thus, costly and time-consuming software installation and maintenance of local data repositories are avoided and, more importantly, cumbersome migration of data into one integrated database becomes obsolete. Finally, this set-up also replaces traditional systems in which paper questionnaires were mailed to the central study office and entered by hand, where incomplete or incorrect forms always represent a resource-consuming problem and source of error. With the new study concept and the expanded inclusion criteria of MCS II, 1,251 case histories with admission and surgical data were collected. This remarkable number of interventions documented during 24 months represents an increase of 183% compared to the previously conducted MCS I. The concept and technical feasibility of the MEMdoc data collection system were proven, as the participants of MCS II succeeded in collecting the largest series of patients with spinal injuries ever published for a 2-year period.
Abstract:
To evaluate a triphasic injection protocol for whole-body multidetector computed tomography (MDCT) in patients with multiple trauma. Fifty consecutive patients (41 men) were examined. Contrast medium (300 mg/mL iodine) was injected starting with 70 mL at 3 mL/s, followed by 0.1 mL/s for 8 s, and by another bolus of 75 mL at 4 mL/s. CT data acquisition started 50 s after the beginning of the first injection. Two experienced, blinded readers independently measured the density in all major arteries, veins, and parenchymatous organs. Image quality was assessed using a five-point ordinal rating scale and compared to standard injection protocols [n = 25 each for late arterial chest, portovenous abdomen, and MDCT angiography (CTA)]. With the exception of the infrarenal inferior caval vein, all blood vessels were depicted with diagnostic image quality using the multiple-trauma protocol. Arterial luminal density was slightly but significantly smaller compared to CTA (P < 0.01). Veins and parenchymatous organs were opacified significantly better compared to all other protocols (P < 0.01). Arm artifacts reduced the density of spleen and liver parenchyma significantly (P < 0.01). Similarly high image quality is achieved for arteries using the multiple-trauma protocol compared to CTA, and parenchymatous organs are depicted with better image quality compared to specialized protocols. Arm artifacts should be avoided.
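The timing implied by the triphasic protocol follows from simple volume/rate arithmetic; the sketch below (values taken from the abstract) shows that the injection ends at about 50 s, i.e. just as CT data acquisition begins:

```python
# Phase durations of the triphasic injection protocol: duration = volume / rate.
phases = [          # (volume mL, rate mL/s) as stated in the abstract
    (70.0, 3.0),    # first bolus
    (0.8, 0.1),     # slow bridge: 0.1 mL/s for 8 s
    (75.0, 4.0),    # second bolus
]
elapsed = 0.0
for vol, rate in phases:
    elapsed += vol / rate
    print(f"{vol:5.1f} mL at {rate} mL/s -> cumulative {elapsed:5.1f} s")
# First bolus ends at ~23.3 s, bridge at ~31.3 s, second bolus at ~50.1 s,
# so acquisition (50 s after injection start) begins as the injection ends.
```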
Abstract:
Several divergent cortical mechanisms generating multistability in visual perception have been suggested. Here, we investigated the neurophysiologic time pattern of multistable perceptual changes by means of simultaneous recording with electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Volunteers responded to the subjective perception of a sudden change between stable patterns of illusionary motion (multistable transition) during a stroboscopic paradigm. We found a global deceleration of the EEG frequency prior to a transition and an occipitally accentuated acceleration after a transition, as obtained by low-resolution electromagnetic tomography (LORETA) analysis. A decrease in BOLD response was found in the prefrontal cortex before transitions, and an increase after transitions was observed in the right anterior insula, the MT/V5 regions and the SMA. The thalamus and left superior temporal gyrus showed a pattern of decrease before and increase after transitions. No such temporal course was found in the control condition. The multimodal data acquisition approach allows us to argue that top-down control of illusionary visual perception depends on selective attention, and that a diminution of vigilance reduces selective attention. These are necessary conditions for the occurrence of a perceptual discontinuity in the absence of a physical change of the stimulus.