16 results for pacs: data handling techniques
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Index tracking has become one of the most common strategies in asset management. The index-tracking problem consists of constructing a portfolio that replicates the future performance of an index by including only a subset of the index constituents in the portfolio. Finding the most representative subset is challenging when the number of stocks in the index is large. We introduce a new three-stage approach that at first identifies promising subsets by employing data-mining techniques, then determines the stock weights in the subsets using mixed-binary linear programming, and finally evaluates the subsets based on cross validation. The best subset is returned as the tracking portfolio. Our approach outperforms state-of-the-art methods in terms of out-of-sample performance and running times.
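A minimal Python sketch of the three-stage idea, assuming daily return matrices; here correlation ranking, non-negative least squares and a single train/test split stand in for the paper's data-mining, mixed-binary linear programming and cross-validation stages:

```python
# Simplified stand-in for the three-stage index-tracking approach described above.
import numpy as np
from scipy.optimize import nnls

def track_index(stock_returns, index_returns, subset_size=10, n_candidates=5, train_frac=0.7):
    """Return (indices, weights) of the best tracking portfolio found."""
    T, N = stock_returns.shape
    split = int(T * train_frac)
    train_R, test_R = stock_returns[:split], stock_returns[split:]
    train_y, test_y = index_returns[:split], index_returns[split:]

    # Stage 1: propose candidate subsets by ranking stocks on correlation with the index.
    corr = np.array([np.corrcoef(train_R[:, j], train_y)[0, 1] for j in range(N)])
    order = np.argsort(-corr)
    candidates = [order[k:k + subset_size] for k in range(n_candidates)]

    best = None
    for subset in candidates:
        # Stage 2: fit non-negative weights that reproduce the index on the training window.
        w, _ = nnls(train_R[:, subset], train_y)
        if w.sum() > 0:
            w = w / w.sum()  # normalise to a fully invested portfolio
        # Stage 3: evaluate out-of-sample tracking error on the held-out window.
        te = np.sqrt(np.mean((test_R[:, subset] @ w - test_y) ** 2))
        if best is None or te < best[0]:
            best = (te, subset, w)
    return best[1], best[2]

# Toy usage with synthetic returns: 250 days, 50 stocks, equal-weighted "index".
rng = np.random.default_rng(0)
R = rng.normal(0, 0.01, size=(250, 50))
y = R @ (np.ones(50) / 50)
subset, weights = track_index(R, y)
print(subset, np.round(weights, 3))
```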
Abstract:
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed which combine multiple predictors to produce superior results compared to single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction - cARX, and a recurrent neural network - RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, a median daily false alarm (DFA) rate of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
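An illustrative Python sketch of the fusion-plus-warning idea: a grid search over convex combinations of two predictor outputs stands in for the DST/GA/GP fusion schemes, and the hypoglycemia threshold and synthetic traces are assumptions, not the study's data or parameters:

```python
# Simplified stand-in for fusing two glucose predictors and raising hypoglycemia warnings.
import numpy as np

def fuse_predictions(pred_a, pred_b, reference):
    """Pick the convex combination of two predictors that minimises RMSE on reference data."""
    best_w, best_rmse = 0.0, np.inf
    for w in np.linspace(0, 1, 101):
        fused = w * pred_a + (1 - w) * pred_b
        rmse = np.sqrt(np.mean((fused - reference) ** 2))
        if rmse < best_rmse:
            best_w, best_rmse = w, rmse
    return best_w, best_rmse

def hypo_warnings(fused_forecast, threshold=70.0):
    """Flag time steps whose predicted glucose (mg/dL) falls below a hypoglycemia threshold."""
    return np.where(fused_forecast < threshold)[0]

# Toy usage: two imperfect predictors of the same synthetic glucose trace.
rng = np.random.default_rng(1)
true_glucose = 120 + 40 * np.sin(np.linspace(0, 6, 300))
pred_carx = true_glucose + rng.normal(0, 8, 300)   # stand-in for a cARX-style output
pred_rnn = true_glucose + rng.normal(0, 12, 300)   # stand-in for an RNN-style output
w, rmse = fuse_predictions(pred_carx, pred_rnn, true_glucose)
alarms = hypo_warnings(w * pred_carx + (1 - w) * pred_rnn)
print(f"weight on cARX: {w:.2f}, RMSE: {rmse:.1f}, alarm steps: {len(alarms)}")
```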
Abstract:
Smart homes for the aging population have recently started attracting the attention of the research community. The "health state" of smart homes comprises many different levels; starting with the physical health of citizens, it also includes longer-term health norms and outcomes, as well as the arena of positive behavior changes. One of the problems of interest is to monitor the activities of daily living (ADL) of the elderly, aiming at their protection and well-being. For this purpose, we installed passive infrared (PIR) sensors to detect motion in a specific area inside a smart apartment and used them to collect a set of ADL. In a novel approach, we describe a technology that allows the ground truth collected in one smart home to train activity recognition systems for other smart homes. We asked the users to label all instances of all ADL only once and subsequently applied data mining techniques to cluster in-home sensor firings. Each cluster would therefore represent the instances of the same activity. Once the clusters were associated to their corresponding activities, our system was able to recognize future activities. To improve the activity recognition accuracy, our system preprocessed raw sensor data by identifying overlapping activities. To evaluate the recognition performance on a 200-day dataset, we implemented three different active learning classification algorithms and compared their performance: naive Bayesian (NB), support vector machine (SVM) and random forest (RF). Based on our results, the RF classifier recognized activities with an average specificity of 96.53%, a sensitivity of 68.49%, a precision of 74.41% and an F-measure of 71.33%, outperforming both the NB and SVM classifiers. Further clustering markedly improved the results of the RF classifier. An activity recognition system based on PIR sensors in conjunction with a clustering classification approach was able to detect ADL from datasets collected from different homes. Thus, our PIR-based smart home technology could improve care and provide valuable information to better understand the functioning of our societies, as well as to inform both individual and collective action in a smart city scenario.
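A minimal Python sketch of the clustering-then-classification pipeline, with an assumed sensor count, window length and hypothetical activity labels; it is not the paper's exact preprocessing:

```python
# Simplified stand-in: turn raw PIR firings into fixed-length feature vectors, cluster
# them so each cluster can be labelled once, then train a random forest on the labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

N_SENSORS, WINDOW = 8, 60  # 8 PIR sensors, 60-second windows (assumed)

def window_features(firings):
    """firings: list of (timestamp_s, sensor_id). Returns per-window firing counts."""
    if not firings:
        return np.zeros((0, N_SENSORS))
    horizon = int(max(t for t, _ in firings) // WINDOW) + 1
    feats = np.zeros((horizon, N_SENSORS))
    for t, s in firings:
        feats[int(t // WINDOW), s] += 1
    return feats

# Toy data: sensor firings drawn from two different "activities".
rng = np.random.default_rng(2)
firings = [(t, rng.integers(0, 4)) for t in range(0, 600)] + \
          [(t, rng.integers(4, 8)) for t in range(600, 1200)]
X = window_features(firings)

# Cluster windows; in the described approach each cluster is labelled once by the user.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
labels = np.where(clusters == clusters[0], "cooking", "sleeping")  # hypothetical labels

# Train the classifier used for recognising future activities.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```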
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in clinical routine MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol based on standard available MR pulse sequences for qMRS has been implemented along with newly added functionalities to the free software package jMRUI-v5.0 to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the workflow of qMRS using these implementations, databases for normal metabolite concentration in different regions of brain tissue were created using spectroscopic data acquired in 55 normal subjects (age range 6-61 years) using 1.5T and 3T MR systems, and illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and newly implemented software functionalities facilitate the incorporation of qMRS and reference to normal value metabolite concentration data in daily clinical routine.
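For orientation, a hedged Python sketch of the basic water-reference scaling that underlies absolute quantification; the constants and correction factors below are illustrative placeholders, not the specific water reference models added to jMRUI-v5.0:

```python
# Simplified water-referenced quantification: scale the fitted metabolite amplitude
# to the unsuppressed water amplitude, correcting for proton counts.
WATER_CONC_MM = 55510.0   # molar concentration of pure water in mM (approx. 55.5 M)

def absolute_concentration(s_met, s_water, n_protons_met, water_content=0.7,
                           relax_corr_met=1.0, relax_corr_water=1.0):
    """Estimate a metabolite concentration (mM) from fitted signal amplitudes."""
    ratio = (s_met / n_protons_met) / (s_water / 2.0)  # water has 2 protons per molecule
    return ratio * WATER_CONC_MM * water_content * (relax_corr_water / relax_corr_met)

# Example: an NAA-like signal (three CH3 protons) at an amplitude ratio of 5e-4
# relative to water, yielding roughly 13 mM.
print(round(absolute_concentration(s_met=5e-4, s_water=1.0, n_protons_met=3.0), 1))
```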
Abstract:
This paper proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA) to improve end-user device energy efficiency. OPAMA enhances the standard legacy Power Save Mode (PSM) of IEEE 802.11 by taking into consideration application specific requirements combined with data aggregation techniques. By establishing a balanced cost/benefit tradeoff between performance and energy consumption, OPAMA is able to improve energy efficiency, while keeping the end-user experience at a desired level. OPAMA was assessed in the OMNeT++ simulator using real traces of variable bitrate video streaming applications. The results showed the capability to enhance energy efficiency, achieving savings up to 44% when compared with the IEEE 802.11 legacy PSM.
Abstract:
The global ocean is a significant sink for anthropogenic carbon (Cant), absorbing roughly a third of human CO2 emitted over the industrial period. Robust estimates of the magnitude and variability of the storage and distribution of Cant in the ocean are therefore important for understanding the human impact on climate. In this synthesis we review observational and model-based estimates of the storage and transport of Cant in the ocean. We pay particular attention to the uncertainties and potential biases inherent in different inference schemes. On a global scale, three data-based estimates of the distribution and inventory of Cant are now available. While the inventories are found to agree within their uncertainty, there are considerable differences in the spatial distribution. We also present a review of the progress made in the application of inverse and data assimilation techniques which combine ocean interior estimates of Cant with numerical ocean circulation models. Such methods are especially useful for estimating the air–sea flux and interior transport of Cant, quantities that are otherwise difficult to observe directly. However, the results are found to be highly dependent on modeled circulation, with the spread due to different ocean models at least as large as that from the different observational methods used to estimate Cant. Our review also highlights the importance of repeat measurements of hydrographic and biogeochemical parameters to estimate the storage of Cant on decadal timescales in the presence of the variability in circulation that is neglected by other approaches. Data-based Cant estimates provide important constraints on forward ocean models, which exhibit both broad similarities and regional errors relative to the observational fields. A compilation of inventories of Cant gives us a "best" estimate of the global ocean inventory of anthropogenic carbon in 2010 of 155 ± 31 PgC (±20% uncertainty). This estimate includes a broad range of values, suggesting that a combination of approaches is necessary in order to achieve a robust quantification of the ocean sink of anthropogenic CO2.
Abstract:
The widespread deployment of wireless mobile communications enables an almost permanent usage of portable devices, which imposes high demands on the batteries of these devices. Indeed, battery lifetime is becoming one of the most critical factors in end-user satisfaction when using wireless communications. In this work, the optimized power save algorithm for continuous media applications (OPAMA) is proposed, aiming at enhancing energy efficiency on end-user devices. By combining application-specific requirements with data aggregation techniques, OPAMA improves the performance of the standard IEEE 802.11 legacy Power Save Mode (PSM). The algorithm uses feedback on the end-user expected quality to establish a proper tradeoff between energy consumption and application performance. OPAMA was assessed in the OMNeT++ simulator, using real traces of variable bitrate video streaming applications, and in a real testbed employing a novel methodology intended to perform an accurate evaluation of the video Quality of Experience (QoE) perceived by end-users. The results revealed the OPAMA capability to enhance energy efficiency without degrading the end-user observed QoE, achieving savings up to 44% when compared with the IEEE 802.11 legacy PSM.
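A minimal Python sketch of the aggregation idea behind OPAMA-style power saving: instead of waking the client at every beacon as in legacy PSM, frames are buffered until either a data threshold or the application's delay budget is reached. The thresholds and traffic pattern are illustrative assumptions, not parameters of the actual algorithm:

```python
# Simplified aggregation-based wake-up scheduling for a power-saving Wi-Fi client.
def schedule_wakeups(frame_arrivals_ms, frame_size_bytes,
                     max_delay_ms=100, max_burst_bytes=60_000):
    """Return the wake-up instants at which buffered frames are delivered to the client."""
    wakeups, buffered, oldest = [], 0, None
    for t in frame_arrivals_ms:
        if oldest is None:
            oldest = t
        buffered += frame_size_bytes
        # Wake the radio when the delay budget of the oldest frame or the burst limit is hit.
        if t - oldest >= max_delay_ms or buffered >= max_burst_bytes:
            wakeups.append(t)
            buffered, oldest = 0, None
    if buffered:
        wakeups.append(frame_arrivals_ms[-1])
    return wakeups

# Toy usage: a 1500-byte video frame every 10 ms for two seconds.
arrivals = list(range(0, 2000, 10))
wakeups = schedule_wakeups(arrivals, 1500)
print(f"{len(wakeups)} wake-ups instead of {len(arrivals)} in an always-awake schedule")
```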
Abstract:
The ATLAS experiment at the LHC has measured the production cross section of events with two isolated photons in the final state, in proton-proton collisions at √s = 7 TeV. The full data set collected in 2011, corresponding to an integrated luminosity of 4.9 fb⁻¹, is used. The amount of background, from hadronic jets and isolated electrons, is estimated with data-driven techniques and subtracted. The total cross section, for two isolated photons with transverse energies above 25 GeV and 22 GeV respectively, in the acceptance of the electromagnetic calorimeter (|η| < 1.37 and 1.52 < |η| < 2.37) and with an angular separation ΔR > 0.4, is 44.0 +3.2 −4.2 pb. The differential cross sections as a function of the di-photon invariant mass, transverse momentum, azimuthal separation, and cosine of the polar angle of the largest transverse energy photon in the Collins-Soper di-photon rest frame are also measured. The results are compared to the prediction of leading-order parton-shower and next-to-leading-order and next-to-next-to-leading-order parton-level generators.
Abstract:
Stray light contamination considerably reduces the precision of photometry of faint stars for low-altitude spaceborne observatories. When measuring faint objects, stray light contamination must be dealt with in order to avoid systematic impacts on low signal-to-noise images. Stray light contamination can be represented by a flat offset in CCD data. Mitigation techniques begin with a comprehensive study during the design phase, followed by the use of target pointing optimisation and post-processing methods. We present a code that aims at simulating the stray-light contamination in low-Earth orbit coming from reflection of solar light by the Earth. StrAy Light SimulAtor (SALSA) is intended to be used at an early stage to evaluate the effective visible region of the sky and, therefore, to optimise the observation sequence. SALSA can compute Earth stray light contamination for significant periods of time, allowing mission-wide parameters to be optimised (e.g. imposing constraints on the point source transmission function (PST) and/or on the altitude of the satellite). It can also be used to study the behaviour of the stray light at different seasons or latitudes. Given the position of the satellite with respect to the Earth and the Sun, SALSA computes the stray light at the entrance of the telescope following a geometrical technique. After characterising the illuminated region of the Earth, the portion of illuminated Earth that affects the satellite is calculated. Then, the flux of reflected solar photons is evaluated at the entrance of the telescope. Using the PST of the instrument, the final stray light contamination at the detector is calculated. The analysis tools include time series analysis of the contamination, evaluation of the sky coverage and an object visibility predictor. Effects of the South Atlantic Anomaly and of any shutdown periods of the instrument can be added. Several designs or mission concepts can be easily tested and compared. The code is not intended as a stand-alone mission designer. Its mandatory inputs are a time series describing the trajectory of the satellite and the characteristics of the instrument. This software suite has been applied to the design and analysis of CHEOPS (CHaracterizing ExOPlanet Satellite). This mission requires very high precision photometry to detect very shallow transits of exoplanets. Different altitudes and characteristics of the detector have been studied in order to find the best parameters that reduce the effect of contamination.
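A deliberately crude Python sketch of the geometric chain described above (sunlit fraction of the visible Earth, reflected flux at the aperture, attenuation by the PST); all constants and the overlap model are assumptions for illustration, and SALSA's actual computation is considerably more detailed:

```python
# Very rough single-epoch estimate of Earth-reflected stray light at a detector.
import numpy as np

SOLAR_CONSTANT = 1361.0   # W / m^2 at 1 AU
EARTH_RADIUS = 6371e3     # m
ALBEDO = 0.3              # mean Earth albedo (assumed)

def stray_light_estimate(sat_pos_m, sun_dir, pst=1e-12, aperture_m2=0.06):
    """Rough reflected-light power (W) reaching the detector for one epoch."""
    r = np.linalg.norm(sat_pos_m)
    nadir = -np.asarray(sat_pos_m) / r
    sun_dir = np.asarray(sun_dir) / np.linalg.norm(sun_dir)
    # Fraction of the visible Earth disc that is sunlit, from the nadir/Sun angle
    # (1 over the subsolar point, 0 over the night side).
    sunlit_fraction = max(0.0, 0.5 * (1.0 + np.dot(-nadir, sun_dir)))
    # Solid angle subtended by the Earth as seen from the satellite.
    solid_angle = 2 * np.pi * (1 - np.sqrt(1 - (EARTH_RADIUS / r) ** 2))
    # Reflected radiance reaching the aperture, then attenuated by the PST.
    radiance = ALBEDO * SOLAR_CONSTANT / np.pi  # Lambertian approximation
    return radiance * solid_angle * sunlit_fraction * aperture_m2 * pst

# Toy usage: satellite at 700 km altitude directly over the day side.
sat = np.array([EARTH_RADIUS + 700e3, 0.0, 0.0])
print(f"{stray_light_estimate(sat, sun_dir=[1.0, 0.0, 0.0]):.3e} W")
```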
Abstract:
Crowdsourcing linguistic phenomena with smartphone applications is relatively new. In linguistics, apps have predominantly been developed to create pronunciation dictionaries, to train acoustic models, and to archive endangered languages. This paper presents the first account of how apps can be used to collect data suitable for documenting language change: we created an app, Dialäkt Äpp (DÄ), which predicts users’ dialects. For 16 linguistic variables, users select a dialectal variant from a drop-down menu. DÄ then geographically locates the user’s dialect by suggesting a list of communes where dialect variants most similar to their choices are used. Underlying this prediction are 16 maps from the historical Linguistic Atlas of German-speaking Switzerland, which documents the linguistic situation around 1950. Where users disagree with the prediction, they can indicate what they consider to be their dialect’s location. With this information, the 16 variables can be assessed for language change. Thanks to its playful functionality, DÄ has reached many users; our linguistic analyses are based on data from nearly 60,000 speakers. Results reveal a relative stability for phonetic variables, while lexical and morphological variables seem more prone to change. Crowdsourcing large amounts of dialect data with smartphone apps has the potential to complement existing data collection techniques and to provide evidence that traditional methods cannot, with normal resources, hope to gather. Nonetheless, it is important to emphasize a range of methodological caveats, including sparse knowledge of users’ linguistic backgrounds (users indicate only age and sex) and users’ self-declaration of their dialect. These are discussed and evaluated in detail here. Findings remain intriguing nevertheless: as a means of quality control, we report that traditional dialectological methods have revealed trends similar to those found by the app. This underlines the validity of the crowdsourcing method. We are presently extending the DÄ architecture to other languages.
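A minimal Python sketch of the matching idea behind the prediction: score each commune by how many of the user's variant choices agree with the variants recorded for that commune in the atlas maps, and return the best matches. The commune entries below are invented placeholders, not data from the Linguistic Atlas of German-speaking Switzerland:

```python
# Simplified dialect localisation by counting matching variants per commune.
from collections import Counter

# Hypothetical atlas excerpt: commune -> {variable: recorded variant}.
ATLAS = {
    "Bern":   {"abend": "Aabe",  "schnee": "Schnee", "milch": "Miuch"},
    "Zürich": {"abend": "Aabig", "schnee": "Schnee", "milch": "Milch"},
    "Chur":   {"abend": "Obed",  "schnee": "Schnea", "milch": "Melch"},
}

def predict_dialect(user_choices, atlas=ATLAS, top_n=2):
    """Rank communes by the number of variables whose variant matches the user's choice."""
    scores = Counter()
    for commune, variants in atlas.items():
        scores[commune] = sum(variants.get(var) == choice
                              for var, choice in user_choices.items())
    return scores.most_common(top_n)

# Toy usage: a user picks three variants; the best-matching communes are returned.
print(predict_dialect({"abend": "Aabe", "schnee": "Schnee", "milch": "Milch"}))
```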
Abstract:
OBJECTIVES The purpose of the study was to provide empirical evidence about the reporting of methodology to address missing outcome data and the acknowledgement of their impact in Cochrane systematic reviews in the mental health field. METHODS Systematic reviews published in the Cochrane Database of Systematic Reviews after January 1, 2009 by three Cochrane Review Groups relating to mental health were included. RESULTS One hundred ninety systematic reviews were considered. Missing outcome data were present in at least one included study in 175 systematic reviews. Of these 175 systematic reviews, 147 (84%) accounted for missing outcome data by considering a relevant primary or secondary outcome (e.g., dropout). Missing outcome data implications were reported only in 61 (35%) systematic reviews and primarily in the discussion section by commenting on the amount of the missing outcome data. One hundred forty eligible meta-analyses with missing data were scrutinized. Seventy-nine (56%) of them had studies with total dropout rate between 10 and 30%. One hundred nine (78%) meta-analyses reported having performed intention-to-treat analysis by including trials with imputed outcome data. Sensitivity analysis for incomplete outcome data was implemented in less than 20% of the meta-analyses. CONCLUSIONS Reporting of the techniques for handling missing outcome data and their implications for the findings of the systematic reviews is suboptimal.
Abstract:
User interfaces are key properties of Business-to-Consumer (B2C) systems, and Web-based reservation systems are an important class of B2C systems. In this paper we show that these systems use a surprisingly broad spectrum of different approaches to handling temporal data in their Web interfaces. Based on these observations and on a literature analysis we develop a Morphological Box to present the main options for handling temporal data and give examples. The results indicate that the present state of developing and maintaining B2C systems has not been much influenced by modern Web Engineering concepts and that there is considerable potential for improvement.