78 results for Elements, High Throughput Data, electrophysiology, data processing, real-time analysis


Relevance: 100.00%

Abstract:

The speeds of sound in dibromomethane, bromochloromethane, and dichloromethane have been measured in the temperature range from 293.15 to 313.15 K and at pressures up to 100 MPa. Densities and isobaric heat capacities at atmospheric pressure have also been determined. The experimental results were used to calculate densities and isobaric heat capacities as functions of temperature and pressure by means of a numerical integration technique. Moreover, the experimental data at atmospheric pressure were then used to determine the SAFT-VR Mie molecular parameters for these liquids. The accuracy of the model was then evaluated by comparing the derived experimental high-pressure data with the values predicted using SAFT. It was found that the model can also predict the isobaric heat capacity of all the selected haloalkanes, with an error of up to 6%.
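
The numerical integration step typically relies on the standard acoustic relations linking the speed of sound to the pressure derivatives of density and heat capacity. As a sketch (textbook thermodynamics, not quoted from the paper itself):

```latex
\left(\frac{\partial \rho}{\partial p}\right)_T = \frac{1}{u^{2}} + \frac{T\,\alpha_p^{2}}{c_p},
\qquad
\left(\frac{\partial c_p}{\partial p}\right)_T = -\frac{T}{\rho}\left[\alpha_p^{2} + \left(\frac{\partial \alpha_p}{\partial T}\right)_p\right],
```

where u is the speed of sound, α_p the isobaric thermal expansivity and c_p the specific isobaric heat capacity; the integration starts from the atmospheric-pressure density and heat-capacity data and marches upward in pressure.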

Relevance: 100.00%

Abstract:

Inter-dealer trading in US Treasury securities is almost equally divided between two electronic trading platforms that have only slight differences in relative liquidity and transparency. BrokerTec is more active in the trading of 2-, 5-, and 10-year T-notes, while eSpeed has more active trading in the 30-year bond. Over the period studied, eSpeed provided a more pre-trade transparent platform than BrokerTec. We examine the contribution of activity on the two platforms to price discovery using high-frequency data. We find that price discovery does not derive equally from the two platforms and that the shares vary with term to maturity. This can be traced to the differing trading activity and transparency of the two platforms.

Relevance: 100.00%

Abstract:

Due to the variability of wind power, it is imperative to forecast wind generation accurately and in a timely manner to enhance the flexibility and reliability of real-time power system operation and control. Special events such as ramps and spikes are hard to predict with traditional methods that use only recently measured data. In this paper, a new Gaussian Process model with hybrid training data, drawn from both recent local measurements and the historical dataset, is proposed and applied to make short-term predictions from 10 minutes to one hour ahead. A key idea is that historical data with similar patterns are properly selected and embedded in the Gaussian Process model to make predictions. The results of the proposed algorithm are compared with those of the standard Gaussian Process model and the persistence model. It is shown that the proposed method reduces not only the magnitude error but also the phase error.
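
As a rough illustration of the hybrid-training idea, the sketch below augments the local training windows with the most similar historical windows before fitting a Gaussian Process. The scikit-learn regressor, RBF kernel, Euclidean similarity selection and all parameter values are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of GP short-term wind forecasting with hybrid training data.
# Assumptions (not from the paper): scikit-learn's GaussianProcessRegressor,
# an RBF kernel, and Euclidean distance for selecting similar historical windows.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def make_windows(series, width, horizon):
    """Turn a 1-D power series into (lagged window -> future value) pairs."""
    X, y = [], []
    for i in range(len(series) - width - horizon + 1):
        X.append(series[i:i + width])
        y.append(series[i + width + horizon - 1])
    return np.array(X), np.array(y)

def hybrid_gp_forecast(recent, historic, width=6, horizon=1, n_similar=50):
    """Forecast `horizon` steps ahead, augmenting the local training set
    with the historical windows most similar to the latest observations."""
    X_loc, y_loc = make_windows(recent, width, horizon)
    X_hist, y_hist = make_windows(historic, width, horizon)

    # Select historical windows most similar to the latest observed window.
    query = recent[-width:]
    dist = np.linalg.norm(X_hist - query, axis=1)
    idx = np.argsort(dist)[:n_similar]

    X_train = np.vstack([X_loc, X_hist[idx]])
    y_train = np.concatenate([y_loc, y_hist[idx]])

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_train, y_train)
    mean, std = gp.predict(query.reshape(1, -1), return_std=True)
    return mean[0], std[0]
```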

Relevance: 100.00%

Abstract:

The properties of Ellerman bombs (EBs), small-scale brightenings in the Hα line wings, have proved difficult to establish because their size is close to the spatial resolution of even the most advanced telescopes. Here, we aim to infer the size and lifetime of EBs using high-resolution data of an emerging active region collected using the Interferometric BIdimensional Spectrometer (IBIS) and Rapid Oscillations of the Solar Atmosphere (ROSA) instruments as well as the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). We develop an algorithm to track EBs through their evolution, finding that EBs can often be much smaller (around 0.3″) and shorter-lived (less than one minute) than previous estimates. A correlation between G-band magnetic bright points and EBs is also found. Combining SDO/HMI and G-band data gives a good proxy of the polarity of the vertical magnetic field. It is found that EBs often occur both over regions of opposite-polarity flux and over strong unipolar fields, possibly hinting at magnetic reconnection as a driver of these events. The energetics of EB events is found to follow a power-law distribution in the nanoflare range (10^22 to 10^25 erg).
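
A minimal sketch of this kind of detection and tracking, assuming a simple intensity threshold relative to the frame mean and frame-to-frame linking by pixel overlap (the threshold value and linking rule are illustrative, not the authors' exact algorithm):

```python
# Detect small Halpha wing brightenings and link them across frames by overlap.
# The 145%-of-frame-mean threshold is an illustrative assumption.
import numpy as np
from scipy import ndimage

def detect_brightenings(frame, factor=1.45):
    """Label connected regions brighter than `factor` times the frame mean."""
    mask = frame > factor * frame.mean()
    labels, n = ndimage.label(mask)
    return labels, n

def track(frames, factor=1.45):
    """Link detections across frames by pixel overlap; return event lifetimes."""
    events = {}          # event id -> list of frame indices where it is seen
    prev_labels = None
    id_map = {}          # previous-frame label -> event id
    next_id = 0
    for t, frame in enumerate(frames):
        labels, n = detect_brightenings(frame, factor)
        new_map = {}
        for lab in range(1, n + 1):
            region = labels == lab
            overlap = set(np.unique(prev_labels[region])) - {0} if prev_labels is not None else set()
            if overlap:
                eid = id_map[min(overlap)]        # continue an existing event
            else:
                eid, next_id = next_id, next_id + 1
            new_map[lab] = eid
            events.setdefault(eid, []).append(t)
        prev_labels, id_map = labels, new_map
    return {eid: (min(ts), max(ts)) for eid, ts in events.items()}
```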

Relevance: 100.00%

Abstract:

In this paper, our previous work on Principal Component Analysis (PCA) based fault detection is extended to the dynamic monitoring and detection of loss-of-main in power systems using wide-area synchrophasor measurements. In the previous work, a static PCA model was built and verified to be capable of detecting and extracting system fault events; however, the false alarm rate was high. To address this problem, this paper uses the well-known ‘time lag shift’ method to include the dynamic behaviour of the PCA model based on synchronized measurements from Phasor Measurement Units (PMUs), which is referred to as Dynamic Principal Component Analysis (DPCA). Compared with the static PCA approach as well as traditional passive mechanisms of loss-of-main detection, the proposed DPCA procedure describes how the synchrophasors are linearly auto- and cross-correlated, based on a singular value decomposition of the augmented, time-lagged synchrophasor matrix. As in the static PCA method, two statistics, namely T² and Q, are calculated together with their confidence limits to form intuitive charts with which engineers or operators can monitor the loss-of-main situation in real time. The effectiveness of the proposed methodology is evaluated on loss-of-main monitoring of a real system, where the historical data were recorded by PMUs installed at several locations in the UK/Ireland power system.
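
A minimal sketch of the time-lag-shift augmentation and the two monitoring statistics, with the lag count and the number of retained components chosen purely for illustration:

```python
# Dynamic PCA monitoring sketch: build a time-lagged (augmented) matrix,
# take its SVD, and compute T^2 and Q (SPE) statistics for each sample.
import numpy as np

def lagged_matrix(X, lags):
    """Stack X(t), X(t-1), ..., X(t-lags) column-wise."""
    n = X.shape[0] - lags
    return np.hstack([X[lags - k : lags - k + n] for k in range(lags + 1)])

def dpca_train(X, lags=2, n_comp=5):
    Xa = lagged_matrix(X, lags)
    mu, sd = Xa.mean(axis=0), Xa.std(axis=0) + 1e-12
    Z = (Xa - mu) / sd
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                           # retained loadings
    lam = (s[:n_comp] ** 2) / (Z.shape[0] - 1)  # component variances
    return dict(mu=mu, sd=sd, P=P, lam=lam, lags=lags)

def dpca_monitor(model, X):
    Xa = lagged_matrix(X, model["lags"])
    Z = (Xa - model["mu"]) / model["sd"]
    T = Z @ model["P"]                          # scores in the retained subspace
    t2 = np.sum(T ** 2 / model["lam"], axis=1)  # T^2 statistic
    resid = Z - T @ model["P"].T
    q = np.sum(resid ** 2, axis=1)              # Q (squared prediction error)
    return t2, q
```

In practice the T² and Q values would be plotted against their confidence limits, with excursions above the limits flagging a possible loss-of-main event.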

Relevance: 100.00%

Abstract:

Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern for the manufacture of high quality extruded products. Therefore, accurate thermal monitoring and control of the process are important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although these measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with radial position in the die, and hence point/bulk measurements are not sufficient for monitoring and controlling the temperature across the melt flow. The majority of process thermal control methods are based on linear models, which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single screw extruder was monitored with a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. for predicting the die melt temperature profile in real time). The newly proposed models were in good agreement with unseen measured data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that the process thermal homogeneity was affected in a complex manner by changes to the process settings, screw geometry and material.
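
Purely as an illustration of a dynamic, data-driven profile model (the abstract does not specify the model structure, so the lagged-input neural network below is an assumption), the sketch maps lagged process settings to the temperature at several radial die positions:

```python
# Illustrative sketch (not the authors' model) of a dynamic, nonlinear mapping
# from lagged process inputs (e.g. screw speed, set temperatures) to the melt
# temperature measured by a thermocouple mesh at several radial die positions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_regressors(inputs, lags=3):
    """Stack current and lagged input vectors into one regressor per time step."""
    n = inputs.shape[0] - lags
    return np.hstack([inputs[lags - k : lags - k + n] for k in range(lags + 1)])

# inputs:  (N, n_inputs) array of process settings over time
# profile: (N, n_radial) array of measured die melt temperatures
def fit_profile_model(inputs, profile, lags=3):
    X = build_regressors(inputs, lags)
    y = profile[lags:]                       # align targets with lagged inputs
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model, lags

def predict_profile(model, lags, recent_inputs):
    """Predict the current radial temperature profile from the latest inputs."""
    x = recent_inputs[-(lags + 1):][::-1].reshape(1, -1)   # current first, then lags
    return model.predict(x)[0]
```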

Relevance: 100.00%

Abstract:

This paper presents a framework for a telecommunications interface which allows data from sensors embedded in Smart Grid applications to be reliably archived in an appropriate time-series database. The challenge in doing so is two-fold: firstly, the various formats in which sensor data are represented; secondly, the problem of telecoms reliability. A prototype of the authors' framework is detailed which showcases its main features in a case study featuring Phasor Measurement Units (PMUs) as the application. Useful analysis of PMU data is achieved whenever data from multiple locations can be compared on a common time axis. The prototype developed highlights the framework's reliability, extensibility and adoptability, features which are largely deferred by industry standards for data representation to proprietary database solutions. The open source framework presented provides link reliability for any type of Smart Grid sensor and is interoperable with both existing proprietary and open database systems. The features of the authors' framework allow researchers and developers to focus on the core of their real-time or historical analysis applications, rather than having to spend time interfacing with complex protocols.
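
A minimal sketch of the ingest-normalise-archive idea, with SQLite standing in for the time-series database and a hypothetical PMU payload format; the local buffer is what provides link reliability in this toy version:

```python
# Heterogeneous sensor payloads are mapped to a common
# (timestamp, sensor_id, quantity, value) row and buffered locally so that
# link failures do not lose data. SQLite is a stand-in for the time-series
# database; the payload layout below is hypothetical.
import sqlite3, collections

Sample = collections.namedtuple("Sample", "timestamp sensor_id quantity value")

def normalise_pmu(payload):
    """Map one (hypothetical) PMU payload to common Sample rows."""
    ts = payload["soc"] + payload["fracsec"]       # seconds-of-century style timestamp
    for name, value in payload["phasors"].items():
        yield Sample(ts, payload["pmu_id"], name, value)

class Archiver:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS samples "
                        "(timestamp REAL, sensor_id TEXT, quantity TEXT, value REAL)")
        self.buffer = collections.deque()          # holds rows while the link is down

    def archive(self, samples):
        self.buffer.extend(samples)
        try:
            self.db.executemany("INSERT INTO samples VALUES (?,?,?,?)", list(self.buffer))
            self.db.commit()
            self.buffer.clear()                    # flushed successfully
        except sqlite3.Error:
            pass                                   # keep buffered rows, retry on next call
```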

Relevance: 100.00%

Abstract:

In this single-centre study of childhood acute lymphoblastic leukaemia (ALL) patients treated on the Medical Research Council UKALL 97/99 protocols, it was determined that minimal residual disease (MRD) detected by real-time quantitative polymerase chain reaction (RQ-PCR) and 3-colour flow cytometry (FC) displayed a high level of qualitative concordance (93.38%) when evaluated at multiple time-points during treatment, and the combined use of both approaches allowed a multi time-point evaluation of MRD kinetics for 90% (53/59) of the initial cohort. At diagnosis, MRD markers with a sensitivity of at least 0.01% were identified by RQ-PCR detection of fusion gene transcripts, IGH/TRG rearrangements, and FC. Using the combined RQ-PCR and FC approach, the evaluation of 367 follow-up BM samples revealed that detection of MRD >1% at Day 15 (P = 0.04), >0.01% at the end of induction (P = 0.02), >0.01% at the end of consolidation (P = 0.01), >0.01% prior to the first delayed intensification (P = 0.01), and >0.1% prior to the second delayed intensification and continued maintenance (P = 0.001) were all associated with relapse. Based on the early time-points (end of induction and consolidation), a significant log-rank trend (P = 0.0091) was noted between the survival curves of patients stratified into high-, intermediate- and low-risk MRD groups.

Relevance: 100.00%

Abstract:

Background: High risk medications are commonly prescribed to older US patients. Currently, less is known about high risk medication prescribing in other Western countries, including the UK. We measured trends and correlates of high risk medication prescribing in a subset of the older UK population (community/institutionalized) to inform harm minimization efforts. Methods: Three cross-sectional samples were taken from primary care electronic clinical records (UK Clinical Practice Research Datalink, CPRD) in fiscal years 2003/04, 2007/08 and 2011/12. This yielded a sample of 13,900 people aged 65 years or over from 504 UK general practices. High risk medications were defined by the 2012 Beers Criteria adapted for the UK. Using descriptive statistical methods and regression modelling, the prevalence and correlates of ‘any’ (drugs prescribed at least once per year) and ‘long-term’ (drugs prescribed in all quarters of the year) high risk medication prescribing were determined. Results: While polypharmacy rates have risen sharply, high risk medication prevalence has remained stable across the decade. A third of older (65+) people are exposed to high risk medications, but only half of the total prevalence was long-term (any = 38.4 % [95 % CI: 36.3, 40.5]; long-term = 17.4 % [15.9, 19.9] in 2011/12). Long-term, but not any, high risk medication exposure was associated with older age (85 years or over). Women and people with a higher polypharmacy burden were at greater risk of exposure; lower socio-economic status was not associated. Ten drugs/drug classes accounted for most high risk medication prescribing in 2011/12. Conclusions: High risk medication prescribing has not increased over time against a background of increasing polypharmacy in the UK. Half of patients receiving high risk medications do so for less than a year. Reducing or optimising the use of a limited number of drugs could dramatically reduce high risk medication use in older people. Further research is needed to investigate why the oldest old and women are at greater risk. Interventions to reduce high risk medications may need to target short-term and long-term use separately.

Relevance: 100.00%

Abstract:

In many applications, and especially those involving batch processes, a target scalar output of interest is often dependent on one or more time series of data. With the exponential growth in data logging in modern industries, such time series are increasingly available for statistical modelling in soft sensing applications. In order to exploit time series data for predictive modelling, it is necessary to summarise the information they contain as a set of features to use as model regressors. Typically this is done in an unsupervised fashion using simple techniques such as computing statistical moments, principal components or wavelet decompositions, often leading to significant information loss and hence suboptimal predictive models. In this paper, a functional learning paradigm is exploited in a supervised fashion to derive continuous, smooth estimates of the time series data (yielding aggregated local information), while simultaneously estimating a continuous shape function yielding optimal predictions. The proposed Supervised Aggregative Feature Extraction (SAFE) methodology can be extended to support nonlinear predictive models by embedding the functional learning framework in a Reproducing Kernel Hilbert Space setting. SAFE has a number of attractive features, including a closed-form solution and the ability to explicitly incorporate first- and second-order derivative information. Using simulation studies and a practical semiconductor manufacturing case study, we highlight the strengths of the new methodology with respect to standard unsupervised feature extraction approaches.
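
The underlying functional regression can be written compactly. As a sketch of the standard functional linear model that this kind of supervised feature extraction builds on (the full SAFE formulation and its RKHS extension are given in the paper):

```latex
y_i = \beta_0 + \int_{0}^{T} w(t)\, x_i(t)\, \mathrm{d}t + \varepsilon_i ,
```

where x_i(t) is a smooth, continuous estimate of the i-th time series, w(t) is the continuous shape function estimated jointly with it, and y_i is the scalar target; restricting w to a Reproducing Kernel Hilbert Space yields a kernelised, closed-form solution.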

Relevance: 100.00%

Abstract:

Realising memory-intensive applications such as image and video processing on FPGA requires the creation of complex, multi-level memory hierarchies to achieve real-time performance; however, commercial High Level Synthesis tools are unable to derive such structures automatically and hence cannot meet the demanding bandwidth and capacity constraints of these applications. Current approaches to solving this problem can only derive either single-level memory structures or very deep, highly inefficient hierarchies, leading in either case to high implementation cost, low performance, or both. This paper presents an enhancement to an existing MC-HLS synthesis approach which solves this problem; it exploits and eliminates data duplication at multiple levels of the generated hierarchy, leading to a reduction in the number of levels and ultimately to higher performance, lower cost implementations. When applied to the synthesis of C-based Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications, this enables reductions in Block RAM and Look Up Table (LUT) cost of up to 25%, whilst simultaneously increasing throughput.

Relevance: 100.00%

Abstract:

Software-programmable ‘soft’ processors have shown tremendous potential for the efficient realisation of high performance signal processing operations on Field Programmable Gate Arrays (FPGAs), whilst lowering the design burden by avoiding the need to design fine-grained custom circuit architectures. However, the complex data access patterns, high memory bandwidth and computational requirements of sliding window applications, such as Motion Estimation (ME) and Matrix Multiplication (MM), lead to low performance, inefficient soft processor realisations. This paper resolves this issue, showing how, by adding support for block data addressing and accelerators for high performance loop execution, performance and resource efficiency over four times better than current best-in-class metrics can be achieved. In addition, it demonstrates the first recorded real-time soft processor realisation of ME for H.263 systems.

Relevance: 100.00%

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes which have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite, Docker Container-based Lightweight Benchmarking. DocLite is built on Docker container technology, which allows a user-defined portion of the VM (such as memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes. In the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks. In the second mode, historic benchmark data are used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
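
A minimal sketch of the container-based partial-VM benchmarking idea, using the standard docker run resource flags --memory and --cpus; the image name, benchmark timing and ranking step are placeholders, not DocLite's actual interface:

```python
# Run a benchmark inside a Docker container restricted to a user-defined slice
# of the VM's memory and CPU cores, then rank VMs by the measured scores.
# --memory and --cpus are standard Docker flags; "benchmark-image" is a placeholder.
import subprocess, time

def run_benchmark(image="benchmark-image", memory="1g", cpus="2"):
    """Run the container with restricted resources and return wall-clock time."""
    start = time.time()
    subprocess.run(
        ["docker", "run", "--rm", "--memory", memory, "--cpus", cpus, image],
        check=True,
    )
    return time.time() - start

def rank_vms(results):
    """results: {vm_name: elapsed_seconds}; lower time means a better rank."""
    return sorted(results, key=results.get)

# Example: scores = {"vm_small": run_benchmark(), ...}; print(rank_vms(scores))
```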

Relevance: 100.00%

Abstract:

In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme for recognizing legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real time to determine the trust level of the current user, using real data derived from user interaction with the device. The proposed scheme is designed to operate completely in the background, require a minimal training period, enable a high user recognition rate for implicit authentication, and promptly detect abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard deviation and EWMA (exponentially weighted moving average) based algorithms. The results of extensive experiments on user data collected over a period of several weeks from an Android phone indicate that our proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
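
A minimal sketch of the EWMA-based threshold idea, in which the trust threshold adapts to the legitimate user's past aggregate scores; the smoothing weight and tolerance factor are illustrative choices, not the authors' tuned values:

```python
# EWMA-based adaptive thresholding for implicit authentication: a new aggregate
# behaviour score is compared against a threshold derived from an exponentially
# weighted moving average (and deviation) of the legitimate user's past scores.
def ewma_threshold(scores, alpha=0.2, k=2.0):
    """Return (ewma, threshold), where threshold = ewma - k * EWMA of the
    absolute deviation of past scores."""
    ewma, dev = scores[0], 0.0
    for s in scores[1:]:
        dev = alpha * abs(s - ewma) + (1 - alpha) * dev
        ewma = alpha * s + (1 - alpha) * ewma
    return ewma, ewma - k * dev

def is_legitimate(new_score, history, alpha=0.2, k=2.0):
    """Flag the current user as abnormal when the score drops below threshold."""
    _, threshold = ewma_threshold(history, alpha, k)
    return new_score >= threshold
```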

Relevance: 100.00%

Abstract:

Since 1999, Th/K and Th/U ratios from rapid, inexpensive and non-destructive spectral gamma ray measurements have been used as a proxy for changes in palaeo-hinterland weathering. This model is tested here by analysis of in situ palaeoweathering horizons whose clay mineral contents are well known. A residual palaeoweathered horizon of Palaeogene laterite (developed on basalt) has been logged at 14 locations across N. Ireland using spectral gamma ray detectors. The results are compared to published elemental and mineralogical data. While the model of K and U loss during the early stages of weathering to smectite and kaolinite is supported, the formation of gibbsite and iron oxides during progressively more advanced weathering has reversed the predicted pattern and caused U and Th retention in the weathering profile. The severity (duration, humidity) of weathering and palaeoweathering may be estimated using Th/K ratios as a proxy. The use of Th/U ratios is more problematic should detrital gibbsite (or similar clays) or iron oxides be present. Mineralogical analysis is needed in order to evaluate the hosts for K, U and Th; nonetheless, the spectral gamma ray tool offers a real-time, inexpensive and effective means of preliminary or conjunctive assessment of the degree of weathering or palaeoweathering.
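
The proxy calculation itself is straightforward; a minimal sketch, assuming K is reported in weight percent and U and Th in ppm, as on typical spectral gamma ray detectors:

```python
# Compute the weathering proxy ratios from spectral gamma ray channel readings.
# Interpretation is subject to the gibbsite / iron-oxide caveats noted above.
def weathering_proxies(k_percent, u_ppm, th_ppm):
    """Return (Th/K, Th/U) for one measurement station."""
    th_k = th_ppm / k_percent if k_percent else float("inf")
    th_u = th_ppm / u_ppm if u_ppm else float("inf")
    return th_k, th_u
```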