66 results for process data


Relevance: 30.00%

Publisher:

Abstract:

Significant advances have been made in accounting for heterogeneity in statistical research, primarily within the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively ‘noisy’ magnetics data (i.e. data including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchical system in which previously published density-function and upscaling methods are applied to derive regional distributions of the equivalent hydraulic conductivity tensor K. Several K models, corresponding to different stochastic realisations of MPS dyke locations, are incorporated into groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal- and reverse-polarity dykes are simulated separately within MPS and a probability threshold of 0.7 is applied. The presented stochastic approach also improves on a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS with airborne geophysical data for regional groundwater modelling.
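The final thresholding step, in which an ensemble of stochastic realisations is reduced to a probability map and cut at 0.7, can be sketched as follows. The grid size, ensemble size and the random realisations below are hypothetical stand-ins for the DS output, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 hypothetical binary realisations on a 10x10 grid (True = dyke cell),
# standing in for the stochastic MPS/DS simulation output.
realisations = rng.random((50, 10, 10)) < 0.3

# Cell-wise probability of dyke occurrence across the ensemble.
prob_map = realisations.mean(axis=0)

# Cells exceeding the 0.7 probability threshold are treated as dyke when
# assembling the equivalent hydraulic conductivity (K) model.
dyke_mask = prob_map >= 0.7

print(dyke_mask.sum(), "cells classified as dyke")
```

Varying the threshold trades false dyke detections against missed dykes, which is why the abstract reports the calibration comparison across threshold choices.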

Relevance: 30.00%

Publisher:

Abstract:

In this paper we propose a design methodology for low-power, high-performance, process-variation-tolerant architectures for arithmetic units. The novelty of our approach lies in the fact that possible delay failures due to process variations and/or voltage scaling are predicted in advance and addressed by an elastic clocking technique. The prediction mechanism exploits the dependence of the delay of arithmetic units on input data patterns and identifies the specific inputs that activate the critical path. Under iso-yield conditions, the proposed design operates at a lower, scaled-down Vdd without any performance degradation, while ensuring a superior yield under a design style employing nominal supply and transistor threshold voltages. Simulation results show power savings of up to 29%, energy-per-computation savings of up to 25.5%, and yield enhancement of up to 11.1% compared to conventional adders and multipliers implemented in the 70 nm BPTM technology. We incorporated the proposed modules into the execution unit of a five-stage DLX pipeline to measure performance using SPEC2000 benchmarks [9]. The maximum area and throughput penalties were 10% and 3%, respectively.
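The data-dependent delay that the prediction mechanism exploits can be illustrated for a ripple-carry adder, whose worst-case delay is set by the longest run of bit positions that propagate an incoming carry. This is a simplified sketch of the general idea, not the paper's actual prediction circuit:

```python
def longest_carry_chain(a: int, b: int, width: int = 32) -> int:
    """Length of the longest carry-propagation run for a ripple-carry add.

    A position i generates a carry when a_i AND b_i == 1 and propagates one
    when a_i XOR b_i == 1.  Inputs with a long propagate run following a
    generate activate the critical path; inputs with short runs finish early,
    which is what an input-pattern-based delay predictor can detect.
    """
    mask = (1 << width) - 1
    generate = (a & b) & mask
    propagate = (a ^ b) & mask
    longest = run = 0
    carry = False
    for i in range(width):
        g = (generate >> i) & 1
        p = (propagate >> i) & 1
        if carry and p:          # this position receives and passes on a carry
            run += 1
            longest = max(longest, run)
        else:
            run = 0
        carry = bool(g or (p and carry))
    return longest

# 0xFFFF + 1 ripples a carry across 15 positions (slow input pattern),
# while disjoint operands barely ripple at all (fast input pattern).
print(longest_carry_chain(0xFFFF, 0x0001))
print(longest_carry_chain(0x0F00, 0x00F0))
```

An elastic-clocking scheme would stretch the clock only for input pairs whose predicted chain exceeds the scaled-Vdd timing budget.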

Relevance: 30.00%

Publisher:

Abstract:

Future digital signal processing (DSP) systems must provide robustness at the algorithm and application levels against the reliability issues that accompany implementations in modern semiconductor process technologies. In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. In particular, we propose a novel framework to characterize the effects of unreliable memories, which enables us to devise novel methods to mitigate the associated performance loss. We propose to deploy specifically designed data representations, which can substantially improve system reliability compared to the conventional data representations used in digital integrated circuits, such as 2's-complement or sign-magnitude number formats. To demonstrate the efficacy of the proposed framework, we analyze the impact of unreliable memories on coded communication systems and show that deploying optimized data representations substantially improves the error-rate performance of such systems.
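A toy illustration of why the number format matters under memory errors: for small-amplitude values, typical of DSP signals centred on zero, flipping the 2's-complement sign bit always changes the value by half the number range, whereas flipping the sign-magnitude sign bit of a near-zero value barely changes it. This sketches the kind of representation-dependent effect the framework characterizes; it is not the paper's optimized representations:

```python
WIDTH = 8

def decode_tc(word):                     # 2's-complement decode
    return word - (1 << WIDTH) if word >> (WIDTH - 1) else word

def encode_tc(value):
    return value & ((1 << WIDTH) - 1)

def decode_sm(word):                     # sign-magnitude decode
    mag = word & ((1 << (WIDTH - 1)) - 1)
    return -mag if word >> (WIDTH - 1) else mag

def encode_sm(value):
    return ((1 << (WIDTH - 1)) if value < 0 else 0) | abs(value)

def mean_single_flip_error(encode, decode, values):
    """Mean |value error| over all single-bit flips of the stored words."""
    errs = []
    for v in values:
        word = encode(v)
        for bit in range(WIDTH):
            errs.append(abs(decode(word ^ (1 << bit)) - v))
    return sum(errs) / len(errs)

samples = range(-8, 8)   # small-amplitude samples around zero
err_tc = mean_single_flip_error(encode_tc, decode_tc, samples)
err_sm = mean_single_flip_error(encode_sm, decode_sm, samples)
print(f"2's complement: {err_tc:.3f}  sign-magnitude: {err_sm:.3f}")
```

For these samples sign-magnitude roughly halves the mean absolute error, motivating the search for representations tuned to the signal statistics.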

Relevance: 30.00%

Publisher:

Abstract:

Recent technological advances have increased the quantity of movement data being recorded. While valuable knowledge can be gained by analysing such data, its sheer volume creates challenges. Geovisual analytics, which supports human cognition with tools for reasoning about data, offers powerful techniques to resolve these challenges. This paper introduces such a geovisual analytics environment for exploring movement trajectories, providing visualisation interfaces based on the classic space-time cube. Additionally, a new approach, using the mathematical description of motion within a space-time cube, is used to determine the similarity of trajectories and forms the basis for clustering them. These techniques were used to analyse pedestrian movement. The results reveal interesting and useful spatiotemporal patterns, and clusters of pedestrians exhibiting similar behaviour.
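The paper's similarity measure builds on a mathematical description of motion within the space-time cube; as a much simpler stand-in, the sketch below scores two time-aligned trajectories by their mean pointwise distance in the cube. The walker trajectories are hypothetical:

```python
import numpy as np

def st_distance(traj_a, traj_b):
    """Mean Euclidean distance between two time-aligned trajectories.

    Each trajectory is an (n, 3) array of (x, y, t) points in the space-time
    cube; t is assumed pre-scaled so a time unit is commensurate with a
    space unit.  Small distances indicate similar movement behaviour and can
    seed a clustering of trajectories.
    """
    return float(np.linalg.norm(traj_a - traj_b, axis=1).mean())

t = np.linspace(0.0, 1.0, 20)
walker_1 = np.column_stack([t, 0.0 * t, t])          # straight line
walker_2 = np.column_stack([t, 0.0 * t + 0.1, t])    # parallel, offset by 0.1
walker_3 = np.column_stack([1.0 - t, 0.0 * t, t])    # opposite direction

print(st_distance(walker_1, walker_2))   # small: similar behaviour
print(st_distance(walker_1, walker_3))   # large: dissimilar behaviour
```

A pairwise matrix of such distances is exactly the input a standard clustering algorithm needs to group pedestrians with similar behaviour.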

Relevance: 30.00%

Publisher:

Abstract:

In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters, which need to be tuned in order to achieve optimum predictive performance; this can be done efficiently in an Empirical Bayes fashion by maximizing the marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales badly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
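The flavour of the eigendecomposition identity can be sketched for the simplest case, tuning only the observation-noise variance with a fixed kernel: after a one-off O(N^3) eigendecomposition K = U diag(λ) U^T, both the quadratic form and the log-determinant of K + σ²I reduce to O(N) sums over eigenvalues. The kernel and data below are illustrative; the paper's identities also cover the full score, Jacobian and Hessian:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
X = np.sort(rng.uniform(0, 5, N))
y = np.sin(X) + 0.1 * rng.standard_normal(N)

# Fixed RBF kernel matrix; the noise variance is the hyperparameter tuned here.
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

# One-off O(N^3) overhead: eigendecomposition of the symmetric kernel matrix.
lam, U = np.linalg.eigh(K)
z = U.T @ y                       # projected targets, computed once

def neg_log_marginal_likelihood(noise_var):
    """O(N) evaluation of the GP negative log marginal likelihood.

    With K = U diag(lam) U^T, we have y^T (K + s2 I)^-1 y = sum z_i^2/(lam_i+s2)
    and log|K + s2 I| = sum log(lam_i + s2), so each evaluation is a pair of
    length-N sums instead of a fresh O(N^3) solve.
    """
    d = lam + noise_var
    return 0.5 * (np.sum(z**2 / d) + np.sum(np.log(d)) + N * np.log(2 * np.pi))

# Cross-check against the direct O(N^3) formula for one hyperparameter value.
s2 = 0.01
C = K + s2 * np.eye(N)
direct = 0.5 * (y @ np.linalg.solve(C, y)
                + np.linalg.slogdet(C)[1] + N * np.log(2 * np.pi))
print(f"difference vs direct O(N^3) evaluation: "
      f"{abs(neg_log_marginal_likelihood(s2) - direct):.2e}")
```

Inside a global optimizer, every candidate noise variance now costs O(N), so thousands of evaluations become affordable even for datasets where repeated Cholesky solves would not be.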

Relevance: 30.00%

Publisher:

Abstract:

A theoretical study of the mechanisms and kinetics of the formation of levoglucosan and formaldehyde from the anhydroglucose radical is presented in this paper. The geometries and frequencies of all the stationary points are calculated at the B3LYP/6-31+G(d,p) level of quantum mechanics. Six elementary reactions are found, and three global reactions are involved. The variational transition-state rate constants for the elementary reactions are calculated over 450-1500 K. The global rate constant for each pathway is evaluated as the sum of the individual elementary reaction rate constants. First-order Arrhenius expressions for the six elementary reactions and the three pathways are suggested. Compared with the experimental data, computational methods without tunneling correction describe Path 1 (the formation of levoglucosan) well, while methods with tunneling correction (zero-curvature and small-curvature tunneling) give good results for Path 2 (the first possible route to formaldehyde). All the tested methods give similar results for Path 3 (the second possible route to formaldehyde), and all the modelling results for Path 3 are in good agreement with the experimental data, verifying that it is the most probable pathway for formaldehyde formation during cellulose pyrolysis. © 2012 Elsevier Ltd. All rights reserved.
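A first-order Arrhenius expression of the kind suggested for each reaction can be evaluated as follows. The pre-exponential factor and activation energy below are hypothetical placeholders, not the paper's fitted values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """First-order Arrhenius rate constant k(T) = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical parameters; the paper fits expressions of this form to each
# elementary reaction over the 450-1500 K range.
A, Ea = 1.0e13, 180e3   # units: 1/s and J/mol
for T in (450, 900, 1500):
    print(f"T = {T:4d} K   k = {arrhenius_rate(A, Ea, T):.3e} 1/s")
```

Summing such elementary-reaction rate constants, as the abstract describes, yields the global rate constant for each pathway.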

Relevance: 30.00%

Publisher:

Abstract:

Therapeutic inhibition of poly(ADP-ribose) polymerase (PARP), as monotherapy or to supplement the potencies of other agents, is a promising strategy in cancer treatment. We previously reported that the first PARP inhibitor to enter clinical trial, rucaparib (AG014699), induced vasodilation in vivo in xenografts, potentiating response to temozolomide. We now report that rucaparib inhibits the activity of the muscle contraction mediator myosin light chain kinase (MLCK) 10-fold more potently than its commercially available inhibitor ML-9. Moreover, rucaparib produces additive relaxation above the maximal degree achievable with ML-9, suggesting that MLCK inhibition is not solely responsible for dilation. Inhibition of nitric oxide synthesis using L-NMMA also failed to affect rucaparib's activity. Rucaparib contains the nicotinamide pharmacophore, suggesting that it may inhibit other NAD+-dependent processes. NAD+ exerts P2 purinergic receptor-dependent inhibition of smooth muscle contraction. Indiscriminate blockade of the P2 purinergic receptors with suramin abrogated rucaparib-induced vasodilation in rat arterial tissue without affecting ML-9-evoked dilation, although the specific receptor subtypes responsible have not been unequivocally identified. Furthermore, dorsal window chamber and real-time tumor vessel perfusion analyses in PARP-1-/- mice indicate a potential role for PARP in the dilation of tumor-recruited vessels. Finally, rucaparib provoked relaxation in 70% of patient-derived tumor-associated vessels. These data provide tantalising evidence of the complexity of the mechanism underlying rucaparib-mediated vasodilation.

Relevance: 30.00%

Publisher:

Abstract:

The demand for sustainable development has resulted in rapid growth in wind power worldwide. Although various approaches have been proposed to improve accuracy and to overcome the uncertainties associated with traditional methods, the stochastic and variable nature of wind remains the most challenging issue in accurately forecasting wind power. This paper presents a hybrid deterministic-probabilistic method in which a temporally local ‘moving window’ technique is used within a Gaussian process to examine estimated forecasting errors. This temporally local Gaussian process uses less measurement data while predicting wind power faster and more accurately at two wind farms, one in the USA and the other in Ireland. Statistical analysis of the results shows that the method can substantially reduce the forecasting error while being more likely to generate Gaussian-distributed residuals, particularly for short-term forecast horizons, owing to its capability to handle the time-varying characteristics of wind power.
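A minimal sketch of the 'moving window' idea: fit a small Gaussian process only to the most recent observations before each one-step-ahead forecast, so the model tracks time-varying behaviour and each fit stays cheap. The series, kernel and hyperparameters are illustrative, and the paper's hybrid method applies the GP to forecasting errors rather than to the raw series as done here:

```python
import numpy as np

def gp_predict(x_train, y_train, x_star, length=3.0, sigma_f=1.0, sigma_n=0.1):
    """GP posterior mean with an RBF kernel, in plain numpy."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    return k(x_star, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical wind-power-like series: smooth variation plus noise.
rng = np.random.default_rng(2)
t = np.arange(200, dtype=float)
power = np.sin(t / 15.0) + 0.05 * rng.standard_normal(200)

window = 24          # only the last 24 observations feed each forecast
forecasts = []
for i in range(window, 200):
    xw, yw = t[i - window:i], power[i - window:i]
    forecasts.append(gp_predict(xw, yw, t[i:i + 1])[0])

rmse = float(np.sqrt(np.mean((np.array(forecasts) - power[window:])**2)))
print(f"one-step-ahead RMSE: {rmse:.3f}")
```

Each window fit costs O(window^3) instead of O(N^3) on the full history, which is why the temporally local approach is faster while remaining responsive to recent conditions.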

Relevance: 30.00%

Publisher:

Abstract:

The Organic Rankine Cycle (ORC) is the most commonly used method for recovering energy from small heat sources. Investigation of the ORC under supercritical conditions is a new research area, as it has the potential to generate high power and thermal efficiency in a waste heat recovery system. This paper presents a steady-state model of a supercritical ORC and its simulation with real engine exhaust data. The key component of the ORC, the evaporator, is modelled using the finite volume method; models of the other components of the waste heat recovery system, such as the pump, expander and condenser, are also presented. The aim of this paper is to investigate the effects of mass flow rate and evaporator outlet temperature on the efficiency of the waste heat recovery process. Additionally, the necessity of maintaining an optimum evaporator outlet temperature is investigated. Simulation results show that modifying the mass flow rate is the key to changing the operating temperature at the evaporator outlet.
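The finite-volume treatment of the evaporator can be sketched as a cell-by-cell energy balance marched along the heat exchanger. All properties below are hypothetical constants, and constant specific heats are a deliberate simplification; a supercritical model would use real-fluid property evaluations, since cp is far from constant near the critical point:

```python
# Minimal co-current finite-volume sketch of an evaporator: the length is
# split into cells and a steady-state energy balance is marched through them.
N_CELLS = 50
UA_cell = 20.0                   # W/K per cell (hypothetical conductance)
m_gas, cp_gas = 0.05, 1100.0     # exhaust gas: kg/s, J/(kg*K)
m_wf, cp_wf = 0.02, 2500.0       # working fluid: kg/s, J/(kg*K)

T_gas = 500.0                    # exhaust inlet temperature, deg C
T_wf = 60.0                      # working-fluid inlet temperature, deg C

for _ in range(N_CELLS):
    q = UA_cell * (T_gas - T_wf)      # heat transferred in this cell, W
    T_gas -= q / (m_gas * cp_gas)     # gas side cools
    T_wf += q / (m_wf * cp_wf)        # working-fluid side heats up

print(f"gas outlet: {T_gas:.1f} C   working-fluid outlet: {T_wf:.1f} C")
```

Rerunning the march with a different m_wf shifts the working-fluid outlet temperature, echoing the paper's conclusion that the mass flow rate is the key handle on the evaporator outlet temperature.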

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Urothelial pathogenesis is a complex process driven by an underlying network of interconnected genes. The identification of novel genomic target regions and gene targets that drive urothelial carcinogenesis is crucial in order to improve our currently limited understanding of urothelial cancer (UC) at the molecular level. The inference of genome-wide gene regulatory networks (GRN) from large-scale gene expression data provides a promising approach for a detailed investigation of the underlying network structure associated with urothelial carcinogenesis.

METHODS: In our study we inferred and compared three GRNs by applying the BC3Net inference algorithm to large-scale transitional cell carcinoma gene expression data sets from Illumina RNAseq (179 samples), Illumina Bead arrays (165 samples) and Affymetrix Oligo microarrays (188 samples). We investigated the structural and functional properties of the GRNs to identify molecular targets associated with urothelial cancer.

RESULTS: We found that the urothelial cancer (UC) GRNs show a significant enrichment of subnetworks that are associated with known cancer hallmarks including cell cycle, immune response, signaling, differentiation and translation. Interestingly, the most prominent subnetworks of co-located genes were found on chromosome regions 5q31.3 (RNAseq), 8q24.3 (Oligo) and 1q23.3 (Bead), all of which are genomic regions known to be frequently deregulated or aberrant in urothelial and other cancer types. Furthermore, the identified hub genes of the individual GRNs, e.g., HID1/DMC1 (tumor development), RNF17/TDRD4 (cancer antigen) and CYP4A11 (angiogenesis/metastasis), are known cancer-associated markers. The GRNs were highly dataset-specific at the level of interactions between individual genes, but showed large similarities at the level of biological function, as represented by subnetworks. Remarkably, the RNAseq UC GRN showed twice the proportion of significant functional subnetworks. Based on our analysis of inferential and experimental networks, the Bead UC GRN showed the lowest performance compared to the RNAseq and Oligo UC GRNs.

CONCLUSION: To our knowledge, this is the first study investigating genome-scale UC GRNs. RNAseq-based gene expression data are the platform of choice for GRN inference. Our study offers new avenues for the identification of novel putative diagnostic targets for subsequent studies in bladder tumors.

Relevance: 30.00%

Publisher:

Abstract:

Context. The Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) began as a public spectroscopic survey in April 2012. PESSTO classifies transients from publicly available sources and wide-field surveys, and selects science targets for detailed spectroscopic and photometric follow-up. PESSTO runs for nine months of the year, January - April and August - December inclusive, and typically has allocations of 10 nights per month. 

Aims. We describe the data reduction strategy and data products that are publicly available through the ESO archive as the Spectroscopic Survey data release 1 (SSDR1). 

Methods. PESSTO uses the New Technology Telescope with the instruments EFOSC2 and SOFI to provide optical and NIR spectroscopy and imaging. We target supernovae and optical transients brighter than 20.5 mag for classification. Science targets are selected for follow-up based on the PESSTO science goal of extending knowledge of the extremes of the supernova population. We use standard EFOSC2 set-ups providing spectra with resolutions of 13-18 Å between 3345-9995 Å. A subset of the brighter science targets are selected for SOFI spectroscopy with the blue and red grisms (0.935-2.53 μm and resolutions 23-33 Å) and imaging with broadband JHKs filters. 

Results. This first data release (SSDR1) contains flux-calibrated spectra from the first year (April 2012-2013). A total of 221 confirmed supernovae were classified, and calibrated optical spectra and classifications were released publicly within 24 h of the data being taken (via WISeREP). The data in SSDR1 supersede those previously released spectra: they have more reliable and quantifiable flux calibrations, are corrected for telluric absorption, and are made available in standard ESO Phase 3 formats. We estimate the absolute accuracy of the EFOSC2 flux calibrations across the whole survey in SSDR1 to be typically ∼15%, although a number of spectra will have less reliable absolute flux calibration because of weather and slit losses. Acquisition images for each spectrum are available which, in principle, allow the user to refine the absolute flux calibration. The standard NIR reduction process does not produce high-accuracy absolute spectrophotometry, but synthetic photometry with accompanying JHKs imaging can improve this. Whenever possible, reduced SOFI images are provided to allow this. 

Conclusions. Future data releases will focus on improving the automated flux calibration of the data products. The rapid turnaround between discovery and classification and access to reliable pipeline processed data products has allowed early science papers in the first few months of the survey.

Relevance: 30.00%

Publisher:

Abstract:

The increasing complexity and scale of cloud computing environments, due to widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy are typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.

Relevance: 30.00%

Publisher:

Abstract:

Extrusion is one of the major methods for processing polymeric materials, and the thermal homogeneity of the process output is a major concern in the manufacture of high-quality extruded products. Accurate thermal monitoring and control of the process are therefore important for product quality control. However, most industrial extruders use single-point thermocouples for temperature monitoring/control, although these measurements are highly affected by the barrel metal wall temperature. Currently, no industrially established thermal profile measurement technique is available. Furthermore, it has been shown that the melt temperature changes considerably with radial position in the die, and hence point/bulk measurements are not sufficient for monitoring and controlling the temperature across the melt flow. The majority of process thermal control methods are based on linear models, which are not capable of dealing with process nonlinearities. In this work, the die melt temperature profile of a single-screw extruder was monitored with a thermocouple mesh technique. The data obtained were used to develop a novel approach to modelling the extruder die melt temperature profile under dynamic conditions (i.e. predicting the die melt temperature profile in real time). The newly proposed models were in good agreement with unseen measured data. They were then used to explore the effects of process settings, material and screw geometry on the die melt temperature profile. The results showed that process thermal homogeneity was affected in a complex manner by changes to the process settings, screw geometry and material.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Bisphosphonates have profound effects on bone physiology, and could modify the process of metastasis. We undertook collaborative meta-analyses to clarify the risks and benefits of adjuvant bisphosphonate treatment in breast cancer.

METHODS: We sought individual patient data from all unconfounded trials in early breast cancer that randomised between bisphosphonate and control. Primary outcomes were recurrence, distant recurrence, and breast cancer mortality. Primary subgroup investigations were site of first distant recurrence (bone or other), menopausal status (postmenopausal [combining natural and artificial] or not), and bisphosphonate class (aminobisphosphonate [eg, zoledronic acid, ibandronate, pamidronate] or other [ie, clodronate]). Intention-to-treat log-rank methods yielded bisphosphonate versus control first-event rate ratios (RRs).

FINDINGS: We received data on 18 766 women (18 206 [97%] in trials of 2-5 years of bisphosphonate) with median follow-up 5·6 woman-years, 3453 first recurrences, and 2106 subsequent deaths. Overall, the reductions in recurrence (RR 0·94, 95% CI 0·87-1·01; 2p=0·08), distant recurrence (0·92, 0·85-0·99; 2p=0·03), and breast cancer mortality (0·91, 0·83-0·99; 2p=0·04) were of only borderline significance, but the reduction in bone recurrence was more definite (0·83, 0·73-0·94; 2p=0·004). Among premenopausal women, treatment had no apparent effect on any outcome, but among 11 767 postmenopausal women it produced highly significant reductions in recurrence (RR 0·86, 95% CI 0·78-0·94; 2p=0·002), distant recurrence (0·82, 0·74-0·92; 2p=0·0003), bone recurrence (0·72, 0·60-0·86; 2p=0·0002), and breast cancer mortality (0·82, 0·73-0·93; 2p=0·002). Even for bone recurrence, however, the heterogeneity of benefit was barely significant by menopausal status (2p=0·06 for trend with menopausal status) or age (2p=0·03), and it was non-significant by bisphosphonate class, treatment schedule, oestrogen receptor status, nodes, tumour grade, or concomitant chemotherapy. No differences were seen in non-breast cancer mortality. Bone fractures were reduced (RR 0·85, 95% CI 0·75-0·97; 2p=0·02).

INTERPRETATION: Adjuvant bisphosphonates reduce the rate of breast cancer recurrence in the bone and improve breast cancer survival, but there is definite benefit only in women who were postmenopausal when treatment began.

FUNDING: Cancer Research UK, Medical Research Council.

Relevance: 30.00%

Publisher:

Abstract:

Jayne Tierney and colleagues offer guidance on how to spot a well-designed and well-conducted individual participant data meta-analysis.

Summary Points 

• Systematic reviews are most commonly based on aggregate data extracted from publications or obtained from trial investigators. 

• Systematic reviews involving the central collection and analysis of individual participant data (IPD) are usually larger-scale, international, collaborative projects that can bring about substantial improvements to the quantity and quality of data, give greater scope in the analyses, and provide more detailed and robust results. 

• The process of collecting, checking, and analysing IPD is more complex than for aggregate data, and not all IPD meta-analyses are done to the same standard, making it difficult for researchers, clinicians, patients, policy makers, funders, and publishers to judge their quality. 

• Following our step-by-step guide will help reviewers and users of IPD meta-analyses to understand them better and recognise those that are well designed and conducted and so help ensure that policy, practice, and research are informed by robust evidence about the effects of interventions.