74 results for large-scale systems


Relevance: 100.00%

Abstract:

This paper describes large-scale tests conducted on a novel unglazed solar air collector system. The proposed system, referred to as a back-pass solar collector (BPSC), has on-site installation and aesthetic advantages over conventional unglazed transpired solar collectors (UTSC), as it is fully integrated within a standard insulated wall panel. This paper presents the results obtained from monitoring a BPSC wall panel over one year. Measurements of temperature, wind velocity and solar irradiance were taken at multiple air mass flow rates. It is shown that the length of the collector cavities has a direct impact on the efficiency of the system. It is also shown that beyond a height-to-flow ratio of 0.023 m/(m³/h/m²), no additional heat output is obtained by increasing the collector height for the experimental setup in this study, although this threshold would differ for other setups or test environments (e.g. location and climate). An equation for predicting the temperature rise of the BPSC is proposed.
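To make the threshold concrete, here is a minimal sketch that computes the height-to-flow ratio for a hypothetical collector and compares it against the reported value; the function name and example dimensions are illustrative assumptions, not values from the paper:

```python
# Illustrative only: checks a collector design against the height-to-flow
# threshold reported in the abstract. The example dimensions are assumptions,
# not taken from the paper.

THRESHOLD = 0.023  # m / (m^3/h/m^2): beyond this, no extra heat output was seen

def height_to_flow_ratio(collector_height_m: float,
                         flow_rate_m3_per_h_per_m2: float) -> float:
    """Ratio of collector height to air flow rate per unit collector area."""
    return collector_height_m / flow_rate_m3_per_h_per_m2

ratio = height_to_flow_ratio(collector_height_m=2.4,
                             flow_rate_m3_per_h_per_m2=120.0)
if ratio > THRESHOLD:
    print(f"ratio {ratio:.4f} exceeds {THRESHOLD}: extra height adds no heat output")
else:
    print(f"ratio {ratio:.4f} is below {THRESHOLD}: a taller collector may still help")
```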

Relevance: 100.00%

Abstract:

An experimental study measuring the performance and wake characteristics of a 1:10 scale horizontal axis turbine in steady uniform flow conditions is presented in this paper.
Large-scale towing tests conducted in a lake were devised to model the performance of the tidal turbine and to measure the wake produced. As a simplification of the marine environment, towing the turbine in a lake provides approximately steady, uniform inflow conditions. A 16 m long × 6 m wide catamaran was constructed for the test programme. It doubled as a towing rig and flow measurement platform, providing a fixed frame of reference for measurements in the wake of a horizontal axis tidal turbine. Velocity mapping was conducted using acoustic Doppler velocimeters.
The results indicate that varying the inflow speed yields little difference in the efficiency of the turbine or in the wake velocity deficit characteristics, provided the same tip speed ratio is used. Increasing the inflow velocity from 0.9 m/s to 1.2 m/s influenced the turbulent wake characteristics more markedly. The results also demonstrate that the flow field in the wake of a horizontal axis tidal turbine is strongly affected by the turbine support structure.
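Since the finding hinges on matching the tip speed ratio (TSR) across inflow speeds, a brief sketch of that relationship may help; the rotor radius, target TSR, and rotational speeds below are invented for illustration and are not the turbine's actual parameters:

```python
# Minimal sketch of the tip speed ratio (TSR) that the abstract says governs
# turbine efficiency and wake deficit. Rotor radius and target TSR are
# made-up illustrative values, not from the test programme.
import math

def tip_speed_ratio(rotor_radius_m: float, rpm: float, inflow_m_per_s: float) -> float:
    """TSR = blade tip speed / inflow speed."""
    omega = rpm * 2.0 * math.pi / 60.0  # rad/s
    return omega * rotor_radius_m / inflow_m_per_s

# Matching TSR at two inflow speeds requires scaling the rotor speed:
for u in (0.9, 1.2):
    rpm = 4.0 * u * 60.0 / (2.0 * math.pi * 0.25)  # rpm giving TSR = 4 at radius 0.25 m
    print(f"inflow {u} m/s -> {rpm:.1f} rpm keeps TSR at {tip_speed_ratio(0.25, rpm, u):.1f}")
```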

Relevance: 100.00%

Abstract:

Recommending users for a new social network user to follow is a topic of current interest. Existing approaches rely on using various types of information about the new user to determine recommended users who have similar interests. However, this presents a problem when a new user joins a social network and has yet to have any interaction on it. In this paper we present a particular type of conversational recommendation approach, critiquing-based recommendation, to solve this cold start problem. We present a critiquing-based recommendation system, called CSFinder, to recommend users for a new user to follow. A traditional critiquing-based recommendation system allows a user to critique one feature of a recommended item at a time and gradually leads the user to the target recommendation. However, this may require a lengthy recommendation session. CSFinder aims to reduce the session length by taking a case-based reasoning approach. It selects relevant recommendation sessions of past users that match the recommendation session of the current user in order to shortcut the current session. These sessions are drawn from a case base containing the successful recommendation sessions of past users. A past recommendation session can be selected if it contains recommended items and critiques that sufficiently overlap with those in the current session. Our experimental results show that CSFinder achieves significantly shorter sessions than an Incremental Critiquing system, which serves as a baseline critiquing-based recommendation system.
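A minimal sketch of the retrieval idea, assuming Jaccard overlap and a 0.5 threshold (both our assumptions; the paper's actual similarity measure and parameters may differ):

```python
# Hedged sketch of the case-retrieval idea: a past session is a candidate
# shortcut if its recommended items and critiques sufficiently overlap with
# the current session. Data structures and the threshold are illustrative,
# not CSFinder's actual design.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def matching_cases(current_items, current_critiques, case_base, threshold=0.5):
    """Return past sessions whose items and critiques overlap enough with the current one."""
    return [case for case in case_base
            if jaccard(current_items, case["items"]) >= threshold
            and jaccard(current_critiques, case["critiques"]) >= threshold]

case_base = [{"items": {"u1", "u2", "u3"}, "critiques": {("followers", ">")},
              "target": "u9"}]
hits = matching_cases({"u2", "u3"}, {("followers", ">")}, case_base)
print([c["target"] for c in hits])  # candidate shortcut recommendations
```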

Relevance: 100.00%

Abstract:

Despite the lack of a shear-rich tachocline region, low-mass fully convective (FC) stars are capable of generating strong magnetic fields, indicating that a dynamo mechanism fundamentally different from the solar dynamo is at work in these objects. We present a self-consistent three-dimensional model of magnetic field generation in low-mass FC stars. The model utilizes the anelastic magnetohydrodynamic equations to simulate compressible convection in a rotating sphere. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in the outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The Zeeman–Doppler imaging technique, applied to synthetic spectropolarimetric data based on our model, recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass FC stars.
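For context, the anelastic MHD framework named here replaces full compressibility with a continuity constraint on a background density profile \(\bar{\rho}(r)\) and evolves the magnetic field via the induction equation; this is the standard textbook form, not notation taken from the paper:

\[
\nabla \cdot \left( \bar{\rho}\, \mathbf{u} \right) = 0, \qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right) - \nabla \times \left( \eta\, \nabla \times \mathbf{B} \right), \qquad
\nabla \cdot \mathbf{B} = 0
\]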

Relevance: 100.00%

Abstract:

Most traditional data mining algorithms struggle to cope efficiently with the sheer scale of modern data. In this paper, we propose a general framework to accelerate existing clustering algorithms on large-scale datasets containing large numbers of attributes, items, and clusters. Our framework uses locality sensitive hashing (LSH) to significantly reduce the cluster search space. We also theoretically prove that the framework has a guaranteed error bound in terms of clustering quality. The framework can be applied to a class of centroid-based clustering algorithms that assign an object to the most similar cluster, and we adopt the popular K-Modes categorical clustering algorithm to demonstrate how it can be applied. We validated the framework with five synthetic datasets and a real-world Yahoo! Answers dataset. The experimental results demonstrate that our framework speeds up the existing clustering algorithm by factors of between 2 and 6, while maintaining comparable cluster purity.
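A hedged sketch of the core idea, assuming random attribute sampling as the hash family for categorical data (one plausible choice, not necessarily the paper's): objects are compared only against centroids that share an LSH bucket, rather than against all centroids.

```python
# Illustrative LSH pruning for a K-Modes-style search: hash objects and
# centroids into buckets, then restrict each object's centroid search to
# colliding candidates. Table count and subset size are arbitrary here.
import random

def lsh_key(obj, attr_subset):
    """Hash a categorical object by the values of a random subset of attributes."""
    return tuple(obj[a] for a in attr_subset)

def candidate_centroids(obj, centroids, tables):
    """Collect centroids that collide with the object in at least one table."""
    cands = set()
    for attr_subset, buckets in tables:
        cands.update(buckets.get(lsh_key(obj, attr_subset), ()))
    return cands or set(range(len(centroids)))  # fall back to full search

random.seed(0)
n_attrs = 8
centroids = [tuple("ab"[i % 2] for _ in range(n_attrs)) for i in range(4)]
tables = []
for _ in range(3):                               # 3 hash tables
    subset = tuple(random.sample(range(n_attrs), 2))
    buckets = {}
    for idx, c in enumerate(centroids):
        buckets.setdefault(lsh_key(c, subset), set()).add(idx)
    tables.append((subset, buckets))

obj = tuple("a" for _ in range(n_attrs))
print(candidate_centroids(obj, centroids, tables))  # reduced search space
```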

Relevance: 100.00%

Abstract:

In this paper, we investigate secure device-to-device (D2D) communication in energy harvesting large-scale cognitive cellular networks. The energy-constrained D2D transmitter harvests energy from multi-antenna power beacons (PBs) and communicates with the corresponding receiver using the spectrum of the primary base stations (BSs). We introduce a power transfer model and an information signal model to enable wireless energy harvesting and secure information transmission. In the power transfer model, three wireless power transfer (WPT) policies are proposed: 1) cooperative power beacons (CPB) power transfer, 2) best power beacon (BPB) power transfer, and 3) nearest power beacon (NPB) power transfer. To characterize the power transfer reliability of the three policies, we derive new expressions for the exact power outage probability. Moreover, the analysis of the power outage probability is extended to the case where PBs are equipped with large antenna arrays. In the information signal model, we present a new comparative framework with two receiver selection schemes: 1) best receiver selection (BRS), where the receiver with the strongest channel is selected; and 2) nearest receiver selection (NRS), where the nearest receiver is selected. To assess the secrecy performance, we derive new analytical expressions for the secrecy outage probability and the secrecy throughput under the two receiver selection schemes and the proposed WPT policies. We present Monte Carlo simulation results to corroborate our analysis and show that: 1) secrecy performance improves with increasing densities of PBs and D2D receivers due to larger multiuser diversity gain; 2) CPB achieves better secrecy performance than BPB and NPB but consumes more power; and 3) BRS achieves better secrecy performance than NRS but demands more instantaneous feedback and overhead. A pivotal conclusion is that, as the number of antennas at the PBs increases, NPB offers secrecy performance comparable to that of BPB but with lower complexity.
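A hedged Monte Carlo sketch of the two receiver selection schemes follows. Rayleigh fading, the path-loss exponent, the distances, and the target secrecy rate are generic modelling assumptions, not the paper's exact system model:

```python
# Compare BRS (strongest channel) vs NRS (nearest receiver) by simulated
# secrecy outage: outage occurs when the secrecy capacity falls below a
# target rate. All numbers are illustrative.
import random, math

random.seed(1)
ALPHA, RS_TARGET, TRIALS, N_RX = 4.0, 1.0, 20000, 5  # path loss exp., rate (bits), trials, receivers

def secrecy_outage(select):
    outages = 0
    for _ in range(TRIALS):
        # (distance in m, Rayleigh channel power gain) per receiver
        rx = [(random.uniform(5, 50), random.expovariate(1.0)) for _ in range(N_RX)]
        d, h = select(rx)
        snr_rx = h * d ** -ALPHA * 1e6                              # legitimate link SNR
        snr_eve = random.expovariate(1.0) * 30.0 ** -ALPHA * 1e6    # eavesdropper at 30 m
        cs = max(math.log2(1 + snr_rx) - math.log2(1 + snr_eve), 0.0)
        outages += cs < RS_TARGET
    return outages / TRIALS

brs = secrecy_outage(lambda rx: max(rx, key=lambda t: t[1] * t[0] ** -ALPHA))
nrs = secrecy_outage(lambda rx: min(rx, key=lambda t: t[0]))
print(f"BRS outage {brs:.3f} vs NRS outage {nrs:.3f}")  # BRS should do no worse
```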

Relevance: 100.00%

Abstract:

The potential of IR absorption and Raman spectroscopy for rapid identification of novel psychoactive substances (NPS) has been tested using a set of 221 unsorted seized samples suspected of containing NPS. Both IR and Raman spectra showed large variation between the different sub-classifications of NPS and smaller, but still distinguishable, differences between closely related compounds within the same class. In initial tests, screening the samples by spectral searching against a limited reference library allowed only 41% of the samples to be fully identified. The limiting factor was the large number of active compounds in the seized samples for which no reference vibrational data were available in the libraries, rather than poor spectral quality. When 33 of these compounds were independently identified by NMR and mass spectrometry and their spectra used to extend the libraries, the percentage of samples identified by IR and Raman screening alone increased to 76%, with only 7% of samples having no identifiable constituents. This study, the largest of its type yet carried out, therefore demonstrates that this approach of detecting non-matching samples and then identifying them using standard analytical methods has considerable potential in NPS screening, since it allows rapid identification of the constituents of the majority of street-quality samples. Only one complete feedback cycle was carried out in this study, but there is clearly the potential for continuous identification and library updating when the system is used in operational settings.
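A minimal sketch of the library-search-with-feedback loop described above, assuming cosine similarity and a 0.9 acceptance threshold (both our assumptions; operational screening software would use more sophisticated matching):

```python
# Score an unknown spectrum against each reference by cosine similarity and
# report the best hit if it clears the threshold; otherwise flag the sample
# for NMR/MS identification and later library extension, mirroring the
# abstract's feedback cycle. Spectra here are toy 4-point vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(spectrum, library, threshold=0.9):
    name, score = max(((n, cosine(spectrum, ref)) for n, ref in library.items()),
                      key=lambda t: t[1])
    return (name, score) if score >= threshold else (None, score)

library = {"mephedrone": [0.1, 0.8, 0.3, 0.05], "MDMA": [0.7, 0.1, 0.1, 0.6]}
hit, score = search([0.12, 0.79, 0.28, 0.06], library)
print(hit or "no match - send for NMR/MS and add to library", f"({score:.3f})")
```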

Relevance: 100.00%

Abstract:

Background: Esophageal adenocarcinoma (EA) is one of the fastest rising cancers in Western countries. Barrett's Esophagus (BE) is the premalignant precursor of EA. However, only a subset of BE patients develop EA, which complicates clinical management in the absence of valid predictors. Genetic risk factors for BE and EA are incompletely understood. This study aimed to identify novel genetic risk factors for BE and EA.

Methods: Within an international consortium of groups involved in the genetics of BE/EA, we performed the first meta-analysis of all genome-wide association studies (GWAS) available, involving 6,167 BE patients, 4,112 EA patients, and 17,159 representative controls, all of European ancestry, genotyped on Illumina high-density SNP arrays, and collected from four separate studies within North America, Europe, and Australia. The meta-analysis was conducted using the fixed-effects inverse variance-weighting approach. We used the standard genome-wide significance threshold of 5×10⁻⁸. We also conducted an association analysis following reweighting of loci using an approach that investigates annotation enrichment among the genome-wide significant loci. The entire GWAS data set was also analyzed using bioinformatics approaches, including functional annotation databases as well as gene-based and pathway-based methods, in order to identify pathophysiologically relevant cellular pathways.

Findings: We identified eight new risk loci for BE and EA, within or near the genes CFTR (rs17451754, P=4·8×10⁻¹⁰), MSRA (rs17749155, P=5·2×10⁻¹⁰), BLK (rs10108511, P=2·1×10⁻⁹), KHDRBS2 (rs62423175, P=3·0×10⁻⁹), TPPP/CEP72 (rs9918259, P=3·2×10⁻⁹), TMOD1 (rs7852462, P=1·5×10⁻⁸), SATB2 (rs139606545, P=2·0×10⁻⁸), and HTR3C/ABCC5 (rs9823696, P=1·6×10⁻⁸). A further novel risk locus at LPA (rs12207195, posterior probability=0·925) was identified after reweighting using significantly enriched annotations. This study thereby doubled the number of known risk loci. The strongest disease pathways identified (P<10⁻⁶) belong to muscle cell differentiation and to mesenchyme development/differentiation, which fit current pathophysiological concepts of BE/EA. To our knowledge, this study identified for the first time an EA-specific association (rs9823696, P=1·6×10⁻⁸) near HTR3C/ABCC5 which is independent of BE development (P=0·45).

Interpretation: The identified disease loci and pathways reveal new insights into the etiology of BE and EA. Furthermore, the EA-specific association at HTR3C/ABCC5 may constitute a novel genetic marker for predicting the transition from BE to EA. Mutations in CFTR, one of the new risk loci identified in this study, cause cystic fibrosis (CF), the most common recessive disorder in Europeans. Gastroesophageal reflux (GER) belongs to the phenotypic CF spectrum and represents the main risk factor for BE/EA. Thus, the CFTR locus may trigger a common GER-mediated pathophysiology.
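For reference, the fixed-effects inverse variance-weighted estimator used in such meta-analyses combines per-study effect estimates \(\hat{\beta}_i\) with standard errors \(s_i\) as follows (the standard textbook form, not copied from the paper):

\[
w_i = \frac{1}{s_i^2}, \qquad
\hat{\beta} = \frac{\sum_i w_i \hat{\beta}_i}{\sum_i w_i}, \qquad
\mathrm{SE}\!\left(\hat{\beta}\right) = \frac{1}{\sqrt{\sum_i w_i}}
\]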

Relevance: 100.00%

Abstract:

Graph analytics is an important and computationally demanding class of data analytics. In large-scale graph analytics it is essential to balance scalability, ease of use, and high performance, which requires hiding the complexity of parallelism, data distribution, and memory locality behind an abstract interface. The aim of this work is to build a NUMA-aware graph analytics framework that scales well without demanding significant parallel programming experience.
Realizing such a system faces two key problems:
(i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; and (ii) how to apply graph partitioning efficiently in order to create separate and largely independent work items that can be distributed among threads (see the sketch below).
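A hedged sketch of the partitioning step named in problem (ii): split a graph's vertex range into contiguous chunks balanced by degree, so each thread (or NUMA domain) owns one largely independent work item. A real framework would also pin memory per domain; the greedy scheme below is illustration only.

```python
def partition_by_degree(degrees, n_parts):
    """Greedy split of a vertex range into n_parts with roughly equal edge counts."""
    target = sum(degrees) / n_parts
    parts, start, acc = [], 0, 0
    for v, d in enumerate(degrees):
        acc += d
        if acc >= target and len(parts) < n_parts - 1:
            parts.append((start, v + 1))   # half-open vertex range for one worker
            start, acc = v + 1, 0
    parts.append((start, len(degrees)))
    return parts

degrees = [1, 9, 2, 2, 8, 1, 1, 8]          # per-vertex out-degrees
print(partition_by_degree(degrees, 3))       # [(0, 3), (3, 6), (6, 8)]
```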

Relevance: 100.00%

Abstract:

Many scientific applications are programmed using hybrid programming models that use both message passing and shared memory, owing to the increasing prevalence of large-scale systems with multicore, multisocket nodes. Previous work has shown that energy efficiency can be improved using software-controlled execution schemes that consider both the programming model and the power-aware execution capabilities of the system. However, such approaches have focused on identifying optimal resource utilization for one programming model, either shared memory or message passing, in isolation. The potential solution space, and thus the challenge, increases substantially when optimizing hybrid models, since the possible resource configurations increase exponentially. Nonetheless, with the accelerating adoption of hybrid programming models, we increasingly need improved energy efficiency in hybrid parallel applications on large-scale systems. In this work, we present new software-controlled execution schemes that consider the effects of dynamic concurrency throttling (DCT) and dynamic voltage and frequency scaling (DVFS) in the context of hybrid programming models. Specifically, we present predictive models and novel algorithms based on statistical analysis that anticipate application power and time requirements under different concurrency and frequency configurations. We apply our models and methods to the NPB-MZ benchmarks and selected applications from the ASC Sequoia codes. Overall, we achieve substantial energy savings (8.74 percent on average and up to 13.8 percent) with some performance gain (up to 7.5 percent) or negligible performance loss.
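The configuration search implied by the abstract can be sketched as follows; the toy prediction model stands in for the paper's statistical models, and the candidate thread counts and frequencies are assumptions for illustration:

```python
# Given a model predicting time and power for each (thread count, frequency)
# pair, pick the configuration minimising energy = power * time.

def predict(threads, freq_ghz):
    """Toy stand-in for the paper's predictive models (time in s, power in W)."""
    time = 100.0 / (threads ** 0.8) / freq_ghz       # imperfect parallel scaling
    power = 20.0 + 5.0 * threads * freq_ghz ** 2     # dynamic power grows with f^2
    return time, power

def best_config(thread_counts, freqs):
    def energy(cfg):
        t, p = predict(*cfg)
        return p * t
    return min(((th, f) for th in thread_counts for f in freqs), key=energy)

print(best_config([4, 8, 16], [1.2, 1.8, 2.4]))  # (threads, GHz) with least energy
```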

Relevance: 100.00%

Abstract:

The BDI architecture, where agents are modelled based on their beliefs, desires and intentions, provides a practical approach to developing large-scale systems. However, it is not well suited to modelling complex Supervisory Control And Data Acquisition (SCADA) systems pervaded by uncertainty. In this paper we address this issue by extending the operational semantics of Can(Plan) into Can(Plan)+. We start by modelling the beliefs of an agent as a set of epistemic states where each state, possibly using a different representation, models part of the agent's beliefs. These epistemic states are stratified to make them commensurable and to allow reasoning about the agent's uncertain beliefs. The syntax and semantics of a BDI agent are extended accordingly, and we identify fragments with computationally efficient semantics. Finally, we examine how primitive actions are affected by uncertainty and define an appropriate form of lookahead planning.

Relevance: 100.00%

Abstract:

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
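Since Dempster's rule of combination is the default the abstract falls back on within each consistent subset, a minimal sketch may help; the frame of discernment and the mass values below are illustrative, not from the power grid case study:

```python
# Dempster's rule of combination for two mass functions over the same frame.
# Mass functions map frozenset focal elements to masses summing to 1.

def dempster(m1, m2):
    """Combine two mass functions by Dempster's rule, normalising out conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

fault, ok = frozenset({"fault"}), frozenset({"ok"})
either = fault | ok
s1 = {fault: 0.6, either: 0.4}               # sensor 1: fairly sure of a fault
s2 = {fault: 0.3, ok: 0.3, either: 0.4}      # sensor 2: undecided
print(dempster(s1, s2))                      # normalised combined masses
```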