75 results for Large Scale Virtual Environments


Relevance:

100.00%

Publisher:

Abstract:

This paper describes large-scale tests conducted on a novel unglazed solar air collector system. The proposed system, referred to as a back-pass solar collector (BPSC), has on-site installation and aesthetic advantages over conventional unglazed transpired solar collectors (UTSC), as it is fully integrated within a standard insulated wall panel. This paper presents the results obtained from monitoring a BPSC wall panel over one year. Measurements of temperature, wind velocity and solar irradiance were taken at multiple air mass flow rates. It is shown that the length of the collector cavities has a direct impact on the efficiency of the system. It is also shown that, beyond a height-to-flow ratio of 0.023 m/(m³/hr/m²), no additional heat output is obtained by increasing the collector height for the experimental setup in this study, although this threshold would differ for other setups or test environments (e.g. location and climate). An equation for predicting the temperature rise of the BPSC is proposed.
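As a worked illustration of the height-to-flow ratio quoted above, the short sketch below computes the ratio from a collector height, an air flow rate and a collector area, and compares it against the reported 0.023 m/(m³/hr/m²) threshold. The function name and the sample values are assumptions for illustration only; the paper's actual temperature-rise equation is not reproduced here.

```python
# Hypothetical illustration of the height-to-flow ratio discussed above.
# The variable names and sample values are assumptions, not data from the paper.

def height_to_flow_ratio(collector_height_m: float,
                         flow_rate_m3_per_hr: float,
                         collector_area_m2: float) -> float:
    """Ratio of collector height to air flow rate per unit collector area,
    in m / (m^3/hr/m^2), matching the units quoted in the abstract."""
    specific_flow = flow_rate_m3_per_hr / collector_area_m2  # m^3/hr per m^2 of panel
    return collector_height_m / specific_flow

# Example: a 2.4 m high panel of 10 m^2 area at 1000 m^3/hr (invented values)
ratio = height_to_flow_ratio(2.4, 1000.0, 10.0)
print(f"height-to-flow ratio = {ratio:.4f} m/(m^3/hr/m^2)")

# Beyond ~0.023 the study found no additional heat output for its particular setup.
if ratio > 0.023:
    print("Above the reported threshold: extra height yields no further heat gain.")
```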

Relevance:

100.00%

Publisher:

Abstract:

An experimental study measuring the performance and wake characteristics of a 1:10 scale horizontal axis turbine in steady, uniform flow conditions is presented in this paper.
Large-scale towing tests conducted in a lake were devised to model the performance of the tidal turbine and measure the wake produced. As a simplification of the marine environment, towing the turbine in a lake provides approximately steady, uniform inflow conditions. A 16 m long × 6 m wide catamaran was constructed for the test programme. This doubled as a towing rig and flow measurement platform, providing a fixed frame of reference for measurements in the wake of a horizontal axis tidal turbine. Velocity mapping was conducted using Acoustic Doppler Velocimeters.
The results indicate that varying the inflow speed yielded little difference in the efficiency of the turbine or the wake velocity deficit characteristics, provided the same tip speed ratio was maintained. Increasing the inflow velocity from 0.9 m/s to 1.2 m/s influenced the turbulent wake characteristics more markedly. The results also demonstrate that the flow field in the wake of a horizontal axis tidal turbine is strongly affected by the turbine support structure.
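For reference, the quantities discussed above (tip speed ratio and wake velocity deficit) have standard definitions; the sketch below applies them to hypothetical numbers, which are not measurements from the towing tests.

```python
# Standard definitions of tip speed ratio and wake velocity deficit, added as a
# hedged illustration of the quantities reported above; the rotor radius, rotor
# speeds and wake speed are invented examples, not data from the test programme.
import math

def tip_speed_ratio(rotor_speed_rpm: float, rotor_radius_m: float,
                    inflow_speed_ms: float) -> float:
    """TSR = omega * R / U, with omega converted from rpm to rad/s."""
    omega = rotor_speed_rpm * 2.0 * math.pi / 60.0
    return omega * rotor_radius_m / inflow_speed_ms

def wake_velocity_deficit(wake_speed_ms: float, inflow_speed_ms: float) -> float:
    """Fractional deficit: 1 - u_wake / U_inflow."""
    return 1.0 - wake_speed_ms / inflow_speed_ms

# Keeping the same TSR at 0.9 m/s and 1.2 m/s inflow requires scaling rotor speed:
for u in (0.9, 1.2):
    rpm = 40.0 * u / 0.9  # hypothetical rotor speed scaled with the inflow
    print(f"U = {u} m/s, TSR = {tip_speed_ratio(rpm, 0.25, u):.2f}, "
          f"deficit for u_wake = 0.6 m/s: {wake_velocity_deficit(0.6, u):.2f}")
```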

Relevance:

100.00%

Publisher:

Abstract:

Recommending users for a new social network user to follow is a topic of current interest. Existing approaches rely on using various types of information about the new user to determine recommended users who have similar interests to the new user. However, this presents a problem when a new user joins a social network and has yet to have any interaction on it. In this paper we apply a particular type of conversational recommendation approach, critiquing-based recommendation, to this cold-start problem. We present a critiquing-based recommendation system, called CSFinder, to recommend users for a new user to follow. A traditional critiquing-based recommendation system allows a user to critique one feature of a recommended item at a time and gradually leads the user to the target recommendation. However, this may require a lengthy recommendation session. CSFinder aims to reduce the session length by taking a case-based reasoning approach: it selects relevant recommendation sessions of past users that match the recommendation session of the current user, in order to shortcut the current session. These sessions are drawn from a case base containing the successful recommendation sessions of past users. A past recommendation session can be selected if it contains recommended items and critiques that sufficiently overlap with those in the current session. Our experimental results show that CSFinder produces significantly shorter sessions than an Incremental Critiquing system, which serves as the baseline critiquing-based recommendation system.
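A minimal sketch of the session-matching idea, assuming Jaccard overlap as the similarity measure and invented thresholds; it illustrates how past sessions with sufficiently overlapping recommended items and critiques could be retrieved, and is not CSFinder's actual implementation.

```python
# Minimal sketch of the session-matching idea described above: a past session is
# considered relevant if its recommended items and critiques sufficiently overlap
# with the current session. The overlap measure, thresholds and data structures
# are assumptions for illustration, not CSFinder's implementation.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def relevant_past_sessions(current_items, current_critiques, case_base,
                           item_threshold=0.3, critique_threshold=0.3):
    """Return past sessions whose items and critiques overlap the current session."""
    matches = []
    for past in case_base:  # each case: {"items": set, "critiques": set, "target": ...}
        if (jaccard(current_items, past["items"]) >= item_threshold and
                jaccard(current_critiques, past["critiques"]) >= critique_threshold):
            matches.append(past)
    return matches

case_base = [
    {"items": {"u1", "u2", "u3"}, "critiques": {"more_sport", "fewer_posts"}, "target": "u9"},
    {"items": {"u4", "u5"}, "critiques": {"more_music"}, "target": "u7"},
]
hits = relevant_past_sessions({"u1", "u2"}, {"more_sport"}, case_base)
print([h["target"] for h in hits])  # past targets that could shortcut the current session
```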

Relevance:

100.00%

Publisher:

Abstract:

Despite the lack of a shear-rich tachocline region, low-mass fully convective (FC) stars are capable of generating strong magnetic fields, indicating that a dynamo mechanism fundamentally different from the solar dynamo is at work in these objects. We present a self-consistent three-dimensional model of magnetic field generation in low-mass FC stars. The model utilizes the anelastic magnetohydrodynamic equations to simulate compressible convection in a rotating sphere. A distributed dynamo working in the model spontaneously produces a dipole-dominated surface magnetic field of the observed strength. The interaction of this field with the turbulent convection in the outer layers shreds it, producing small-scale fields that carry most of the magnetic flux. The Zeeman–Doppler imaging technique, applied to synthetic spectropolarimetric data based on our model, recovers most of the large-scale field. Our model simultaneously reproduces the morphology and magnitude of the large-scale field as well as the magnitude of the small-scale field observed on low-mass FC stars.
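For readers unfamiliar with the model class, the anelastic MHD system referred to above can be written schematically as follows; the paper's exact non-dimensionalization and diffusivity treatment may differ.

```latex
% Standard anelastic MHD equations in schematic form (constant magnetic diffusivity
% assumed); the model's exact formulation may differ.
\begin{align}
  \nabla \cdot (\bar{\rho}\,\mathbf{u}) &= 0, \\
  \frac{\partial \mathbf{B}}{\partial t} &= \nabla \times (\mathbf{u} \times \mathbf{B})
      + \eta \nabla^{2} \mathbf{B}, \qquad \nabla \cdot \mathbf{B} = 0,
\end{align}
where $\bar{\rho}$ is the background density stratification, $\mathbf{u}$ the convective
velocity, $\mathbf{B}$ the magnetic field, and $\eta$ the magnetic diffusivity.
```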

Relevance:

100.00%

Publisher:

Abstract:

Most traditional data mining algorithms struggle to cope efficiently with the sheer scale of modern data. In this paper, we propose a general framework for accelerating existing clustering algorithms on large-scale datasets containing large numbers of attributes, items, and clusters. Our framework uses locality-sensitive hashing (LSH) to significantly reduce the cluster search space. We also theoretically prove that our framework has a guaranteed error bound in terms of clustering quality. The framework can be applied to the family of centroid-based clustering algorithms that assign an object to the most similar cluster, and we adopt the popular K-Modes categorical clustering algorithm to demonstrate how it can be applied. We validated the framework on five synthetic datasets and a real-world Yahoo! Answers dataset. The experimental results demonstrate that our framework speeds up the existing clustering algorithm by a factor of between 2 and 6 while maintaining comparable cluster purity.
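A hedged sketch of the pruning principle described above: hash the categorical objects and the cluster centroids, and compare each object only against centroids that fall in the same bucket, falling back to a full search when a bucket is empty. The hash family (a sampled subset of attribute positions) and all parameters are assumptions, not the paper's exact LSH scheme.

```python
# Hedged sketch of the LSH pruning principle: when assigning an object to a cluster,
# only centroids that collide with it under a simple hash are compared. This is an
# illustration only; the hash family and parameters are assumptions, not the paper's.
import random
from collections import defaultdict

def hamming_distance(a, b):
    """K-Modes dissimilarity: number of mismatched categorical attributes."""
    return sum(x != y for x, y in zip(a, b))

def make_lsh(num_attrs, num_sampled):
    """Sample a fixed subset of attribute positions; objects agreeing on them collide."""
    positions = random.sample(range(num_attrs), num_sampled)
    return lambda obj: tuple(obj[p] for p in positions)

def assign(objects, centroids, num_sampled=2):
    lsh = make_lsh(len(centroids[0]), num_sampled)
    buckets = defaultdict(list)
    for ci, c in enumerate(centroids):
        buckets[lsh(c)].append(ci)
    assignments = []
    for obj in objects:
        # Candidates are centroids in the colliding bucket; fall back to a full search.
        candidates = buckets.get(lsh(obj)) or range(len(centroids))
        assignments.append(min(candidates, key=lambda ci: hamming_distance(obj, centroids[ci])))
    return assignments

random.seed(0)
centroids = [("red", "small", "round"), ("blue", "large", "square")]
objects = [("red", "small", "oval"), ("blue", "large", "round")]
print(assign(objects, centroids))  # [0, 1]
```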

Relevance:

100.00%

Publisher:

Abstract:

The increasing scale of Multiple-Input Multiple-Output (MIMO) topologies employed in forthcoming wireless communications standards presents a substantial implementation challenge to designers of embedded baseband signal processing architectures for MIMO transceivers. Specifically, the increased scale of such systems has a substantial impact on the performance/cost balance of detection algorithms for these systems. Whilst in small-scale systems Sphere Decoding (SD) algorithms offer the best quasi-ML performance/cost balance, in larger systems heuristic detectors, such as Tabu-Search (TS) detectors, are superior. This paper addresses a dearth of research in architectures for TS-based MIMO detection, presenting the first known realisations of TS detectors for 4 × 4 and 10 × 10 MIMO systems. To the best of the authors' knowledge, these are the largest single-chip detectors on record.
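To make the detection principle concrete, here is a hedged software sketch of a basic Tabu-Search detector for a small BPSK MIMO system; the paper itself concerns hardware architectures for much larger systems, and this sketch makes no claim about those designs.

```python
# Hedged sketch of a basic Tabu-Search detector over a BPSK alphabet, illustrating
# the algorithmic principle only; channel size, alphabet and parameters are assumed.
import numpy as np

def tabu_search_detect(H, y, n_iters=50, tabu_len=2):
    """Minimise ||y - H s||^2 over s in {-1,+1}^Nt by single-symbol flips."""
    rng = np.random.default_rng(0)
    s = rng.choice([-1.0, 1.0], size=H.shape[1])           # random initial vector
    best, best_cost = s.copy(), float(np.linalg.norm(y - H @ s) ** 2)
    tabu = []                                               # recently flipped indices
    for _ in range(n_iters):
        moves = []
        for i in range(len(s)):
            cand = s.copy()
            cand[i] = -cand[i]                              # flip one symbol
            moves.append((float(np.linalg.norm(y - H @ cand) ** 2), i, cand))
        moves.sort(key=lambda m: m[0])
        for cost, i, cand in moves:                         # best admissible move
            if i not in tabu or cost < best_cost:           # aspiration criterion
                s = cand
                tabu = (tabu + [i])[-tabu_len:]
                if cost < best_cost:
                    best, best_cost = cand.copy(), cost
                break
    return best

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))
s_true = np.array([1.0, -1.0, 1.0, 1.0])
y = H @ s_true + 0.05 * rng.standard_normal(4)
print(tabu_search_detect(H, y), s_true)                     # typically recovers s_true
```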

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we investigate secure device-to-device (D2D) communication in energy-harvesting large-scale cognitive cellular networks. The energy-constrained D2D transmitter harvests energy from multi-antenna power beacons (PBs) and communicates with the corresponding receiver using the spectrum of the primary base stations (BSs). We introduce a power transfer model and an information signal model to enable wireless energy harvesting and secure information transmission. In the power transfer model, three wireless power transfer (WPT) policies are proposed: 1) cooperative power beacons (CPB) power transfer, 2) best power beacon (BPB) power transfer, and 3) nearest power beacon (NPB) power transfer. To characterize the power transfer reliability of the three policies, we derive new expressions for the exact power outage probability. Moreover, the analysis of the power outage probability is extended to the case where PBs are equipped with large antenna arrays. In the information signal model, we present a new comparative framework with two receiver selection schemes: 1) best receiver selection (BRS), where the receiver with the strongest channel is selected; and 2) nearest receiver selection (NRS), where the nearest receiver is selected. To assess the secrecy performance, we derive new analytical expressions for the secrecy outage probability and the secrecy throughput for the two receiver selection schemes under the proposed WPT policies. We present Monte Carlo simulation results to corroborate our analysis and show that: 1) secrecy performance improves with increasing densities of PBs and D2D receivers due to a larger multiuser diversity gain; 2) CPB achieves better secrecy performance than BPB and NPB but consumes more power; and 3) BRS achieves better secrecy performance than NRS but demands more instantaneous feedback and overhead. A pivotal conclusion is that, with an increasing number of antennas at the PBs, NPB offers secrecy performance comparable to that of BPB but with lower complexity.
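A small Monte Carlo sketch contrasting the two receiver-selection rules described above, under a simple Rayleigh-fading and path-loss model chosen purely for illustration; the path-loss exponent, distance range and outage threshold are assumptions and do not reproduce the paper's stochastic-geometry analysis.

```python
# Hedged Monte Carlo sketch of the receiver-selection rules compared above:
# BRS picks the receiver with the strongest instantaneous channel, NRS the nearest one.
# Rayleigh fading, the path-loss law and all parameter values are assumptions made
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)
trials, n_receivers, alpha = 100_000, 5, 3.5        # path-loss exponent (assumed)

distances = rng.uniform(5.0, 50.0, size=(trials, n_receivers))   # receiver distances [m]
fading = rng.exponential(1.0, size=(trials, n_receivers))        # |h|^2 under Rayleigh fading
channel_gain = fading * distances ** (-alpha)

brs_gain = channel_gain.max(axis=1)                              # best receiver selection
nrs_gain = np.take_along_axis(channel_gain,
                              distances.argmin(axis=1)[:, None], axis=1).ravel()

threshold = 1e-6                                                 # arbitrary outage threshold
print("BRS outage:", np.mean(brs_gain < threshold))
print("NRS outage:", np.mean(nrs_gain < threshold))              # expected to be higher
```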

Relevance:

100.00%

Publisher:

Abstract:

The potential of IR absorption and Raman spectroscopy for rapid identification of novel psychoactive substances (NPS) has been tested using a set of 221 unsorted seized samples suspected of containing NPS. Both IR and Raman spectra showed large variation between the different sub-classifications of NPS and smaller, but still distinguishable, differences between closely related compounds within the same class. In initial tests, screening the samples using spectral searching against a limited reference library allowed only 41% of the samples to be fully identified. The limiting factor in identification was the large number of active compounds in the seized samples for which no reference vibrational data were available in the libraries, rather than poor spectral quality. When 33 of these compounds were independently identified by NMR and mass spectrometry and their spectra used to extend the libraries, the percentage of samples identified by IR and Raman screening alone increased to 76%, with only 7% of samples having no identifiable constituents. This study, the largest of its type carried out to date, therefore demonstrates that this approach of detecting non-matching samples and then identifying them using standard analytical methods has considerable potential in NPS screening, since it allows rapid identification of the constituents of the majority of street-quality samples. Only one complete feedback cycle was carried out in this study, but there is clearly the potential to carry out continuous identification and library updating when this system is used in operational settings.
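A hedged sketch of the library-search step: score a query spectrum against reference spectra by cosine similarity and flag unmatched samples for NMR/mass-spectrometry identification. This mirrors common hit-quality-index style matching but is not the specific search software or threshold used in the study; the spectra below are synthetic.

```python
# Hedged sketch of spectral library searching by cosine similarity; the threshold,
# library entries and spectra are synthetic assumptions, not the study's tooling.
import numpy as np

def cosine_match(query: np.ndarray, library: dict, threshold: float = 0.9):
    """Return (best_name, score) if the best library match exceeds the threshold,
    otherwise (None, score) to flag the sample for NMR/MS identification."""
    q = query / np.linalg.norm(query)
    scores = {name: float(q @ (ref / np.linalg.norm(ref))) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), scores[best]

x = np.linspace(0, 1, 500)
library = {
    "compound_A": np.exp(-((x - 0.3) / 0.02) ** 2),   # synthetic reference spectra
    "compound_B": np.exp(-((x - 0.7) / 0.02) ** 2),
}
query = library["compound_A"] + 0.05 * np.random.default_rng(0).standard_normal(500)
print(cosine_match(query, library))   # matches compound_A; unmatched spectra go to NMR/MS
```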

Relevance:

100.00%

Publisher:

Abstract:

Background: Esophageal adenocarcinoma (EA) is one of the fastest-rising cancers in Western countries. Barrett's Esophagus (BE) is the premalignant precursor of EA. However, only a subset of BE patients develop EA, which complicates clinical management in the absence of valid predictors. Genetic risk factors for BE and EA are incompletely understood. This study aimed to identify novel genetic risk factors for BE and EA.

Methods: Within an international consortium of groups involved in the genetics of BE/EA, we performed the first meta-analysis of all genome-wide association studies (GWAS) available, involving 6,167 BE patients, 4,112 EA patients, and 17,159 representative controls, all of European ancestry, genotyped on Illumina high-density SNP arrays and collected from four separate studies within North America, Europe, and Australia. Meta-analysis was conducted using the fixed-effects inverse variance-weighting approach. We used the standard genome-wide significance threshold of 5×10⁻⁸ for this study. We also conducted an association analysis following re-weighting of loci using an approach that investigates annotation enrichment among the genome-wide significant loci. The entire GWAS data set was also analyzed using bioinformatics approaches, including functional annotation databases as well as gene-based and pathway-based methods, in order to identify pathophysiologically relevant cellular pathways.

Findings: We identified eight new risk loci associated with BE and EA, within or near the CFTR (rs17451754, P=4.8×10⁻¹⁰), MSRA (rs17749155, P=5.2×10⁻¹⁰), BLK (rs10108511, P=2.1×10⁻⁹), KHDRBS2 (rs62423175, P=3.0×10⁻⁹), TPPP/CEP72 (rs9918259, P=3.2×10⁻⁹), TMOD1 (rs7852462, P=1.5×10⁻⁸), SATB2 (rs139606545, P=2.0×10⁻⁸), and HTR3C/ABCC5 (rs9823696, P=1.6×10⁻⁸) genes. A further novel risk locus at LPA (rs12207195, posterior probability=0.925) was identified after re-weighting using significantly enriched annotations. This study thereby doubled the number of known risk loci. The strongest disease pathways identified (P<10⁻⁶) belong to muscle cell differentiation and to mesenchyme development/differentiation, which fit with current pathophysiological BE/EA concepts. To our knowledge, this study identified for the first time an EA-specific association (rs9823696, P=1.6×10⁻⁸) near HTR3C/ABCC5 which is independent of BE development (P=0.45).

Interpretation: The identified disease loci and pathways reveal new insights into the etiology of BE and EA. Furthermore, the EA-specific association at HTR3C/ABCC5 may constitute a novel genetic marker for predicting the transition from BE to EA. Mutations in CFTR, one of the new risk loci identified in this study, cause cystic fibrosis (CF), the most common recessive disorder in Europeans. Gastroesophageal reflux (GER) belongs to the phenotypic CF spectrum and represents the main risk factor for BE/EA. Thus, the CFTR locus may trigger a common GER-mediated pathophysiology.
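The fixed-effects inverse variance-weighted combination referred to in the Methods is the standard one:

```latex
% Standard fixed-effects inverse-variance weighting, as referenced above.
\begin{align}
  w_i = \frac{1}{\mathrm{SE}_i^{2}}, \qquad
  \hat{\beta}_{\text{meta}} = \frac{\sum_i w_i \hat{\beta}_i}{\sum_i w_i}, \qquad
  \mathrm{SE}_{\text{meta}} = \frac{1}{\sqrt{\sum_i w_i}},
\end{align}
where $\hat{\beta}_i$ and $\mathrm{SE}_i$ are the per-study effect estimate and its standard
error; loci with a combined $P < 5\times10^{-8}$ are declared genome-wide significant.
```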

Relevance:

100.00%

Publisher:

Abstract:

Graph analytics is an important and computationally demanding class of data analytics. It is essential to balance scalability, ease-of-use and high performance in large-scale graph analytics. As such, it is necessary to hide the complexity of parallelism, data distribution and memory locality behind an abstract interface. The aim of this work is to build a NUMA-aware, scalable graph analytics framework that does not demand significant parallel programming experience.
The realization of such a system faces two key problems:
(i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; (ii) how to efficiently apply graph partitioning in order to create separate and largely independent work items that can be distributed among threads (illustrated conceptually below).
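A conceptual sketch of problem (ii): split the vertex ID range into contiguous chunks, one per NUMA domain, and assign each edge to the domain owning its destination vertex so that vertex updates stay local. Real NUMA placement also requires runtime support (e.g. first-touch allocation or libnuma); the Python below only illustrates the partitioning logic and is not the framework described in this work.

```python
# Hedged conceptual sketch of graph partitioning across NUMA domains; it illustrates
# the partitioning logic only, without any actual NUMA memory placement.
from collections import defaultdict

def partition_vertices(num_vertices: int, num_domains: int):
    """Split the vertex ID range into near-equal contiguous chunks, one per domain."""
    size, rem = divmod(num_vertices, num_domains)
    bounds, start = [], 0
    for d in range(num_domains):
        end = start + size + (1 if d < rem else 0)
        bounds.append((start, end))
        start = end
    return bounds

def partition_edges(edges, bounds):
    """Assign each edge to the domain that owns its destination vertex,
    so updates to a vertex stay within one domain's memory."""
    owner = lambda v: next(d for d, (lo, hi) in enumerate(bounds) if lo <= v < hi)
    per_domain = defaultdict(list)
    for src, dst in edges:
        per_domain[owner(dst)].append((src, dst))
    return per_domain

bounds = partition_vertices(10, num_domains=2)        # [(0, 5), (5, 10)]
edges = [(0, 7), (3, 2), (8, 9), (6, 1)]
print(bounds, dict(partition_edges(edges, bounds)))
```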

Relevance:

100.00%

Publisher:

Abstract:

The European Union continues to exert a large influence on the direction of member states' energy policy. The 2020 targets for renewable energy integration have had a significant impact on the operation of current power systems, forcing a rapid change from fossil-fuel-dominated systems to those with high levels of renewable power. Additionally, the overarching aim of an internal energy market throughout Europe has placed, and will continue to place, importance on multi-jurisdictional co-operation regarding energy supply. Combining these renewable energy and multi-jurisdictional supply goals results in a complicated multi-vector energy system, where understanding the interactions between fossil fuels, renewable energy, interconnection and economic power system operation is increasingly important. This paper provides a novel and systematic methodology to fully understand the changing dynamics of interconnected energy systems from a gas and power perspective. A fully realistic unit commitment and economic dispatch model of the 2030 power systems in Great Britain and Ireland, combined with a representative gas transmission energy flow model, is developed. The importance of multi-jurisdictional integrated energy system operation in one of the most strategically important renewable energy regions is demonstrated.
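As a toy illustration of the economic-dispatch calculation that sits inside such a unit commitment model, the sketch below dispatches generators in merit order until demand is met; all generator data and the demand level are invented and do not correspond to the GB/Ireland systems modelled in the paper.

```python
# Hedged toy illustration of merit-order economic dispatch: cheapest units are
# dispatched first until demand is met, with wind treated as zero marginal cost.
# All generator data and the demand level are invented for illustration.

generators = [  # (name, marginal cost [EUR/MWh], capacity [MW]) - assumed values
    ("wind",        0.0, 1200.0),
    ("ccgt_gas",   55.0, 2000.0),
    ("ocgt_gas",   95.0,  500.0),
    ("oil_peaker", 160.0,  300.0),
]

def merit_order_dispatch(demand_mw: float):
    """Fill demand from the cheapest units upward; return per-unit output and total cost."""
    dispatch, remaining, cost = {}, demand_mw, 0.0
    for name, price, capacity in sorted(generators, key=lambda g: g[1]):
        output = min(capacity, remaining)
        dispatch[name], remaining = output, remaining - output
        cost += output * price
        if remaining <= 0:
            break
    return dispatch, cost

dispatch, cost = merit_order_dispatch(2800.0)
print(dispatch)                          # wind plus gas cover the load in this toy case
print(f"total cost: EUR {cost:,.0f}/h")
```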

Relevance:

100.00%

Publisher:

Abstract:

Large-scale multiple-input multiple-output (MIMO) communication systems can bring substantial improvements in spectral efficiency and/or energy efficiency, owing to their excess degrees of freedom and huge array gain. However, large-scale MIMO is expected to be deployed with lower-cost radio frequency (RF) components, which are particularly prone to hardware impairments. Unfortunately, compensation schemes cannot remove the impact of hardware impairments completely, so a certain amount of residual impairment always exists. In this paper, we investigate the impact of residual transmit RF impairments (RTRI) on the spectral and energy efficiency of training-based point-to-point large-scale MIMO systems, and seek to determine the optimal training length and number of antennas that maximize the energy efficiency. We derive deterministic equivalents of the signal-to-interference-plus-noise ratio (SINR) with zero-forcing (ZF) receivers, as well as the corresponding spectral and energy efficiency, which are shown to be accurate even for small numbers of antennas. Through an iterative sequential optimization, we find that the optimal training length of systems with RTRI can be smaller than that of systems with ideal hardware in the moderate SNR regime, but larger in the high SNR regime. Moreover, we observe that RTRI can significantly decrease the optimal number of transmit and receive antennas.
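The training-length trade-off described above can be illustrated with a toy model: longer pilot sequences improve the channel estimate (and hence the effective SINR) but consume a larger fraction of the coherence block. The SINR expression below is a generic estimation-quality proxy, not the paper's deterministic equivalents, and all parameter values are assumptions.

```python
# Hedged sketch of the training-length trade-off: net spectral efficiency is
# (1 - tau/T) * log2(1 + SINR_eff(tau)), which has an interior optimum in tau.
# The effective-SINR model and parameter values below are assumptions for illustration.
import numpy as np

T = 200            # coherence block length in symbols (assumed)
snr = 10.0         # nominal SNR, linear scale (assumed)

def net_spectral_efficiency(tau: int) -> float:
    est_quality = tau * snr / (1.0 + tau * snr)     # crude channel-estimate quality proxy
    sinr_eff = snr * est_quality                    # effective SINR degraded by estimation error
    return (1.0 - tau / T) * np.log2(1.0 + sinr_eff)

taus = np.arange(1, T)
best_tau = taus[np.argmax([net_spectral_efficiency(t) for t in taus])]
print(f"optimal training length (toy model): {best_tau} symbols")
```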

Relevance:

100.00%

Publisher:

Abstract:

Two-dimensional (2D) materials have generated great interest in the last few years as a new toolbox for electronics. This family of materials includes, among others, metallic graphene, semiconducting transition metal dichalcogenides (such as MoS2) and insulating boron nitride. These materials and their heterostructures offer excellent mechanical flexibility, optical transparency and favorable transport properties for realizing electronic, sensing and optical systems on arbitrary surfaces. In this work, we develop several etch-stop-layer technologies that allow the fabrication of complex 2D devices, and we present for the first time the large-scale integration of graphene with molybdenum disulfide (MoS2), both grown using the fully scalable CVD technique. Transistor devices and logic circuits with MoS2 channels and graphene as contacts and interconnects are constructed and show high performance. In addition, the graphene/MoS2 heterojunction contact has been systematically compared with MoS2-metal junctions experimentally and studied using density functional theory. The tunability of the graphene work function significantly improves the ohmic contact to MoS2. These high-performance large-scale devices and circuits based on 2D heterostructures pave the way for practical flexible transparent electronics in the future. The authors acknowledge financial support from the Office of Naval Research (ONR) Young Investigator Program, the ONR GATE MURI program, and the Army Research Laboratory. This research has made use of the MI.