862 results for large-scale network
Abstract:
Social networking mediated by web sites is a relatively new phenomenon and, as with all technological innovations, there continues to be a period of both technical and social adjustment to fit the services in with people's behaviours, and for people to adjust their practices in the light of the affordances provided by the technology. Social networking benefits strongly from large-scale availability. Users gain greater benefit from social networking services when more of their friends are using them. This applies in social terms, but also in eLearning and professional networks. The network effect provides one explanation for the popularity of internet-based social networking sites (SNS), because the number of connections between people which can be maintained by using them is greatly increased in comparison to the networks available before the internet. The ability of users to determine how much they trust information available to them from contacts within their social network is important in almost all modes of use. As sources of information on a range of topics from academic to shopping advice, the level of trust which a user can put in other nodes is a key aspect of the utility of the system.
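The network effect invoked in this abstract is commonly illustrated by the quadratic growth of possible pairwise connections with the number of users. A minimal sketch of that arithmetic (illustrative only; the function name and figures are not from the abstract):

```python
def possible_connections(n_users: int) -> int:
    """Number of distinct pairwise links in a network of n_users,
    i.e. the quadratic growth underlying the network effect."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the possible connections.
print(possible_connections(100))   # 4950
print(possible_connections(200))   # 19900
```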
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was a Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction for the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing', that is, a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. This is observed intermittently, whereby tracer particles are flushed vertically out from the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passing of large-scale low-momentum regions above the canopy.
Abstract:
Large magnitude explosive eruptions are the result of the rapid and large-scale transport of silicic magma stored in the Earth's crust, but the mechanics of erupting teratonnes of silicic magma remain poorly understood. Here, we demonstrate that the combined effect of local crustal extension and magma chamber overpressure can sustain linear dyke-fed explosive eruptions with mass fluxes in excess of 10^10 kg/s from shallow-seated (4–6 km depth) chambers during moderate extensional stresses. Early eruption column collapse is facilitated, with eruption durations of the order of a few days and intensities at least one order of magnitude greater than the largest eruptions of the 20th century. The conditions explored in this study are one way in which high mass eruption rates can be achieved to feed large explosive eruptions. Our results corroborate geological and volcanological evidence from volcano-tectonic complexes such as the Sierra Madre Occidental (Mexico) and the Taupo Volcanic Zone (New Zealand).
Abstract:
A method has been established for observing the internal structure of the network component of polymer-stabilised liquid crystals. In situ photopolymerisation of a mesogenic diacrylate monomer using ultraviolet light leads to a sparse network (∼1 wt%) within a nematic host. Following polymerisation, the host was removed through dissolution in heptane, revealing the network. In order to observe a cross-section through the network, it was embedded in a resin and then sectioned using an ultramicrotome. However, imaging of the network was not possible due to poor contrast. To improve this, several reagents were used for network staining, but only one was successful: bromine. The use of a Melinex-resin composite for sectioning was also found to be advantageous. Imaging of the network using transmission electron microscopy revealed solid “droplets” of width 0.07–0.20 μm, possessing an open, yet homogeneous structure, with no evidence for any large-scale internal structures.
Abstract:
The work presented in this report is part of the effort to define the landscape state and diversity indicator in the frame of COM (2006) 508 "Development of agri-environmental indicators for monitoring the integration of environmental concerns into the common agricultural policy". The Communication classifies the indicators according to their level of development, which, for the landscape indicator, is "in need of substantial improvements in order to become fully operational". For this reason a full re-definition of the indicator has been carried out, following the initial proposal presented in the frame of the IRENA operation ("Indicator Reporting on the Integration of Environmental Concerns into Agricultural Policy"). The new proposal for the landscape state and diversity indicator is structured in three components: the first concerns the degree of naturalness, the second landscape structure, and the third the societal appreciation of the rural landscape. While the first two components rely on a substantial body of existing literature, the development of the methodology has made evident the need for further analysis of the third component, which is based on a newly proposed top-down approach. This report presents an in-depth analysis of this component of the indicator, and the effort to include a social dimension in large-scale landscape assessment.
Abstract:
We present the updated Holocene section of the Sofular Cave record from the southern Black Sea coast (northern Turkey); an area with considerably different present-day climate compared to that of the neighboring Eastern Mediterranean region. Stalagmite δ13C, growth rates and initial (234U/238U) ratios provide information about hydrological changes above the cave, and prove to be more useful than δ18O for deciphering Holocene climatic variations. Between ∼9.6 and 5.4 ka BP (despite a pause from ∼8.4 to 7.8 ka BP), the Sofular record indicates a remarkable increase in rainfall amount and intensity, in line with other paleoclimate studies in the Eastern Mediterranean. During that period, enhanced summertime insolation either produced much stronger storms in the following fall and winter through high sea surface temperatures, or it invoked a regional summer monsoon circulation and rainfall. We suggest that one or both of these climatic mechanisms led to a coupling of the Black Sea and the Mediterranean rainfall regimes at that time, which can explain the observed proxy signals. However, there are discrepancies among the Eastern Mediterranean records in terms of the timing of this wet period, implying that changes were probably not always occurring through the same mechanism. Nevertheless, the Sofular Cave record does provide hints and raise new questions about the connection between regional and large-scale climates, highlighting the need for a more extensive network of high-quality paleoclimate records to better understand Holocene climate.
Abstract:
In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top-down induction of decision trees; the other is called the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach. However, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harness additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
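The separate and conquer strategy described above can be sketched as a loop that greedily learns one rule for the target class, removes the examples it covers, and repeats. This is an illustrative, serial toy version of Prism-style specialisation, not the paper's parallel framework; the function names and toy dataset are assumptions:

```python
def learn_rule(data, target):
    """Greedily add attribute=value terms that maximise the fraction of
    covered examples belonging to `target` (Prism-style specialisation)."""
    rule, covered = {}, data
    while covered and any(ex["class"] != target for ex in covered):
        best, best_prob = None, -1.0
        for ex in covered:
            for attr, val in ex.items():
                if attr == "class" or attr in rule:
                    continue
                subset = [e for e in covered if e.get(attr) == val]
                prob = sum(e["class"] == target for e in subset) / len(subset)
                if prob > best_prob:
                    best, best_prob = (attr, val), prob
        if best is None:
            break
        rule[best[0]] = best[1]
        covered = [e for e in covered if e.get(best[0]) == best[1]]
    return rule

def separate_and_conquer(data, target):
    """Learn rules for `target` until no positive example remains uncovered."""
    rules, remaining = [], list(data)
    while any(ex["class"] == target for ex in remaining):
        rule = learn_rule(remaining, target)
        rules.append(rule)
        remaining = [e for e in remaining
                     if not all(e.get(a) == v for a, v in rule.items())]
    return rules
```

Parallelisation of this family, as the abstract describes, would distribute the candidate-term evaluation inside `learn_rule` across a network of machines; the serial skeleton above only shows the rule-induction logic itself.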
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data in order to determine if a new patient is likely to respond positively to a particular treatment or not; marketing analysts can use extracted patterns from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
Abstract:
Relating the measurable, large scale, effects of anaesthetic agents to their molecular and cellular targets of action is necessary to better understand the principles by which they affect behavior, as well as enabling the design and evaluation of more effective agents and the better clinical monitoring of existing and future drugs. Volatile and intravenous general anaesthetic agents (GAs) are now known to exert their effects on a variety of protein targets, the most important of which seem to be the neuronal ion channels. It is hence unlikely that anaesthetic effect is the result of a unitary mechanism at the single cell level. However, by altering the behavior of ion channels GAs are believed to change the overall dynamics of distributed networks of neurons. This disruption of regular network activity can be hypothesized to cause the hypnotic and analgesic effects of GAs and may well present more stereotypical characteristics than its underlying microscopic causes. Nevertheless, there have been surprisingly few theories that have attempted to integrate, in a quantitative manner, the empirically well documented alterations in neuronal ion channel behavior with the corresponding macroscopic effects. Here we outline one such approach, and show that a range of well documented effects of anaesthetics on the electroencephalogram (EEG) may be putatively accounted for. In particular we parameterize, on the basis of detailed empirical data, the effects of halogenated volatile ethers (a clinically widely used class of general anaesthetic agent). The resulting model is able to provisionally account for a range of anaesthetically induced EEG phenomena that include EEG slowing, biphasic changes in EEG power, and the dose dependent appearance of anomalous ictal activity, as well as providing a basis for novel approaches to monitoring brain function in both health and disease.
Abstract:
In January 2008, central and southern China experienced persistent low temperatures, freezing rain, and snow. The large-scale conditions associated with the occurrence and development of these snowstorms are examined in order to identify the key synoptic controls leading to this event. Three main factors are identified: 1) the persistent blocking high over Siberia, which remained quasi-stationary around 65°E for 3 weeks, led to advection of dry and cold Siberian air down to central and southern China; 2) a strong persistent southwesterly flow associated with the western Pacific subtropical high led to enhanced moisture advection from the Bay of Bengal into central and southern China; and 3) the deep inversion layer in the lower troposphere associated with the extended snow cover over most of central and southern China. The combination of these three factors is likely responsible for the unusual severity of the event, and hence a long return period.
Abstract:
The large scale urban consumption of energy (LUCY) model simulates all components of anthropogenic heat flux (QF) from the global to individual city scale at 2.5 × 2.5 arc-minute resolution. This includes a database of different working patterns and public holidays, vehicle use and energy consumption in each country. The databases can be edited to include specific diurnal and seasonal vehicle and energy consumption patterns, local holidays and flows of people within a city. If better information about individual cities is available within this (open-source) database, then the accuracy of this model can only improve, providing the community with data ranging from global-scale climate modelling down to the individual city scale in the future. The results show that QF varied widely through the year, through the day, between countries and urban areas. An assessment of the heat emissions estimated revealed that they are reasonably close to those produced by a global model and a number of small-scale city models, so results from LUCY can be used with a degree of confidence. From LUCY, the global mean urban QF has a diurnal range of 0.7–3.6 W m−2, and is greater on weekdays than weekends. Heat release from buildings is the largest contributor (89–96%) to heat emissions globally. Differences between months are greatest in the middle of the day (up to 1 W m−2 at 1 pm). December to February, the coldest months in the Northern Hemisphere, have the highest heat emissions. July and August are at the higher end. The least QF is emitted in May. The highest individual grid cell heat fluxes in urban areas were located in New York (577), Paris (261.5), Tokyo (178), San Francisco (173.6), Vancouver (119) and London (106.7). Copyright © 2010 Royal Meteorological Society
Abstract:
Diurnal warming events between 5 and 7 K, spatially coherent over large areas (∼1000 km), are observed in independent satellite measurements of ocean surface temperature. The majority of the large events occurred in the extra-tropics. Given sufficient heating (from solar radiation), the location and magnitude of these events appear to be primarily determined by large-scale wind patterns. The amplitude of the measured diurnal heating scales inversely with the spatial resolution of the different sensors used in this study. These results indicate that predictions of peak diurnal warming using wind speeds with a 25 km spatial resolution available from satellite sensors and those with 50–100 km resolution from Numerical Weather Prediction models may have underestimated warming. Thus, the use of these winds in modeling diurnal effects will be limited in accuracy by both the temporal and spatial resolution of the wind fields.
Abstract:
Before the advent of genome-wide association studies (GWASs), hundreds of candidate genes for obesity-susceptibility had been identified through a variety of approaches. We examined whether those obesity candidate genes are enriched for associations with body mass index (BMI) compared with non-candidate genes by using data from a large-scale GWAS. A thorough literature search identified 547 candidate genes for obesity-susceptibility based on evidence from animal studies, Mendelian syndromes, linkage studies, genetic association studies and expression studies. Genomic regions were defined to include the genes ±10 kb of flanking sequence around candidate and non-candidate genes. We used summary statistics publicly available from the discovery stage of the genome-wide meta-analysis for BMI performed by the genetic investigation of anthropometric traits consortium in 123 564 individuals. Hypergeometric, rank tail-strength and gene-set enrichment analysis tests were used to test for the enrichment of association in candidate compared with non-candidate genes. The hypergeometric test of enrichment was not significant at the 5% P-value quantile (P = 0.35), but was nominally significant at the 25% quantile (P = 0.015). The rank tail-strength and gene-set enrichment tests were nominally significant for the full set of genes and borderline significant for the subset without SNPs at P < 10^-7. Taken together, the observed evidence for enrichment suggests that the candidate gene approach retains some value. However, the degree of enrichment is small despite the extensive number of candidate genes and the large sample size. Studies that focus on candidate genes have only slightly increased chances of detecting associations, and are likely to miss many true effects in non-candidate genes, at least for obesity-related traits.
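The one-sided hypergeometric enrichment test used above can be sketched with the standard library alone: it asks how surprising it is to see so many associated genes among the candidates if candidates were drawn at random from the genome. The counts in the usage line are invented for illustration and are not the paper's data:

```python
from math import comb

def hypergeom_enrichment_p(hits_in_candidates, n_candidates,
                           total_hits, n_genes):
    """P(X >= hits_in_candidates) when drawing n_candidates genes without
    replacement from n_genes, of which total_hits are trait-associated
    (one-sided hypergeometric test for over-representation)."""
    p = 0.0
    upper = min(n_candidates, total_hits)
    for k in range(hits_in_candidates, upper + 1):
        p += (comb(total_hits, k)
              * comb(n_genes - total_hits, n_candidates - k)
              / comb(n_genes, n_candidates))
    return p

# Hypothetical counts: 30 of 547 candidate genes appear among the 500
# most-associated genes out of 20,000 genes genome-wide.
p = hypergeom_enrichment_p(30, 547, 500, 20000)
```

A small P-value here would indicate that candidate genes carry more associations than expected by chance, which is the enrichment question the abstract addresses.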
Abstract:
So-called ‘radical’ and ‘critical’ pedagogy seems to be everywhere these days on the landscapes of geographical teaching praxis and theory. Part of the remit of radical/critical pedagogy involves a de-centring of the traditional ‘banking’ method of pedagogical praxis. Yet, how do we challenge this ‘banking’ model of knowledge transmission in both a large-class setting and around the topic of commodity geographies where the banking model of information transfer still holds sway? This paper presents a theoretically and pedagogically driven argument, as well as a series of practical teaching ‘techniques’ and tools—mind-mapping and group work—designed to promote ‘deep learning’ and a progressive political potential in a first-year large-scale geography course centred around lectures on the Geographies of Consumption and Material Culture. Here students are not only asked to place themselves within and without the academic materials and other media but are urged to make intimate connections between themselves and their own consumptive acts and the commodity networks in which they are enmeshed. Thus, perhaps pedagogy needs to be emplaced firmly within the realms of research practice rather than as simply the transference of research findings.
Abstract:
Using 1D Vlasov drift-kinetic computer simulations, it is shown that electron trapping in long period standing shear Alfven waves (SAWs) provides an efficient energy sink for wave energy that is much more effective than Landau damping. It is also suggested that the plasma environment of low altitude auroral-zone geomagnetic field lines is more suited to electron acceleration by inertial or kinetic scale Alfven waves. This is due to the self-consistent response of the electron distribution function to SAWs, which must accommodate the low altitude large-scale current system in standing waves. We characterize these effects in terms of the relative magnitude of the wave phase and electron thermal velocities. While particle trapping is shown to be significant across a wide range of plasma temperatures and wave frequencies, we find that electron beam formation in long period waves is more effective in relatively cold plasma.