982 results for: robust atomic distributed amorphous
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence but is also used in many other application areas, such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data to target future advertisement campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and to the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
Abstract:
An incidence matrix analysis is used to model a three-dimensional network consisting of resistive and capacitive elements distributed across several interconnected layers. A systematic methodology for deriving a descriptor representation of the network with random allocation of the resistors and capacitors is proposed. Using a transformation of the descriptor representation into standard state-space form, amplitude and phase admittance responses of three-dimensional random RC networks are obtained. Such networks display an emergent behavior with a characteristic Jonscher-like response over a wide range of frequencies. A model approximation study of these networks is performed to infer the admittance response using integral and fractional order models. It was found that a fractional order model with only seven parameters can accurately describe the responses of networks composed of more than 70 nodes and 200 branches with 100 resistors and 100 capacitors. The proposed analysis can be used to model charge migration in amorphous materials, which may be associated with specific macroscopic or microscopic scale fractal geometrical structures in composites displaying a viscoelastic electromechanical response, as well as to model the collective responses of processes governed by random events described using statistical mechanics.
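The incidence-matrix construction lends itself to a compact numerical sketch. The toy network below (20 nodes, 60 branches, and the component value ranges are all invented for illustration, far smaller than the 70-node/200-branch networks in the study) evaluates the driving-point admittance of a random RC network across frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_branches = 20, 60

# incidence matrix: the first n_nodes-1 branches form a spanning chain so
# the network is connected; the remaining branches join random node pairs
A = np.zeros((n_branches, n_nodes))
for b in range(n_branches):
    if b < n_nodes - 1:
        i, j = b, b + 1
    else:
        i, j = rng.choice(n_nodes, size=2, replace=False)
    A[b, i], A[b, j] = 1.0, -1.0

# random allocation: half the branches are resistors, half are capacitors
is_cap = rng.permutation(n_branches) < n_branches // 2
R = rng.uniform(1e3, 1e5, n_branches)    # ohms
C = rng.uniform(1e-9, 1e-7, n_branches)  # farads

def input_admittance(omega):
    y = np.where(is_cap, 1j * omega * C, 1.0 / R)  # branch admittances
    Y = A.T @ np.diag(y) @ A                       # node admittance matrix
    Yr = Y[:-1, :-1]                               # ground the last node
    e = np.zeros(n_nodes - 1, complex)
    e[0] = 1.0                                     # unit current into node 0
    v = np.linalg.solve(Yr, e)                     # resulting node voltages
    return 1.0 / v[0]                              # driving-point admittance

omegas = np.logspace(2, 7, 40)
mag = np.array([abs(input_admittance(w)) for w in omegas])
```

For an RC one-port the admittance magnitude rises with frequency; a Jonscher-like response would appear as an intermediate power-law region in `mag` versus `omegas`.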
Abstract:
Climate models consistently predict a strengthened Brewer–Dobson circulation in response to greenhouse gas (GHG)-induced climate change. Although the predicted circulation changes are clearly the result of changes in stratospheric wave drag, the mechanism behind the wave-drag changes remains unclear. Here, simulations from a chemistry–climate model are analyzed to show that the changes in resolved wave drag are largely explainable in terms of a simple and robust dynamical mechanism, namely changes in the location of critical layers within the subtropical lower stratosphere, which are known from observations to control the spatial distribution of Rossby wave breaking. In particular, the strengthening of the upper flanks of the subtropical jets that is robustly expected from GHG-induced tropospheric warming pushes the critical layers (and the associated regions of wave drag) upward, allowing more wave activity to penetrate into the subtropical lower stratosphere. Because the subtropics represent the critical region for wave driving of the Brewer–Dobson circulation, the circulation is thereby strengthened. Transient planetary-scale waves and synoptic-scale waves generated by baroclinic instability are both found to play a crucial role in this process. Changes in stationary planetary wave drag are not so important because they largely occur away from subtropical latitudes.
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
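The Dinkelbach-type procedure can be sketched on a generic linear fractional program. The coefficient vectors and the simplex feasible set below are made-up stand-ins, not the actual PEP expressions of the paper:

```python
import numpy as np

# illustrative linear fractional program: maximize (c @ x) / (d @ x)
# over the simplex {x >= 0, sum(x) = 1}; c and d are invented coefficients
c = np.array([3.0, 1.0, 2.0])
d = np.array([2.0, 1.0, 4.0])

lam = 0.0
for _ in range(100):
    # Dinkelbach subproblem: maximize c@x - lam * d@x; a linear objective
    # on a simplex attains its maximum at a vertex
    x = np.zeros_like(c)
    x[int(np.argmax(c - lam * d))] = 1.0
    new_lam = (c @ x) / (d @ x)
    if abs(new_lam - lam) < 1e-12:
        break                         # ratio can no longer be improved
    lam = new_lam
# the converged lam equals the maximum ratio max_i c_i / d_i
```

Each iteration replaces the ratio objective with a linear one parameterised by the current ratio value, so convergence is typically very fast; here it takes two iterations.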
Abstract:
Affymetrix GeneChip® arrays are used widely to study transcriptional changes in response to developmental and environmental stimuli. GeneChip® arrays comprise multiple 25-mer oligonucleotide probes per gene and retain certain advantages over direct sequencing. For plants, there are several public GeneChip® arrays whose probes are localised primarily in 3′ exons. Plant whole-transcript (WT) GeneChip® arrays are not yet publicly available, although WT resolution is needed to study complex crop genomes such as Brassica, which are typified by segmental duplications containing paralogous genes and/or allopolyploidy. Available sequence data were sampled from the Brassica A and C genomes, and 142,997 gene models identified. The assembled gene models were then used to establish a comprehensive public WT exon array for transcriptomics studies. The Affymetrix GeneChip® Brassica Exon 1.0 ST Array is a 5 µm feature size array, containing 2.4 million 25-base oligonucleotide probes representing 135,201 gene models, with 15 probes per gene distributed among exons. Discrimination of the gene models was based on an E-value cut-off of 1E-5, with ≤ 98% sequence identity. The 135k Brassica Exon Array was validated by quantifying transcriptome differences between leaf and root tissue from a reference Brassica rapa line (R-o-18), and categorisation by Gene Ontologies (GO) based on gene orthology with Arabidopsis thaliana. Technical validation involved comparison of the exon array with a 60-mer array platform using the same starting RNA samples. The 135k Brassica Exon Array is a robust platform. All data relating to the array design and probe identities are available in the public domain and are curated within the BrassEnsembl genome viewer at http://www.brassica.info/BrassEnsembl/index.html.
Abstract:
In this communication, we describe a new method that has enabled the first patterning of human neurons (derived from the human teratocarcinoma cell line, hNT) on parylene-C/silicon dioxide substrates. We reveal the details of the nanofabrication processes, cell differentiation and culturing protocols necessary to successfully pattern hNT neurons, each of which is a key aspect of this new method. The ability to pattern human neurons on a silicon chip using an accessible cell line and robust patterning technology is of widespread value. A combined technology such as this will thus facilitate the detailed study of the pathological human brain at both the single-cell and network level.
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes that of centrality as defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as being either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
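One Katz-style construction of dynamic communicability takes a product of resolvents over a time-ordered sequence of adjacency matrices; row sums then rank vertices as "broadcasters" and column sums as "receivers". The three-node, two-snapshot sequence below is a toy illustration, not data from the study:

```python
import numpy as np

# two time-ordered adjacency snapshots of a three-node network
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)  # edge 0-1 at time 1
A2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], float)  # edge 0-2 at time 2
a = 0.3                       # downweighting parameter, a < 1/spectral radius

# dynamic communicability matrix: Q = (I - a*A1)^(-1) (I - a*A2)^(-1),
# counting time-respecting walks with longer walks downweighted by powers of a
I = np.eye(3)
Q = np.linalg.solve(I - a * A1, I) @ np.linalg.solve(I - a * A2, I)

broadcast = Q.sum(axis=1)     # influence as a sender of sequenced activity
receive = Q.sum(axis=0)       # influence as a receiver
```

Because walks must respect time ordering (the 0–1 edge precedes the 0–2 edge), node 1 can influence node 2 through node 0 but not vice versa, so the broadcast and receive scores are asymmetric. A knockout study would remove the top-ranked vertex, recompute, and repeat.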
Abstract:
The electronic structure and oxidation state of atomic Au adsorbed on a perfect CeO2(111) surface have been investigated in detail by means of periodic density functional theory-based calculations, using the LDA+U and GGA+U potentials for a broad range of U values, complemented with calculations employing the HSE06 hybrid functional. In addition, the effects of the lattice parameter a0 and of the starting point for the geometry optimization have also been analyzed. From the present results we suggest that LDA+U, GGA+U, and HSE06 density functional calculations are not conclusive regarding the oxidation state of single Au atoms on CeO2(111), and that the final picture strongly depends on the method chosen and on the construction of the surface model. In some cases we have been able to locate two well-defined states which are close in energy but have very different electronic structures and local geometries, one with Au fully oxidized and one with neutral Au. The energy difference between the two states is typically within the limits of the accuracy of the present exchange-correlation potentials, and therefore a clear lowest-energy state cannot be identified. These results suggest the possibility of a dynamic distribution of Au0 and Au+ atomic species at the regular sites of the CeO2(111) surface.
Abstract:
A Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma-distributed. The computation is carried out using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation, when it exists, may lead to biased estimates.
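The hierarchical variance treatment can be illustrated with a single Gibbs-style update of per-observation variance inflators. The toy residuals, the current variance draw, and the degrees-of-freedom value are all invented, and the parameterisation below follows the common Geweke (1993) scale-mixture form, so treat the details as an assumption rather than the paper's exact sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression residuals whose scale jumps from 1 to 3 halfway through
n = 500
e = rng.standard_normal(n) * np.where(np.arange(n) < 250, 1.0, 3.0)

sigma2 = 1.0   # current draw of the common variance
nu = 5.0       # degrees of freedom of the hierarchical prior

# scale-mixture prior nu / lambda_i ~ chi^2_nu implies the conditional
# posterior (nu + e_i^2 / sigma2) / lambda_i ~ chi^2_(nu + 1), so one
# augmented Gibbs draw of the variance inflators is:
lam = (nu + e**2 / sigma2) / rng.chisquare(nu + 1, size=n)
```

Observations with large residuals receive larger variance inflators on average, which is what makes the resulting estimator robust to heteroscedastic errors.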
Abstract:
The aetiology of breast cancer is multifactorial. While there are known genetic predispositions to the disease, it is probable that environmental factors are also involved. Recent research has demonstrated a regionally specific distribution of aluminium in breast tissue mastectomies, while other work has suggested mechanisms whereby breast tissue aluminium might contribute towards the aetiology of breast cancer. We sought to develop microwave digestion combined with a new form of graphite furnace atomic absorption spectrometry as a precise, accurate and reproducible method for the measurement of aluminium in breast tissue biopsies. We have used this method to test the thesis that there is a regional distribution of aluminium across the breast in women with breast cancer. Microwave digestion of whole breast tissue samples resulted in clear, homogeneous digests suitable for the determination of aluminium by graphite furnace atomic absorption spectrometry. The instrument detection limit for the method was 0.48 μg/L. Method blanks were used to estimate background levels of contamination of 14.80 μg/L. The mean concentration of aluminium across all tissues was 0.39 μg Al/g tissue dry wt. There were no statistically significant regionally specific differences in the content of aluminium. We have developed a robust method for the precise and accurate measurement of aluminium in human breast tissue. There are very few such data currently available in the scientific literature, and they will add substantially to our understanding of any putative role of aluminium in breast cancer. While we did not observe any statistically significant differences in aluminium content across the breast, it has to be emphasised that herein we measured whole breast tissue and not defatted tissue, where such a distribution was previously noted.
We are very confident that the method developed herein could now be used to provide accurate and reproducible data on the aluminium content in defatted tissue and oil from such tissues and thereby contribute towards our knowledge on aluminium and any role in breast cancer.
Abstract:
A new model has been developed for assessing multiple sources of nitrogen in catchments. The model (INCA) is process based and uses reaction kinetic equations to simulate the principal mechanisms operating. The model allows for plant uptake, surface and sub-surface pathways, and can simulate up to six land uses simultaneously. The model can be applied to catchments as a semi-distributed simulation and has an inbuilt multi-reach structure for river systems. Sources of nitrogen can be from atmospheric deposition, from the terrestrial environment (e.g. agriculture, leakage from forest systems, etc.), from urban areas, or from direct discharges via sewage or intensive farm units. The model is a daily simulation model and can provide information in the form of time series at key sites, as profiles down river systems, or as statistical distributions. The process model is described here, and in a companion paper the model is applied to the River Tywi catchment in South Wales and the Great Ouse in Bedfordshire.
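The reaction-kinetic flavour of such a model can be illustrated with a toy daily mass balance for a single land-use class. The deposition input and the first-order uptake and leaching rates below are invented for illustration; they are not INCA's calibrated parameters:

```python
# toy daily nitrate mass balance for one land-use class (kg N / ha)
deposition = 0.05   # atmospheric input per day
k_uptake = 0.02     # first-order plant uptake rate (1/day)
k_leach = 0.01      # first-order leaching rate to sub-surface flow (1/day)

store = 10.0        # initial soil nitrate store
series = []
for day in range(365):
    uptake = k_uptake * store
    leach = k_leach * store
    store += deposition - uptake - leach   # daily kinetic update
    series.append(store)

# the store relaxes toward the steady state deposition / (k_uptake + k_leach)
steady = deposition / (k_uptake + k_leach)
```

INCA's actual formulation tracks many more stores, pathways and land-use classes than this single balance, and routes the leached flux through a multi-reach river structure.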
Abstract:
In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts, based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian-based test implies that certain forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.
Abstract:
This paper presents a neuroscience-inspired, information-theoretic approach to motion segmentation. Robust motion segmentation represents a fundamental first stage in many surveillance tasks. Widely adopted individual segmentation approaches are challenged in different ways by imagery exhibiting a wide range of environmental variation and irrelevant motion; as an alternative, we present a new biologically inspired approach that computes the multivariate mutual information between multiple complementary motion segmentation outputs. Performance evaluation across a range of datasets and against competing segmentation methods demonstrates robust performance.
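One concrete multivariate generalisation of mutual information, the total correlation, can be sketched for binary segmentation masks. The synthetic masks and noise rates below are invented, and the paper's actual information measure may differ:

```python
import numpy as np

def entropy(counts):
    """Shannon entropy in bits of a histogram of counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)

# three synthetic binary foreground masks: noisy copies of a latent
# "real motion" signal, standing in for complementary detector outputs
truth = rng.random(10000) < 0.2
masks = [truth ^ (rng.random(10000) < 0.05) for _ in range(3)]

# total correlation: TC = sum_i H(X_i) - H(X_1, ..., X_n)
marginals = sum(entropy(np.bincount(m.astype(int))) for m in masks)
code = sum(m.astype(int) << k for k, m in enumerate(masks))  # joint state 0..7
joint = entropy(np.bincount(code, minlength=8))
tc = marginals - joint
```

Independent masks would give a total correlation near zero, while masks that agree on the same underlying motion give a strictly positive value, which is the kind of agreement signal a fusion stage can exploit.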