930 results for "Large detector systems for particle and astroparticle physics"


Relevance: 100.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-based matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, since it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
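The extended algorithm in the abstract builds on the classical orthogonal decomposition of a regressor matrix. As a minimal sketch (not the paper's extended rule-base algorithm), the following Python/NumPy code shows a modified Gram-Schmidt decomposition P = W A, with W column-orthogonal and A unit upper triangular, and how the least-squares parameters are recovered from the orthogonal coefficients:

```python
import numpy as np

def modified_gram_schmidt(P):
    """Decompose P = W @ A with W column-orthogonal and
    A unit upper triangular (modified Gram-Schmidt, no normalization)."""
    n, m = P.shape
    W = P.astype(float).copy()
    A = np.eye(m)
    for k in range(m):
        for j in range(k + 1, m):
            A[k, j] = (W[:, j] @ W[:, k]) / (W[:, k] @ W[:, k])
            W[:, j] -= A[k, j] * W[:, k]
    return W, A

# Least-squares parameter estimation via the decomposition: y = P @ theta
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = P @ theta_true
W, A = modified_gram_schmidt(P)
g = (W.T @ y) / np.einsum('ij,ij->j', W, W)  # coefficients in the orthogonal basis
theta = np.linalg.solve(A, g)                # back-substitute to original parameters
```

In the paper's setting the columns of P would be weighted by the fuzzy membership functions of each rule; here they are plain random regressors for illustration.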

Abstract:

In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI, these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system, MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain (FDTD) method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.
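To give a flavour of the second application's numerical core, here is a minimal 1-D FDTD (Yee-scheme) leapfrog update in Python. This is an illustrative sketch in normalized units, not the paper's Java or C code:

```python
import numpy as np

# Minimal 1-D FDTD sketch: E and H live on a staggered grid and are
# updated leapfrog-style; c is the Courant number (<= 1 for stability).
nx, nt = 200, 300
E = np.zeros(nx)
H = np.zeros(nx - 1)
c = 0.5

for t in range(nt):
    H += c * np.diff(E)          # update H from the spatial difference of E
    E[1:-1] += c * np.diff(H)    # update E from the spatial difference of H
    E[nx // 2] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source
```

The parallel versions in the paper would decompose the grid across processes and exchange boundary field values each step via MPJ Express or MPI.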

Abstract:

The mutual influence of surface geometry (e.g. lattice parameters, morphology) and electronic structure is discussed for Cu-Ni bimetallic (111) surfaces. It is found that on flat surfaces the electronic d-states of the adlayer experience very little influence from the substrate electronic structure, owing to the large separation of their binding energies and the close match of the Cu and Ni lattice constants. Using carbon monoxide and benzene as probe molecules, it is found that in most cases the reactivity of Cu or Ni adlayers is very similar to that of the corresponding (111) single crystal surfaces. Exceptions are the adsorption of CO on submonolayers of Cu on Ni(111) and the dissociation of benzene on Ni/Cu(111), which is very different from that on Ni(111). These differences are related to geometric factors influencing the adsorption on these surfaces.

Abstract:

New ways of combining observations with numerical models are discussed in which the size of the state space can be very large, and the model can be highly nonlinear. Also the observations of the system can be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring almost equal weights we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
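The "standard Particle Filter" the abstract contrasts with is the bootstrap filter: propagate an ensemble, weight by the observation likelihood, resample. A toy 1-D sketch (not the authors' proposal-steered, equal-weight scheme) looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bootstrap particle filter for a 1-D random walk observed with noise.
n_particles, n_steps = 500, 20
x_true = 0.0
particles = rng.normal(0.0, 1.0, n_particles)

for _ in range(n_steps):
    x_true += rng.normal(0.0, 0.1)                   # true state evolves
    y = x_true + rng.normal(0.0, 0.2)                # noisy observation
    particles += rng.normal(0.0, 0.1, n_particles)   # propagate ensemble (pure Monte Carlo)
    w = np.exp(-0.5 * ((y - particles) / 0.2) ** 2)  # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)  # resample by weight
    particles = particles[idx]

estimate = particles.mean()
```

In high dimensions the weights of such a filter collapse onto a few particles, which is exactly the inefficiency the abstract's steered approach is designed to avoid.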

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
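Epidemic (gossip) protocols replace the global reduction with repeated pairwise exchanges between random peers. The following toy sketch shows the underlying averaging primitive, where every node converges to the global mean without a coordinator; in an EpidemicK-Means-style algorithm the gossiped values would be per-cluster sums and counts rather than a single scalar. This is an illustration of the gossip principle, not the authors' protocol:

```python
import numpy as np

rng = np.random.default_rng(2)

# Gossip averaging: each round, two random nodes exchange values and
# keep the average. All nodes converge to the global mean with no
# global communication, and the protocol tolerates lost rounds.
n_nodes, rounds = 20, 500
values = rng.normal(10.0, 5.0, n_nodes)
target = values.mean()

for _ in range(rounds):
    i, j = rng.choice(n_nodes, 2, replace=False)
    avg = (values[i] + values[j]) / 2.0  # pairwise exchange
    values[i] = values[j] = avg
```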

Abstract:

Increased use of technology is necessary for industrial control systems to maintain and monitor industrial, infrastructural, or environmental processes. The need to secure the system and identify threats to it is equally critical. Securing Critical Infrastructures and Critical Control Systems: Approaches for Threat Protection provides a full and detailed understanding of the vulnerabilities and security threats that exist within an industrial control system. This collection of research defines and analyzes the technical, procedural, and managerial responses to securing these systems.

Abstract:

A climatology of cyclones with a focus on their relation to wind storm tracks in the Mediterranean region (MR) is presented. Trends in the frequency of cyclones and wind storms, as well as variations associated with the North Atlantic Oscillation (NAO), the East Atlantic/West Russian (EAWR) and the Scandinavian variability pattern (SCAND) are discussed. The study is based on the ERA40 reanalysis dataset. Wind storm tracks are identified by tracking clusters of adjacent grid boxes characterised by extremely high local wind speeds. The wind track is assigned to a cyclone track independently identified with an objective scheme. Areas with high wind activity, quantified by extreme wind tracks, are typically located south of the Gulf of Genoa, south of Cyprus, southeast of Sicily and west of the Iberian Peninsula. About 69% of the wind storms are caused by cyclones located in the Mediterranean region, while the remaining 31% can be attributed to North Atlantic or Northern European cyclones. The North Atlantic Oscillation, the East Atlantic/West Russian pattern and the Scandinavian pattern all influence the number and spatial distribution of wind-inducing cyclones and wind events in the MR. The strongest signals exist for the NAO and the EAWR pattern, which are both associated with an increase in the number of organised strong wind events in the eastern MR during their positive phase. On the other hand, the storm numbers decrease over the western MR for the positive phase of the NAO and over the central MR during the positive phase of the EAWR pattern. The positive phase of the Scandinavian pattern is associated with a decrease in the number of winter wind storms over most of the MR. A third of the trends in the number of wind storms and wind-producing cyclones during the winter season of the ERA40 period may be attributed to the variability of the North Atlantic Oscillation.

Abstract:

The goal of this article is to make an epistemological and theoretical contribution to the nascent field of third language (L3) acquisition and show how examining L3 development can offer a unique view into longstanding debates within L2 acquisition theory. We offer the Phonological Permeability Hypothesis (PPH), which maintains that examining the development of an L3/Ln phonological system and its effects on a previously acquired L2 phonological system can inform contemporary debates regarding the mental constitution of post-critical-period adult phonological acquisition. We discuss the predictions and functional significance of the PPH for adult SLA and multilingualism studies, detailing a methodology that examines the effects of acquiring Brazilian Portuguese on the Spanish phonological systems learned before and after the so-called critical period (i.e., comparing simultaneous versus successive adult English-Spanish bilinguals learning Brazilian Portuguese as an L3).

Abstract:

Mathematics in Defence 2011 Abstract. We review transreal arithmetic and present transcomplex arithmetic. These arithmetics have no exceptions. This leads to incremental improvements in computer hardware and software. For example, the range of real numbers, encoded by floating-point bits, is doubled when all of the Not-a-Number (NaN) states, in IEEE 754 arithmetic, are replaced with real numbers. The task of programming such systems is simplified and made safer by discarding the unordered relational operator, leaving only the operators less-than, equal-to, and greater-than. The advantages of using a transarithmetic in a computation, or transcomputation as we prefer to call it, may be had by making small changes to compilers and processor designs. However, radical change is possible by exploiting the reliability of transcomputations to make pipelined dataflow machines with a large number of cores. Our initial designs are for a machine with of order one million cores. Such a machine can complete the execution of multiple in-line programs each clock tick.
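The "no exceptions" property can be illustrated with a totalized division operator. The sketch below is a hypothetical helper, not the authors' definition: it follows the transreal convention that x/0 is +infinity for x > 0, -infinity for x < 0, and nullity for 0/0, using IEEE NaN merely as a stand-in for nullity (true transreal nullity, unlike NaN, equals itself):

```python
import math

def trans_div(a, b):
    """Total division in the spirit of transreal arithmetic:
    never raises, every input pair has a defined result."""
    if b != 0:
        return a / b
    if a > 0:
        return math.inf   # x/0 = +infinity for positive x
    if a < 0:
        return -math.inf  # x/0 = -infinity for negative x
    return math.nan       # 0/0 = nullity (NaN used as a stand-in)

# Ordinary division is unchanged; the exceptional cases become values.
```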

Abstract:

Agriculture and food security are key sectors for intervention under climate change. Agricultural production is highly vulnerable even to 2 °C (low-end) predictions for global mean temperatures in 2100, with major implications for rural poverty and for both rural and urban food security. Agriculture also presents untapped opportunities for mitigation, given the large land area under crops and rangeland, and the additional mitigation potential of aquaculture. This paper presents a summary of current knowledge on options to support farmers, particularly smallholder farmers, in achieving food security through agriculture under climate change. Actions towards adaptation fall into two broad overlapping areas: (1) accelerated adaptation to progressive climate change over decadal time scales, for example integrated packages of technology, agronomy and policy options for farmers and food systems, and (2) better management of agricultural risks associated with increasing climate variability and extreme events, for example improved climate information services and safety nets. Maximization of agriculture's mitigation potential will require investments in technological innovation and agricultural intensification linked to increased efficiency of inputs, and creation of incentives and monitoring systems that are inclusive of smallholder farmers. Food systems faced with climate change need urgent, broad-based action in spite of uncertainties.

Abstract:

The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy and industry relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models 'prosumer' agents (i.e., producers and/or consumers of energy) and 'aggregator' agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand-flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile. Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with transport and heat sectors, novel technology adoption and diffusion work, evolution of new smart grid business models, and complex power grid engineering and market interactions.
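The demand-flattening result described above can be illustrated with a deliberately simple toy model. All names and numbers below are invented for illustration; this is not the CASCADE framework, just a sketch of an aggregator scheduling one flexible load per household into the currently least-loaded half-hour slot:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy aggregator sketch: 100 households, 48 half-hour slots.
# Each household has a fixed base-load profile plus one shiftable load;
# the aggregator greedily places each flexible load in the slot with
# the lowest aggregate demand so far, flattening the overall profile.
slots, homes = 48, 100
base = rng.uniform(0.2, 0.5, (homes, slots))   # kW, fixed consumption
flexible = rng.uniform(0.5, 1.5, homes)        # kWh-equivalent shiftable load

demand = base.sum(axis=0)
for load in flexible:
    s = int(np.argmin(demand))  # aggregator picks the cheapest slot
    demand[s] += load

peak_to_mean = demand.max() / demand.mean()
```

A direct price signal broadcast to all households would instead move every flexible load to the same cheap slot at once, creating a new peak, which is the instability the abstract alludes to.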

Abstract:

A plasma source, sustained by the application of a floating high voltage (±15 kV) to parallel-plate electrodes at 50 Hz, has been achieved in a helium/air mixture at atmospheric pressure (P = 105 Pa) contained in a zip-locked plastic package placed in the electrode gap. Some of the physical and antimicrobial properties of this apparatus were established with a view to ascertain its performance as a prototype for the disinfection of fresh produce. The current–voltage (I–V) and charge–voltage (Q–V) characteristics of the system were measured as a function of gap distance d, in the range 3 × 103 ≤ Pd ≤ 1.0 × 104 Pa m. The electrical measurements showed this plasma source to exhibit the characteristic behaviour of a dielectric barrier discharge in the filamentary mode, and its properties could be accurately interpreted by the two-capacitance-in-series model. The power consumed by the discharge and the reduced field strength were found to decrease quadratically from 12.0 W to 4.5 W and linearly from 140 Td to 50 Td, respectively, in the range studied. Emission spectra of the discharge were recorded on a relative intensity scale and the dominant spectral features could be assigned to strong vibrational bands in the second positive (2+) and first negative (1−) systems of N2 and N2+, respectively, with other weak signatures from the NO and OH radicals and the N+, He and O atomic species. Absolute spectral intensities were also recorded and interpreted by comparison with the non-equilibrium synthetic spectra generated by the computer code SPECAIR. At an inter-electrode gap of 0.04 m, this comparison yielded typical values for the electron, vibrational and translational (gas) temperatures of (4980 ± 100) K, (2700 ± 200) K and (300 ± 100) K, respectively, and an electron density of 1.0 × 1017 m−3. A Boltzmann plot also provided a value of (3200 ± 200) K for the vibrational temperature. The antimicrobial efficacy was assessed by studying the resistance of both Escherichia coli K12 and its isogenic mutants in soxR, soxS, oxyR, rpoS and dnaK, selected to identify possible cellular responses and targets related to a 5 min exposure to the active gas in proximity of, but not directly in, the path of the discharge filaments. Both the parent strain and mutant populations were significantly reduced by more than 1.5 log cycles in these conditions, showing the potential of the system. Post-treatment storage studies showed that some transcription regulators and specific genes related to oxidative stress play an important role in the E. coli repair mechanism and that plasma exposure affects specific cell regulator systems.

Abstract:

Understanding how climate change can affect crop-pollinator systems helps predict potential geographical mismatches between a crop and its pollinators, and therefore identify areas vulnerable to loss of pollination services. We examined the distribution of orchard species (apples, pears, plums and other top fruits) and their pollinators in Great Britain, for present and future climatic conditions projected for 2050 under the SRES A1B Emissions Scenario. We used a relative index of pollinator availability as a proxy for pollination service. At present there is a large spatial overlap between orchards and their pollinators, but predictions for 2050 revealed that the most suitable areas for orchards corresponded to low pollinator availability. However, we found that pollinator availability may persist in areas currently used for fruit production, but which are predicted to provide sub-optimal environmental suitability for orchard species in the future. Our results may be used to identify mitigation options to safeguard orchard production against the risk of pollination failure in Great Britain over the next 50 years; for instance choosing fruit tree varieties that are adapted to future climatic conditions, or boosting wild pollinators through improving landscape resources. Our approach can be readily applied to other regions and crop systems, and expanded to include different climatic scenarios.

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm where the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can be either found in real-world distributed applications or induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs.
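The per-iteration global reduction mentioned above is easy to see in code. The sketch below simulates the straightforward parallel formulation serially: each "node" computes local per-cluster sums and counts on its partition, and the step that sums them across partitions is exactly what would be an allreduce in a real distributed run. This is an illustration of the baseline formulation, not the paper's relaxed-communication algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three well-separated 2-D clusters, split across 4 simulated nodes.
k = 3
data = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in (0.0, 3.0, 6.0)])
parts = np.array_split(data, 4)
centroids = data[rng.choice(len(data), k, replace=False)]

for _ in range(10):
    sums = np.zeros((k, 2))
    counts = np.zeros(k)
    for local in parts:  # assignment step, done independently on each node
        dist = np.linalg.norm(local[:, None, :] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for c in range(k):
            sums[c] += local[labels == c].sum(axis=0)
            counts[c] += (labels == c).sum()
    # Global reduction: in a real cluster, sums and counts would be
    # allreduced across all nodes here before updating centroids.
    centroids = sums / np.maximum(counts, 1)[:, None]
```

Because the reduction synchronises every node every iteration, a single slow or failed link stalls the whole computation, which motivates the relaxed formulation proposed in the abstract.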