904 results for 291605 Processor Architectures


Relevance:

10.00%

Publisher:

Abstract:

It is widely recognized that small businesses with fewer than 50 employees make significant contributions to the prosperity of local, regional, and national economies. They are a major source of job creation and a driving force of economic growth in developed countries such as the USA (Headd, 2005; SBA, 2005), the UK (Dixon, Thompson, & McAllister, 2002; SBS, 2005), and Europe (European Commission, 2003), as well as in developing countries such as China (Bo, 2005). This economic potential is further strengthened when firms collaborate with each other, for example through the formation of a supply chain, strategic alliances, or the sharing of information and resources (Horvath, 2001; O’Donnell, Gilmore, Cummins, & Carson, 2001; MacGregor, 2004; Todeva & Knoke, 2005). Owing to the heterogeneity of small businesses, for instance in firm size and business sector, a single e-business solution is unlikely to suit all firms (Dixon et al., 2002; Taylor & Murphy, 2004a); collaboration, however, requires individual firms to adopt standardized, simplified solutions based on open architectures and data design (Horvath, 2001). The purpose of this article is to propose a conceptual e-business framework and a generic e-catalogue that enable small businesses to collaborate through the creation of an e-marketplace. To support this, the study incorporates an analysis of data from 6,000 small businesses located in Greater Manchester, England, within the context of an e-business portal.

Relevance:

10.00%

Publisher:

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also bring a paradigm shift to the development of algorithms for computationally expensive tasks, such as data mining applications. Work on parallel algorithms is, of course, not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve on the state of the art. The second presents a parallelization technique for speeding up decision tree construction by means of thread-level parallelism on shared-memory systems. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to address issues arising in multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning the parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on highly efficient feature selection, describing a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
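
The final contribution's pattern, scoring many random feature subsets independently, parallelizes naturally across cores. The minimal Python sketch below illustrates that general idea with toy data and a stand-in scoring criterion; it is not the algorithm from the paper, and all names in it are hypothetical.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # toy data: 200 samples, 50 features
y = X[:, 3] + 0.5 * X[:, 7] + rng.normal(size=200)   # target depends on features 3 and 7

def score_subset(subset):
    """Toy relevance score: sum of |corr(feature, target)| over the subset."""
    return sum(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset), subset

def random_subsets(n_subsets, size, n_features, seed=1):
    r = np.random.default_rng(seed)
    return [tuple(r.choice(n_features, size=size, replace=False))
            for _ in range(n_subsets)]

if __name__ == "__main__":
    subsets = random_subsets(100, 5, X.shape[1])
    with ProcessPoolExecutor() as pool:   # one independent task per subset
        scored = list(pool.map(score_subset, subsets))
    best_score, best_subset = max(scored)
    print(best_subset, best_score)
```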

Relevance:

10.00%

Publisher:

Abstract:

A parallel processor architecture based on a communicating sequential processor chip, the transputer, is described. The architecture is easily extensible in a linear fashion, enabling separate functions to be included in the controller. To demonstrate the power of the resulting controller, some experimental results are presented comparing PID and full inverse dynamics control on the first three joints of a Puma 560 robot. Also examined are some of the sample-rate issues raised by the asynchronous updating of inertial parameters, and the need for full inverse dynamics at every sample interval is questioned.
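
For context on the control laws being compared, the sketch below shows a generic discrete-time PID controller for a single joint. This is the textbook form with illustrative gains and sample rate, not the transputer implementation described in the paper.

```python
# Generic discrete-time PID controller for one robot joint (textbook form;
# gains and timestep are illustrative, not taken from the paper).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Return the torque command for one sample interval."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One control step at a 1 kHz sample rate (joint angle in radians).
pid = PID(kp=120.0, ki=8.0, kd=15.0, dt=0.001)
torque = pid.update(setpoint=0.5, measured=0.47)
```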

Relevance:

10.00%

Publisher:

Abstract:

Equilibrium phase diagrams are calculated for a selection of two-component block copolymer architectures using self-consistent field theory (SCFT). The topology of the phase diagrams is relatively unaffected by differences in architecture, but the phase boundaries shift significantly in composition. The shifts are consistent with the decomposition of architectures into constituent units as proposed by Gido and coworkers, but there are significant quantitative deviations from this principle in the intermediate-segregation regime. Although the complex phase windows continue to be dominated by the gyroid (G) phase, the regions of the newly discovered Fddd (O^70) phase become appreciable for certain architectures and the perforated-lamellar (PL) phase becomes stable when the complex phase windows shift towards high compositional asymmetry.

Relevance:

10.00%

Publisher:

Abstract:

Both the (5,3) counter and the (2,2,3) counter multiplication techniques are investigated for operation speed and for the viability of the architectures when implemented in a fast bipolar ECL technology. Implementations of the counters in series-gated ECL and in threshold logic are contrasted for speed, noise immunity, and complexity, and are critically compared with the fastest practical design of a full adder. A novel circuit technique that overcomes the need for high fan-in input weights in threshold circuits, through the use of negatively weighted inputs, is presented. The authors conclude that a (2,2,3)-counter-based array multiplier implemented in series-gated ECL should enable a significant increase in speed over conventional full-adder-based array multipliers.
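
Functionally, an (n,k) counter outputs the binary count of the 1s among its n input bits on k output lines. The behavioural Python sketch below models both counters; the (2,2,3) counter is read here as a generalized counter taking two weight-2 bits and two weight-1 bits, which is one common convention and may differ from the paper's usage. This models logic behaviour only, not the ECL circuits.

```python
def counter_5_3(bits):
    """(5,3) counter: count the 1s among 5 input bits, as 3 output bits (MSB first)."""
    assert len(bits) == 5
    s = sum(bits)                                   # 0..5, fits in 3 bits
    return [(s >> 2) & 1, (s >> 1) & 1, s & 1]

def counter_2_2_3(weight2_bits, weight1_bits):
    """(2,2,3) generalized counter, as read here: two weight-2 bits plus two
    weight-1 bits, summed to a 3-bit result (maximum 2*2 + 2*1 = 6)."""
    assert len(weight2_bits) == 2 and len(weight1_bits) == 2
    s = 2 * sum(weight2_bits) + sum(weight1_bits)
    return [(s >> 2) & 1, (s >> 1) & 1, s & 1]

print(counter_5_3([1, 0, 1, 1, 0]))    # 3 ones      -> [0, 1, 1]
print(counter_2_2_3([1, 1], [0, 1]))   # 2*2 + 1 = 5 -> [1, 0, 1]
```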

Relevance:

10.00%

Publisher:

Abstract:

Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present NeMo, a platform for such simulations that achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model, which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
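
The Izhikevich model NeMo builds on is defined by dv/dt = 0.04v^2 + 5v + 140 - u + I and du/dt = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV (Izhikevich, 2003). The NumPy sketch below is a simple forward-Euler illustration of that model for a population of neurons; it is not NeMo's GPU kernel.

```python
import numpy as np

def izhikevich_step(v, u, I, a, b, c, d, dt=1.0):
    """Advance a population of Izhikevich neurons by dt ms (forward Euler)."""
    fired = v >= 30.0                    # spike threshold in mV
    v = np.where(fired, c, v)            # membrane reset after a spike
    u = np.where(fired, u + d, u)        # recovery-variable kick
    dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
    du = a * (b * v - u)
    return v + dt * dv, u + dt * du, fired

# 1000 regular-spiking neurons (parameters from Izhikevich, 2003),
# driven by noisy input current for one simulated second.
n = 1000
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v = np.full(n, -65.0)
u = b * v
for _ in range(1000):
    I = 5.0 * np.random.randn(n)
    v, u, fired = izhikevich_step(v, u, I, a, b, c, d)
```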

Relevance:

10.00%

Publisher:

Abstract:

The surfactant-like peptide (Ala)6(Arg) is found to self-assemble into 3 nm-thick sheets in aqueous solution. Scanning transmission electron microscopy measurements of mass per unit area indicate a layer structure based on antiparallel dimers. At higher concentration the sheets wrap into unprecedented ultrathin helical ribbon and nanotube architectures.

Relevance:

10.00%

Publisher:

Abstract:

The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications in order to make the Cumulative Distribution Function (CDF) of observed, simulated, and forecast river discharge, water level, or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geostatistics this transformation is better known as the Normal-Score Transform. In this paper, some of the problems caused by small sample sizes when applying the NQT in flood forecasting systems are discussed, and a novel way of solving them is outlined, combining extreme value analysis and non-parametric regression methods. The method is illustrated by examples of hydrological streamflow forecasts.
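
A minimal sketch of the forward transform follows, assuming the common Weibull plotting position r/(n+1) for the empirical CDF; the extreme-value and non-parametric extensions proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm, rankdata

def nqt(x):
    """Normal Quantile (Normal-Score) Transform: data -> standard normal scores."""
    n = len(x)
    p = rankdata(x) / (n + 1)   # Weibull plotting position avoids p = 0 or 1
    return norm.ppf(p)

discharge = np.array([12.0, 30.5, 8.2, 55.1, 22.3, 18.7])  # toy flows, m^3/s
z = nqt(discharge)              # approximately N(0, 1) distributed
```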

Relevance:

10.00%

Publisher:

Abstract:

Although the potential to adapt to warmer climate is constrained by genetic trade-offs, our understanding of how selection and mutation shape genetic (co)variances in thermal reaction norms is poor. Using 71 isofemale lines of the fly Sepsis punctum, originating from northern, central, and southern European climates, we tested for divergence in juvenile development rate across latitude at five experimental temperatures. To investigate effects of evolutionary history in different climates on standing genetic variation in reaction norms, we further compared genetic (co)variances between regions. Flies were reared on either high or low food resources to explore the role of energy acquisition in determining genetic trade-offs between different temperatures. Although food level had only weak effects on the strength and sign of genetic correlations, genetic architecture differed significantly between climatic regions, implying that evolution of reaction norms proceeds via different trajectories at high latitude versus low latitude in this system. Accordingly, regional genetic architecture was correlated to region-specific differentiation. Moreover, hot development temperatures were associated with low genetic variance and stronger genetic correlations compared to cooler temperatures. We discuss the evolutionary potential of thermal reaction norms in light of their underlying genetic architectures, evolutionary histories, and the materialization of trade-offs in natural environments.

Relevance:

10.00%

Publisher:

Abstract:

Many operational weather forecasting centres use semi-implicit time-stepping schemes because of their good efficiency. However, as computers become ever more parallel, horizontally explicit solutions of the equations of atmospheric motion might become an attractive alternative, since implicit methods require additional inter-processor communication. Implicit and explicit (IMEX) time-stepping schemes have long been combined in models of the atmosphere using semi-implicit, split-explicit, or HEVI (horizontally explicit, vertically implicit) splitting. However, most studies of the accuracy and stability of IMEX schemes have been limited to the parabolic case of advection–diffusion equations. We demonstrate how a number of Runge–Kutta IMEX schemes can be used to solve hyperbolic wave equations either semi-implicitly or with HEVI splitting. A new form of HEVI splitting is proposed, UfPreb, which dramatically improves the accuracy and stability of simulations of gravity waves in stratified flow. As a consequence, there are HEVI schemes that do not lose accuracy in comparison to semi-implicit ones. The stability limits of a number of variations of trapezoidal implicit and some Runge–Kutta IMEX schemes are found, and the schemes are tested on two vertical slice cases using the compressible Boussinesq equations split into various combinations of implicit and explicit terms. Some of the Runge–Kutta schemes are found to be advantageous over the trapezoidal schemes, especially since they damp high frequencies without dropping to first-order accuracy. We also test, in stiff (nearly incompressible) limits, schemes that are not formally accurate for stiff systems and find that they can perform well. The ARK2(2,3,2) scheme performs best in the tests.
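
The core idea of IMEX splitting can be seen on a linear test equation with a stiff and a non-stiff term: treat the stiff term implicitly and the rest explicitly. The first-order toy sketch below illustrates this; it is a didactic analogue, not the paper's HEVI splitting or the ARK2(2,3,2) scheme.

```python
import math

# IMEX Euler for du/dt = lam_e*u + lam_i*u: the stiff coefficient lam_i is
# treated implicitly and the non-stiff lam_e explicitly, so
#   u_new = u + dt*(lam_e*u + lam_i*u_new)
#   =>  u_new = (1 + dt*lam_e) * u / (1 - dt*lam_i)
def imex_euler(u, lam_e, lam_i, dt, nsteps):
    for _ in range(nsteps):
        u = (1.0 + dt * lam_e) * u / (1.0 - dt * lam_i)
    return u

# Stable even though dt*|lam_i| = 10, far beyond the explicit stability limit.
u_num = imex_euler(1.0, lam_e=-1.0, lam_i=-100.0, dt=0.1, nsteps=10)
u_exact = math.exp(-101.0)      # exact solution at t = 1
```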

Relevance:

10.00%

Publisher:

Abstract:

In recent years, the importance of the corporate brand (e.g. P&G, Nestlé, Unilever) has grown significantly, and companies increasingly strive to strengthen their corporate brands. One way to strengthen the corporate brand is portfolio advertising, in which the corporate brand is presented alongside several product brands from its portfolio (e.g. VW with its product brands Touareg, Touran, Golf, and Polo). The aim of portfolio advertising is to generate a positive image spill-over effect from the product brands onto the corporate brand in order to enhance the consumers’ perceived competence of the corporate brand. In four experimental settings, Christian Boris Brunner demonstrates the great potential of portfolio advertising and highlights the risks associated with it in practice. In a first experiment, he compares portfolio advertising with single-brand advertisements. Moreover, in the case of portfolio advertising he manipulates the fit between the product brands, because the consumer has to establish a logical coherence between the individual brands. However, as consumers have limited capacity for processing information, special attention should be paid to the number of product brands and to the consumer’s processing depth during confrontation with portfolio advertising. These key factors are taken into consideration in a second, extensive experiment involving fictitious corporate and product brands. The effects of portfolio advertising on a product brand are also examined. Furthermore, the strength of the product brands, i.e. brand knowledge as well as brand image and the consumer’s familiarity with the brands, must be taken into consideration. In a third experiment, both the brand strength of real product brands and the fit between product brands are manipulated. Portfolio advertising could also generate a positive image spill-over effect when companies introduce a new product brand under the umbrella of the corporate brand while communicating all product brands together. Based on these considerations, in a fourth experiment Christian Boris Brunner shows that portfolio advertising can also have a positive image spill-over effect on a new (unknown) product brand. Concluding his work, Christian Boris Brunner provides implications for future research on portfolio advertising as well as on the management of a corporate brand within complex brand architectures. Concerning practical implications, these four experiments are highly relevant for marketing and brand managers, who could increase corporate and product brands’ potential by means of portfolio advertising.

Relevance:

10.00%

Publisher:

Abstract:

The complexity of current and emerging high-performance architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate the observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
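
A benchmark-driven model of this kind typically combines measured compute and communication costs into a predicted runtime as a function of core count. The sketch below shows the general shape with hypothetical parameter values; it is not the model constructed in the paper.

```python
# Toy benchmark-driven performance model: runtime = compute/p + communication.
# All parameter values are hypothetical stand-ins for micro-benchmark results.
def predicted_runtime(ncores, total_flops, secs_per_flop, msgs_per_core, secs_per_msg):
    compute = total_flops * secs_per_flop / ncores   # ideally divisible work
    comm = msgs_per_core * secs_per_msg              # e.g. halo exchanges per step
    return compute + comm

for p in (256, 1024, 4096, 16384):
    t = predicted_runtime(p, total_flops=1e12, secs_per_flop=1e-9,
                          msgs_per_core=4, secs_per_msg=5e-6)
    print(f"{p:6d} cores: {t:.4f} s")
```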

Relevance:

10.00%

Publisher:

Abstract:

The synthesis and characterisation of novel covalent organic-inorganic architectures containing organically functionalised supertetrahedra are described. The structures of these unique materials consist of one-dimensional zigzag chains or of honeycomb-type layers, in which gallium-sulfide supertetrahedral clusters and dipyridyl ligands alternate.

Relevance:

10.00%

Publisher:

Abstract:

We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction, Multiple Data (SIMD) operations. The modified algorithm runs more than 50 times faster on the Cell's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times compared with the original code on the main CPU. Because the radiation code accounts for more than 60% of the total CPU time, FAMOUS as a whole executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
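
The task-queue and thread-pool scheduling described above can be sketched as follows, with the per-column radiation computation replaced by a stub. This illustrates the scheduling pattern only; it is not the FAMOUS code.

```python
import queue
import threading

def compute_column(col):
    """Stub standing in for the per-column radiation computation."""
    return col * 2

def worker(tasks, results):
    """Pull air columns off the shared queue until it is empty."""
    while True:
        try:
            col = tasks.get_nowait()
        except queue.Empty:
            return
        results[col] = compute_column(col)   # distinct keys per task

tasks = queue.Queue()
for col in range(1024):                      # one task per air column
    tasks.put(col)

results = {}
pool = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(8)]
for t in pool:
    t.start()
for t in pool:
    t.join()
```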