55 results for Large detector-systems performance

in CentAUR: Central Archive, University of Reading - UK


Relevance: 100.00%

Abstract:

Increasing attention has been focused on the use of CDMA for future cellular mobile communications. A near-far resistant detector for asynchronous code-division multiple-access (CDMA) systems operating in additive white Gaussian noise (AWGN) channels is presented. The multiuser interference caused by K users transmitting simultaneously, each with a specific signature sequence, is completely removed at the receiver. The complexity of this detector grows only linearly with the number of users, compared to the optimum multiuser detector, whose complexity is exponential in the number of users. A modified algorithm based on time diversity is described; it performs detection on a bit-by-bit basis and avoids the complexity of a sequence detector. The performance of this detector is shown to be superior to that of the conventional receiver.
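
The abstract does not give the detector's structure, but the family it describes (complete removal of multiuser interference at a cost that grows only linearly with the number of users) is exemplified by the decorrelating detector. A minimal sketch for the simpler synchronous case, with made-up signature sequences and amplitudes, might look like this:

```python
# Hedged sketch: a decorrelating multiuser detector for *synchronous* CDMA.
# The signatures, amplitudes and noise level are illustrative only; the paper
# itself treats the asynchronous case.
import numpy as np

def decorrelating_detector(received, signatures):
    """Recover K users' bits from one symbol interval observed in AWGN."""
    y = signatures.T @ received          # matched-filter bank outputs
    R = signatures.T @ signatures        # signature cross-correlation matrix
    z = np.linalg.solve(R, y)            # decorrelation removes the multiuser interference
    return np.sign(z)                    # hard bit decisions

# Three non-orthogonal, unit-energy signature sequences of length 8 (illustrative).
S = np.array([[ 1,  1,  1],
              [ 1,  1, -1],
              [ 1,  1,  1],
              [ 1,  1, -1],
              [ 1, -1,  1],
              [ 1, -1, -1],
              [ 1, -1,  1],
              [ 1, -1,  1]], dtype=float) / np.sqrt(8)

bits = np.array([1.0, -1.0, 1.0])
amplitudes = np.array([1.0, 5.0, 0.5])   # pronounced near-far imbalance
rng = np.random.default_rng(0)
received = S @ (amplitudes * bits) + 0.1 * rng.standard_normal(8)

print(decorrelating_detector(received, S))   # recovers [ 1. -1.  1.]
```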

Relevance: 100.00%

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited to distributed-memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
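
The abstract does not spell out the epidemic protocol, but the key idea it describes, replacing the global reduction with gossip-based averaging of per-cluster sums and counts, can be sketched as follows. The node count, gossip rounds and data are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: one k-means update where the global reduction is replaced by
# gossip (epidemic) averaging of per-cluster sums and counts.
import numpy as np

def local_stats(data, centroids):
    """Per-node sums and counts of the points assigned to each centroid."""
    k, d = centroids.shape
    assign = np.argmin(((data[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        members = data[assign == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

def gossip_average(values, rounds, rng):
    """Random pairwise push-pull averaging; all nodes converge to the global mean."""
    values = [v.astype(float) for v in values]
    for _ in range(rounds):
        i, j = rng.choice(len(values), size=2, replace=False)
        avg = 0.5 * (values[i] + values[j])
        values[i], values[j] = avg, avg.copy()
    return values

rng = np.random.default_rng(1)
nodes = [rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 3])]
centroids = np.array([[0.5, 0.5], [2.5, 2.5], [0.5, 2.5]])

# Each node flattens its local (sums, counts) into one vector and gossips it.
stats = []
for data in nodes:
    sums, counts = local_stats(data, centroids)
    stats.append(np.concatenate([sums.ravel(), counts]))
agreed = gossip_average(stats, rounds=200, rng=rng)[0]   # any node's estimate

# The mean over nodes appears in both numerator and denominator, so the ratio
# reproduces the centralised centroid update.
sums, counts = agreed[:6].reshape(3, 2), agreed[6:]
print(sums / counts[:, None])
```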

Relevance: 100.00%

Abstract:

For the very large nonlinear dynamical systems that arise in a wide range of physical, biological and environmental problems, the data needed to initialize a numerical forecasting model are seldom available. To generate accurate estimates of the expected states of the system, both current and future, the technique of ‘data assimilation’ is used to combine the numerical model predictions with observations of the system measured over time. Assimilation of data is an inverse problem that for very large-scale systems is generally ill-posed. In four-dimensional variational assimilation schemes, the dynamical model equations provide constraints that act to spread information into data sparse regions, enabling the state of the system to be reconstructed accurately. The mechanism for this is not well understood. Singular value decomposition techniques are applied here to the observability matrix of the system in order to analyse the critical features in this process. Simplified models are used to demonstrate how information is propagated from observed regions into unobserved areas. The impact of the size of the observational noise and the temporal position of the observations is examined. The best signal-to-noise ratio needed to extract the most information from the observations is estimated using Tikhonov regularization theory. Copyright © 2005 John Wiley & Sons, Ltd.
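
As a small illustration of the kind of analysis described, the observability matrix of a toy linear system (a cyclically advected 10-variable state observed at three grid points over three time steps, an assumption of this sketch rather than one of the paper's simplified models) can be formed and its singular value decomposition inspected:

```python
# Hedged sketch: observability matrix of a toy linear system and its SVD.
import numpy as np

n = 10
M = np.roll(np.eye(n), 1, axis=1)        # one-step cyclic advection operator
H = np.zeros((3, n))
H[0, 0] = H[1, 3] = H[2, 6] = 1.0        # observe 3 of the 10 grid points

# Observability matrix over the assimilation window: O = [H; HM; HM^2; ...]
n_steps = 3
blocks, Mk = [], np.eye(n)
for _ in range(n_steps):
    blocks.append(H @ Mk)
    Mk = M @ Mk
O = np.vstack(blocks)

U, s, Vt = np.linalg.svd(O, full_matrices=False)
print("singular values:", np.round(s, 3))
print("observable subspace dimension:", int(np.sum(s > 1e-10)))
# Rows of Vt with non-negligible singular values span the part of the initial
# state that the observations can determine; here one direction (the grid point
# never sampled within the window) remains unobservable and must be filled in
# by the model dynamics or prior information.
```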

Relevance: 100.00%

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or can be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs.
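
For reference, the straightforward parallel formulation whose per-iteration global reduction is the bottleneck can be sketched with mpi4py roughly as follows; the data, cluster count and node layout are illustrative, and this is not the paper's code:

```python
# Hedged sketch: the "straightforward" parallel k-means iteration. Each rank
# holds a local block of the data; the per-iteration Allreduce of cluster sums
# and counts is the global communication step discussed in the abstract.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def kmeans_step(local_data, centroids):
    k, d = centroids.shape
    # Assign each local point to its nearest centroid.
    assign = np.argmin(((local_data[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    local_sums, local_counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        members = local_data[assign == j]
        local_sums[j] = members.sum(axis=0)
        local_counts[j] = len(members)
    # Global reduction: every rank obtains the global sums and counts.
    global_sums = np.empty_like(local_sums)
    global_counts = np.empty_like(local_counts)
    comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)
    return global_sums / np.maximum(global_counts, 1)[:, None]

# Example usage (run with e.g. `mpiexec -n 4 python kmeans_step.py`):
rng = np.random.default_rng(comm.Get_rank())
data = rng.normal(size=(1000, 2))
centroids = np.array([[-1.0, -1.0], [1.0, 1.0], [0.0, 0.0]])
for _ in range(10):
    centroids = kmeans_step(data, centroids)
if comm.Get_rank() == 0:
    print(centroids)
```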

Relevance: 100.00%

Abstract:

This paper concerns the innovative use of a blend of systems thinking ideas in the ‘Munro Review of Child Protection’, a high-profile examination of child protection activities in England, conducted for the Department for Education. We go ‘behind the scenes’ to describe the OR methodologies and processes employed. The circumstances that led to the Review are outlined. Three specific contributions that systems thinking made to the Review are then described: first, the systems-based analysis and visualisation of how a ‘compliance culture’ had grown up; second, the creation of a large, complex systems map of current operations and the effects of past policies on them; and third, how the map gave shape to the range of issues the Review addressed and acted as an organising framework for the systemically coherent set of recommendations made. The paper closes with an outline of the main implementation steps taken so far to create a child protection system with the critically reflective properties of a learning organisation, and with methodological reflections on the benefits of systems thinking in supporting organisational analysis.

Relevance: 100.00%

Abstract:

We have integrated information on topography, geology and geomorphology with the results of targeted fieldwork in order to develop a chronology for the development of Lake Megafazzan, a giant lake that has periodically existed in the Fazzan Basin since the late Miocene. The development of the basin is best understood by considering the main geological and geomorphological events that occurred throughout Libya during this period, and thus an overview of the palaeohydrology of all of Libya is also presented. The origin of the Fazzan Basin appears to lie in the Late Miocene. At this time Libya was dominated by two large river systems that flowed into the Mediterranean Sea: the Sahabi River, draining central and eastern Libya, and the Wadi Nashu River, draining much of western Libya. As the Miocene progressed the region became increasingly affected by volcanic activity on its northern and eastern margin, which appears to have blocked the River Nashu in Late Miocene or early Messinian times, forming a sizeable closed basin in the Fazzan within which proto-Lake Megafazzan would have developed during humid periods. The fall in base level associated with the Messinian desiccation of the Mediterranean Sea promoted down-cutting and extension of river systems throughout much of Libya. To the south of the proto-Fazzan Basin, the Sahabi River tributary known as Wadi Barjuj appears to have expanded its headwaters westwards. The channel now terminates at Al Haruj al Aswad. We interpret this as suggesting that Wadi Barjuj was blocked by the progressive development of Al Haruj al Aswad. K/Ar dating of lava flows suggests that this occurred between 4 and 2 Ma. This event would have increased the size of the closed basin in the Fazzan by about half, producing a catchment close to its current size (~350,000 km²). The Fazzan Basin contains a wealth of Pleistocene to recent palaeolake sediment outcrops and shorelines. Dating of these features demonstrates evidence of lacustrine conditions during numerous interglacials spanning a period greater than 420 ka. The middle to late Pleistocene interglacials were humid enough to produce a giant lake of about 135,000 km², which we have called Lake Megafazzan. Later lake phases were smaller, the interglacials less humid, developing lakes of a few thousand square kilometres. In parallel with these palaeohydrological developments in the Fazzan Basin, change was occurring in other parts of Libya. The Lower Pliocene sea-level rise caused sediments to infill much of the Messinian channel system. As this was occurring, subsidence in the Al Kufrah Basin caused expansion of the Al Kufrah River system at the expense of the River Sahabi. By the Pleistocene, the Al Kufrah River dominated the palaeohydrology of eastern Libya and had developed a very large inland delta in its northern reaches that exhibited a complex distributary channel network which at times fed substantial lakes in the Sirt Basin. At this time Libya was a veritable lake district during humid periods, with about 10% of the country under water. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Urban land surface schemes have been developed to model the distinct features of the urban surface and the associated energy exchange processes. These models have been developed for a range of purposes and make different assumptions about the inclusion and representation of the relevant processes. Here, the first results of Phase 2 of an international comparison project to evaluate 32 urban land surface schemes are presented. This is the first large-scale systematic evaluation of these models. In four stages, participants were given increasingly detailed information about an urban site for which urban fluxes were directly observed. At each stage, each group returned their models' calculated surface energy balance fluxes. Wide variations are evident in the performance of the models for individual fluxes, and no individual model performs best for all fluxes. Providing additional information about the surface generally results in better performance. However, there is clear evidence that a poor choice of parameter values can cause a large drop in performance for models that otherwise perform well. As many models do not perform well across all fluxes, there is a need for caution in their application, and users should be aware of the implications for applications and decision making.

Relevance: 100.00%

Abstract:

New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the ‘curse of dimensionality’ might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ‘ensuring almost equal weights’ we avoid performing model runs that turn out to be useless. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
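
A minimal sketch of the steering idea, assuming a simple relaxation-style proposal on a scalar toy model (the paper's own proposal density and its Lorenz 1995 experiments are not reproduced here):

```python
# Hedged sketch: one analysis step of a particle filter with a relaxation-style
# proposal that pulls every particle towards the observation, with the weights
# corrected for the modified transition density. The scalar model, noise levels
# and relaxation strength are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def model(x):                                   # toy deterministic model step
    return 0.9 * x + 0.1 * np.sin(x)

n_particles, q_var, r_var, tau = 100, 0.5, 0.2, 0.6
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
y_obs = 1.5                                     # the incoming observation

# Proposal: model step, a deterministic pull towards the observation, plus noise.
pred = model(particles)
pull = tau * (y_obs - pred)
new_particles = pred + pull + rng.normal(0.0, np.sqrt(q_var), n_particles)

def log_gauss(x, mean, var):
    return -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))

# w *= p(y | x) * p(x | x_prev) / q(x | x_prev, y)
log_w = (np.log(weights)
         + log_gauss(y_obs, new_particles, r_var)         # likelihood
         + log_gauss(new_particles, pred, q_var)          # model transition density
         - log_gauss(new_particles, pred + pull, q_var))  # proposal density
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# The particles sit near the observation yet remain properly weighted.
print("posterior mean estimate:", np.sum(weights * new_particles))
```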

Relevance: 100.00%

Abstract:

Project includes: a large-scale live performance and resulting performance video, at Curtain Razors, Queen's Square, Regina, 2008. Live performance, 45 mins, incl. 1 actor, 23 extras, 2 live cameras, live video and sound mixing, stage set and video projection. Video: 45 mins. Video trailer: 7 mins. The Extras is a video performance referencing the form of a large live film shoot, contextualising contemporary Western genres within an experimental live tableau. The live performance and resulting 45-minute video make reference to the 19th-century German Western author Karl May, the tradition of the Eastern European Western (the Red Western), and uranium exploitation and entrepreneurial cultures in the Canadian Prairies. Funded by the Canada Council for the Arts, the Saskatchewan Arts Board and Curtain Razors, The Extras Regina was staged and performed at Central Plaza in Regina, with a crew of 23 extras, 2 live cameras, live video and sound mixing, and video projection. It involved research in Saskatchewan film and photographic archives. The performance was edited live and mixed with video material shot on location, with a further group of extras, at historical ‘Western’ locations including Fort Qu'Appelle, Castle Butte and Big Muddy. It also involved a collaboration with a local theatre production company, which enacted a dramatised historical incident.

Relevance: 100.00%

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm where the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or can be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
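
The multi-dimensional binary search tree mentioned above can induce a spatially coherent, non-uniform partition of the data. A rough sketch of such a partitioning, assuming median splits along alternating axes rather than the paper's exact construction:

```python
# Hedged sketch: partitioning a data set with a multi-dimensional binary search
# (k-d) tree so that each leaf becomes the local block of one processing node.
# Median splits, alternating axes and a power-of-two node count are assumptions
# of this sketch, not the paper's construction.
import numpy as np

def kd_partition(data, n_parts, axis=0):
    """Recursively split `data` into `n_parts` blocks with median cuts."""
    if n_parts == 1:
        return [data]
    order = np.argsort(data[:, axis])
    half = len(data) // 2
    left, right = data[order[:half]], data[order[half:]]
    next_axis = (axis + 1) % data.shape[1]
    return (kd_partition(left, n_parts // 2, next_axis)
            + kd_partition(right, n_parts - n_parts // 2, next_axis))

rng = np.random.default_rng(3)
points = rng.normal(size=(10_000, 2))
blocks = kd_partition(points, n_parts=8)
for i, block in enumerate(blocks):
    print(f"node {i}: {len(block)} points, bounding box "
          f"{block.min(axis=0).round(2)} .. {block.max(axis=0).round(2)}")
```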

Relevance: 100.00%

Abstract:

Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications the number of particles required by the sequential importance resampling (SIR) particle filter to capture the high-probability region of the posterior is too large to make the method usable. However, particle filters can be formulated using proposal densities, which gives greater freedom in how particles are sampled and allows for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This gives rise to the possibility of non-linear data assimilation in large-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available every time step, both schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
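
The degeneracy that motivates the proposal-density formulation is easy to demonstrate: with a plain SIR weighting step the effective sample size collapses as the number of independent observations grows. A toy experiment (dimensions and noise levels are illustrative assumptions) shows this:

```python
# Hedged sketch: weight degeneracy of a plain SIR step as the number of
# independent observations grows. Particle count, prior and noise levels are
# illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(4)
n_particles = 1000

for n_obs in (1, 10, 40, 100, 400):
    # Particles drawn from the prior; the truth is slightly offset from it.
    particles = rng.normal(0.0, 1.0, size=(n_particles, n_obs))
    truth = np.full(n_obs, 0.5)
    y = truth + rng.normal(0.0, 1.0, n_obs)          # unit observation error

    # SIR weights: likelihood of each particle, normalised.
    log_w = -0.5 * ((y - particles) ** 2).sum(axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    ess = 1.0 / np.sum(w ** 2)                       # effective sample size
    print(f"{n_obs:4d} independent obs -> effective sample size {ess:7.1f}")
```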

Relevance: 50.00%

Abstract:

One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the length of the code word. However, time-selective fading channels do exist, and in such cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratios (SNRs). As a sequel to the authors' previous papers on this subject, this paper aims to eliminate the error floor of the H(i)-coded O-STBC system (i = 3 and 4) by employing the techniques of: 1) zero forcing (ZF) and 2) parallel interference cancellation (PIC). It is shown that for an H(i)-coded system the PIC is a much better choice than the ZF in terms of both performance and computational complexity. Compared with the conventional H(i) detector, the PIC detector incurs a moderately higher computational complexity, but this is well justified by the enormous performance improvement.
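
The paper's detector itself is not reproduced here, but the PIC principle it relies on, cancelling the other symbols' estimated contributions in parallel before re-detecting each symbol, can be sketched for a generic linear model y = Hx + n:

```python
# Hedged sketch: one generic parallel interference cancellation (PIC) stage for
# a linear model y = H x + n with BPSK symbols. The paper's detector is specific
# to H(i)-coded O-STBC over time-selective channels; this only illustrates the
# PIC principle with an arbitrary effective channel matrix.
import numpy as np

def pic_stage(y, H, initial_bits):
    """Re-detect every symbol after cancelling the others' interference."""
    n_sym = H.shape[1]
    refined = np.empty(n_sym)
    for k in range(n_sym):
        others = [j for j in range(n_sym) if j != k]
        # Subtract the interference reconstructed from the current estimates.
        residual = y - H[:, others] @ initial_bits[others]
        # Matched-filter decision on the cleaned signal.
        refined[k] = np.sign(H[:, k] @ residual)
    return refined

rng = np.random.default_rng(5)
H = rng.normal(size=(8, 4))                    # illustrative effective channel
bits = rng.choice([-1.0, 1.0], size=4)
y = H @ bits + 0.3 * rng.standard_normal(8)

# Initial estimates from a simple matched filter, then one PIC stage.
initial = np.sign(H.T @ y)
print("matched filter:", initial, " after PIC stage:", pic_stage(y, H, initial))
```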

Relevance: 50.00%

Abstract:

All orthogonal space-time block coding (O-STBC) schemes are based on the assumption that the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in many cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratios (SNRs). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for Gi-coded systems (i = 3 and 4).

Relevance: 40.00%

Abstract:

Reanalysis data obtained from data assimilation are increasingly used for diagnostic studies of the general circulation of the atmosphere, for the validation of modelling experiments and for estimating energy and water fluxes between the Earth's surface and the atmosphere. Because fluxes are not specifically observed, but are determined by the data assimilation system, they are influenced not only by the observations used but also by model physics and dynamics and by the assimilation method. In order to better understand the relative importance of humidity observations for the determination of the hydrological cycle, in this paper we describe an assimilation experiment using the ERA40 reanalysis system in which all humidity data have been excluded from the observational database. The surprising result is that the model, driven by the time evolution of wind, temperature and surface pressure, is able to almost completely reconstitute the large-scale hydrological cycle of the control assimilation without the use of any humidity data. In addition, analysis of the individual weather systems in the extratropics and tropics using an objective feature-tracking analysis indicates that the humidity data have very little impact on these systems. We include a discussion of these results and their possible consequences for the way moisture information is assimilated, as well as the potential consequences for the design of observing systems for climate monitoring. It is further suggested, with support from a simple assimilation study with another model, that model physics and dynamics play a decisive role in the hydrological cycle, stressing the need to better understand these aspects of model parametrization.

Relevance: 40.00%

Abstract:

This paper is an initial work towards developing a user-centric e-Government benchmarking model. To achieve this goal, public service delivery is discussed first, including the transition to online public service delivery and the need to provide public services using electronic media. Two major e-Government benchmarking methods are critically discussed, and the need to develop a standardized benchmarking model that is user-centric is presented. To properly articulate user requirements in service provision, an organizational semiotic method is suggested.