908 results for Large detector-systems performance
Abstract:
Complex networks arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but they are currently poorly understood. A number of human-designed algorithms have been proposed to describe the organizational behaviour of real-world networks, and these have recently led to breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences. These algorithms, called graph models, represent significant human effort: deriving accurate graph models is non-trivial, time-intensive and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure, and it is the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known human-designed algorithms. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
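The abstract does not describe how candidate graph models are scored against an observed network; one common approach is to compare summary statistics such as degree distributions. The sketch below is purely illustrative of that kind of fitness measure and is not the thesis's system: the use of networkx, the Barabási-Albert target, the Erdős-Rényi candidate and the L1 distance are all assumptions introduced here.

```python
# Illustrative only: score a candidate graph model against a target network
# by comparing degree distributions. A genetic-programming system for graph
# model inference might use a measure of this kind as (part of) its fitness.
import networkx as nx
import numpy as np

def degree_hist(G, max_deg):
    """Normalised degree histogram padded to a fixed length."""
    h = np.array(nx.degree_histogram(G), dtype=float)
    h = np.pad(h, (0, max(0, max_deg + 1 - len(h))))[: max_deg + 1]
    return h / h.sum()

def fitness(candidate_graph, target_graph):
    """Lower is better: L1 distance between the two degree distributions."""
    max_deg = max(max(dict(candidate_graph.degree()).values()),
                  max(dict(target_graph.degree()).values()))
    return np.abs(degree_hist(candidate_graph, max_deg)
                  - degree_hist(target_graph, max_deg)).sum()

# Example: a random candidate compared against a scale-free target.
target = nx.barabasi_albert_graph(500, 3, seed=1)
candidate = nx.gnp_random_graph(500, 0.012, seed=2)
print(fitness(candidate, target))
```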
Abstract:
Understanding the structure of a software system is an important first step in carrying out analysis and maintenance tasks on it. In addition to the links defined by the hierarchy, another type of link exists between software elements, which we call adjacency links. A complete understanding of a software system must therefore take all of these types of links into account. Visualization tools are generally effective in helping a developer understand a system by presenting information in a clear and concise form. However, visualizing hierarchical and adjacency links simultaneously can produce a great deal of visual clutter, making such visualizations poor at conveying useful information about these links. In this thesis we propose a 3D visualization tool that represents both the hierarchical structure of a software system and the adjacency links between its elements. Our tool uses three different layout types to represent the hierarchy, each of which can support the display of adjacency links efficiently. To represent the adjacency links, we propose a 3D version of the Hierarchical Edge Bundles method. We also use a metaheuristic algorithm to improve the layout and further reduce visual clutter among the adjacency links. In addition, our tool offers a set of interaction capabilities that let a user navigate through the information provided by the visualization. Our contributions have been successfully evaluated on large software systems.
Abstract:
In this thesis we present some recent theoretical analyses, as well as experimental observations, of macroscopic quantum tunneling and of classical-quantum phase transitions in the escape rate of large-spin systems. We consider biaxial and ferromagnetic spin systems. Using the spin-coherent-state path-integral approach expressed in the coordinate system, we compute the interference of quantum phases and their energy distribution. We give a clear exposition of tunneling in antiferromagnetic systems in the presence of a dimer exchange coupling and an anisotropy along the easy magnetization axis. We obtain the ground-state energy and wave function, as well as the first excited state, for systems of integer and half-odd-integer spins. Our results are confirmed by a calculation using large-order perturbation theory and by the coordinate-independent path-integral method. We also present a clear explanation of the effective potential method, which allows us to map a quantum spin system onto a one-particle quantum mechanics problem. We use this method to analyze our models, with the added constraint of an external magnetic field. The method allows us to consider the classical-quantum transitions in the escape rate of these systems. We obtain the phase diagram as well as the critical temperatures of the crossover between the two regimes. We extend our analysis to an antiferromagnetic Heisenberg spin chain with an easy-axis anisotropy on N sites, with periodic boundary conditions. For N even, we show that the ground state is non-degenerate and given by the superposition of the two Néel states. For N odd, the Néel state contains a soliton and, because the position of the soliton is indeterminate, the ground state is N-fold degenerate. In the perturbative limit of the Heisenberg interaction, quantum fluctuations lift the degeneracy and the N states reorganize into a band. We show that the band is formed at order 2s in degenerate perturbation theory, where s is the value of each spin. The ground state is non-degenerate for integer s but doubly degenerate for half-odd-integer s, as expected from Kramers' theorem.
Abstract:
Detection of objects in video is a highly demanding area of research, and background subtraction algorithms can yield good results in foreground object detection. This work presents a hybrid codebook-based background subtraction method to extract the foreground region of interest (ROI) from the background. Codebooks store compressed information, demanding less memory and allowing fast processing. The hybrid method, which uses block-based and pixel-based codebooks, provides efficient detection results: the high-speed processing of block-based background subtraction and the high precision of pixel-based background subtraction are combined to yield an efficient background subtraction system. The block stage produces a coarse foreground area, which is then refined by the pixel stage. The system's performance is evaluated with different block sizes and with different block descriptors such as the 2D DCT and the FFT. Experimental analysis based on statistical measurements yields precision, recall, similarity and F-measure values for the hybrid system of 88.74%, 91.09%, 81.66% and 89.90% respectively, demonstrating the efficiency of the proposed system.
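The coarse-to-fine structure described above (a block stage whose output is refined by a pixel stage) can be illustrated with a much-simplified sketch. The code below substitutes plain background differencing for the paper's block- and pixel-based codebooks; the block size, thresholds and use of NumPy are illustrative assumptions, not the authors' implementation.

```python
# Simplified two-stage (block-then-pixel) foreground detection.
# Plain background differencing stands in for the paper's codebooks,
# purely to show the coarse-to-fine structure of the hybrid method.
import numpy as np

def block_stage(frame, background, block=16, thresh=12.0):
    """Coarse mask: flag a whole block if its mean absolute difference is large."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = np.abs(frame[y:y+block, x:x+block].astype(float)
                          - background[y:y+block, x:x+block].astype(float))
            if diff.mean() > thresh:
                mask[y:y+block, x:x+block] = True
    return mask

def pixel_stage(frame, background, coarse_mask, thresh=25.0):
    """Refine only inside the coarse foreground blocks, pixel by pixel."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    return coarse_mask & (diff > thresh)

# Usage with synthetic data: a bright square "object" on a flat background.
background = np.full((128, 128), 100, dtype=np.uint8)
frame = background.copy()
frame[40:80, 40:80] = 180
fg = pixel_stage(frame, background, block_stage(frame, background))
print(fg.sum(), "foreground pixels")
```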
Abstract:
The activated sludge process, the main biological technology applied in wastewater treatment plants (WWTP), depends directly on living organisms (microorganisms), and therefore on the unforeseen changes they produce. Good plant operation can be achieved if the supervisory control system is able to react to changes and deviations in the system and can take the actions necessary to restore the system's performance. These decisions are often based both on physical, chemical and microbiological principles (suitable for modelling with conventional control algorithms) and on knowledge (suitable for modelling with knowledge-based systems). One of the key problems in the design of knowledge-based control systems is the development of an architecture able to manage the different elements of the process efficiently (integrated architecture), to learn from previous cases (specific experimental knowledge) and to acquire the domain knowledge (general expert knowledge). These problems increase when the process belongs to an ill-structured domain and is composed of several complex operational units. Therefore, an integrated and distributed AI architecture seems to be a good choice. This paper proposes an integrated and distributed multi-level supervisory architecture for the supervision of WWTP that overcomes some of the main shortcomings of classical control techniques and of knowledge-based systems applied to real-world systems.
Abstract:
We have integrated information on topography, geology and geomorphology with the results of targeted fieldwork in order to develop a chronology for the development of Lake Megafazzan, a giant lake that has periodically existed in the Fazzan Basin since the late Miocene. The development of the basin is best understood by considering the main geological and geomorphological events that occurred throughout Libya during this period, so an overview of the palaeohydrology of the whole of Libya is also presented. The origin of the Fazzan Basin appears to lie in the Late Miocene. At this time Libya was dominated by two large river systems that flowed into the Mediterranean Sea: the Sahabi River, draining central and eastern Libya, and the Wadi Nashu River, draining much of western Libya. As the Miocene progressed, the region became increasingly affected by volcanic activity on its northern and eastern margins, which appears to have blocked the River Nashu in Late Miocene or early Messinian times, forming a sizeable closed basin in the Fazzan within which proto-Lake Megafazzan would have developed during humid periods. The fall in base level associated with the Messinian desiccation of the Mediterranean Sea promoted down-cutting and extension of river systems throughout much of Libya. To the south of the proto-Fazzan Basin, the Sahabi River tributary known as Wadi Barjuj appears to have expanded its headwaters westwards. The channel now terminates at Al Haruj al Aswad; we interpret this as a suggestion that Wadi Barjuj was blocked by the progressive development of Al Haruj al Aswad, and K/Ar dating of lava flows suggests that this occurred between 4 and 2 Ma. This event would have increased the size of the closed basin in the Fazzan by about half, producing a catchment close to its current size (~350,000 km²). The Fazzan Basin contains a wealth of Pleistocene to recent palaeolake sediment outcrops and shorelines. Dating of these features demonstrates evidence of lacustrine conditions during numerous interglacials spanning a period greater than 420 ka. The middle to late Pleistocene interglacials were humid enough to produce a giant lake of about 135,000 km² that we have called Lake Megafazzan. Later lake phases were smaller, the interglacials less humid, developing lakes of a few thousand square kilometres. In parallel with these palaeohydrological developments in the Fazzan Basin, change was occurring in other parts of Libya. The Lower Pliocene sea level rise caused sediments to infill much of the Messinian channel system. As this was occurring, subsidence in the Al Kufrah Basin caused expansion of the Al Kufrah River system at the expense of the River Sahabi. By the Pleistocene, the Al Kufrah River dominated the palaeohydrology of eastern Libya and had developed a very large inland delta in its northern reaches, with a complex distributary channel network that at times fed substantial lakes in the Sirt Basin. At this time Libya was a veritable lake district during humid periods, with about 10% of the country underwater.
Abstract:
Urban land surface schemes have been developed to model the distinct features of the urban surface and the associated energy exchange processes. These models have been developed for a range of purposes and make different assumptions related to the inclusion and representation of the relevant processes. Here, the first results of Phase 2 of an international comparison project to evaluate 32 urban land surface schemes are presented. This is the first large-scale systematic evaluation of these models. In four stages, participants were given increasingly detailed information about an urban site for which urban fluxes were directly observed. At each stage, each group returned their models' calculated surface energy balance fluxes. Wide variations are evident in the performance of the models for individual fluxes, and no individual model performs best for all fluxes. Providing additional information about the surface generally results in better performance. However, there is clear evidence that a poor choice of parameter values can cause a large drop in performance for models that otherwise perform well. As many models do not perform well across all fluxes, there is a need for caution in their application, and users should be aware of the implications for applications and decision making.
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We then explore the choice of proposal density in a particle filter and show how the 'curse of dimensionality' might be beaten. In the standard particle filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
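The 40- and 1000-dimensional test case referred to at the end is the Lorenz (1995) model, usually written with cyclic indices as dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F. A minimal integration sketch follows; the forcing F = 8, the RK4 time step and the perturbed initial condition are conventional choices for this model, not details taken from the paper.

```python
# Lorenz (1995/96) model: a standard low-order testbed for data assimilation.
# dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F   (indices are cyclic)
import numpy as np

def lorenz95_rhs(x, forcing=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt=0.05, forcing=8.0):
    k1 = lorenz95_rhs(x, forcing)
    k2 = lorenz95_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz95_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz95_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 40-variable configuration, as in the lower-dimensional experiments.
x = 8.0 * np.ones(40)
x[19] += 0.01                  # small perturbation to trigger chaotic behaviour
for _ in range(1000):
    x = rk4_step(x)
print(x[:5])
```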
Abstract:
The project includes a large-scale live performance and the resulting performance video, at Curtain Razors, Regina Queen's Square, Regina, 2008. Live performance: 45 mins, incl. 1 actor, 23 extras, 2 live cameras, live video and sound mixing, stage set, video projection. Video: 45 mins; video trailer: 7 mins. The Extras is a video performance referencing the form of a large live film shoot, and contextualises contemporary Western genres within an experimental live tableau. The live performance and resulting 45-minute video make reference to the 19th-century German Western author Karl May, the Eastern European Western tradition (the Red Western), uranium exploitation, and entrepreneurial cultures in the Canadian Prairies. Funded by the Canada Council for the Arts, the Saskatchewan Arts Board and Curtain Razors, The Extras Regina was staged and performed at Central Plaza in Regina, with a crew of 23 extras, 2 live cameras, live video and sound mixing, and video projection. It involved research in Saskatchewan film and photographic archives. The performance was edited live and mixed with video material shot on location, with a further group of extras, at historical 'Western' locations including Fort Qu'Appelle, Castle Butte and Big Muddy. It also involved a collaboration with a local theatre production company, which enacted a dramatised historical incident.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms, in particular in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load-balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature as the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
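For context, the "straightforward parallel formulation" criticised above computes per-node partial sums that must be combined with a global reduction at every iteration. The sketch below simulates that baseline with NumPy, using a plain Python list to stand in for the processing nodes; the data layout and cluster count are made-up illustration values, and the paper's communication-avoiding variant is not shown.

```python
# Baseline parallel k-means (simulated): each "node" holds a data block,
# computes local assignments and partial sums, and a global reduction
# combines them into new centroids. The paper removes this reduction by
# exploiting non-uniform data distributions; that variant is not shown here.
import numpy as np

def local_partial_sums(block, centroids):
    """Per-node work: assign points to nearest centroids, accumulate sums/counts."""
    k, d = centroids.shape
    dists = np.linalg.norm(block[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for j in range(k):
        members = block[labels == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

def kmeans_iteration(blocks, centroids):
    """The global reduction: per-node partials are summed before the update."""
    partials = [local_partial_sums(b, centroids) for b in blocks]
    total_sums = sum(p[0] for p in partials)
    total_counts = sum(p[1] for p in partials)
    return total_sums / np.maximum(total_counts[:, None], 1.0)

# Four "nodes", each holding 250 two-dimensional points.
rng = np.random.default_rng(0)
offsets = np.array([[0, 0], [5, 5], [0, 5], [5, 0]])
blocks = [rng.normal(size=(250, 2)) + off for off in offsets]
centroids = rng.normal(size=(3, 2))
for _ in range(10):
    centroids = kmeans_iteration(blocks, centroids)
print(centroids)
```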
Abstract:
Particle filters are fully non-linear data assimilation techniques that aim to represent the probability distribution of the model state given the observations (the posterior) by a number of particles. In high-dimensional geophysical applications, the number of particles required by the sequential importance resampling (SIR) particle filter in order to capture the high-probability region of the posterior is too large to make them usable. However, particle filters can be formulated using proposal densities, which gives greater freedom in how particles are sampled and allows for a much smaller number of particles. Here a particle filter is presented which uses the proposal density to ensure that all particles end up in the high-probability region of the posterior probability density function. This opens the possibility of non-linear data assimilation in large-dimensional systems. The particle filter formulation is compared to the optimal proposal density particle filter and the implicit particle filter, both of which also utilise a proposal density. We show that when observations are available at every time step, both of those schemes will be degenerate when the number of independent observations is large, unlike the new scheme. The sensitivity of the new scheme to its parameter values is explored theoretically and demonstrated using the Lorenz (1963) model.
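For reference, the importance weight underlying all of the proposal-density filters discussed here has the standard textbook form below; the individual schemes differ only in their choice of the proposal density q. This is background material, not an expression taken from the paper.

```latex
% Importance weight of particle i at time n when sampling from a proposal q:
w_n^i \;\propto\; w_{n-1}^i \,
  \frac{p(y_n \mid x_n^i)\, p(x_n^i \mid x_{n-1}^i)}
       {q(x_n^i \mid x_{n-1}^i, y_n)}
% Standard SIR uses q = p(x_n | x_{n-1}); the optimal proposal uses
% q = p(x_n | x_{n-1}, y_n), which minimises the variance of the weights.
```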
Abstract:
The performance of La(2-x)Ce(x)Cu(1-y)Zn(y)O(4) perovskites as catalysts for the high temperature water-gas shift reaction (HT-WGSR) was investigated. The catalysts were characterized by EDS, XRD, BET surface area, TPR, and XANES. The results showed that all the perovskites exhibited the La(2)CuO(4) orthorhombic structure, so the Pechini method is suitable for the preparation of pure perovskite. However, the La(1.90)Ce(0.10)CuO(4) perovskite alone, when calcined at 350/700 degrees C, also showed a (La(0.935)Ce(0.065))(2)CuO(4) perovskite with tetragonal structure, which produced a surface area higher than that of the other perovskites. The perovskites that exhibited the best catalytic performance were those calcined at 350/700 degrees C and, among these, La(1.90)Ce(0.10)CuO(4) was outstanding, probably because of the high surface area associated with the presence of the tetragonal (La(0.935)Ce(0.065))(2)CuO(4) perovskite and the orthorhombic La(2)CuO(4) phase.
Abstract:
BDI agent languages provide a useful abstraction for complex systems comprised of interactive autonomous entities, but they have been used mostly in the context of single agents with a static plan library of behaviours invoked reactively. These languages provide a theoretically sound basis for agent design but are very limited in providing direct support for the autonomy and societal cooperation needed for large-scale systems. Some techniques for autonomy and cooperation have been explored in the past in ad hoc implementations, but not incorporated into any agent language. In order to address these shortcomings, we extend the well-known AgentSpeak(L) BDI agent language to include behaviour generation through planning, declarative goals and motivated goal adoption. We also develop a language-specific multiagent cooperation scheme and, to address potential problems arising from autonomy in a multiagent system, we extend our agents with a mechanism for norm processing that leverages existing theoretical work. These extensions allow for greater autonomy in the resulting systems, enabling them to synthesise new behaviours at runtime and to cooperate in non-scripted patterns.
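As background, a conventional BDI interpreter of the kind the abstract contrasts with works roughly as sketched below: plans from a static library are selected reactively by matching a triggering event and a context condition against the agent's current beliefs. This is a generic illustration in Python, not the AgentSpeak(L) extensions proposed in the paper, and all names in it are made up.

```python
# Minimal reactive BDI-style interpreter: a static plan library whose plans
# are triggered by events and guarded by context conditions over beliefs.
# It does NOT model the paper's extensions (runtime plan synthesis,
# declarative goals, norms, cooperation).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    trigger: str                                  # event this plan reacts to
    context: Callable[[set], bool]                # guard over current beliefs
    body: Callable[[set], List[str]]              # returns new events to post

def bdi_step(beliefs: set, events: List[str], library: List[Plan]) -> None:
    """One reasoning cycle: pop an event, pick the first applicable plan, run it."""
    if not events:
        return
    event = events.pop(0)
    for plan in library:
        if plan.trigger == event and plan.context(beliefs):
            events.extend(plan.body(beliefs))     # executing may post sub-goals
            return

# Tiny example: achieve "!clean" only when the battery is charged.
library = [
    Plan("!clean",
         context=lambda b: "battery_ok" in b,
         body=lambda b: (b.add("room_clean"), [])[1]),
    Plan("!clean",
         context=lambda b: "battery_ok" not in b,
         body=lambda b: ["!charge"]),
    Plan("!charge",
         context=lambda b: True,
         body=lambda b: (b.add("battery_ok"), ["!clean"])[1]),
]

beliefs, events = set(), ["!clean"]
while events:
    bdi_step(beliefs, events, library)
print(beliefs)                                    # {'battery_ok', 'room_clean'}
```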
Abstract:
In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delays and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle that needs to be overcome in MPC is the large computational burden when a large-scale system is considered or a long prediction horizon is involved. In order to solve this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden by almost half. The proposed MPC-APA scheme is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that the MPC-APA scheme reduces the computational time to a large extent and that a flood protection problem over longer prediction horizons can be solved effectively.
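The abstract does not give the controller equations; the sketch below shows only the generic receding-horizon structure of MPC on a toy single-reservoir mass-balance model, using scipy's minimize. The model, horizon length, bounds and cost weights are illustrative assumptions and are unrelated to the Dutch water system or the APA scheme.

```python
# Generic receding-horizon MPC loop on a toy reservoir: choose release rates
# over a horizon that keep the water level near a setpoint, apply only the
# first control, then re-optimise at the next step. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def simulate_levels(level0, releases, inflow, area=1.0, dt=1.0):
    """Mass balance: level change = (inflow - release) * dt / area."""
    levels, level = [], level0
    for u in releases:
        level = level + (inflow - u) * dt / area
        levels.append(level)
    return np.array(levels)

def mpc_step(level0, inflow, setpoint, horizon=10):
    def cost(u):
        levels = simulate_levels(level0, u, inflow)
        return np.sum((levels - setpoint) ** 2) + 0.01 * np.sum(u ** 2)
    u0 = np.full(horizon, inflow)
    res = minimize(cost, u0, bounds=[(0.0, 5.0)] * horizon)
    return res.x[0]                      # apply only the first control

# Closed-loop simulation with a step increase in inflow (a "flood").
level, setpoint = 0.5, 0.5
for t in range(20):
    inflow = 1.0 if t < 10 else 3.0
    release = mpc_step(level, inflow, setpoint)
    level += (inflow - release)
    print(f"t={t:2d} inflow={inflow:.1f} release={release:.2f} level={level:.2f}")
```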