854 results for large scale linear system
Abstract:
This paper discusses preconditioned Krylov subspace methods for solving large-scale linear systems that originate from oil reservoir numerical simulations. Two types of preconditioners, one based on an incomplete LU decomposition and the other based on iterative algorithms, are used together in a combination strategy to obtain an adaptive and efficient preconditioner. Numerical tests show that different Krylov subspace methods combined with appropriate preconditioners achieve optimal performance.
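As a generic illustration of the pattern described above (and not the authors' implementation), the sketch below applies a Krylov method (GMRES) with an incomplete-LU preconditioner to a sparse system in SciPy; the matrix is a synthetic stand-in for a reservoir-simulation coefficient matrix, and all sizes and tolerances are illustrative.

```python
# Minimal sketch: ILU-preconditioned GMRES for a large sparse system (SciPy).
# The matrix below is a synthetic stand-in, not a reservoir-simulation matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
# Synthetic sparse, diagonally dominant matrix (2D-Laplacian-like stencil).
A = sp.diags([-1, -1, 4.1, -1, -1], [-100, -1, 0, 1, 100],
             shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50, maxiter=1000)
print("gmres info:", info, "residual:", np.linalg.norm(b - A @ x))
```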
Abstract:
While load flow conditions vary with different loads, the small-signal stability of the entire system is closely related to the locations, capacities and models of loads. In this paper, the impacts of loads with different capacities and models on small-signal stability are analysed. For a real large-scale power system case, a load sensitivity, defined as the sensitivity of an eigenvalue with respect to the load active power, is introduced and applied to rank the loads. The loads with high sensitivity are also considered.
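The load (eigenvalue) sensitivity mentioned above is commonly computed from the first-order formula dλ/dp = (wᴴ (∂A/∂p) v)/(wᴴ v), where v and w are the right and left eigenvectors of the state matrix A. The sketch below evaluates this formula on a small, purely hypothetical state matrix; the matrix, its dependence on the load parameter p, and the finite-difference step are assumptions for illustration only.

```python
# Minimal sketch of eigenvalue (mode) sensitivity to a parameter p, e.g. a
# load's active power: dlambda/dp = (w^H dA/dp v) / (w^H v).
# The 3x3 matrix and its dependence on p are purely illustrative.
import numpy as np

def state_matrix(p):
    # Hypothetical state matrix whose damping depends on a load parameter p.
    return np.array([[-0.5 - 0.1 * p, 1.0, 0.0],
                     [-1.0, -0.2, 0.5],
                     [0.0, -0.5, -1.0 - 0.05 * p]])

p0, dp = 1.0, 1e-6
A = state_matrix(p0)
dA_dp = (state_matrix(p0 + dp) - state_matrix(p0 - dp)) / (2 * dp)

lam, V = np.linalg.eig(A)        # eigenvalues and right eigenvectors (columns of V)
W = np.linalg.inv(V).conj().T    # columns of W are left eigenvectors, with w_k^H v_k = 1

for k in range(len(lam)):
    v, w = V[:, k], W[:, k]
    sens = (w.conj() @ dA_dp @ v) / (w.conj() @ v)
    print("mode", np.round(lam[k], 3), " d(lambda)/dp =", np.round(sens, 4))
```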
Abstract:
With the increasing utilization of electric vehicles (EVs), transportation systems and electrical power systems are becoming increasingly coupled. However, the interaction between these two kinds of systems is not well captured, especially from the perspective of transportation systems. This paper studies the reliability of the integrated transportation and electrical power system (ITES). A bidirectional EV charging control strategy is first presented to model the interaction between the two systems. Thereafter, a simplified transportation system model is developed, whose high efficiency makes the reliability assessment of the ITES realizable with acceptable accuracy. Novel transportation system reliability indices are then defined from the viewpoint of the EV driver. Based on the charging control model and the transportation simulation method, a daily periodic quasi-sequential reliability assessment method is proposed for the ITES. Case studies based on the RBTS system demonstrate that bidirectional charging control of EVs benefits the reliability of the power system while decreasing the reliability of EV travel. Also, the optimal control strategy can be obtained based on the proposed method. Finally, case studies are performed on a large-scale test system to verify the practicability of the proposed method.
Abstract:
The purpose of this study was to explore how transgender individuals were supported to navigate the healthcare system to achieve positive healthcare experiences. A single case study was conducted in Southern Ontario, which included ten individual interviews. Data were analyzed through thematic analysis, allowing seven themes to emerge within macro (large-scale system), meso (local/interpersonal), and micro (individual/internal) levels of healthcare system support. Themes that emerged within the levels of system support included: 1) existing deficits with hope for change; 2) significant external supports; 3) importance of informal networking; 4) support from local area family physicians and walk-in clinics; 5) navigating the healthcare system alone; 6) personality traits for successful healthcare experiences; and 7) the development of strategies to achieve positive healthcare experiences. This study outlined factors that contributed to positive healthcare experiences for transgender individuals, showing that meso- and micro-level supports are compensating for large-scale healthcare system deficits.
Abstract:
Network survivability is a very interesting technical field of study as well as a critical concern in network design. Given that more and more data is carried over communication networks, a single failure can interrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using this available capacity. This thesis addresses the design of survivable optical networks that use p-cycle-based protection schemes. More specifically, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the placement of p-cycle protection structures, assuming that the working paths for the full set of requests are defined a priori. Most existing work relies on heuristics or on solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods able to tackle larger problems than those already reported in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To this end, we rely on column generation, a technique well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles. We first propose formulations for the master and pricing problems, as well as a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in a reasonable time, than those obtained by existing methods. Subsequently, a more compact formulation is proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared protection schemes, performing a precise comparison using unified, column-generation-based formulations in order to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection under asymmetric traffic scenarios, and we show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for p-cycle network design under availability requirements and obtain the first lower bounds for this problem.
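The column-generation scheme described above alternates between a restricted master LP and a pricing problem that implicitly enumerates promising columns. As a minimal, generic sketch of that loop (on a classic cutting-stock instance rather than on p-cycle network design), the code below re-solves a restricted master LP with SciPy and prices new columns with a small dynamic-programming knapsack; all data and tolerances are illustrative.

```python
# Minimal column-generation sketch (cutting stock, not p-cycle design):
# a restricted master LP is re-solved while a pricing problem (an unbounded
# knapsack, solved by dynamic programming) supplies columns with negative
# reduced cost. Problem data and tolerances are purely illustrative.
import numpy as np
from scipy.optimize import linprog

ROLL = 100                                  # stock roll width
width = np.array([45, 36, 31, 14])          # item widths
demand = np.array([97, 610, 395, 211])      # item demands

# Start with one trivial pattern per item type.
columns = [np.eye(len(width))[i] * (ROLL // w) for i, w in enumerate(width)]

def price(values):
    """Unbounded knapsack: maximize values @ a subject to width @ a <= ROLL."""
    best = np.zeros(ROLL + 1)
    take = np.full(ROLL + 1, -1)
    for c in range(1, ROLL + 1):
        best[c], take[c] = best[c - 1], -1
        for i, w in enumerate(width):
            if w <= c and best[c - w] + values[i] > best[c]:
                best[c], take[c] = best[c - w] + values[i], i
    pattern, c = np.zeros(len(width)), ROLL
    while c > 0:                            # reconstruct the best pattern
        if take[c] == -1:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= width[take[c]]
    return best[ROLL], pattern

for it in range(50):
    A = np.column_stack(columns)
    # Restricted master LP: min sum(x) s.t. A x >= demand, x >= 0.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals          # duals of the >= constraints
    value, pattern = price(duals)
    if value <= 1 + 1e-9:                   # no column with negative reduced cost
        break
    columns.append(pattern)

print(f"LP bound after {it + 1} iterations: {res.fun:.2f} rolls")
```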
Abstract:
An idealised Pangean configuration is integrated in a coupled ocean-atmosphere general circulation model to investigate the form of the ocean circulation and its impacts on the large-scale climate system. A vigorous, hemispherically symmetric overturning is found, driven by deep water formation at high latitudes. Whilst the peak mass transport is around 100 Sv, a low vertical temperature gradient in the ocean means that the maximum heat transport is only 1.2 PW. The geographical change in the coupled model is found to produce a global average warming of 2°C, despite an increase in global surface albedo. This occurs through changes in the atmospheric water vapour and cloud distributions. There is also a reduction in the equator-to-pole temperature gradient, largely attributable to the same causes, avoiding the paradox of low meridional temperature gradients without increased poleward heat transport.
Abstract:
The Grid is a large-scale computer system that is capable of coordinating resources that are not subject to centralised control, whilst using standard, open, general-purpose protocols and interfaces, and delivering non-trivial qualities of service. In this chapter, we argue that Grid applications very strongly suggest the use of agent-based computing, and we review key uses of agent technologies in Grids: user agents, able to customize and personalise data; agent communication languages offering a generic and portable communication medium; and negotiation allowing multiple distributed entities to reach service level agreements. In the second part of the chapter, we focus on Grid service discovery, which we have identified as a prime candidate for use of agent technologies: we show that Grid-services need to be located via personalised, semantic-rich discovery processes, which must rely on the storage of arbitrary metadata about services that originates from both service providers and service users. We present UDDI-MT, an extension to the standard UDDI service directory approach that supports the storage of such metadata via a tunnelling technique that ties the metadata store to the original UDDI directory. The outcome is a flexible service registry which is compatible with existing standards and also provides metadata-enhanced service discovery.
Abstract:
This work presents a scalable and efficient parallel implementation of the Standard Simplex algorithm on a multicore architecture to solve large-scale linear programming problems. We present a general scheme explaining how each step of the Standard Simplex algorithm was parallelized, highlighting important points of the parallel implementation. Performance analyses were conducted by comparing against the sequential time of the Simplex tableau implementation and against the Simplex of IBM CPLEX. The experiments were executed on a shared-memory machine with 24 cores. The scalability analysis was performed with problems of different dimensions, finding evidence that our parallel Standard Simplex algorithm has better parallel efficiency for problems with more variables than constraints. In comparison with CPLEX, the proposed parallel algorithm achieved an efficiency up to 16 times better.
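For reference, the sketch below is a compact sequential version of the Standard Simplex tableau iteration that such a parallel implementation starts from; the per-row elimination in the pivot step is the part typically distributed across cores. The LP data are made up and there are no anti-cycling safeguards, so this is an illustration only, not the paper's parallel algorithm.

```python
# Sequential Standard Simplex tableau sketch for: max c @ x, A x <= b, x >= 0
# (b >= 0, so the slack basis is feasible). No anti-cycling rule; illustration only.
import numpy as np

def simplex_tableau(c, A, b):
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))          # tableau [A | I | b] over [-c | 0 | 0]
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = -c
    basis = list(range(n, n + m))             # slack variables form the initial basis
    while True:
        col = int(np.argmin(T[m, :-1]))       # entering column (most negative cost)
        if T[m, col] >= -1e-9:
            break                             # no negative reduced cost: optimal
        ratios = np.full(m, np.inf)
        pos = T[:m, col] > 1e-9
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))          # leaving row (minimum ratio test)
        if not np.isfinite(ratios[row]):
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]                 # normalize the pivot row
        for r in range(m + 1):
            if r != row:                      # eliminate the pivot column; these
                T[r] -= T[r, col] * T[row]    # row updates are the parallelizable step
        basis[row] = col
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[m, -1]

# Tiny illustrative LP (made-up data): max 3x1 + 5x2
# s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18.
x, z = simplex_tableau(np.array([3.0, 5.0]),
                       np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
                       np.array([4.0, 12.0, 18.0]))
print("x* =", x, "objective =", z)            # expect x* = [2, 6], objective = 36
```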
Abstract:
The conventional implementation of the 3D finite-difference migration method uses inline and crossline splitting to improve the computational efficiency of the algorithm. This approach makes the algorithm computationally efficient but introduces numerical anisotropy, which in turn can mis-position dipping reflectors, especially reflectors with steep dips. In this work, in order to avoid numerical anisotropy, we implement the downward wavefield extrapolation operator without inline and crossline splitting in the frequency-space domain, via an implicit finite-difference method using the complex Padé approximation. We compare the performance of the iterative stabilized biconjugate gradient algorithm (Bi-CGSTAB) with the multifrontal massively parallel solver (MUMPS) for solving the linear system arising from the finite-difference migration method. We verify that, using the complex Padé expansion instead of the real Padé expansion, the iterative Bi-CGSTAB algorithm becomes more computationally efficient, i.e., the complex Padé expansion acts as a preconditioner for this iterative algorithm. As a consequence, the iterative Bi-CGSTAB algorithm is considerably more efficient than MUMPS for solving the linear system when only one term of the complex Padé expansion is used. For wide-angle approximations, direct methods are required. To validate and evaluate the properties of these migration algorithms, we use the SEG/EAGE salt model to compute its impulse response.
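The iterative-versus-direct comparison described in this abstract can be illustrated generically with SciPy: the sketch below solves a synthetic complex-valued banded system, standing in for a one-term Padé extrapolation operator, with a sparse direct LU factorization and with Bi-CGSTAB; the matrix, sizes and bandwidths are assumptions for illustration, not the actual migration operator.

```python
# Minimal sketch: solving a synthetic complex banded system with a sparse
# direct solver (SuperLU via splu) and with the iterative Bi-CGSTAB method.
# The matrix is an illustrative stand-in, not a migration extrapolation operator.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 20_000
# Complex, diagonally dominant matrix with near and far off-diagonal bands.
main = (4.0 + 0.5j) * np.ones(n)
A = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1),
              -0.5 * np.ones(n - 200), -0.5 * np.ones(n - 200)],
             [0, -1, 1, -200, 200], format="csc")
b = np.random.default_rng(0).standard_normal(n) + 0j

# Direct solve: one LU factorization followed by triangular solves.
lu = spla.splu(A)
x_direct = lu.solve(b)

# Iterative solve with Bi-CGSTAB (unpreconditioned here for simplicity).
x_iter, info = spla.bicgstab(A, b, maxiter=500)

print("direct residual:  ", np.linalg.norm(b - A @ x_direct))
print("bicgstab info:", info, "residual:", np.linalg.norm(b - A @ x_iter))
```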
Abstract:
Implementations of the Fourier finite-difference (FFD) migration method use directional splitting to speed up performance and reduce computational cost. However, this technique introduces numerical anisotropy that can mis-position dipping reflectors along the directions in which the splitting of the migration operator was not applied. We implement 3D FFD migration without directional splitting, in the frequency domain, using the complex Padé approximation. This approximation eliminates the numerical anisotropy at the price of a higher computational cost, since the wavefield must be obtained as the solution of a wide-banded linear system. Numerical experiments on both homogeneous and heterogeneous models show that directional splitting produces noticeable reflector mis-positioning errors in media with strong lateral velocity variation. We compare the performance of the FFD solver using the stabilized biconjugate gradient iterative method (BICGSTAB) and the multifrontal massively parallel direct solver (MUMPS), showing that the complex Padé approximation is an efficient preconditioner for BICGSTAB, reducing the number of iterations relative to the real Padé approximation. The BICGSTAB iterative method is more efficient than the MUMPS direct method when only one term of the complex Padé expansion is used. For wider operator aperture angles, more terms of the series are required in the migration operator, and in that case the direct method is more efficient. The algorithm was validated and its computational behaviour evaluated on the impulse response of the SEG/EAGE salt model.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Software dependencies play a vital role in programme comprehension, change impact analysis and other software maintenance activities. Traditionally, these activities are supported by source code analysis; however, the source code is sometimes inaccessible or difficult to analyse, as in hybrid systems composed of source code in multiple languages using various paradigms (e.g. object-oriented programming and relational databases). Moreover, not all stakeholders have adequate knowledge to perform such analyses. For example, non-technical domain experts and consultants raise most maintenance requests; however, they cannot predict the cost and impact of the requested changes without the support of the developers. We propose a novel approach to predicting software dependencies by exploiting the coupling present in domain-level information. Our approach is independent of the software implementation; hence, it can be used to approximate architectural dependencies without access to the source code or the database. As such, it can be applied to hybrid systems with heterogeneous source code or legacy systems with missing source code. In addition, this approach is based solely on information visible and understandable to domain users; therefore, it can be efficiently used by domain experts without the support of software developers. We evaluate our approach with a case study on a large-scale enterprise system, in which we demonstrate how up to 65% of the source code dependencies and 77% of the database dependencies are predicted solely based on domain information.
Abstract:
We investigate parallel algorithms for the solution of the Navier–Stokes equations in space-time. For periodic solutions, the discretized problem can be written as a large non-linear system of equations. This system of equations is solved by a Newton iteration. The Newton correction is computed using a preconditioned GMRES solver. The parallel performance of the algorithm is illustrated.
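A minimal sketch of this Newton plus Krylov pattern, using SciPy's newton_krylov (which computes the Newton correction with a Krylov solver and finite-difference Jacobian-vector products), is shown below; the residual is a small one-dimensional reaction-diffusion system standing in for the space-time Navier-Stokes residual, not the authors' discretization.

```python
# Minimal Newton-Krylov sketch: the Newton correction is computed by GMRES
# with Jacobian-vector products approximated by finite differences.
# The residual is an illustrative stand-in, not a Navier-Stokes discretization.
import numpy as np
from scipy.optimize import newton_krylov

n = 100
h = 1.0 / (n + 1)

def residual(u):
    # Discrete -u'' + u**3 = 1 on (0, 1) with u(0) = u(1) = 0.
    u_pad = np.concatenate(([0.0], u, [0.0]))
    return (-u_pad[:-2] + 2.0 * u_pad[1:-1] - u_pad[2:]) / h**2 + u**3 - 1.0

u0 = np.zeros(n)
u = newton_krylov(residual, u0, method="gmres", verbose=False)
print("max residual:", np.abs(residual(u)).max())
```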
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge that had been obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems could be only a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and a single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
As a result of their relative concentration towards the respective Atlantic margins, the silicic eruptives of the Paraná (Brazil)-Etendeka large igneous province are disproportionately abundant in the Etendeka of Namibia. The NW Etendeka silicic units, dated at ~132 Ma, occupy the upper stratigraphic levels of the volcanic sequences, restricted to the coastal zone, and comprise three latites and five quartz latites (QL). The large-volume Fria QL is the only low-Ti type. Its trace element and isotopic signatures indicate massive crustal input. The remaining NW Etendeka silicic units are enigmatic high-Ti types, geochemically different from low-Ti types. They exhibit chemical affinities with the temporally overlapping Khumib high-Ti basalt (see Ewart et al. Part 1) and high crystallization temperatures (≥980 to 1120 °C) inferred from augite and pigeonite phenocrysts, both consistent with their evolution from a mafic source. Geochemically, the high-Ti units define three groups, thought to be genetically related. We test whether these represent independent liquid lines of descent from a common high-Ti mafic parent. Although the recognition of latites reduces the apparent silica gap, difficulty is encountered in fractional crystallization models owing to the large volumes of two QL units. Numerical modelling does, however, support large-scale open-system fractional crystallization, assimilation of silicic to basaltic materials, and magma mixing, but cannot entirely exclude partial melting processes within the temporally active extensional environment. The fractional crystallization and mixing signatures add to the complexity of these enigmatic and controversial silicic magmas. The existence, however, of temporally and spatially overlapping high-Ti basalts is, in our view, not coincidental, and the high-Ti character of the silicic magmas ultimately reflects a mantle signature.