963 results for Picard iteration
Abstract:
The buried and semi-buried bunker, a bulwark since the early eighteenth century against increasingly sophisticated forms of ordnance, emerged in increasing numbers in Europe throughout the twentieth century across a series of scales, from the household Anderson shelter to the vast infrastructural works of the Maginot and Siegfried lines, or the Atlantic Wall. Its latest proliferation took place during the Cold War. From these perspectives, it is as emblematic of modernity as the department store, the great exhibition, the skyscraper or the machine-inspired domestic space advocated by Le Corbusier. It also represents the obverse, or perhaps a parodic iteration, of the preoccupations of early architectural modernism: a vast underground international style, cast in millions of tons of thick, reinforced concrete retaining walls, whose spatial relationship to the landscape above was strictly mediated through the periscope, the loop-hole, the range finder and the strategic necessity to both resist and facilitate the technologies and scopic regimes of weaponry. Embarking from Bunker Archaeology, this paper critically uncoils Paul Virilio's observation that, once physically eclipsed in its topographical and technical settings, the bunker's efficacy would mutate to other domains, retaining and remaking its meaning in another topology during the Cold War. 'The essence of the new fortress', he writes, 'is elsewhere, underfoot, invisible from here on in'. Shaped by this impulse, this paper seeks to render visible the bunker's significance in a wider milieu and, in doing so, excavate some of the relationships between the physical artefact, its implications and its enduring metaphorical and perceptual ghosts.
Abstract:
Demand Response (DR) algorithms manipulate the energy consumption schedules of controllable loads so as to satisfy grid objectives. Implementing DR algorithms with a centralised agent can be problematic for scalability reasons, and there are issues related to data privacy and robustness to communication failures. It is therefore desirable to use a scalable decentralised algorithm for the implementation of DR. In this paper, a hierarchical DR scheme is proposed for Peak Minimisation (PM) based on Dantzig-Wolfe Decomposition (DWD). In addition, a Time Weighted Maximisation option is included in the cost function, which improves the Quality of Service for devices seeking to receive their desired energy sooner rather than later. The paper also demonstrates how the DWD algorithm can be implemented more efficiently by calculating the upper and lower cost bounds after each DWD iteration.
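As an illustration of the bound-based early stopping this abstract describes, the following is a minimal sketch of Dantzig-Wolfe column generation on a toy peak-minimisation instance (not the paper's hierarchical DR scheme; the instance data, the greedy subproblem and all names are illustrative assumptions). The restricted master yields an upper bound, the subproblem reduced costs yield a Lagrangian lower bound, and the loop stops as soon as the bounds meet.

```python
# Minimal sketch: Dantzig-Wolfe decomposition for a toy peak-minimisation
# problem with per-device energy demands and per-slot caps (illustrative only).
import numpy as np
from scipy.optimize import linprog

T = 3                                            # time slots
E, cap = [4.0, 5.0], [2.0, 2.0]                  # device energy need / slot cap

def initial_column(e, c):
    # a trivially feasible schedule: front-load energy up to the slot cap
    x, rem = np.zeros(T), e
    for t in range(T):
        x[t] = min(c, rem)
        rem -= x[t]
    return x

cols = [[initial_column(E[d], cap[d])] for d in range(len(E))]

def solve_master(cols):
    # restricted master: min z s.t. aggregate load <= z in every slot,
    # one convexity row per device; variables are z then one lambda per column
    n = 1 + sum(len(c) for c in cols)
    cost = np.zeros(n); cost[0] = 1.0
    A_ub = np.zeros((T, n)); A_ub[:, 0] = -1.0
    A_eq = np.zeros((len(cols), n))
    j = 1
    for d, cset in enumerate(cols):
        for x in cset:
            A_ub[:, j] = x
            A_eq[d, j] = 1.0
            j += 1
    res = linprog(cost, A_ub=A_ub, b_ub=np.zeros(T),
                  A_eq=A_eq, b_eq=np.ones(len(cols)), method="highs")
    return res.fun, res.ineqlin.marginals, res.eqlin.marginals

def pricing(d, pi, mu_d):
    # device subproblem: minimise the column's reduced cost (-pi)@x - mu_d
    # over {sum(x) = E[d], 0 <= x <= cap[d]}; greedy fill of cheapest slots
    order = np.argsort(-pi)
    x, rem = np.zeros(T), E[d]
    for t in order:
        x[t] = min(cap[d], rem)
        rem -= x[t]
    return x, (-pi) @ x - mu_d

for it in range(20):
    ub, pi, mu = solve_master(cols)              # restricted master: upper bound
    priced = [pricing(d, pi, mu[d]) for d in range(len(cols))]
    lb = ub + sum(rc for _, rc in priced)        # Lagrangian lower bound
    print(f"iteration {it}: UB = {ub:.4f}, LB = {lb:.4f}")
    if ub - lb < 1e-6:                           # bounds met: terminate early
        break
    for d, (x, rc) in enumerate(priced):
        if rc < -1e-9:
            cols[d].append(x)                    # add the improving schedule
```

Because the lower bound is available after every iteration, the loop can terminate well before all improving columns have been enumerated, which is the efficiency gain the abstract points to.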
Abstract:
Periodic monitoring of the pavement condition facilitates a cost-effective distribution of the resources available for maintenance of the road infrastructure network. The task can be accurately carried out using profilometers, but such an approach is generally expensive. This paper presents a method to collect information on the road profile via accelerometers mounted in a fleet of non-specialist vehicles, such as police cars, that are in use for other purposes. It proposes an optimisation algorithm, based on Cross Entropy theory, to predict road irregularities. The Cross Entropy algorithm estimates the height of the road irregularities from vehicle accelerations at each point in time. To test the algorithm, the crossing of a half-car roll model is simulated over a range of road profiles to obtain accelerations of the vehicle sprung and unsprung masses. The simulated vehicle accelerations are then used as input to an iterative procedure that searches for the best solution to the inverse problem of finding the road irregularities. In each iteration, a sample of road profiles is generated and an objective function, defined as the sum of squares of the differences between the 'measured' and predicted accelerations, is minimised until convergence is reached. The reconstructed profile is classified according to ISO and IRI recommendations and compared to its original class. Results demonstrate that the approach is feasible and that a good estimate of the short-wavelength features of the road profile can be obtained, despite the variability between the vehicles used to collect the data.
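The search loop outlined in this abstract can be sketched with a generic cross-entropy minimiser. In the sketch below the half-car simulation is replaced by a stand-in model (second differences of the profile play the role of the predicted accelerations), so the data and names are illustrative assumptions rather than the paper's setup; only the sample-score-refit pattern is the point.

```python
# Minimal sketch of a cross-entropy search for profile heights that best
# reproduce 'measured' accelerations (vehicle model replaced by a stand-in).
import numpy as np

rng = np.random.default_rng(0)
n = 20                                        # number of profile heights sought
true_profile = 0.01 * np.sin(np.linspace(0, 4 * np.pi, n))

def predict_acc(profile):
    # stand-in for the half-car model: second differences as 'accelerations'
    return np.diff(profile, 2)

measured = predict_acc(true_profile)          # synthetic 'measured' signal

def objective(profile):
    # sum of squared differences between 'measured' and predicted accelerations
    return np.sum((predict_acc(profile) - measured) ** 2)

mu, sigma = np.zeros(n), np.full(n, 0.05)     # Gaussian sampling distribution
n_samples, n_elite, alpha = 200, 20, 0.7
for it in range(100):
    samples = rng.normal(mu, sigma, size=(n_samples, n))   # sample profiles
    scores = np.array([objective(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]          # keep the best
    mu = alpha * elite.mean(axis=0) + (1 - alpha) * mu     # refit distribution
    sigma = alpha * elite.std(axis=0) + (1 - alpha) * sigma
    if sigma.max() < 1e-5:                    # sampler has converged
        break
print(f"iterations: {it + 1}, final objective: {objective(mu):.3e}")
```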
Abstract:
In recent years we have witnessed a change in the way information is made available online. The emergence of the web for everyone made it easy to edit, publish and share information, generating a considerable growth in its volume. Systems for collecting and sharing this information quickly appeared which, beyond supporting the collection of resources, also allow users to describe them using tags or comments. The automatic organisation of this information is one of the greatest challenges in the context of the current web. Although several clustering algorithms exist, the trade-off between effectiveness (forming groups that make sense) and efficiency (running in acceptable time) is hard to achieve. Accordingly, this research investigates whether an automatic document-clustering system becomes more effective when a social classification system is integrated into it. We analysed and discussed two methods, based on the k-means algorithm, for document clustering that allow social tagging to be integrated into the process. The first integrates the tags directly into the Vector Space Model (VSM); the second proposes using the tags to select the initial seeds. The first method allows the tags to be weighted according to their occurrence in the document through the Social Slider parameter. This method was built on a prediction model suggesting that, when cosine similarity is used, documents that share tags move closer together while documents that do not share tags move further apart. The second method gave rise to an algorithm we call k-C. Besides selecting the initial seeds through a tag network, k-C also changes the way the new centroids are computed in each iteration. The change to the centroid computation followed a reflection on the use of Euclidean distance and cosine similarity in the k-means clustering algorithm. For the evaluation of the algorithms, two further algorithms were proposed: the "automatic ground truth" algorithm and the MCI algorithm. The first detects the structure of the data when it is unknown, and the second is an internal evaluation measure based on the cosine similarity between each document and its nearest document. The analysis of preliminary results suggests that the first method of integrating tags into the VSM has more impact on the k-means algorithm than on the k-C algorithm. Moreover, the results show no correlation between the choice of the SS parameter and the quality of the clusters. The remaining tests were therefore conducted using only the k-C algorithm (without tag integration in the VSM), and the results obtained indicate that this algorithm tends to produce more effective clusters.
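A minimal sketch of the first integration method described above, under stated assumptions: tags are appended to the document's Vector Space Model with a weight playing the role of the Social Slider (SS), vectors are length-normalised, and clusters are produced with plain k-means (the thesis's k-C algorithm and its tag-network seeding are not reproduced). The toy corpus is invented for illustration.

```python
# Illustrative sketch: fold social tags into the VSM with a Social Slider
# weight, then cluster with k-means over length-normalised vectors.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

docs = ["grid energy demand", "energy market price", "road profile vehicle"]
tags = ["power smart-grid", "power market", "pavement monitoring"]
SS = 0.5                                       # Social Slider weight for tags

X_doc = TfidfVectorizer().fit_transform(docs).toarray()
X_tag = TfidfVectorizer().fit_transform(tags).toarray()
X = np.hstack([X_doc, SS * X_tag])             # tags folded into the VSM
X = normalize(X)                               # unit norm per document

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                                  # cluster assignment per document
```

On unit-norm vectors, Euclidean k-means is monotonically related to cosine similarity (||a - b||^2 = 2 - 2cos(a, b)), which is the relationship that the thesis's reflection on Euclidean distance versus cosine similarity turns on.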
Abstract:
Master's dissertation in Biological Engineering, Faculdade de Engenharia de Recursos Naturais, Universidade do Algarve, 2009.
Abstract:
A very large share of the faults occurring in computer systems arises in memory. This work develops a decomposition-oriented model that examines the interactions between memory faults and system performance. First, the memory behaviour of a job is characterised by a multi-phase Independent Reference Model. This serves as the basis of a model for the occurrence of faults that incorporates workload characteristics such as job residence time, page-access rate and paging rate. The probability with which a memory fault is detected can then be computed. The measures required to handle memory faults determine the mean fault-induced load. The interactions between fault and performance behaviour are described by a system of nonlinear equations, for whose solution an iterative procedure is derived. Finally, the model is illustrated with detailed examples, and the influence of several model parameters on performance and reliability measures is examined.
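The iterative procedure mentioned for the system of nonlinear equations follows the generic fixed-point (successive-substitution, i.e. Picard-type) pattern sketched below; the two-variable map G is an illustrative stand-in, not the model from the abstract.

```python
# Minimal sketch of fixed-point iteration for a coupled nonlinear system
# v = G(v), as used to resolve mutual dependence between two quantities.
import numpy as np

def G(v):
    # illustrative coupled system: the performance measure and the
    # fault-induced load each depend on the other
    x, y = v
    return np.array([1.0 / (1.0 + 0.5 * y), 0.2 * x ** 2])

v = np.array([1.0, 0.0])                      # initial guess
for it in range(100):
    v_new = G(v)
    if np.linalg.norm(v_new - v) < 1e-12:     # successive iterates agree
        break
    v = v_new
print(f"converged after {it} iterations: {v_new}")
```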
Abstract:
This thesis examines how university students who were bullied in secondary school constructed their identities. It also aims to better understand the resilience process that led them to persevere all the way to university. Bullying is a major social problem, affecting between 16.5% and 36% of students during their schooling (Beaumont, Leclerc, Frenette & Proulx, 2014; Conseil canadien sur l'apprentissage, 2008; Institut de la statistique du Québec, 2012). Scientifically, the problem has been examined from different angles, but few studies have looked at how it can influence the educational trajectory and identity construction of the adolescents who were its victims. For this thesis, eighteen university students were met in individual interviews inspired by the biographical life-story approach. The chosen angle shed light on 'life after bullying' and yielded a typology comprising three types of trajectory. The first type, the trajectory in which school perseverance was compromised, is characterised by bullying acting as a brake on the pursuit of a positive educational path. The second type, the transition-focused trajectory, highlights repercussions of a contextual nature. Finally, in the success-focused trajectory, bullying pushed students to invest more in their studies and careers and to experience success. This study also provides a descriptive account of the repercussions of bullying on school perseverance and career choice. The results also made it possible to apply a new theoretical lens to the identity construction of students who experience bullying, namely identity control theory (Kerpelman et al., 1997).
Abstract:
The Effective Classroom Practice project aimed to identify key factors that contribute to effective teaching in primary and secondary phases of schooling in different socioeconomic contexts. This article addresses the ways in which qualitative and quantitative approaches were combined within an integrated design to provide a comprehensive methodology for the research purposes. Strategies for the study are discussed, followed by the challenges of combining complex statistics with individual stories, particularly in relation to the ongoing iteration between these different data sets, and issues of validity and reliability. The findings shed new light on the meanings and measurement of teachers’ effective classroom practice and the complex nature of, and relationships with, professional life phase, teacher identities, and school context.
Abstract:
In this paper, we propose a low-complexity architecture for the implementation of adaptive IQ-imbalance compensation in quadrature zero-IF receivers. Our blind IQ-compensation scheme jointly compensates for IQ phase and gain errors without the need for test/pilot tones. The proposed architecture employs early termination of the iteration process, which enables parts of the adaptive algorithm to be powered down, thereby saving power. Complexity, evaluated in terms of power-down efficiency, is reduced by 37-50% for 32-PSK and by 37-58% for 64-QAM modulated signals.
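For illustration, here is a block-based sketch of blind joint gain/phase compensation with early termination, under stated assumptions: a QPSK stand-in signal, block-averaged updates in place of the paper's sample-by-sample architecture, and an invented tolerance; the paper's exact scheme and termination rule are not reproduced. The phase corrector nulls the residual I/Q correlation, the gain corrector equalises branch powers, and once both residuals fall below the tolerance the adaptation can be powered down.

```python
# Illustrative sketch: blind IQ gain/phase correction with early termination.
import numpy as np

rng = np.random.default_rng(1)
n, block = 50000, 2000
sym = rng.choice([-1.0, 1.0], (n, 2)) / np.sqrt(2)   # stand-in QPSK symbols
g_err, phi = 1.1, 0.1                                # unknown gain/phase errors
i_rx = sym[:, 0]
q_rx = g_err * (sym[:, 1] * np.cos(phi) + sym[:, 0] * np.sin(phi))

w_p, w_g = 0.0, 1.0                # phase and gain correction coefficients
step, tol = 0.8, 2e-2              # tolerance sits above the estimator noise
for b in range(n // block):
    i_b = i_rx[b * block:(b + 1) * block]
    q_c = w_g * (q_rx[b * block:(b + 1) * block] - w_p * i_b)  # corrected Q
    r_p = np.mean(i_b * q_c)             # residual I/Q correlation (phase)
    r_g = np.mean(i_b ** 2 - q_c ** 2)   # residual branch-power mismatch (gain)
    if abs(r_p) + abs(r_g) < tol:        # early termination: residuals small,
        break                            # adaptation logic can be powered down
    w_p += step * r_p
    w_g += step * r_g
print(f"stopped after {b} blocks: w_p={w_p:.3f} "
      f"(ideal {g_err * np.sin(phi):.3f}), w_g={w_g:.3f} "
      f"(ideal {1 / (g_err * np.cos(phi)):.3f})")
```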
Abstract:
The iterative nature of turbo-decoding algorithms increases their complexity compared to conventional FEC decoding algorithms. The two iterative decoding algorithms, the Soft-Output Viterbi Algorithm (SOVA) and the Maximum A Posteriori (MAP) algorithm, require complex decoding operations over several iteration cycles. For real-time implementation of turbo codes, reducing decoder complexity while preserving bit-error-rate (BER) performance is therefore an important design consideration. In this chapter, a modification to the Max-Log-MAP algorithm is presented: the extrinsic information exchanged between the constituent decoders is scaled. The remainder of this chapter is organized as follows. Section 1 gives an overview of the turbo encoding and decoding processes, the MAP algorithm and its simplified versions, the Log-MAP and Max-Log-MAP algorithms. Section 2 introduces the extrinsic information scaling, presents simulation results, and discusses the performance of different methods for choosing the best scaling factor. Section 3 discusses trends and applications of turbo coding from the perspective of wireless applications.
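The extrinsic-information scaling can be sketched as follows; `siso` is a dummy stand-in for a constituent Max-Log-MAP decoder, the permutation stands in for the code's interleaver, and the scaling factor of 0.7 is merely a commonly quoted illustrative value, not the chapter's result.

```python
# Skeleton of a turbo-decoding loop with scaled extrinsic information.
import numpy as np

SCALE = 0.7                          # illustrative extrinsic scaling factor

def turbo_decode(llr_ch1, llr_ch2, interleave, deinterleave, siso, n_iter=8):
    # each constituent decoder receives the *scaled* extrinsic output of the
    # other as a priori information
    ext = np.zeros_like(llr_ch1)
    for _ in range(n_iter):
        e1 = siso(llr_ch1, SCALE * ext)              # decoder 1
        e2 = siso(llr_ch2, SCALE * interleave(e1))   # decoder 2 (interleaved)
        ext = deinterleave(e2)
    return llr_ch1 + SCALE * ext                     # final LLRs for decisions

# toy stand-ins so the skeleton executes; a real system would plug in actual
# Max-Log-MAP constituent decoders and the code's interleaver
perm = np.random.default_rng(2).permutation(8)
inv = np.argsort(perm)
llr = np.array([1.2, -0.4, 0.9, -2.0, 0.3, -0.7, 1.5, -1.1])
out = turbo_decode(llr, llr[perm],
                   interleave=lambda x: x[perm],
                   deinterleave=lambda x: x[inv],
                   siso=lambda ch, ap: 0.5 * (ch + ap))  # dummy SISO stage
print(np.sign(out))                                      # hard decisions
```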
Abstract:
This paper presents an artificial neural network applied to the forecasting of electricity market prices, with the special feature of being dynamic. The dynamism operates at two levels. The first level is a re-training of the network in every iteration, so that the artificial neural network is always able to consider the most recent data and constantly adapt itself to the most recent events. The second level is the adaptation of the neural network's execution time to the circumstances of its use, performed through automatic adjustment of the amount of data considered for training the network. This is an advantageous and indispensable feature for this neural network's integration in ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to the market negotiating players of MASCEM (Multi-Agent Simulator of Competitive Electricity Markets).
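A minimal sketch of the two levels of dynamism, under stated assumptions: a generic scikit-learn regressor and a synthetic price series stand in for the paper's network and market data, and the window-adjustment rule is an invented illustration of adapting execution time through the amount of training data.

```python
# Illustrative sketch: re-train every iteration on a sliding window, and
# adjust the window length to respect an execution-time budget.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
prices = np.cumsum(rng.normal(0, 1, 500)) + 50   # synthetic price series
window, budget, k = 150, 0.5, 5                  # data window, budget (s), lags

for t in range(300, 310):                        # one forecast per market step
    lo = t - window
    X = np.array([prices[i - k:i] for i in range(lo + k, t)])  # lag features
    y = prices[lo + k:t]                                       # targets
    start = time.perf_counter()
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500,
                         random_state=0).fit(X, y)   # level 1: re-train
    elapsed = time.perf_counter() - start
    forecast = model.predict(prices[t - k:t].reshape(1, -1))[0]
    # level 2: adjust the amount of training data to meet the time budget
    window = max(50, window - 25) if elapsed > budget else min(450, window + 10)
    print(f"t={t} forecast={forecast:.2f} "
          f"train_time={elapsed:.3f}s window={window}")
```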
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find application in many fields. As the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) and the Mangasarian-Fromovitz constraint qualification (MFCQ) at every feasible point, nonlinear programming theory may not be directly applied to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of these, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other prioritises the equality and inequality constraints and neglects the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are from the MACMPEC collection.
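The two-phase structure of an Inexact Restoration iteration can be illustrated on a generic equality-constrained toy problem (this is not the paper's MPCC algorithm nor either of its feasibility-phase strategies): each iteration first restores feasibility, then improves optimality along the tangent space of the constraints.

```python
# Illustrative sketch of the feasibility/optimality split of an IR iteration.
import numpy as np

# toy problem: min (x-2)^2 + (y-1)^2  subject to  x^2 + y^2 = 1
f_grad = lambda v: np.array([2 * (v[0] - 2), 2 * (v[1] - 1)])
h      = lambda v: v[0] ** 2 + v[1] ** 2 - 1
h_grad = lambda v: np.array([2 * v[0], 2 * v[1]])

x = np.array([2.0, 2.0])
for it in range(200):
    # feasibility phase: Gauss-Newton step reducing the constraint violation
    g = h_grad(x)
    y = x - h(x) * g / (g @ g)
    # optimality phase: gradient step projected onto the tangent space of h
    g = h_grad(y)
    d = -f_grad(y)
    d -= ((d @ g) / (g @ g)) * g
    x = y + 0.1 * d
    if abs(h(x)) < 1e-8 and np.linalg.norm(d) < 1e-6:
        break
print(x)   # approaches (2, 1)/sqrt(5), the circle point nearest (2, 1)
```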