1000 results for Electronic Tables
Abstract:
The present study is dedicated to developing reflection skills in both students and teachers while studying the topic 'Electronic Tables' at school. The study provides a detailed explanation of the applications of the ALACT model, in which the process of reflection is realized via a cyclic model. An overview is given of specific examples from IT education that realize reflection.
Abstract:
The ~16-ka-long record of explosive eruptions from Shiveluch volcano (Kamchatka, NW Pacific) is refined using geochemical fingerprinting of tephra and radiocarbon ages. Volcanic glass from 77 prominent Holocene tephras and four Late Glacial tephra packages was analyzed by electron microprobe. Eruption ages were estimated using 113 radiocarbon dates for the proximal tephra sequence. These radiocarbon dates were combined with 76 dates for regional Kamchatka marker tephra layers into a single Bayesian framework taking into account the stratigraphic ordering within and between the sites. As a result, we report ~1,700 high-quality glass analyses from Late Glacial–Holocene Shiveluch eruptions of known ages. These define the magmatic evolution of the volcano and provide a reference for correlations with distal fall deposits. Shiveluch tephras represent the two major types of magmas that have fed the volcano during Late Glacial–Holocene time: Baidarny basaltic andesites and Young Shiveluch andesites. Baidarny tephras erupted mostly during Late Glacial time (~16–12.8 ka BP) but persisted into the Holocene as a subordinate admixture to the prevailing Young Shiveluch andesitic tephras (~12.7 ka BP–present). Baidarny basaltic andesite tephras have trachyandesite and trachydacite (SiO2 < 71.5 wt%) glasses, whereas the Young Shiveluch andesite tephras have rhyolitic glasses (SiO2 > 71.5 wt%). The strongly calc-alkaline, medium-K character of Shiveluch volcanic glasses, along with moderate Cl and CaO and low P2O5 contents, permits reliable discrimination of Shiveluch tephras from the majority of other large Holocene tephras of Kamchatka. The Young Shiveluch glasses exhibit wave-like variations in SiO2 content through time that may reflect alternating periods of high and low frequency/volume of magma supply to deep magma reservoirs beneath the volcano. The compositional variability of Shiveluch glass allows geochemical fingerprinting of individual Shiveluch tephra layers, which, along with age estimates, facilitates their use as a dating tool in paleovolcanological, paleoseismological, paleoenvironmental and archeological studies. Electronic tables accompanying this work offer a tool for statistical correlation of unknown tephras with proximal Shiveluch units, taking into account sectors of actual tephra dispersal, eruption size and expected age. Several examples illustrate the effectiveness of the new database. The data are used to assign a few previously enigmatic widespread tephras to particular Shiveluch eruptions. Our finding of Shiveluch tephras in sediment cores in the Bering Sea, at a distance of ~600 km from the source, permits re-assessment of the maximum dispersal distances for Shiveluch tephras and provides links between terrestrial and marine paleoenvironmental records.
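A common statistical tool for the kind of tephra correlation described above is a similarity coefficient computed over oxide compositions of volcanic glass. The Python sketch below illustrates the idea only; the oxide list, the reference compositions and the unknown sample are invented for illustration and are not taken from the paper's electronic tables.

import numpy as np

OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "CaO", "K2O", "Cl"]

def similarity_coefficient(unknown, reference):
    """Borchardt-style similarity coefficient: mean of min/max ratios
    over the analyzed oxides. A value of 1.0 means identical compositions."""
    u = np.asarray(unknown, dtype=float)
    r = np.asarray(reference, dtype=float)
    return (np.minimum(u, r) / np.maximum(u, r)).mean()

# Hypothetical mean glass compositions (wt%) for two reference tephra types.
reference_units = {
    "Young Shiveluch (rhyolitic glass)": [73.5, 0.30, 13.8, 1.6, 1.7, 2.6, 0.25],
    "Baidarny (trachydacitic glass)":    [68.0, 0.70, 15.2, 3.0, 2.8, 2.2, 0.15],
}

unknown = [73.1, 0.32, 13.9, 1.7, 1.8, 2.5, 0.24]  # hypothetical distal tephra
for name, ref in reference_units.items():
    print(f"{name}: SC = {similarity_coefficient(unknown, ref):.3f}")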
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimum manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while cylinder-to-cylinder variation in EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
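Two of the data-processing steps mentioned above, compensating a transport delay and a first-order sensor lag, can be sketched in a few lines. This is a minimal illustration under assumed values (sample rate, lag time constant, delay) with invented signals; it is not the study's processing pipeline.

import numpy as np

def estimate_delay(reference, delayed, max_lag):
    """Return the lag (in samples) that best aligns `delayed` with `reference`,
    found by maximizing the cross-correlation over candidate lags."""
    scores = [np.corrcoef(reference[:len(reference) - lag or None],
                          delayed[lag:])[0, 1] for lag in range(max_lag + 1)]
    return int(np.argmax(scores))

def undo_first_order_lag(y, tau, dt):
    """Invert the sensor model tau*dy/dt + y = x (first-order lag),
    reconstructing the input x by finite differences."""
    return y + tau * np.gradient(y, dt)

dt, tau, true_delay = 0.1, 0.5, 7              # s, s, samples (assumed values)
t = np.arange(0, 20, dt)
x = (np.sin(0.8 * t) > 0.3).astype(float)      # engine-side reference signal
delayed = np.roll(x, true_delay)               # transport delay (wrap-around ignored)
lagged = np.zeros_like(delayed)                # simulated first-order sensor response
for i in range(1, len(lagged)):
    lagged[i] = lagged[i - 1] + (dt / tau) * (delayed[i] - lagged[i - 1])

restored = undo_first_order_lag(lagged, tau, dt)
print("estimated transport delay:", estimate_delay(x, restored, max_lag=20), "samples")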
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution, in order to prevent extrapolation during the optimization process, has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To keep the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and results in lower emissions and fuel consumption, is intended to improve rather than replace the manual calibration process.
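A second-order linear dynamic constraint model of the kind proposed above can be sketched as a discrete-time filter that maps a commanded parameter trajectory to the actually achieved one, so the optimizer cannot exploit quasi-static, unreachable set points. The natural frequency and damping ratio below are illustrative assumptions, not values from the study.

import numpy as np

def second_order_response(u, wn, zeta, dt):
    """Simulate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u by explicit Euler
    integration; u is the commanded trajectory, y the achieved one."""
    y = np.zeros_like(u)
    ydot = 0.0
    for k in range(1, len(u)):
        yddot = wn**2 * (u[k - 1] - y[k - 1]) - 2 * zeta * wn * ydot
        ydot += dt * yddot
        y[k] = y[k - 1] + dt * ydot
    return y

dt = 0.01
t = np.arange(0, 5, dt)
commanded = np.where(t > 1.0, 1.0, 0.0)   # step command at t = 1 s (illustrative)
achieved = second_order_response(commanded, wn=4.0, zeta=0.7, dt=dt)
# `achieved`, not `commanded`, would feed the transient emission/torque models.
print(f"achieved value 0.5 s after the step: {achieved[t.searchsorted(1.5)]:.3f}")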
Abstract:
This paper describes a methodology that enables fast and reasonably accurate prediction of the reliability of power electronic modules featuring IGBTs and p-i-n diodes, by taking into account thermo-mechanical failure mechanisms of the devices and their associated packaging. In brief, the proposed simulation framework performs two main tasks which are tightly linked together: (i) the generation of the power devices' transient thermal response for realistic long load cycles and (ii) the prediction of the power modules' lifetime based on the obtained temperature profiles. In doing so, the first task employs compact, physics-based device models, power-loss lookup tables and polynomials, and combined material-failure and thermal modelling, while the second task uses advanced reliability tests for failure mode and time-to-failure estimation. The proposed technique is intended to be utilised as a design/optimisation tool for reliable power electronic converters, since it allows easy and fast investigation of the effects that changes in circuit topology or in devices' characteristics and packaging have on the reliability of the employed power electronic modules.
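As a rough illustration of the lifetime-prediction task, the sketch below converts a junction-temperature profile into a damage estimate using a generic Coffin-Manson law and Miner's rule, with a simple peak-to-valley count standing in for full rainflow counting. The constants A and n and the temperature profile are invented for illustration, not the paper's test-derived values.

import numpy as np

def cycles_to_failure(delta_T, A=3e14, n=5.0):
    """Generic Coffin-Manson law: cycles survived at temperature swing delta_T (K)."""
    return A * delta_T ** (-n)

def accumulated_damage(temps):
    """Miner's rule over successive peak-to-valley temperature swings."""
    d = np.diff(temps)
    # Keep only the turning points of the temperature profile.
    turning = [temps[0]] + [temps[i] for i in range(1, len(temps) - 1)
                            if d[i - 1] * d[i] < 0] + [temps[-1]]
    swings = np.abs(np.diff(turning))
    swings = swings[swings > 1.0]          # ignore sub-1-K ripple
    return sum(1.0 / cycles_to_failure(dT) for dT in swings)

profile = np.array([40, 95, 55, 110, 45, 90, 50])   # degC junction temps, assumed
damage = accumulated_damage(profile)
print(f"damage per load cycle: {damage:.2e}  ->  ~{1 / damage:.0f} cycles to failure")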
Abstract:
The creation of my hypermedia work Index of Love, which narrates a love story as an archive of moments, images and objects recollected, also articulated for me the potential of the book as electronic text. The book has always existed as both narrative and archive. Tables of contents and indexes allow the book to function simultaneously as linear narrative and as non-linear, searchable database. The book therefore has more in common with the so-called 'new media' of the 21st century than it does with the dominant 20th-century media of film, video and audiotape, whose logic and mode of distribution are resolutely linear. My thesis is that the non-linear logic of new media brings to the fore an aspect of the book, the index, whose potential for the production of narrative is only just beginning to be explored. When readers/users access an electronic work, such as a website, via its menu, they experience it simultaneously as narrative and archive. The narrative journey taken is created through the menu choices made. Within the electronic book, therefore, the index (or menu) has the potential to function as more than just an analytical or navigational tool: it can become a creative, structuring device. This opens up new possibilities for the book, particularly as, in its paper-based form, the book indexes factual work but not fiction. In the electronic book, however, the index offers as rich a potential for fictional narratives as it does for factual volumes.
Abstract:
This paper presents single-chip FPGA implementations of the Advanced Encryption Standard (AES) algorithm, Rijndael. In particular, the designs utilise look-up tables to implement the entire Rijndael Round function. A comparison is provided between these designs and similar existing implementations. Hardware implementations of encryption algorithms prove much faster than equivalent software implementations, and since there is a need to perform encryption on data in real time, speed is very important. Field Programmable Gate Arrays (FPGAs) are particularly well suited to encryption implementations due to their flexibility and an architecture that can be exploited to accommodate typical encryption transformations. In this paper, a Look-Up Table (LUT) methodology is introduced in which complex and slow operations are replaced by simple LUTs. A LUT-based, fully pipelined Rijndael implementation is described which has a pre-placement performance of 12 Gbits/s; this is a factor of 1.2 faster than an alternative design in which look-up tables implement only one of the Round function transformations, and 6 times faster than other previous single-chip implementations. Iterative Rijndael implementations based on the LUT design approach are also discussed and prove faster than typical iterative implementations.
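The LUT idea can be made concrete with the classic AES "T-table" construction, which merges SubBytes and MixColumns into single 256-entry lookups, exactly the kind of table an FPGA design maps into block RAM. The Python sketch below builds the standard AES S-box and the T0 table; it illustrates the transformation only and does not model the paper's pipelined FPGA architecture.

def xtime(a):
    """Multiply by x (i.e., by 2) in GF(2^8) with the AES polynomial 0x11B."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def gmul(a, b):
    """GF(2^8) multiplication, used for the MixColumns coefficients."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

# Build the AES S-box: multiplicative inverse followed by the affine map.
inv = [0] * 256
for x in range(1, 256):
    for y in range(1, 256):
        if gmul(x, y) == 1:
            inv[x] = y
            break
SBOX = []
for x in range(256):
    b = inv[x]
    s = b
    for _ in range(4):
        b = ((b << 1) | (b >> 7)) & 0xFF   # rotate left by 1 bit
        s ^= b
    SBOX.append(s ^ 0x63)

# T0 merges SubBytes with one MixColumns column (coefficients 2, 1, 1, 3),
# so a whole column transformation becomes one table read plus XORs.
T0 = [(gmul(s, 2) << 24) | (s << 16) | (s << 8) | gmul(s, 3) for s in SBOX]

assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED   # known AES S-box values
print(f"T0[0x00] = {T0[0]:08x}")                   # expected: c66363a5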
Abstract:
Qualitative spatial reasoning (QSR) is an important field of AI that deals with the qualitative aspects of spatial entities. Regions and their relationships are described in qualitative terms instead of numerical values. This approach models human reasoning about such entities more closely than other approaches. The relationships between regions that we encounter in daily life are normally formulated in natural language; for example, one can outline one's room plan to an expert by indicating which rooms should be connected to each other. Mereotopology, as an area of QSR, combines mereology, topology and algebraic methods. As mereotopology plays an important role in region-based theories of space, our focus is on one of the most widely referenced formalisms for QSR, the region connection calculus (RCC). RCC is a first-order theory based on a primitive connectedness relation, a binary symmetric relation satisfying some additional properties. Using this relation we can define a set of basic binary relations which are jointly exhaustive and pairwise disjoint (JEPD), meaning that between any two spatial entities exactly one of the basic relations holds. Basic reasoning can then be done by using the composition operation on relations, whose results are stored in a composition table. Relation algebras (RAs) have become a central tool for spatial reasoning in QSR. These algebras are based on equational reasoning, which can be used to derive further relations between regions in a given situation. Each such algebra describes the relations between regions up to a certain degree of detail. In this thesis we use the method of splitting atoms in an RA to reproduce known algebras such as RCC15 and RCC25 systematically and to generate new algebras, and hence more detailed descriptions of regions, beyond RCC25.
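Composition-table reasoning over the eight JEPD base relations of RCC8 can be sketched as follows. Only a handful of the 64 table entries are shown here; the data layout and function names are illustrative, not taken from the thesis.

# Eight JEPD base relations of RCC8.
RCC8 = {"DC", "EC", "PO", "TPP", "NTPP", "TPPi", "NTPPi", "EQ"}

# Partial composition table: (R1, R2) -> set of possible relations for R1;R2.
COMP = {
    ("NTPP", "NTPP"): {"NTPP"},
    ("TPP",  "NTPP"): {"NTPP"},
    ("NTPP", "DC"):   {"DC"},
    ("DC",   "NTPP"): {"DC", "EC", "PO", "TPP", "NTPP"},
    ("EC",   "EC"):   {"DC", "EC", "PO", "TPP", "TPPi", "EQ"},
}

def compose(rel_xy, rel_yz):
    """Compose two disjunctive relations: union the table entries, treating
    EQ as the identity and any entry missing from this partial table as the
    universal relation (no information)."""
    result = set()
    for r1 in rel_xy:
        for r2 in rel_yz:
            if r1 == "EQ":
                result |= {r2}
            elif r2 == "EQ":
                result |= {r1}
            else:
                result |= COMP.get((r1, r2), RCC8)
    return result

# If x is a non-tangential proper part of y, and y is disconnected from z,
# then x must be disconnected from z:
print(compose({"NTPP"}, {"DC"}))   # -> {'DC'}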
Abstract:
Concertation (collaborative public consultation) is a recent and increasingly widespread phenomenon. It applies to many fields, notably urban planning and, more recently, heritage protection. It appears to be an appropriate tool for municipal authorities to deal with conflicts over development projects, particularly those related to heritage protection. Our question concerns the contribution of such consultation to heritage preservation and the relevance of the means put in place to achieve this objective. Are the Tables de concertation (consultation round tables), as a consultative process, appropriate for managing heritage sites? In light of a theoretical discussion of the concept of concertation in planning, we carry out a comparative analysis of two Tables de concertation, that of Vieux-Montréal and that of Mont-Royal. Our analysis focuses on evaluating the consultation process and on the construction of an overall vision for the future of the heritage districts concerned. The objective is to characterize the consultation process used in Montreal and to assess its contribution to heritage protection. The analysis of our two case studies reveals a consultation process specific to Montreal, with its own characteristics, but one that still needs refining to be fully effective. Our research concludes on the need to improve the process as studied, through a number of avenues worth exploring.
Using monotone splines to condense mortality tables in a Bayesian context
Abstract:
In this thesis, we seek to model two-way tables that are monotone in rows and/or columns, with an eventual application to mortality tables. We adopt a Bayesian nonparametric approach and represent the functional form of the data with two-dimensional splines. The objective is to condense a mortality table, that is, to reduce the storage space of the table while minimizing the loss of information. We also wish to study the time required to reconstruct the table. The approximation must preserve the same properties as the reference table, in particular the monotonicity of the data. We work with a basis of monotone spline functions in order to impose monotonicity on the model more easily. Indeed, the flexible structure of splines and their easily manipulated derivatives facilitate imposing constraints on the desired model. After a review of one-dimensional modelling of monotone functions, we generalize the approach to the two-dimensional case. We describe how the monotonicity constraints are integrated into the prior model under the hierarchical Bayesian approach. We then indicate how to obtain a posterior estimator using Markov chain Monte Carlo methods. Finally, we study the behaviour of our estimator by modelling a table of the normal distribution as well as a table of Student's t distribution. The estimation of our data of interest, the mortality table, follows, in order to assess the improvement in their accessibility.
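A minimal, non-Bayesian sketch of the underlying monotone-spline device: a B-spline whose coefficients are nondecreasing is itself nondecreasing, so the coefficients can be parameterized as cumulative sums of nonnegative increments and fitted by nonnegative least squares. The thesis instead places priors on such constrained coefficients and samples them by MCMC; the knot count and the standard-normal table used as a target below are illustrative choices only.

import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import nnls
from scipy.stats import norm

# Target: one "column" of a normal-distribution table, i.e., Phi(z).
z = np.linspace(-3, 3, 61)
table = norm.cdf(z)

# Cubic B-spline design matrix on a modest clamped knot set.
degree, n_basis = 3, 8
interior = np.linspace(-3.1, 3.1, n_basis - degree + 1)
knots = np.r_[[-3.1] * degree, interior, [3.1] * degree]
design = BSpline.design_matrix(z, knots, degree).toarray()

# c = L @ theta with theta >= 0 makes the coefficients nondecreasing
# (theta[0] is the starting level; nonnegative is fine since Phi(z) >= 0).
L = np.tril(np.ones((n_basis, n_basis)))
theta, _ = nnls(design @ L, table)
coef = L @ theta

fitted = design @ coef
print(f"max abs error vs the 61-entry table: {np.abs(fitted - table).max():.4f}")
print("fitted values nondecreasing:", bool(np.all(np.diff(fitted) >= -1e-12)))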
Abstract:
Graduate program in Electrical Engineering - FEIS
Abstract:
This study aims to examine the international value distribution structure among major East Asian economies and the US. Mainstream trade theory explains the gains from trade; the global value chain (GVC) approach, however, emphasises the uneven benefits of globalization among trading partners. The present study builds mainly on this view, examining which economies gain the most and which the least from the East Asian production networks. Two key industries, electronics and automobiles, are our principal focus. The input-output method is employed to trace the creation and flows of value-added within the region. A striking finding is that some ASEAN economies see their shares of value-added increasingly reduced, with those shares captured by developed countries, particularly Japan. Policy implications are discussed in the final section.
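The input-output bookkeeping behind such value-added tracing can be illustrated in a few lines: given an inter-economy coefficient matrix A, the Leontief inverse (I - A)^(-1) converts final demand into gross output, and value-added ratios then allocate the gains across economies. The three-economy matrix below is fabricated for illustration and does not come from the study's data.

import numpy as np

economies = ["Japan", "ASEAN", "US"]
# A[i, j]: intermediate inputs from economy i per unit of output of economy j.
A = np.array([[0.15, 0.20, 0.05],
              [0.10, 0.25, 0.05],
              [0.05, 0.05, 0.10]])

leontief = np.linalg.inv(np.eye(3) - A)   # total requirements matrix
va_ratio = 1.0 - A.sum(axis=0)            # value-added per unit of gross output

# Final demand: 100 units of, say, ASEAN electronics exports.
demand = np.array([0.0, 100.0, 0.0])
output = leontief @ demand                # gross output induced in each economy
value_added = va_ratio * output           # value-added captured by each economy

for name, va in zip(economies, value_added):
    print(f"value-added captured by {name:5s}: {va:6.2f}")
print(f"total (equals final demand): {value_added.sum():.2f}")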