967 results for Flail space model
Abstract:
In this paper we propose a latent variable model, in the spirit of Israilevich and Kuttner (1993), to measure regional manufacturing production. To test the validity of the proposed methodology, we apply it to those Spanish regions that have a direct quantitative index. The results demonstrate the accuracy of the proposed methodology and show that it can overcome some of the difficulties of the indirect method applied by the INE, the Spanish National Institute of Statistics.
Abstract:
A new damage model based on a micromechanical analysis of cracked [±θ/90n]s laminates subjected to multiaxial loads is proposed. The model predicts the onset and accumulation of transverse matrix cracks in uniformly stressed laminates, the effect of matrix cracks on the stiffness of the laminate, as well as the ultimate failure of the laminate. The model also accounts for the effect of the ply thickness on the ply strength. Predictions for the elastic properties of several laminates under multiaxial loads are presented.
Abstract:
A continuum damage model for the prediction of damage onset and structural collapse of structures manufactured from fiber-reinforced plastic laminates is proposed. The principal damage mechanisms occurring in the longitudinal and transverse directions of a ply are represented by a damage tensor that is fixed in space. Crack closure effects under load reversal are taken into account using damage variables established as a function of the sign of the components of the stress tensor. Damage activation functions based on the LaRC04 failure criteria are used to predict the different damage mechanisms occurring at the ply level. The constitutive damage model is implemented in a finite element code. The objectivity of the numerical model is assured by regularizing the dissipated energy at a material point using Bazant's Crack Band Model. To verify the accuracy of the approach, analyses of coupon specimens were performed, and the numerical predictions were compared with experimental data.
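For reference, the crack band regularization invoked above can be stated in general terms (the notation here is ours, not necessarily the paper's): the softening law at a material point is scaled by the characteristic length l* of the finite element so that the energy dissipated per unit volume over the full strain history matches the fracture toughness of the corresponding damage mode per unit of localized area,

\[ g_M = \int_0^\infty \sigma \, d\varepsilon = \frac{G_M}{l^*}, \]

where G_M is the fracture toughness associated with damage mode M. Adjusting the softening branch so that this equality holds keeps the dissipated energy independent of the mesh size.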
Abstract:
A thermodynamically consistent damage model for the simulation of progressive delamination under variable mode ratio is presented. The model is formulated in the context of Damage Mechanics. The constitutive equation that results from the definition of the free energy as a function of a damage variable is used to model the initiation and propagation of delamination. A new delamination initiation criterion is developed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The proposed formulation accounts for crack closure effects, avoiding interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation. The numerical predictions given by the model are compared with experimental results.
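A generic form of the constitutive equation described above, written here only as an illustrative sketch (our notation, not the paper's exact formulation), relates the interface tractions to the displacement jumps through a scalar damage variable, with the normal stiffness left undegraded under compression so that adjacent layers cannot interpenetrate after decohesion:

\[ \tau_i = (1-d)\,K\,\delta_i \;\; (i=1,2), \qquad \tau_3 = \begin{cases} (1-d)\,K\,\delta_3, & \delta_3 \ge 0,\\ K\,\delta_3, & \delta_3 < 0, \end{cases} \]

where K is the interface penalty stiffness, δ_i are the displacement jumps, and d ∈ [0, 1] is the damage variable whose evolution is governed by the mixed-mode initiation and propagation criteria.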
Abstract:
The objective of this master's thesis is to investigate the loss behavior of a three-level ANPC inverter and compare it with a conventional NPC inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. In order to make the comparison sufficiently accurate and detailed, appropriate NPC and ANPC simulation models must be obtained. The same SVM control model is used for both the NPC and ANPC inverter models. The principles of the control algorithms and the structure and description of the models are clarified. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented based on the results obtained for each semiconductor device, their switching and conduction losses, and the efficiency of the inverters. The alternative switching states of the ANPC topology allow losses to be distributed among the switches more evenly than in the NPC inverter. Naturally, the losses of a switching device depend on its position in the topology. Distributing losses among the components in the ANPC topology reduces the stress on certain switches, so losses are shared more equally among the semiconductors; however, the overall efficiency of the two inverters is the same. As a new contribution to earlier studies, models of the SVM control and of the NPC and ANPC inverters have been built. Thus, this thesis can be used in further, more detailed modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
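To illustrate the kind of practical loss-calculation approach the abstract refers to, a minimal sketch in Python follows; the device parameters, reference values, and operating point are hypothetical placeholders, not figures from the thesis. Conduction losses use a threshold-voltage-plus-on-resistance device model, and switching losses scale datasheet energies to the operating point.

def conduction_loss(i_rms, i_avg, v0=1.0, r_on=0.01):
    # On-state model v_ce(i) ~ v0 + r_on * i, so
    # P_cond = v0 * I_avg + r_on * I_rms^2
    return v0 * i_avg + r_on * i_rms**2

def switching_loss(f_sw, i_on, v_dc, e_on=1e-3, e_off=1.2e-3,
                   i_ref=100.0, v_ref=600.0):
    # Datasheet switching energies scaled linearly to the switched
    # current and DC-link voltage (a common engineering approximation).
    return f_sw * (e_on + e_off) * (i_on / i_ref) * (v_dc / v_ref)

# Hypothetical operating point for one switch position in the topology.
p_total = (conduction_loss(i_rms=80.0, i_avg=50.0)
           + switching_loss(f_sw=2500.0, i_on=80.0, v_dc=1100.0))
print(f"Estimated per-device loss: {p_total:.1f} W")

Summing such per-device estimates over all switches and diodes of the NPC and ANPC bridges, for every space-vector switching state, yields the kind of loss distribution and efficiency comparison discussed above.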
Abstract:
A model for predicting temperature evolution for automatic control systems in manufacturing processes requiring the coiling of bars in the transfer table is presented. Although the method is of a general nature, the presentation in this work refers to the manufacturing of steel plates in hot rolling mills. The prediction strategy is based on a mathematical model of the evolution of temperature in a coiling and uncoiling bar, formulated as a parabolic partial differential equation on a shape-changing domain. The mathematical model is solved numerically by space discretization via geometrically adaptive finite elements which accommodate the change in shape of the domain, using a computationally novel treatment of the thermal contact problem that results from coiling. Time is discretized according to a Crank-Nicolson scheme. Since the actual physical process takes less time than the process-controlling computer needs to solve the full mathematical model, a special predictive device was developed, in the form of a set of least squares polynomials based on the off-line numerical solution of the mathematical model.
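As a simplified illustration of the time discretization mentioned above, the following Python sketch applies a Crank-Nicolson scheme to 1-D heat conduction in a slab; it is a deliberately reduced stand-in for the coiling-bar model, and the geometry, material parameters, and boundary conditions are placeholders, not the paper's.

import numpy as np

# 1-D heat equation u_t = alpha * u_xx across the bar thickness,
# with fixed (Dirichlet) surface temperatures.
alpha, thickness, nx, dt, nt = 1e-5, 0.02, 51, 0.1, 600
dx = thickness / (nx - 1)
r = alpha * dt / (2 * dx**2)            # Crank-Nicolson coefficient

u = np.full(nx, 900.0)                  # initial bar temperature [C]
u[0] = u[-1] = 25.0                     # surface temperatures [C]

# Constant tridiagonal system (I - r*A) u^{n+1} = (I + r*A) u^n
A = (np.diag(-2.0 * np.ones(nx)) +
     np.diag(np.ones(nx - 1), 1) + np.diag(np.ones(nx - 1), -1))
A[0, :] = A[-1, :] = 0.0                # keep boundary rows fixed
lhs = np.eye(nx) - r * A
rhs = np.eye(nx) + r * A

for _ in range(nt):
    u = np.linalg.solve(lhs, rhs @ u)

print(f"Mid-thickness temperature after {nt * dt:.0f} s: {u[nx // 2]:.1f} C")

In the actual model the spatial discretization is carried out by geometrically adaptive finite elements on the shape-changing domain, but the unconditionally stable Crank-Nicolson update in time has the same structure as this sketch.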
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs.

Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains and, in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools.

While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as small as possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
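To make the dataflow terminology concrete, the following minimal sketch (plain Python, not RVC-CAL) shows two actors connected by a FIFO queue and a naive dynamic scheduler that fires any actor whose firing rule is satisfied.

from collections import deque

class Queue:
    """FIFO edge between two actors; the only allowed communication."""
    def __init__(self):
        self.buf = deque()
    def push(self, tok): self.buf.append(tok)
    def pop(self): return self.buf.popleft()
    def count(self): return len(self.buf)

class Producer:
    def __init__(self, out_q, data):
        self.out, self.data, self.i = out_q, data, 0
    def can_fire(self):               # firing rule: input data left
        return self.i < len(self.data)
    def fire(self):                   # consumes nothing, produces one token
        self.out.push(self.data[self.i]); self.i += 1

class Doubler:
    def __init__(self, in_q, results):
        self.inp, self.results = in_q, results
    def can_fire(self):               # firing rule: one token available
        return self.inp.count() >= 1
    def fire(self):                   # consumes one token, produces one result
        self.results.append(2 * self.inp.pop())

q, results = Queue(), []
actors = [Producer(q, [1, 2, 3]), Doubler(q, results)]

# Naive dynamic scheduler: repeatedly fire any enabled actor.
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()

print(results)   # [2, 4, 6]

Quasi-static scheduling, as investigated in the thesis, replaces most of this run-time polling with pre-computed firing sequences, leaving only the genuinely data-dependent decisions to run-time.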
Abstract:
Pilocarpine-induced (320 mg/kg, ip) status epilepticus (SE) in adult (2-3 months) male Wistar rats results in extensive neuronal damage in limbic structures. Here we investigated whether the induction of a second SE (N = 6) would generate damage and cell loss similar to that seen after a first SE (N = 9). Counts of silver-stained (indicative of cell damage) cells, using the Gallyas argyrophil III method, revealed a markedly lower neuronal injury in animals submitted to re-induction of SE compared to rats exposed to a single episode of pilocarpine-induced SE. This effect could be explained as follows: 1) the first SE removes the vulnerable cells, leaving behind resistant cells that are not affected by the second SE; 2) the first SE confers increased resistance to the remaining cells, analogous to the process of ischemic tolerance. Counting of Nissl-stained cells was performed to differentiate between these alternative mechanisms. Our data indicate that different neuronal populations react differently to SE induction. For some brain areas most, if not all, of the vulnerable cells are lost after an initial insult leaving only relatively resistant cells and little space for further damage or cell loss. For some other brain areas, in contrast, our data support the hypothesis that surviving cells might be modified by the initial insult which would confer a sort of excitotoxic tolerance. As a consequence of both mechanisms, subsequent insults after an initial insult result in very little damage regardless of their intensity.
Abstract:
Product assurance is an essential part of the product development process if developers want to ensure that the final product is safe and reliable. Product assurance can be supported with risk management and with different failure analysis methods. Product assurance is emphasized in the system development process of mission-critical systems, and the product assurance process in systems of this kind requires extra attention. In this thesis, the mission-critical systems are space systems, and the product assurance process of these systems is presented with the help of space standards. The product assurance process can be supported with agile development because agile emphasizes transparency of the process and fast response to changes. Even if the development process of space systems is highly standardized and resembles the waterfall model, it is still possible to adopt agile development in space systems development. This thesis aims to support the product assurance process of space systems with agile development so that the final product will be as safe and reliable as possible. The main purpose of this thesis is to examine how well product assurance is performed in Finnish space organizations and how product assurance tasks and activities can be supported with agile development. The research part of this thesis is performed in survey form.
Abstract:
Health Innovation Village at GE is one of the new communities targeted at startup and growth-oriented companies. It has been established at the premises of a multinational conglomerate that will promote networking and growth of startup companies. The concept combines features from traditional business incubators, accelerators, and coworking spaces. This research compares Health Innovation Village to these concepts regarding its goals, target clients, source of income, organization, facilities, management, and success factors. In addition, a new incubator classification model is introduced. Health Innovation Village is also examined from its tenants' perspective, and improvements are suggested. The work was implemented as a qualitative case study by interviewing GE staff with connections to Health Innovation Village as well as startup entrepreneurs and employees working there. The most evident features of Health Innovation Village correspond to those of business incubators, although it is atypical as a non-profit corporate business incubator. Strong network orientation and connections to venture capitalists are common characteristics of these new types of accelerators. The design of the premises conforms to the principles of coworking spaces, but the services provided to the startup companies are considerably more versatile than the services offered by coworking spaces. The advantages of Health Innovation Village are its first-class premises and exceptionally good networking possibilities that other types of incubators or accelerators are not able to offer. A conglomerate can also provide multifaceted special knowledge for young firms. In addition, both GE and the startups gained considerable publicity through their cooperation, a characteristic that benefits both parties. Most of the expectations of the entrepreneurs were exceeded. However, communication and the scope of cooperation remain challenges. Micro companies spend their time developing and marketing their products and acquiring financing. Therefore, communication should be as clear as possible and accessible everywhere. The startups would prefer to cooperate significantly more, but few have the time available to assume the responsibility of leadership. The entrepreneurs also expected to have more possibilities for cooperation with GE. Wider collaboration might be accomplished by curation in the same way as it is used in well-functioning coworking spaces, where curators take care of practicalities and promote cooperation. Communication issues could be alleviated if the community had its own Intranet pages where all information could be concentrated. In particular, a common calendar and a room reservation system could be useful. In addition, it could be beneficial to have a section of the Intranet open to both the GE staff and the startups so that those willing to share their knowledge and those having project offers could use it for advertising.
Abstract:
The topic of this thesis is marginal/minority popular music and the question of identity; the term "marginal/minority" specifically refers to members of racial and cultural minorities who are socially and politically marginalized. The thesis argument is that popular music produced by members of cultural and racial minorities establishes cultural identity and resists racist discourse. Three marginal/minority popular music artists and their songs have been chosen for analysis in support of the argument: Gil Scott-Heron's "Gun," Tracy Chapman's "Fast Car" and Robbie Robertson's "Sacrifice." The thesis will draw from two fields of study: popular music and postcolonialism. Within the area of popular music, Theodor Adorno's "Standardization" theory is the focus. Within the area of postcolonialism, this thesis concentrates on two specific topics: 1) Stuart Hall's and Homi Bhabha's overlapping perspectives that identity is a process of cultural signification, and 2) Homi Bhabha's concept of the "Third Space." For Bhabha (1995a), the Third Space defines cultures in the moment of their use, at the moment of their exchange. The idea of identities arising out of cultural struggle suggests that identity is a process as opposed to a fixed center, an enclosed totality. Cultures arise from historical memory and memory has no center. Historical memory is de-centered and thus cultures are also de-centered; they are not enclosed totalities. This is what Bhabha means by "hybridity" of culture - that cultures are not unitary totalities, they are ways of knowing and speaking about a reality that is in constant flux. In this regard, the language of "Otherness" depends on suppressing or marginalizing the productive capacity of culture in the act of enunciation. The Third Space represents a strategy of enunciation that disrupts, interrupts and dislocates the dominant discursive construction of US and THEM (a construction explained by Hall's concept of binary oppositions, detailed in Chapter 2). Bhabha uses the term "enunciation" as a linguistic metaphor for how cultural differences are articulated through discourse and thus how differences are discursively produced. Like Hall, Bhabha views culture as a process of understanding and of signification because Bhabha sees traditional cultures' struggle against colonizing cultures as transforming them. Adorno's theory of Standardization will be understood as a theoretical position of Western authority. The thesis will argue that Adorno's theory rests on the assumption that there is an "essence" to music, an essence that Adorno rationalizes as structure/form. The thesis will demonstrate that constructing music as possessing an essence is connected to ideology and power and that, in this regard, Adorno's Standardization theory is a discourse of White Western power. It will be argued that "essentialism" is at the root of the Western "rationalization" of music, and that the definition of what constitutes music is an extension of Western racist "discourses" of the Other. The methodological framework of the thesis entails a) applying semiotics to each of the three songs examined and b) applying Bhabha's model of the Third Space to each of the songs.
In this thesis, semiotics specifically refers to Stuart Hall's retheorized semiotics, which recognizes the dual function of semiotics in the analysis of marginal racial/cultural identities, i.e., to simultaneously represent embedded racial/cultural stereotypes and the marginal racial/cultural first person voice that disavows and thus reinscribes stereotyped identities. (Here, and throughout this thesis, "first person voice" is used not to denote the voice of the songwriter, but rather the collective voice of a marginal racial/cultural group.) This dual function fits with Hall's and Bhabha's idea that cultural identity emerges out of cultural antagonism, cultural struggle. Bhabha's Third Space is also applied to each of the songs to show that cultural "struggle" between colonizers and colonized produces cultural hybridities, musically expressed as fusions of styles/sounds. The purpose of combining semiotics and postcolonialism in the three songs to be analyzed is to show that marginal popular music, produced by members of cultural and racial minorities, establishes cultural identity and resists racist discourse by overwriting identities of racial/cultural stereotypes with identities shaped by the first person voice enunciated in the Third Space, to produce identities of cultural hybridities. Semiotic codes of embedded "Black" and "Indian" stereotypes in each song's musical and lyrical text will be read and shown to be overwritten by the semiotic codes of the first person voice, which are decoded with the aid of postcolonial concepts such as "ambivalence," "hybridity" and "enunciation."
Abstract:
We study the workings of the factor analysis of high-dimensional data using artificial series generated from a large, multi-sector dynamic stochastic general equilibrium (DSGE) model. The objective is to use the DSGE model as a laboratory that allows us to shed some light on the practical benefits and limitations of using factor analysis techniques on economic data. We explain in what sense the artificial data can be thought of as having a factor structure, study the theoretical and finite sample properties of the principal components estimates of the factor space, investigate the substantive reason(s) for the good performance of diffusion index forecasts, and assess the quality of the factor analysis of highly disaggregated data. In all our exercises, we explain the precise relationship between the factors and the basic macroeconomic shocks postulated by the model.
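For illustration, a minimal sketch of the principal components estimator of the factor space mentioned above, applied here to simulated data rather than the paper's DSGE-generated series:

import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 100, 3                       # periods, series, true factors

F_true = rng.standard_normal((T, r))        # latent factors
Lam = rng.standard_normal((N, r))           # factor loadings
X = F_true @ Lam.T + rng.standard_normal((T, N))   # observed panel with noise

# Standardize, then estimate the factors as the first r principal components,
# in the spirit of Stock-Watson-type diffusion index practice.
Xs = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Xs @ Xs.T / N)
F_hat = np.sqrt(T) * eigvec[:, -r:]         # estimated factor space

# The estimated factors span (approximately) the true factor space:
# regress the true factors on the estimates and report the fit.
beta, *_ = np.linalg.lstsq(F_hat, F_true, rcond=None)
resid = F_true - F_hat @ beta
print("R^2 per true factor:", 1 - resid.var(0) / F_true.var(0))

The estimated factors are only identified up to a rotation, which is why the check regresses the true factors on the estimated ones rather than comparing them column by column.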
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, of the Cholesky factor algorithm, and of our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is high or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyze transaction count data in financial markets.

In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginal distributions with specific degrees of freedom to capture the heterogeneity of returns. We draw the volatility as a block in the time dimension and one at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models.

In the third chapter, we evaluate the information contributed by variations in realized volatility to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow us to formulate not only the posterior densities of volatility but also the predictive densities of future volatility. We compare volatility forecasts, and the hit rates of forecasts, that do and do not use the information contained in realized volatility. This approach differs from existing approaches in the empirical literature, which are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of stock indices and exchange rates. The various competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
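For reference, a minimal Kalman filter with Rauch-Tung-Striebel smoothing for a toy univariate linear Gaussian state-space model; this is the kind of Kalman-filter-based baseline the first chapter compares against, not the thesis's own procedure, and it returns posterior means rather than the draws a simulation smoother would produce. All parameters below are placeholders.

import numpy as np

# State: x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
# Obs:   y_t = x_t + v_t,            v_t ~ N(0, r)
phi, q, r, T = 0.95, 0.1, 0.5, 300
rng = np.random.default_rng(1)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), T)

# Kalman filter (forward pass)
m = np.zeros(T); P = np.zeros(T)
m_pred, P_pred = 0.0, q / (1 - phi**2)       # stationary prior
for t in range(T):
    k = P_pred / (P_pred + r)                # Kalman gain
    m[t] = m_pred + k * (y[t] - m_pred)
    P[t] = (1 - k) * P_pred
    m_pred, P_pred = phi * m[t], phi**2 * P[t] + q

# Rauch-Tung-Striebel smoother (backward pass)
ms = m.copy()
for t in range(T - 2, -1, -1):
    J = P[t] * phi / (phi**2 * P[t] + q)
    ms[t] = m[t] + J * (ms[t + 1] - phi * m[t])

print("RMSE filtered :", np.sqrt(np.mean((m - x) ** 2)))
print("RMSE smoothed :", np.sqrt(np.mean((ms - x) ** 2)))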