894 results for mesh: Systems Theory


Relevance:

30.00%

Publisher:

Abstract:

As one of the highest-technology industries, aerospace is among the most complex fields of study. While the competitiveness of aircraft systems manufacturers attracts many researchers, some issues remain blank spots. One of these is after-sale modernization. This master's thesis investigates how that concept relates to the theory of competitive advantage. Tracing its roots in the lifecycle framework of complex technological systems, the thesis identifies the key drivers of the aircraft modernization market. The competitive positioning of market players is defined through a multiple case study in the form of several in-depth interviews. The key result of the research is the conclusion that modernization should be considered an inherent component of the strategy of any aircraft systems manufacturer; the thesis thereby aims to support managerial decision making.

Relevance:

30.00%

Publisher:

Abstract:

Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high-performance designs. The Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges of highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high-performance system in a limited chip area; the major advantages of 3D NoCs are considerable reductions in average latency and power consumption. Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support, and we address these issues by means of routing algorithms. Congestion of data packets may lead to increased network latency and power consumption, so we propose three different approaches for alleviating such congestion in the network. The first approach is based on measuring congestion information in different regions of the network, distributing this information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes under the prevailing traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for different routes is available. Faults also degrade performance significantly, since packets must take longer paths to be routed around them, which in turn increases congestion around the faulty regions. We propose four methods that tolerate faults at the link and switch level while using only shortest paths as long as such a path exists; their distinguishing characteristic is that they tolerate faults while also maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches that bypass faults before reaching them while avoiding unnecessary misrouting of packets. Current implementations of multicast communication incur a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network; while providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
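
As an illustrative sketch only, not code from the thesis, the first, congestion-information-based approach can be pictured as follows: each router keeps congestion estimates for its output ports and, among the minimal (shortest-path) ports, forwards a packet toward the least congested one. All names and the congestion metric here are assumptions.

```python
# Hypothetical sketch of a congestion-aware minimal routing decision in a
# 2D mesh NoC: among the output ports that lie on a shortest path to the
# destination, pick the one currently reporting the least congestion.

from dataclasses import dataclass

@dataclass
class Router:
    x: int
    y: int
    # Congestion estimate per output port, e.g. buffer occupancy in [0, 1].
    congestion: dict  # {"N": 0.2, "S": 0.7, "E": 0.1, "W": 0.4}

def minimal_ports(router: Router, dest_x: int, dest_y: int) -> list[str]:
    """Output ports that lie on a shortest path to the destination."""
    ports = []
    if dest_x > router.x:
        ports.append("E")
    elif dest_x < router.x:
        ports.append("W")
    if dest_y > router.y:
        ports.append("N")
    elif dest_y < router.y:
        ports.append("S")
    return ports

def route(router: Router, dest_x: int, dest_y: int) -> str | None:
    """Pick the least congested minimal port; None means the packet arrived."""
    candidates = minimal_ports(router, dest_x, dest_y)
    if not candidates:
        return None
    return min(candidates, key=lambda p: router.congestion[p])

# Example: at (1, 1) heading to (3, 2), ports E and N are both minimal;
# the router forwards toward whichever currently reports less congestion.
r = Router(1, 1, {"N": 0.6, "S": 0.1, "E": 0.2, "W": 0.3})
assert route(r, 3, 2) == "E"
```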

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the local dynamical behavior of a slewing flexible structure is analyzed, considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution in order to study the local stability of the system. At this critical point, a Hopf bifurcation occurs. In this region, one can find values of the control parameter (the structural damping coefficient) for which the system is unstable and values for which stability is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
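
For context, the standard center-manifold/Hopf construction behind this kind of analysis is the following textbook normal form (not reproduced from the paper; the symbols are generic).

```latex
% Textbook center-manifold/Hopf construction. With mu the control
% parameter (here, the structural damping coefficient measured from its
% critical value), the flow reduced to the two-dimensional center
% manifold takes the normal form, in polar coordinates,
\begin{align}
  \dot{r}      &= \mu\, r + a\, r^{3} + \mathcal{O}(r^{5}),\\
  \dot{\theta} &= \omega + \mathcal{O}(r^{2}),
\end{align}
% where omega is the linear frequency at criticality. The sign of the
% first Lyapunov coefficient a determines whether the bifurcating
% periodic motion is stable (a < 0, supercritical) or unstable
% (a > 0, subcritical), which is exactly the stable/unstable dichotomy
% discussed above.
```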

Relevance:

30.00%

Publisher:

Abstract:

Outsourcing is a common strategy for companies seeking cost savings and performance improvements. It has been especially prevalent in logistics, where warehousing and transportation are typical targets for outsourcing. However, while the benefits of logistics outsourcing are clear on paper, there are many cases in which companies fail to realize them. The most commonly cited reasons are poor information flow between the company and the third-party logistics partner and a lack of integration between the two partners. Uncertainty stems from lack of information, and it can cripple the whole outsourcing operation. This is where enterprise resource planning (ERP) systems step in, as they can play a significant role in improving the flow of information and integration, which in turn mitigates uncertainty. The purpose of the study is to examine whether ERP systems have an effect on a company's decision to outsource logistics operations. Alongside the rapid advances in technology during the past decades, ERP systems have also evolved; empirical research on the subject therefore needs constant revision, as it can quickly become outdated when ERP systems gain more advanced capabilities every year. The research was conducted as a qualitative single-case study of a Finnish manufacturing firm that had outsourced warehousing and transportation operations in the Swedish market. The empirical data were gathered through semi-structured interviews with three employees of the case company who were closely involved in the outsourcing operation. The theoretical framework used to analyze the empirical data was based on transaction cost economics. The results of the study were in line with the theoretical framework, in that the ERP system of the case company was seen as an enabler of its logistics outsourcing operation. However, the full theoretical benefits of ERP systems concerning extended enterprise functionality and flexibility were not attained, because the case company ran an older version of its ERP system. This emphasizes the importance of having up-to-date technology when seeking to overcome the shortcomings of ERP systems in outsourcing situations.

Relevance:

30.00%

Publisher:

Abstract:

Optimization of quantum measurement processes plays a pivotal role in carrying out better, i.e., more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair in which one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted sets of quantum devices, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are presented, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, which measures how 'close' to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, which quantifies the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
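
For orientation, one common formulation of the robustness of incompatibility from the literature is written out below; the thesis may use a variant, and the notation here is an assumption.

```latex
% One common formulation from the literature. For a pair of quantum
% devices (D_1, D_2), the robustness of incompatibility is
\begin{equation}
  R(D_1, D_2) = \min\Bigl\{\, t \ge 0 \;\Bigm|\;
    \exists\ \text{devices}\ N_1, N_2 \ \text{such that}\
    \Bigl(\tfrac{D_1 + t N_1}{1 + t},\ \tfrac{D_2 + t N_2}{1 + t}\Bigr)
    \ \text{is compatible} \Bigr\},
\end{equation}
% i.e. the least relative weight of admissible noise that renders the
% pair compatible; R(D_1, D_2) = 0 exactly when the pair is already
% compatible.
```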

Relevance:

30.00%

Publisher:

Abstract:

Since financial liberalization in the 1980s, non-profit-maximizing, stakeholder-oriented banks have outperformed private banks in Europe. This article draws on empirical research, banking theory, and theories of the firm to explain this apparent anomaly for neo-liberal policy and contemporary market-based banking theory. The realization of competitive advantages by alternative banks (savings banks, cooperative banks, and development banks) has significant implications for conceptions of bank change, regulation, and political economy.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. Multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits solving the equations of motion in real time. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and constraints are accounted for through an iterative process. A multibody system can be modelled with either rigid or flexible bodies. When using flexible bodies, the system can be described using a floating frame of reference formulation, in which the deformation modes needed can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing a model order reduction method such as Guyan reduction, the Craig-Bampton method, or Krylov subspace methods, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into discrete volumes and the pressure wave propagation in the hoses and pipes is neglected. Contact modeling is divided into two stages: contact detection, which determines when and where contact occurs, and contact response, which provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not considered. Increasing the number of bodies and constraint equations makes the system matrices large and sparse in structure. To increase computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation demonstrated. To assess computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing this sparse matrix technique. The results of a numerical example show that the proposed approach is applicable and produces accurate results within the real-time period.
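
As a minimal illustration of the sparse-solution idea, not the dissertation's implementation, the contrast between dense and sparse handling of the linear system arising at each integration step might look like this; the matrix structure is invented for illustration.

```python
# Minimal sketch: exploiting sparsity when solving the linear system
# M a = f that arises at each step of a constrained multibody simulation.

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

n = 2000                                   # generalized coordinates
# Banded stand-in for a mass/iteration matrix: mostly zeros off-diagonal.
diag = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
M = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
f = np.ones(n)                             # generalized forces

# A dense solve touches all n^2 entries; the sparse LU factorization only
# stores and operates on the ~3n nonzeros, which is what makes real-time
# rates reachable as the body/constraint count grows.
M_sparse = csc_matrix(M)
lu = splu(M_sparse)                        # factor once ...
a = lu.solve(f)                            # ... reuse every step

assert np.allclose(M @ a, f)
```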

Relevance:

30.00%

Publisher:

Abstract:

Traditionally, value is created by the manufacturer or producer of a product without engaging customers; value creation is thus a monopoly of the manufacturer. After gathering the raw materials, manufacturers embed value in a product, and that value is realized when the product is consumed. Although this traditional mode persists, the rise of more educated, smart, and technically skilled customers has changed the idea of value creation: customers now contribute as value co-creators even before the product is consumed. This thesis addresses that scenario, with the main purpose of understanding how value is co-created in smartphone operating systems. The purpose is further divided into the following sub-objectives:

o What is value co-creation in smartphone operating systems?
o Who participates in value co-creation in smartphone operating systems?
o What procedures are involved in value co-creation in smartphone operating systems?

The research was conducted as a qualitative desk study observing two leading smartphone operating system providers. Data were collected from the official discussion forums of both providers, and other concepts relating to the purpose of the study were covered through a literature review. The findings reveal that customers and companies co-create value of the anticipated level when they communicate and interact with each other; most of the time, however, it is the customer-to-customer interactions, dialogues, and discussions emerging in the core conversation that help value co-creation. The value co-creation framework places the customer at the center of value creation theory, nullifying the inherited notion that companies create value only within their own boundaries and deliver it to customers in exchange for money. Rather, firms merely offer value propositions to their customers; value is co-created at the point where offerings are combined and interact with customers' capabilities, knowledge, resources, and perceptions. This new perspective has radically altered how firms view their customers: customers now take part in value co-creation as crucial members.

Relevance:

30.00%

Publisher:

Abstract:

The challenge the community college faces in helping meet the needs of the living open system of society is examined in this study. It is postulated that internalization student outcomes are required by society to reduce entropy and remain self-renewing. Such behavior is characterized as having an intrinsically motivated energy source and displays the seeking and conquering of challenge, the development of reflective knowledge and skill, full use of all capabilities, internal control, growth orientation, high self-esteem, relativistic thinking, and competence. A major purpose of this study is the development of a conceptual systems model that suggests how transactions among students, faculty, and administration might occur to best meet the need for internalization outcomes in students and intrinsic motivation in faculty. It is a speculative model based on a synthesis of a wide variety of variables. Empirical evidence, theoretical considerations, and speculative ideas are gathered from researchers and theoreticians who are working on separate answers to questions of intrinsic motivation, internal control, and environments that encourage their development. The model considers the effect administrators have on faculty and the corresponding effect faculty may have on students. The major concentration is on the administrator-teacher interface. For administrators the model may serve as a guide in planning effective transactions and establishing system goals. The teacher is offered a means to coordinate actions toward a specific overall objective, and the administrator, teacher, and researcher are invited to use the model to experiment, innovate, verify the assumptions on which the model is based, and raise additional hypotheses. The goals and history of the community colleges in Ontario are examined against current problems, previous progress, and open-system thinking. The nature of the person as a five-part system is explored with emphasis on intrinsic motivation. The nature, operation, conceptualization, and value of this internal energy source are reviewed in detail. The current state of society, education, and management theory is considered, and the value of intrinsically motivating teaching tasks together with a "system four" leadership style is featured. Evidence is reviewed suggesting that intrinsically motivated faculty are needed, and that a "system four" leadership style is the kind of interaction-influence system needed to nurture intrinsic motivation in faculty.

Relevance:

30.00%

Publisher:

Abstract:

Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow a simultaneous determination of both the anisotropy and the orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility is retained in the inversion algorithm to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
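
Spelling out the expansion named above (the coefficient labels r_0, r_2, r_4 are mine, not the author's):

```latex
% The three-term truncated Legendre expansion written out explicitly.
% A relaxation rate driven by a second-rank tensor interaction has an
% orientation dependence containing only even terms up to rank four:
\begin{equation}
  R(\theta) = r_0\, P_0(\cos\theta) + r_2\, P_2(\cos\theta)
            + r_4\, P_4(\cos\theta),
\end{equation}
% with P_0(x) = 1, \quad P_2(x) = \tfrac{1}{2}\bigl(3x^{2} - 1\bigr),
% \quad P_4(x) = \tfrac{1}{8}\bigl(35x^{4} - 30x^{2} + 3\bigr),
% where theta is the angle between the symmetry axis of the motionally
% averaged interaction and the external magnetic field. The linearity of
% R(theta) in (r_0, r_2, r_4) is what keeps the inversion stable against
% experimental noise.
```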

Relevance:

30.00%

Publisher:

Abstract:

To investigate the thermal effects of latent heat in hydrothermal settings, an extension was made to the existing finite-element numerical modelling software, Aquarius. The latent heat algorithm was validated using a series of column models, which analysed the effects of permeability (flow rate), thermal gradient, and position along the two-phase curve (pressure). Increasing the flow rate and pressure increases displacement of the liquid-steam boundary from an initial position determined without accounting for latent heat, while increasing the thermal gradient decreases that displacement. Application to a regional-scale model of a caldera-hosted hydrothermal system based on a representative suite of calderas (e.g., Yellowstone, Creede, Valles Grande) led to oscillations in the model solution. Oscillations can be reduced or eliminated by mesh refinement, which requires greater computational effort. Results indicate that latent heat should be accounted for to accurately model phase-change conditions in hydrothermal settings.
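
For background, one standard way to fold latent heat into a finite-element energy balance is the apparent-heat-capacity method; whether Aquarius implements this particular scheme is an assumption on my part.

```latex
% Apparent-heat-capacity treatment of latent heat, a standard
% finite-element technique (assumed here for illustration, not taken
% from the study). Over the two-phase interval the latent heat L is
% absorbed into an effective heat capacity,
\begin{equation}
  c_{\mathrm{eff}}(T) = c_{p} + L\,\frac{\mathrm{d}\chi}{\mathrm{d}T},
\end{equation}
% where chi(T) in [0, 1] is the local steam mass fraction. The steep
% gradient of chi across the liquid-steam boundary is the usual source
% of oscillations on coarse meshes, consistent with the mesh-refinement
% remedy reported above.
```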

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, I examined the relevance of dual-process theory to understanding forgiveness. Specifically, I argued that the internal conflict experienced by laypersons when forgiving (or finding themselves unable to forgive) and the discrepancies between existing definitions of forgiveness can currently best be understood through the lens of dual-process theory. Dual-process theory holds that individuals engage in two broad forms of mental processing corresponding to two systems, here referred to as System 1 and System 2. System 1 processing is automatic, unconscious, and operates through learned associations and heuristics. System 2 processing is effortful, conscious, and operates through rule-based and hypothetical thinking. Different definitions of forgiveness amongst both laypersons and scholars may reflect different processes within each system. Further, lay experiences of internal conflict concerning forgiveness may frequently result from processes within each system leading to different cognitive, affective, and behavioural responses. The study conducted for this thesis tested the hypotheses that processing within System 1 can directly affect one's likelihood of forgiving, and that this effect is moderated by System 2 processing. I used subliminal conditioning to manipulate System 1 processing by creating positive or negative conditioned attitudes towards a hypothetical transgressor, and a working memory load (WML) to inhibit System 2 processing in half of the participants. The conditioning phase of the study failed, and so no conclusions could be drawn regarding the roles of System 1 and System 2 in forgiveness. The implications of dual-process theory for forgiveness research and clinical practice, and directions for future research, are discussed.

Relevance:

30.00%

Publisher:

Abstract:

In this paper several additional GMM specification tests are studied. A first test is a Chow-type test for structural parameter stability of GMM estimates. The test is inspired by the fact that "taste and technology" parameters are uncovered. The second set of specification tests are VAR encompassing tests. It is assumed that the DGP has a finite VAR representation. The moment restrictions which are suggested by economic theory and exploited in the GMM procedure represent one possible characterization of the DGP. The VAR is a different but compatible characterization of the same DGP. The idea of the VAR encompassing tests is to compare parameter estimates of the Euler conditions and VAR representations of the DGP obtained separately with parameter estimates of the Euler conditions and VAR representations obtained jointly. There are several ways to construct joint systems, which are discussed in the paper. Several applications are also discussed.
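
To fix notation, the standard GMM setup behind a Chow-type stability test is sketched below (textbook material, not reproduced from the paper; the symbols are generic).

```latex
% Standard GMM notation. Economic theory supplies moment restrictions
% E[g(x_t, \theta_0)] = 0, and GMM estimates theta by minimizing the
% quadratic form
\begin{equation}
  \hat{\theta}_T = \arg\min_{\theta}\;
    \bar{g}_T(\theta)^{\top} W_T \,\bar{g}_T(\theta),
  \qquad
  \bar{g}_T(\theta) = \frac{1}{T}\sum_{t=1}^{T} g(x_t, \theta).
\end{equation}
% A Chow-type test re-estimates theta on the subsamples t <= T_b and
% t > T_b; because "taste and technology" parameters should be invariant,
% a suitably normalized distance between the two subsample estimates that
% is too large rejects structural stability.
```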

Relevance:

30.00%

Publisher:

Abstract:

A principal objective of software engineering is to be able to produce complex, large-scale, reliable software within a reasonable time. Object-oriented (OO) technology has provided good concepts and modelling and programming techniques that have made it possible to develop complex applications in both academia and industry. This experience has, however, revealed weaknesses of the object paradigm (for example, code scattering and the traceability problem). Aspect-oriented (AO) programming offers a simple solution to the limitations of OO programming, such as the problem of crosscutting concerns. Crosscutting concerns manifest as the scattering of the same code across several modules of the system, or the tangling of several pieces of code within a single module. This new way of programming makes it possible to implement each concern independently of the others and then assemble them according to well-defined rules. AO programming thus promises better productivity, better code reuse, and better adaptability of code to change. This new approach quickly spread across the entire software development process, with the goal of preserving modularity and traceability, two important properties of high-quality software. However, AO technology presents numerous challenges. Reasoning about, specifying, and verifying AO programs is difficult, all the more so as these programs evolve over time. Modular reasoning about these programs is therefore required; otherwise they would need to be re-examined in full every time a component is changed or added. It is, however, well known in the literature that modular reasoning about AO programs is difficult, since applied aspects often change the behaviour of their base components [47]. The same difficulties arise in the specification and verification phases of the software development process. To the best of our knowledge, modular specification and modular verification are weakly covered and constitute a very interesting field of research. Likewise, interactions between aspects are a serious problem in the aspect community. To address these problems, we chose to use category theory and algebraic specification techniques. To provide a solution to the problems cited above, we used the work of Wiels [110] and other contributions such as those described in the book [25]. We assume that the system under development is already decomposed into aspects and classes. The first contribution of this thesis is the extension of algebraic specification techniques to the notion of aspect. Second, we defined a logic, LA, which is used in the body of specifications to describe the behaviour of these components. The third contribution is the definition of the weaving operator that corresponds to the interconnection relation between aspect modules and class modules. The fourth contribution concerns the development of a prevention mechanism that makes it possible to prevent undesirable interactions in aspect-oriented systems.
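
As a toy illustration of the crosscutting-concern problem that motivates the thesis (my own Python example; it does not reflect the thesis's category-theoretic formalism), a logging concern can be written once and woven onto base components instead of being scattered through them:

```python
# Toy illustration of factoring out a crosscutting concern. Without
# aspects, logging code would be scattered through every business method;
# here it is written once and "woven" onto the base component, loosely
# mimicking an aspect's advice around a join point.

import functools
import logging

logging.basicConfig(level=logging.INFO)

def log_calls(func):
    """Advice-like wrapper: runs before and after the wrapped join point."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s", func.__qualname__)
        result = func(*args, **kwargs)
        logging.info("leaving %s", func.__qualname__)
        return result
    return wrapper

class Account:
    def __init__(self, balance: float) -> None:
        self.balance = balance

    @log_calls                      # weaving point: concern meets class
    def withdraw(self, amount: float) -> float:
        self.balance -= amount
        return self.balance

Account(100.0).withdraw(25.0)       # logs entry/exit around the call
```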

Relevance:

30.00%

Publisher:

Abstract:

In this thesis the old philosophical question "does every event have a cause?" is examined in the light of quantum mechanics and probability theory. In physics as well as in the philosophy of science, the orthodox position holds that the physical world is indeterministic. At the fundamental level of physical reality, the quantum level, events would happen without causes, by chance, by 'irreducible' randomness. The most precise physical theorem leading to this conclusion is Bell's theorem. Here the premises of this theorem are re-examined. It is recalled that solutions to the theorem other than indeterminism are conceivable, some of which are known but neglected, such as 'superdeterminism'. But it is argued that other solutions compatible with determinism exist, notably through the study of model physical systems. One of the general conclusions of this thesis is that the interpretation of Bell's theorem and of quantum mechanics depends crucially on the philosophical premises from which one starts. For example, within a Spinozan worldview, the quantum world may well be understood as deterministic. But it is argued that even a determinism considerably less radical than Spinoza's is not eliminated by the physical experiments. If this is true, the 'determinism versus indeterminism' debate is not settled in the laboratory: it remains philosophical and open, contrary to what is often thought. In the second part of this thesis a model for the interpretation of probability is proposed. A conceptual study of the notion of probability indicates that the hypothesis of determinism helps to better understand what a 'probabilistic system' is. It seems that determinism can answer certain questions for which indeterminism has no answer. For this reason we conclude that Laplace's conjecture, namely that probability theory presupposes an underlying deterministic reality, retains all its legitimacy. In this thesis the methods of both philosophy and physics are used. It appears that the two fields are solidly connected here, and that they offer a vast potential for cross-fertilization, in both directions.