900 results for systems theory
Abstract:
Energy consumption and energy efficiency have become pressing issues. Energy consumption is rising all over the world and, together with climate change, is making energy more and more expensive. Buildings are major consumers of energy, and inside buildings the major consumers are heating, ventilation and air-conditioning (HVAC) systems. They usually run at constant speed without efficient control, and in most cases HVAC equipment is also oversized: traditionally, heating, ventilation and air-conditioning systems have been sized to meet conditions that rarely occur. The theory part of this thesis presents the basics of life cycle costs and calculations for the whole life cycle of a system. It also covers HVAC systems, equipment, system controls and ways to save energy in these systems. The empirical part of this thesis presents life cycle cost calculations for HVAC systems. With these calculations it is possible to compute the costs of the whole life cycle for the chosen variables. Life cycle costs make it possible to compare which variable causes the largest share of costs from the whole-life point of view. Life cycle costs were studied through two real-life cases, each focused on a different kind of HVAC system. In both cases the renovations had already been made, so that the old system could be compared with the new, existing one. The study indicates that energy can be saved in HVAC systems by using variable speed drives as a control method.
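As a brief illustration of the kind of calculation the abstract refers to (the formula below is a generic textbook form of a discounted life cycle cost, not taken from the thesis; the symbols are illustrative):

    % Generic discounted life cycle cost (illustrative symbols, not from the thesis):
    % C_inv  : initial investment
    % C_E(t) : energy cost in year t
    % C_M(t) : maintenance cost in year t
    % r      : discount rate, n : length of the study period in years
    \[
      LCC \;=\; C_{\mathrm{inv}} \;+\; \sum_{t=1}^{n} \frac{C_{E}(t) + C_{M}(t)}{(1+r)^{t}}
    \]

Comparing alternatives, for example a constant-speed system against one controlled by a variable speed drive, then amounts to comparing their LCC values over the same study period.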
Abstract:
Scientific studies focusing specifically on references do not seem to exist. However, the utilization of references is an important practice for many companies involved in industrial marketing. The purpose of the study is to increase the understanding of the utilization of references in international industrial marketing in order to contribute to the development of a theory of reference behavior. Specifically, the study explores the modes of reference usage in industry, the factors affecting a supplier's reference behavior, and the question of how references are actually utilized. Due to the explorative nature of the study, a research design was followed in which theory and empirical studies alternated. An Exploratory Framework was developed to guide a pilot case study that resulted in Framework 1. Results of the pilot study guided an expanded literature review that was used to develop first a Structural Framework and a Process Framework, which were combined in Framework 2. Then, the second empirical phase of the case study was conducted in the same (pilot) case company. In this phase, Decision Systems Analysis (DSA) was used as the analysis method. The DSA procedure consists of three interviewing waves: initial interviews, reinterviews, and validating interviews. Four reference decision processes were identified, described and analyzed in the form of flowchart descriptions. The flowchart descriptions were used to explore new constructs and to develop new propositions to develop Framework 2 further. The quality of the study was ascertained by several actions in both empirical parts of the study. The construct validity of the study was ascertained by using multiple sources of evidence and by asking the key informant to review the pilot case report. The DSA method itself includes procedures that assure validity. Because of the choice to conduct a single case study, external validity was not pursued. High reliability was pursued through detailed documentation and thorough reporting of evidence. It was concluded that the core of the concept of reference is a customer relationship, regardless of the concrete forms a reference might take in its utilization. Depending on various contingencies, references might have various tasks within four roles: increasing 1) the efficiency of sales and sales management, 2) the efficiency of the business, 3) the effectiveness of marketing activities, and 4) effectiveness in establishing, maintaining and enhancing customer relationships. Thus, references have not only external but also internal tasks. A supplier's reference behavior might be affected by many hierarchical conditions. Additionally, the empirical study showed that the supplier can utilize its references as a continuous, all-pervasive decision-making process through various practices. The process includes both individual and unstructured decision-making subprocesses. The proposed concept of reference can be used to guide a reference policy recommendable for companies for which the utilization of references is important. The significance of the study is threefold: proposing the concept of reference, developing a framework of a supplier's reference behavior and its short-term process of utilizing references, and conceptually structuring an unstructured phenomenon that is important in industrial marketing into four roles.
Abstract:
We introduce a new notion of deformation for Gabor systems. Such deformations are in general nonlinear and, in particular, include the standard jitter error and linear deformations of phase space. With this new notion we prove a strong deformation result for Gabor frames and Gabor Riesz sequences that covers the known perturbation and deformation results. Our proof of the deformation theorem requires a new characterization of Gabor frames and Gabor Riesz sequences. It is in the style of Beurling's characterization of sets of sampling for bandlimited functions and significantly extends the known characterization of Gabor frames 'without inequalities' from lattices to non-uniform sets.
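For readers less familiar with the terminology, the standard definitions behind the abstract are sketched below (textbook background, not specific to this paper; notation is ours):

    % Time-frequency shift of a window g in L^2(R):
    \[
      \pi(x,\omega)g(t) \;=\; e^{2\pi i \omega t}\, g(t-x), \qquad \lambda=(x,\omega)\in\mathbb{R}^{2}.
    \]
    % The Gabor system G(g,\Lambda) = \{\pi(\lambda)g : \lambda \in \Lambda\} over a set
    % \Lambda \subset R^2 is a frame if there exist constants 0 < A <= B < \infty with
    \[
      A\,\|f\|_{2}^{2} \;\le\; \sum_{\lambda\in\Lambda} \bigl|\langle f,\pi(\lambda)g\rangle\bigr|^{2} \;\le\; B\,\|f\|_{2}^{2}
      \qquad \text{for all } f \in L^{2}(\mathbb{R}),
    \]
    % while a Gabor Riesz sequence satisfies the analogous two-sided bounds for
    % coefficient sequences. Deformations of the system move the points of \Lambda.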
Abstract:
In this thesis I discuss the exact dynamics of simple non-Markovian systems. I focus on fundamental questions at the core of non-Markovian theory and investigate the dynamics of quantum correlations under non-Markovian decoherence. In the first context I present the connection between two different non-Markovian approaches and compare two distinct definitions of non-Markovianity. The general aim is to characterize, in exemplary cases, which part of the environment is responsible for the feedback of information typical of non-Markovian dynamics. I also show how such a feedback of information is not always described by certain types of master equations commonly used to tackle non-Markovian dynamics. In the second context I characterize the dynamics of two qubits in a common non-Markovian reservoir and introduce a new dynamical effect in a well-known model, i.e., two qubits under depolarizing channels. In the first model the exact solution of the dynamics is found, and the entanglement behavior is extensively studied. The non-Markovianity of the reservoir and the reservoir-mediated interaction between the qubits cause non-trivial dynamical features. The dynamical interplay between different types of correlations is also investigated. In the second model the study of quantum and classical correlations demonstrates the existence of a new effect: the sudden transition between classical and quantum decoherence. This phenomenon involves the complete preservation of the initial quantum correlations for long intervals of time, of the order of the relaxation time of the system.
Abstract:
After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered “closed” and expensive. Data structures are complex, and the “out-of-the-box” integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to business process needs. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the “single version of the truth” MDM mantra. Adding one central repository of master data might prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details on the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM. The most difficult piece of master data to manage is the “local” part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with one possible tool to evaluate its product from a vendor-agnostic perspective.
Abstract:
As one of the top high-technology industries, aerospace represents one of the most complex fields of study. While the competitiveness of aircraft systems’ manufacturers attracts a significant number of researchers, some issues remain a blank spot. One of those is after-sale modernization. This master’s thesis investigates how this concept is related to the theory of competitive advantages. Tracing its roots in the framework of the life cycle of complex technological systems, the key drivers of the aircraft modernization market are revealed. The competitive positioning of players is defined through multiple case studies in the form of several in-depth interviews. The key result of the research is the conclusion that modernization should be considered an inherent component of the strategy of any aircraft systems’ manufacturer, while the thesis aims to support managerial decision making.
Abstract:
This paper analyzes the local dynamical behavior of a slewing flexible structure considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution in order to study the local stability of the system. At this critical point a Hopf bifurcation occurs. In this region one can find values of the control parameter (the structural damping coefficient) for which the system is unstable and values for which system stability is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
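As a generic illustration of the bifurcation mentioned above (the normal form below is the standard textbook one, not the paper's specific reduced equations; here the unfolding parameter plays the role of the structural damping coefficient measured from its critical value):

    % Standard Hopf normal form on the center manifold, in polar coordinates:
    \[
      \dot{r} \;=\; \mu r + a r^{3}, \qquad \dot{\theta} \;=\; \omega + b r^{2}.
    \]
    % In the supercritical case (a < 0) a stable limit cycle of radius
    % r = \sqrt{-\mu/a} exists for \mu > 0, i.e. trajectories settle into
    % periodic motion; in the subcritical case (a > 0) the bifurcating
    % periodic orbit is unstable.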
Abstract:
Outsourcing is a common strategy for companies looking for cost savings and improvements in performance. This has been especially prevalent in logistics, where warehousing and transportation are typical targets for outsourcing. However, while the benefits of logistics outsourcing are clear on paper, there are several cases in which companies fail to realize these benefits. The most commonly cited reasons for this are poor information flow between the company and the third-party logistics partner, and a lack of integration between the two partners. Uncertainty stems from a lack of information, and it can cripple the whole outsourcing operation. This is where enterprise resource planning (ERP) systems step in, as they can play a significant role in improving the flow of information and integration, which consequently mitigates uncertainty. The purpose of the study is to examine whether ERP systems have an effect on a company's decision to outsource logistics operations. Along with the rapid advancements in technology during the past decades, ERP systems have also evolved. Therefore, empirical research on the subject needs constant revision, as it can quickly become outdated due to ERP systems gaining more advanced capabilities every year. The research was conducted as a qualitative single-case study of a Finnish manufacturing firm that had outsourced warehousing and transportation operations in the Swedish market. The empirical data was gathered through semi-structured interviews with three employees of the case company who were closely involved in the outsourcing operation. The theoretical framework used to analyze the empirical data was based on Transaction Cost Economics theory. The results of the study were in line with the theoretical framework, in that the ERP system of the case company was seen as an enabler of its logistics outsourcing operation. However, the full theoretical benefits of ERP systems concerning extended enterprise functionality and flexibility were not attained because the case company had an older version of its ERP system. This emphasizes the importance of having up-to-date technology when seeking to overcome the shortcomings of ERP systems in outsourcing situations.
Abstract:
Optimization of quantum measurement processes plays a pivotal role in carrying out better, more accurate or less disruptive, measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, measuring how ‘close’ to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
Abstract:
Since financial liberalization in the 1980s, non-profit maximizing, stakeholder-oriented banks have outperformed private banks in Europe. This article draws on empirical research, banking theory and theories of the firm to explain this apparent anomaly for neo-liberal policy and contemporary market-based banking theory. The realization of competitive advantages by alternative banks (savings banks, cooperative banks and development banks) has significant implications for conceptions of bank change, regulation and political economy.
Abstract:
This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. The use of multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits real-time solution of the equations of motion. By carefully selecting the multibody formulation method to be used, it is possible to increase the accuracy of the multibody model while at the same time solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation applications is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to map the dependent coordinates into relative (joint) coordinates, which reduces the size of the set of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and in that method constraints are accounted for using an iterative process. A multibody system can be modeled using either rigid or flexible bodies. When using flexible bodies, the system can be described using a floating frame of reference formulation. In this method, the deformation modes needed can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced number of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method and Krylov subspaces, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into volumes; in this approach, the pressure wave propagation in the hoses and pipes is neglected. Contact modeling is divided into two stages: contact detection and contact response. Contact detection determines when and where contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modeled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not considered. Increasing the number of bodies and constraint equations leads to system matrices that are large and sparse in structure. To increase computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation demonstrated. To assess computing efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing the sparse matrix technique. The numerical example shows that the proposed approach is applicable and produces appropriate results within the real-time period.
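As a minimal, hedged sketch of the kind of sparse solve referred to at the end of the abstract (a generic constrained saddle-point system with made-up sizes and random data, not the dissertation's actual formulation or code):

    # Minimal sketch (illustrative only): solving constrained equations of motion
    #   [ M   J^T ] [ qdd    ]   [ Q     ]
    #   [ J    0  ] [ lambda ] = [ gamma ]
    # with a sparse saddle-point matrix instead of a dense solve.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n, m = 1200, 300                     # generalized coordinates, constraints (made-up sizes)
    rng = np.random.default_rng(0)

    # Sparse, symmetric positive-definite stand-in for the mass matrix M
    M = sp.random(n, n, density=0.002, random_state=rng, format="csr")
    M = M @ M.T + 10.0 * sp.identity(n)

    # Sparse stand-in for the constraint Jacobian J
    J = sp.random(m, n, density=0.01, random_state=rng, format="csr")

    Q = rng.standard_normal(n)           # generalized forces
    gamma = rng.standard_normal(m)       # acceleration-level constraint right-hand side

    # Assemble the sparse saddle-point system and solve for accelerations and multipliers
    A = sp.bmat([[M, J.T], [J, None]], format="csc")
    b = np.concatenate([Q, gamma])
    x = spsolve(A, b)
    qdd, lam = x[:n], x[n:]
    print(qdd.shape, lam.shape)

In a real-time loop one would additionally reuse the factorization between time steps; the point here is only that assembling and solving the system in sparse form avoids the cost of the full-matrix format mentioned above.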
Abstract:
Traditionally, value is created by the manufacturer or producer of a product without engaging the customers, so value creation has traditionally been a monopoly on the part of the manufacturer. After gathering all the raw materials, the manufacturer adds value to a product, and the added value is recognized when the product is consumed. Although this traditional way of value creation still exists, the rise of more educated, smart, and technically skilled customers has changed the idea of value creation. Customers now also contribute to value creation as value co-creators, even before the product is consumed. This scenario is addressed in the thesis, whose main purpose is to understand how value is co-created in smart phone operating systems. The purpose is further divided into the following sub-objectives: 1) What is value co-creation in smart phone operating systems? 2) Who participates in value co-creation in smart phone operating systems? 3) What procedures are involved in value co-creation in smart phone operating systems? The research was conducted as a qualitative desk study by observing two of the leading smart phone operating system providers. Data was collected from the official discussion forums of both operating system providers. Other general concepts relating to the purpose of the study were addressed through a literature review. The research findings reveal that customers and companies together co-create value at the anticipated level when they communicate and interact with each other. However, most of the time it is the customer-to-customer interactions, dialogues and discussions emerging in the core conversation that support value co-creation. The value co-creation framework places the customer at the centre of value creation theory, nullifying the inherited notion that companies create value only within their own boundaries and provide it to their customers in exchange for money. Rather, firms can be seen as merely offering value propositions to their customers, while value is co-created at the point where offerings are combined with and interact with customers’ capabilities, knowledge, resources and perceptions. This new perspective has radically altered how firms view their customers: customers now take part in value co-creation as crucial members.
Abstract:
The challenge the community college faces in helping meet the needs of the living open system of society is examined in this study. It is postulated that internalization student outcomes are required by society to reduce entropy and remain self-renewing. Such behavior is characterized as having an intrinsically motivated energy source and displays the seeking and conquering of challenge, the development of reflective knowledge and skill, full use of all capabilities, internal control, growth orientation, high self-esteem, relativistic thinking and competence. The development of a conceptual systems model that suggests how transactions among students, faculty and administration might occur to best meet the needs of internalization outcomes in students, and intrinsic motivation in faculty, is a major purpose of this study. It is a speculative model that is based on a synthesis of a wide variety of variables. Empirical evidence, theoretical considerations, and speculative ideas are gathered together from researchers and theoreticians who are working on separate answers to questions of intrinsic motivation, internal control and environments that encourage their development. The model considers the effect administrators have on faculty and the corresponding effect faculty may have on students. The major concentration is on the administrator-teacher interface. For administrators the model may serve as a guide in planning effective transactions and establishing system goals. The teacher is offered a means to coordinate actions toward a specific overall objective, and the administrator, teacher and researcher are invited to use the model to experiment, innovate, verify the assumptions on which the model is based, and raise additional hypotheses. The goals and history of the community colleges in Ontario are examined against current problems, previous progress and open system thinking. The nature of the person as a five-part system is explored, with emphasis on intrinsic motivation. The nature, operation, conceptualization, and value of this internal energy source is reviewed in detail. The current state of society, education and management theory is considered, and the value of intrinsically motivating teaching tasks together with "system four" leadership style is featured. Evidence is reviewed that suggests intrinsically motivated faculty are needed, and that "system four" leadership style is the kind of interaction-influence system needed to nurture intrinsic motivation in faculty.
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between the spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
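The three-term truncated Legendre expansion mentioned above has, schematically, the following form (the symbols are illustrative of the standard construction for a second-rank tensor interaction, not the paper's own notation):

    % Orientation dependence of a decay rate R(theta) expanded in even Legendre
    % polynomials, truncated after three terms (illustrative notation):
    \[
      R(\theta) \;\approx\; a_{0} \;+\; a_{2}\,P_{2}(\cos\theta) \;+\; a_{4}\,P_{4}(\cos\theta),
    \]
    % with
    \[
      P_{2}(x) = \tfrac{1}{2}\bigl(3x^{2}-1\bigr), \qquad
      P_{4}(x) = \tfrac{1}{8}\bigl(35x^{4}-30x^{2}+3\bigr),
    \]
    % and the coefficients a_0, a_2, a_4 obtained by a linear fit to the data.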
Abstract:
In this thesis, I examined the relevance of dual-process theory to understanding forgiveness. Specifically, I argued that the internal conflict experienced by laypersons when forgiving (or finding themselves unable to forgive) and the discrepancies between existing definitions of forgiveness can currently be best understood through the lens of dual-process theory. Dual-process theory holds that individuals engage in two broad forms of mental processing corresponding to two systems, here referred to as System 1 and System 2. System 1 processing is automatic, unconscious, and operates through learned associations and heuristics. System 2 processing is effortful, conscious, and operates through rule-based and hypothetical thinking. Different definitions of forgiveness amongst both laypersons and scholars may reflect different processes within each system. Further, lay experiences of internal conflict concerning forgiveness may frequently result from processes within each system leading to different cognitive, affective, and behavioural responses. The study conducted for this thesis tested the hypotheses that processing within System 1 can directly affect one's likelihood of forgiving, and that this effect is moderated by System 2 processing. I used subliminal conditioning to manipulate System 1 processing by creating positive or negative conditioned attitudes towards a hypothetical transgressor. I used working memory load (WML) to inhibit System 2 processing in half of the participants. The conditioning phase of the study failed, and so no conclusions could be drawn regarding the roles of System 1 and System 2 in forgiveness. The implications of dual-process theory for forgiveness research and clinical practice, and directions for future research, are discussed.