890 results for Context Model


Relevance:

30.00%

Publisher:

Abstract:

In this Master’s thesis, agent-based modeling is used to analyze maintenance strategy related phenomena. The main research question was: what does the agent-based model made for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer that question, first, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was conducted. This review provided the basis for building the agent-based model, which followed a standard simulation modeling procedure. The research question was then answered using the simulation results from the model. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine and, under certain conditions, also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) error in machine condition measurement is a critical parameter in optimizing maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
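
The abstract does not reproduce the thesis model itself; purely as a hypothetical illustration of the kind of experiment it describes (a maintenance threshold combined with time-based versus value-based pricing), the following Python sketch simulates a single machine with invented wear, revenue, fee and cost parameters.

```python
import random

def simulate(threshold, pricing="time", value_share=0.3, horizon=2000, seed=1):
    """Simulate one machine: the owner earns revenue while the machine runs,
    and the maintainer restores it when condition drops below `threshold`."""
    rng = random.Random(seed)
    condition, owner, maintainer = 1.0, 0.0, 0.0
    for _ in range(horizon):
        condition -= rng.uniform(0.0, 0.02)      # stochastic wear
        if condition <= 0.0:                     # breakdown: lost production
            owner -= 50.0
            condition = 1.0
        elif condition < threshold:              # planned maintenance
            if pricing == "time":
                fee = 10.0                       # fixed, time-based fee
            else:
                fee = value_share * 50.0         # value-based: share of avoided loss
            owner -= fee
            maintainer += fee - 5.0              # maintainer's own cost per job
            condition = 1.0
        else:
            owner += 1.0                         # revenue from a productive step
    return owner, maintainer

for th in (0.1, 0.3, 0.5):
    print(th, simulate(th, "time"), simulate(th, "value"))
```

Sweeping the threshold and the pricing scheme in such a toy model only shows the mechanics of the comparison; it does not reproduce the thesis results.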

Relevance:

30.00%

Publisher:

Abstract:

The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore’s law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New on-chip multiprocessing architectures have emerged in order to utilize the tremendous advances of fabrication technology. Platform-based design is a possible solution for addressing these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. Existing design methodologies for platform-based design do not provide full automation at every level of the design process, and the co-design of platform-based systems sometimes leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge under existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design in the context of the SegBus platform, a distributed communication architecture. The research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process: both the platform and the application are modeled using the Unified Modeling Language. The thesis develops a Domain Specific Language to support platform modeling, based on a corresponding UML profile, and Object Constraint Language constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated, in the form of “snippets” to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
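
The SegBus UML profile, OCL constraints and VHDL generation are not shown in the abstract; purely as a hypothetical sketch of what an automated structural check in the spirit of such constraints could look like, the following Python fragment validates an invented platform description and application mapping (all names are illustrative, not the thesis DSL).

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    arbiters: list = field(default_factory=list)
    elements: list = field(default_factory=list)

@dataclass
class Platform:
    segments: list = field(default_factory=list)

def check(platform, mapping, tasks):
    """Structural well-formedness checks, analogous in spirit to OCL constraints."""
    errors = []
    for seg in platform.segments:
        if len(seg.arbiters) != 1:
            errors.append(f"segment {seg.name}: expected exactly one arbiter")
    placed = {pe for seg in platform.segments for pe in seg.elements}
    for task in tasks:
        if mapping.get(task) not in placed:
            errors.append(f"task {task}: not mapped to a platform element")
    return errors

platform = Platform([Segment("seg0", ["arb0"], ["pe0", "pe1"]),
                     Segment("seg1", ["arb1"], ["pe2"])])
mapping = {"decode_left": "pe0", "decode_right": "pe2", "mix": "pe1"}
print(check(platform, mapping, ["decode_left", "decode_right", "mix"]) or "platform OK")
```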

Relevance:

30.00%

Publisher:

Abstract:

This study is qualitative action research by nature, with elements of personal design in the form of constructing a tangible model implementation framework. The empirical data were gathered via two questionnaires connected to four arranged workshop events with twelve individual participants: five represented maintenance customers, three maintenance service providers, and four equipment providers. There are two main research objectives, corresponding to the two complementary focus areas of this thesis. Firstly, the value-based life-cycle model, whose first version had already been developed prior to this thesis, requires updating in order to increase its real-life applicability as an inter-firm decision-making tool in industrial maintenance. This first research objective is fulfilled by improving the appearance, intelligibility and usability of the above-mentioned model; certain new features are also added. The workshop participants from the collaborating companies were reasonably pleased with the changes made, although the model’s intelligibility in particular will require further attention in the future, as the main results, charts and values were all considered slightly hard to understand. The upgraded model’s appearance and the newly added features satisfied them the most. Secondly, and more importantly, the premises of the model’s possible inter-firm implementation process need to be considered. This second research objective is delivered in two consecutive steps. First, a bipartite, open-books-supported implementation framework is created and its characteristics are discussed in theory. Afterwards, the prerequisites and pitfalls of increasing inter-organizational information transparency are studied in an empirical context. One of the main findings was that the organizations are not yet prepared for network-wide information disclosure, as dyadic collaboration was favored instead; however, they would be willing to share information at least bilaterally. Another major result was that the present state of the companies’ cost accounting systems will need enhancing before implementation, since accurate and sufficiently detailed maintenance data are not available. It will also be crucial to create a supporting and mutually agreed network infrastructure, as there are hardly any collaborative models, methods or tools currently in use. Lastly, the questions of mutual trust and the predominant purchasing strategies are important for cooperation: if inter-organizational activities are expanded, a more relational approach should be favored in this regard. Mutual trust was also recognized as a significant cooperation factor, but it is hard to measure in practice.

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this thesis was to examine how the emotional intelligence skills and multicultural project leadership style of a project manager interrelate and affect the success of a project. The research methods used are a literature review in the theoretical part of the thesis and semi-structured interviews in the empirical part. This is a single case study, i.e. one case company was selected as the secondary level of analysis; within the case company, four project managers were selected as research units to form the primary level of analysis. The literature review formed the basis for the empirical research, and the interview questions were derived from the literature. Findings from the interviews were mirrored against the literature review findings, based on which both conclusions and generalisations could be made. Thus, both deductive and inductive methods were utilised to get a more complete picture of the research topic. In the first part of the literature review, the general leadership theories and the project leadership terminology are introduced as a background for the concept of emotional intelligence and the integrated leadership model. Emotional intelligence and its interrelation with different leadership concepts are discussed throughout the literature review. Chinese cultural aspects affecting the way of doing business, and the multicultural leadership styles of Finnish project managers, are introduced in the following part of the literature review. It was found that the most successfully used multicultural leadership styles in a Finnish-Chinese context are synergistic and polycentric, and these require emotional intelligence skills. In the empirical part of this thesis, the findings from the semi-structured interviews are introduced, discussed and analysed. Interviews were held in private meeting rooms, and they were recorded and transcribed to add reliability and validity. Although the sample was only four project managers, the results show that the sample is quite saturated, as the responses to several questions followed the same pattern. It was found that Finnish project managers in the case company are democratic and take cultural differences into account in their project leadership. Both synergistic and polycentric leadership styles are used with Chinese team members. Emotional intelligence capabilities, and the emphasis placed on them, differ somewhat depending on the interviewee. Nevertheless, the results show that EI skills and the multicultural project leadership style used in the Chinese context are interrelated. The findings from the literature review and the empirical research in this thesis are similar. However, there is a need for further research, as the sample was small and this thesis is a single case study. A multi-company study with a larger sample of project managers is recommended, as is a multi-industry perspective for further research.

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, in the context of design space exploration, the program representation is optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
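
As a minimal illustration of the dataflow concepts discussed above (actors, FIFO queues, firing rules and the need for run-time scheduling), the following Python sketch runs a two-actor network with a naive, fully dynamic scheduler. The actor names and firing functions are invented; quasi-static scheduling as developed in the thesis would precompute most of this firing order instead of checking it at every step.

```python
from collections import deque

class Actor:
    def __init__(self, name, inputs, outputs, fire):
        self.name, self.inputs, self.outputs, self.fire = name, inputs, outputs, fire

    def can_fire(self, queues):
        # Firing rule of this toy actor: one token on every input queue.
        return all(len(queues[q]) > 0 for q in self.inputs)

def run(actors, queues, source_tokens, max_firings=100):
    """Fully dynamic scheduler: repeatedly fire any enabled actor."""
    for tok in source_tokens:
        queues["in"].append(tok)
    schedule = []
    for _ in range(max_firings):
        enabled = [a for a in actors if a.can_fire(queues)]
        if not enabled:
            break
        actor = enabled[0]
        actor.fire(queues)
        schedule.append(actor.name)
    return schedule

queues = {"in": deque(), "mid": deque(), "out": deque()}
double = Actor("double", ["in"], ["mid"], lambda q: q["mid"].append(2 * q["in"].popleft()))
inc    = Actor("inc",    ["mid"], ["out"], lambda q: q["out"].append(1 + q["mid"].popleft()))
print(run([double, inc], queues, [1, 2, 3]))   # the dynamically found firing order
print(list(queues["out"]))                     # [3, 5, 7]
```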

Relevance:

30.00%

Publisher:

Abstract:

A decade of studies on long-term habituation (LTH) in the crab Chasmagnathus is reviewed. Upon sudden presentation of a passing object overhead, the crab reacts with an escape response that habituates promptly and for at least five days. LTH proved to be an instance of associative memory and showed context, stimulus frequency and circadian phase specificity. A strong training protocol (STP) (≥15 trials, intertrial interval (ITI) of 171 s) invariably yielded LTH, while a weak training protocol (WTP) (≤10 trials, ITI = 171 s) invariably failed. STP was used with a presumably amnestic agent and WTP with a presumably hypermnestic agent. Remarkably, systemic administration of low doses was effective, which is likely to be due to the lack of an endothelial blood-brain barrier. LTH was blocked by inhibitors of protein and RNA synthesis, enhanced by protein kinase A (PKA) activators and reduced by PKA inhibitors, facilitated by angiotensin II and IV and disrupted by saralasin. The presence of angiotensins and related compounds in the crab brain was demonstrated. Diverse results suggest that LTH includes two components: an initial memory produced by spaced training and mainly expressed at an initial phase of testing, and a retraining memory produced by massed training and expressed at a later phase of testing (retraining). The initial memory would be associative, context specific and sensitive to cycloheximide, while the retraining memory would be nonassociative, context independent and insensitive to cycloheximide.

Relevance:

30.00%

Publisher:

Abstract:

This research concerns different statistical methods that help increase the demand forecasting accuracy of company X’s forecasting model. The current forecasting process was analyzed in detail; as a result, a graphical scheme of its logical algorithm was developed. Based on the analysis of the algorithm and the forecasting errors, all potential directions for future improvements of the model’s accuracy were gathered into a complete list. Three improvement directions were chosen for further practical research; on their basis, three test models were created and verified. The novelty of this work lies in the methodological approach of the original analysis of the model, which identified its critical points, as well as in the uniqueness of the developed test models. The results of the study formed the basis of a grant from the Government of St. Petersburg.
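
The abstract does not name the statistical methods that were tested; as a generic, hedged example of how forecasting alternatives can be compared on accuracy, the sketch below evaluates a moving average against simple exponential smoothing using MAPE on invented demand data.

```python
def moving_average(history, window=3):
    # Forecast the next value as the mean of the last `window` observations.
    return sum(history[-window:]) / min(window, len(history))

def exp_smoothing(history, alpha=0.4):
    # Simple exponential smoothing: the final level is the forecast.
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

demand = [120, 132, 101, 134, 90, 150, 140, 128, 135, 147]   # invented history
for name, fn in [("moving average", moving_average), ("exp. smoothing", exp_smoothing)]:
    forecasts = [fn(demand[:t]) for t in range(3, len(demand))]
    print(name, round(mape(demand[3:], forecasts), 1), "% MAPE")
```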

Relevance:

30.00%

Publisher:

Abstract:

Power consumption is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, in order to enable the future design of energy-efficient wireless sensors for context recognition in wearable computing applications. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; since its parameters are unknown, the readings are adjusted with a USB power meter. The results show that screen size is the main parameter influencing the power consumption. The power consumption for an identical scenario varies across the wearable devices, meaning that other components, parameters or processes might also impact the power consumption; further study is needed to explain these variations. The paper also shows that different inputs (the touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) impact the energy consumption in different ways. Finally, the paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
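
The measured figures themselves are reported in the paper, not in this abstract; the following sketch only illustrates the idea of a scenario-based energy model, with made-up power values standing in for the reported measurements.

```python
# Hypothetical average power draw (milliwatts) per scenario and device; the real
# figures were obtained with PowerTutor and a USB power meter in the paper.
POWER_MW = {
    "smartphone":    {"display": 900,  "wifi_transfer": 700, "pedometer": 150},
    "smart_glasses": {"display": 1100, "wifi_transfer": 800, "pedometer": 200},
}

def scenario_energy_j(device, usage_minutes):
    """Energy in joules: sum over scenarios of power (W) * duration (s)."""
    return sum(POWER_MW[device][scenario] / 1000 * minutes * 60
               for scenario, minutes in usage_minutes.items())

usage = {"display": 30, "wifi_transfer": 5, "pedometer": 60}   # invented usage profile
for device in POWER_MW:
    print(device, round(scenario_energy_j(device, usage)), "J")
```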

Relevance:

30.00%

Publisher:

Abstract:

Software is a key component in many of the devices and products that we use every day. Most customers demand not only that their devices function as expected but also that the software be of high quality: reliable, fault tolerant, efficient, etc. In short, it is not enough that a calculator gives the correct result of a calculation; we want the result instantly, in the right form, with minimal use of battery, etc. One of the key aspects for succeeding in today's industry is delivering high quality. In most software development projects, high-quality software is achieved by rigorous testing and good quality assurance practices. However, today, customers are asking for these high-quality software products at an ever-increasing pace, which leaves the companies with less time for development. Software testing is an expensive activity, because it requires much manual work. Testing, debugging, and verification are estimated to consume 50 to 75 per cent of the total development cost of complex software projects. Further, the most expensive software defects are those which have to be fixed after the product is released. One of the main challenges in software development is reducing the associated cost and time of software testing without sacrificing the quality of the developed software. It is often not enough to only demonstrate that a piece of software is functioning correctly; usually, many other aspects of the software, such as performance, security, scalability, usability, etc., also need to be verified. Testing these aspects of the software is traditionally referred to as non-functional testing. One of the major challenges with non-functional testing is that it is usually carried out at the end of the software development process, when most of the functionality is implemented. This is due to the fact that non-functional aspects, such as performance or security, apply to the software as a whole. In this thesis, we study the use of model-based testing. We present approaches to automatically generate tests from behavioral models for solving some of these challenges, and we show that model-based testing is applicable not only to functional testing but also to non-functional testing. In its simplest form, performance testing is performed by executing multiple test sequences at once while observing the software in terms of responsiveness and stability rather than the output. The main contribution of the thesis is a coherent model-based testing approach for testing functional and performance-related issues in software systems. We show how we go from system models, expressed in the Unified Modeling Language, to test cases and back to models again. The system requirements are traced throughout the entire testing process, and requirements traceability facilitates finding faults in the design and implementation of the software. In the research field of model-based testing, many newly proposed approaches suffer from poor or lacking tool support. Therefore, the second contribution of this thesis is proper tool support for the proposed approach that is integrated with leading industry tools. We offer independent tools, tools that are integrated with other industry-leading tools, and complete tool-chains when necessary. Many model-based testing approaches proposed by the research community suffer from poor empirical validation in an industrial context. In order to demonstrate the applicability of our proposed approach, we apply our research to several systems, including industrial ones.
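
The thesis generates tests from UML behavioral models; as a loose, simplified analogue rather than the thesis approach itself, the sketch below derives abstract test sequences from a toy state machine by covering every transition. The states, events and coverage criterion are invented for illustration.

```python
from collections import deque

# A toy behavioral model: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "idle",
}

def generate_tests(initial="idle"):
    """Breadth-first search over the state machine; each path that covers a
    not-yet-visited transition becomes one abstract test case."""
    tests, covered = [], set()
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        for (src, event), dst in TRANSITIONS.items():
            if src == state and (src, event) not in covered:
                covered.add((src, event))
                tests.append(path + [event])
                queue.append((dst, path + [event]))
    return tests

for i, test in enumerate(generate_tests(), 1):
    print(f"test {i}: {' -> '.join(test)}")
```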

Relevance:

30.00%

Publisher:

Abstract:

This thesis introduces an extension of Chomsky’s context-free grammars equipped with operators for referring to the left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it only allows one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity, and for these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of the classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed to an equivalent grammar without any context operators, which allows one to represent the syntax of languages in a more succinct way by utilizing context specifications. Linear grammars with contexts turned out to be non-trivial already over a one-letter alphabet; this fact leads to some undecidability results for this family of grammars.
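
A toy recognizer can make the idea of a context operator concrete. The sketch below is not one of the thesis algorithms; it checks a small invented grammar in which a "b" may only be derived when the left context contains an "a", with the left-context condition expressed as a regular language.

```python
import re
from functools import lru_cache

# Toy grammar in the spirit of grammars with contexts:
#   S -> X S | X
#   X -> 'a'
#   X -> 'b' & <left context matches .*a.*>
# i.e. a 'b' may only occur if some 'a' appears somewhere to its left.
LEFT_CONTEXT_FOR_B = re.compile(r".*a.*")

def parses(word):
    @lru_cache(None)
    def X(i):                      # does X derive word[i]?
        if i >= len(word):
            return False
        if word[i] == "a":
            return True
        if word[i] == "b":
            # the context operator inspects the whole left context word[:i]
            return bool(LEFT_CONTEXT_FOR_B.fullmatch(word[:i]))
        return False

    @lru_cache(None)
    def S(i):                      # does S derive word[i:]?
        if i == len(word):
            return False
        return X(i) and (i + 1 == len(word) or S(i + 1))

    return S(0)

for w in ["aab", "aba", "ba", "b", "aabba"]:
    print(w, parses(w))
```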

Relevance:

30.00%

Publisher:

Abstract:

In today's complex and volatile business environment, companies that are able to turn the operational data they generate into data warehouses can achieve a significant competitive advantage. Using predictive analytics to anticipate future trends enables companies to identify the key factors that allow them to stand out from their competitors. Applying predictive analytics as part of the decision-making process enables more agile, real-time decision making. The purpose of this Master's thesis is to compile a theoretical framework for analytics modeling from the perspective of a business end user and to apply this modeling process to the thesis's case company. The theoretical model was used to model customer relationships and to identify leading indicators for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with operations in Finland, Russia, and the Baltics. This research is a quantitative case study in which the case company's transaction data served as the most important data source. The data were obtained from the company's enterprise resource planning system.
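
The thesis data and model are not included in the abstract; as a generic, hypothetical illustration of extracting a predictive sales trend from aggregated transaction data, the sketch below fits a least-squares linear trend to invented monthly sales figures.

```python
# Hypothetical monthly sales aggregated from ERP transaction data.
sales = [410, 435, 460, 455, 480, 510, 495, 530, 560, 555, 590, 620]

def fit_trend(values):
    """Ordinary least-squares fit of a linear trend y = a + b*t."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    return a, b

a, b = fit_trend(sales)
next_month = a + b * len(sales)
print(f"trend: {a:.1f} + {b:.1f}*t, forecast for month {len(sales)}: {next_month:.0f}")
```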

Relevance:

30.00%

Publisher:

Abstract:

The target of this study was to develop a total cost calculation model for comparing all manufacturing and logistics costs from own factories or from partner factories to global distribution centers in a case company. In particular, the model was needed to simulate the effect of own-factory utilization in the total cost calculation context. The study consists of a theoretical literature review and an empirical case study, and it was completed using the constructive research approach. The result of this study is a new total cost calculation model, which includes not only all the costs caused by manufacturing and logistics but also the relevant capital costs. Using the new model, the case company is able to complete total cost calculations that take into account the own-factory utilization effect in different volume situations and with different volume shares between an own factory and a partner factory.
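
The case company's actual cost structure is not given in the abstract; as a hypothetical illustration of how an own-factory utilization effect enters a per-unit total cost comparison, the sketch below spreads invented fixed and capital costs over different volumes and compares the result to an invented partner price.

```python
# Illustrative total-cost comparison; all figures are invented placeholders.
def own_factory_cost(volume, capacity=100_000, fixed_cost=2_000_000,
                     variable_unit_cost=12.0, logistics_unit=1.5,
                     tied_capital=5_000_000, capital_rate=0.08):
    utilization = volume / capacity
    fixed_per_unit = fixed_cost / volume               # utilization effect on fixed costs
    capital_per_unit = tied_capital * capital_rate / volume
    return utilization, fixed_per_unit + variable_unit_cost + logistics_unit + capital_per_unit

def partner_factory_cost(purchase_unit_price=28.0, logistics_unit=3.0):
    return purchase_unit_price + logistics_unit

for volume in (40_000, 70_000, 100_000):
    util, own = own_factory_cost(volume)
    print(f"volume {volume}: utilization {util:.0%}, own {own:.1f} per unit, "
          f"partner {partner_factory_cost():.1f} per unit")
```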

Relevance:

30.00%

Publisher:

Abstract:

Many people would like to believe that nationalism is a thing of the past, a dinosaur belonging to some bygone, uncivilized era. Such a belief is not borne out by recent history, however: nationalism occupies the political forum with as much force as ever. Yet, in many ways, it remains a mystery to us. The purpose of this study is to explore the individual motivations involved in the rise of nationalism, in addition to the role of structural factors. The linkage employed in this exploration is the psychosocial phenomenon of self-identity, including emotions and self-esteem. We demonstrate how individual, socially constructed self-identity accounts for why some people embrace nationalism while others eschew it. The methodology employed was theoretical and historical analyses of secondary sources and in-depth interviews with subjects who had some connection with the former Yugoslavia, the country utilized to test the new model. Our analyses yielded the result that current conceptualizations of nationalism from an exclusively macro or micro perspective are unsatisfactory; we require a more comprehensive approach wherein the two perspectives are integrated. Such an integration necessitates a bridge: hence, our new model, which rests on the psychosocial premise, offers a more useful conceptual tool for the understanding of nationalism. We conclude that nationalism is first and foremost a matter relating to individual social self-identity, which takes place within a particular context where oppositional forces emerge from structural factors and our membership in a particular group becomes paramount.

Relevance:

30.00%

Publisher:

Abstract:

One hundred and seventy-two subjects participated in this quantitative, correlational survey, which tested Hackman and Oldham's Job Characteristics Model in an educational setting. Subjects were Teaching Masters, Chairmen and Deans from an Ontario community college. The data were collected via a mailed questionnaire on all variables of the model, using several reliable, valid instruments. Data analysis through Pearson correlation and stepwise multiple regression analyses revealed that core job characteristics predicted certain critical psychological states and that these critical psychological states, in turn, were able to predict various personal and work outcomes, but not absenteeism. The context variable, Satisfaction with Co-workers, was the only consistent moderating variable between core characteristics and critical psychological states; however, individual employee differences did moderate the relationship between critical psychological states and all of the personal and work outcomes except Internal Work Motivation. Two other moderator variables, Satisfaction with Context and Growth Need Strength, demonstrated an ability to predict the outcome General Job Satisfaction. The research suggests that this model may be used for job design and redesign purposes within the community college setting.
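
The abstract reports correlation and regression results rather than formulas; for reference, Hackman and Oldham's model also defines a Motivating Potential Score that combines the core job characteristics, sketched below with invented ratings. The study itself relies on regression analyses rather than this single index.

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman & Oldham's Motivating Potential Score:
    MPS = ((skill variety + task identity + task significance) / 3) * autonomy * feedback."""
    return (skill_variety + task_identity + task_significance) / 3 * autonomy * feedback

# Hypothetical 7-point scale ratings for one respondent.
print(motivating_potential_score(5.2, 4.8, 6.0, 5.5, 4.9))
```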

Relevance:

30.00%

Publisher:

Abstract:

Architectural model of Moulton Hall Fine Arts Complex, Chapman College, Orange, California. Completed in 1975 (2 floors, 44,592 sq.ft.), this building is named in memory of an artist and patroness of the arts, Nellie Gail Moulton. Within this structure are the departments of Art, Communications, and Theatre/Dance as well as the Guggenheim Gallery and Waltmar Theatre. Model photographed by Rene Laursen, Santa Ana, California.