987 results for Implementation complexity


Relevance: 20.00%

Abstract:

Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 20.00%

Abstract:

Entrepreneurial marketing is a newly established term, and more specific studies are needed in order to understand the concept fully. SMEs make entrepreneurial marketing elements more visible in their marketing and therefore provide more fruitful insights for this research. SME marketing has gained more recognition in recent years, and in some cases innovative characteristics can be identified despite constraints such as the lack of certain resources. The purpose of this research is to study entrepreneurial marketing characteristics and SME processes in order to widen the understanding of entrepreneurial marketing and gain more insights into it. In addition, the planning and implementation of entrepreneurial marketing processes are examined in order to cover SME marketing activities in full. The research was conducted as a qualitative study, and data gathering was based on a semi-structured interview survey involving nine company interviews. A multiple-case research design was used to analyze the data so that focus and clarity could be maintained in an organized manner. The case companies were chosen from different business fields so that more variation and insights could be identified. The empirical results suggest that the two examined processes, networking and word-of-mouth communication, are very important for the case companies, which supports previous research. However, the entrepreneurial marketing characteristics varied: some were more visible and recognizable than others. On closer examination of the processes, the companies did not fully understand that networking or word-of-mouth marketing could be used as efficiently as other, more conventional marketing methods.

Relevance: 20.00%

Abstract:

In a market where companies of similar size and resources compete, it is challenging to gain any advantage over others. In order to stay afloat, a company needs the capability to perform with fewer resources and yet provide better service. Hence the development of efficient processes that can cut costs and improve performance is crucial. As a business expands, processes become complicated, and large amounts of data need to be managed and made available on request. Different tools are used in companies to store and manage data, which facilitates better production and transactions. In the modern business world the most widely used tool for that purpose is the ERP (Enterprise Resource Planning) system. The focus of this research is to study how competitive advantage can be achieved by implementing a proprietary ERP system in a company: an ERP system that is created in-house and tailor-made to match and align with business needs and processes. The market is full of ERP software, but choosing the right one is a big challenge. Identifying the key features that need improvement in processes and data management, choosing the right ERP, implementing it, and following up is a long and expensive journey that companies undergo. Some companies prefer to invest in a ready-made package bought from a vendor and adjust it according to their own business needs, while others focus on creating their own system with in-house IT capabilities. In this research a case company is used, and the author tries to identify and analyze why the organization in question decided to pursue the development of a proprietary ERP system, how it has been implemented, and whether it has been successful. The main conclusion and recommendation of this research is that companies should know their core capabilities and constraints before choosing and implementing an ERP system. Knowledge of the factors that affect the outcome of a system change is important in order to make the right decisions at the strategic level and implement them at the operational level. The duration of the project in the case company has been longer than anticipated; however, it has been reported that projects based on a ready-made product bought from a vendor are also delayed and completed over budget. In general, the case company's implementation of a proprietary ERP has been successful, both in terms of business performance figures and the usability of the system by employees. As future research, a study that statistically calculates the ROI of both approaches, buying a ready-made product and creating one's own ERP, would be beneficial.

Relevance: 20.00%

Abstract:

In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and scaling involves a reduction of the supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, IP reuse, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot easily be achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and also features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended to provide devices that operate at higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected two-layer spiking neural network with 256 FGT synapses, in which the adaptive properties of the FGTs are exploited. A compact realization of Spike-Timing-Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
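The thesis realizes STDP directly in floating-gate hardware; purely as an illustration of the plasticity rule itself, the following minimal software sketch shows a standard pair-based STDP weight update (the amplitudes and time constants are hypothetical defaults, not values taken from the thesis):

```python
import numpy as np

# Illustrative pair-based STDP rule (software sketch only; the thesis realizes
# STDP in floating-gate hardware, and the constants below are hypothetical).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_weight_change(t_pre, t_post):
    """Weight update for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre-synaptic spike before post-synaptic spike -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post-synaptic spike before pre-synaptic spike -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Example: a pre-synaptic spike at 10 ms followed by a post-synaptic spike at 15 ms
print(stdp_weight_change(10.0, 15.0))  # small positive change (potentiation)
```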

Relevance: 20.00%

Abstract:

In recent years, the phenomenon called crowdsourcing has been acknowledged as an innovative form of value creation that must be taken seriously. Crowdsourcing can be defined as the act of outsourcing tasks originally performed inside an organization, or assigned externally in the form of a business relationship, to an indefinably large, heterogeneous mass of potential actors. This thesis constructs a framework for the successful implementation of crowdsourcing initiatives. Firms that rely entirely on their own research and ideas cannot compete with the innovative capacity of crowd-powered firms. Crowdsourcing has become one of the key capabilities of businesses: in addition to the firm's existing internal resources, it gives the business access to an enormous pool of competence and knowledge. However, various risks remain, such as uncertainty about the structure of the crowd and the loss of internal know-how. The Crowdsourcing Success Framework introduces a step-by-step model for implementing crowdsourcing in the everyday operations of a business, starting from the decision to utilize crowdsourcing and continuing through planning, organizing, and execution. Finally, this thesis presents the success factors of a crowdsourcing initiative.

Relevance: 20.00%

Abstract:

The main objective of the study was to form a strategic process model and a project management tool to support future IFRS change implementation projects. These research results were designed on the basis of the theoretical framework of Total Quality Management and on the findings collected during the empirical case study of the IAS 17 change. The use of a process-oriented approach in implementing IFRS standard changes after the initial IFRS adoption is justified with the following arguments: 1) well-designed process tools lead to the optimization of resources; 2) with the help of process stages and related tasks it is easy to ensure an efficient way of working and managing the project, as well as to include all necessary stakeholders in the change process. The research follows a qualitative approach and the analysis is descriptive. The first part of the study is a literature review and the latter part has been conducted as a case study. The data were collected in the case company through interviews and observation. The main findings are a process model for the IFRS standard change process and a checklist-style management tool for upcoming IFRS standard change projects. The process flow follows the main cornerstones of the IASB's standard-setting process, and the management tool has been divided into stages accordingly.

Relevance: 20.00%

Abstract:

Physical exercise is associated with parasympathetic withdrawal and increased sympathetic activity resulting in heart rate increase. The rate of post-exercise cardiodeceleration is used as an index of cardiac vagal reactivation. Analysis of heart rate variability (HRV) and complexity can provide useful information about autonomic control of the cardiovascular system. The aim of the present study was to ascertain the association between heart rate decrease after exercise and HRV parameters. Heart rate was monitored in 17 healthy male subjects (mean age: 20 years) during the pre-exercise phase (25 min supine, 5 min standing), during exercise (8 min of the step test with an ascending frequency corresponding to 70% of individual maximal power output) and during the recovery phase (30 min supine). HRV analysis in the time and frequency domains and evaluation of a newly developed complexity measure - sample entropy - were performed on selected segments of heart rate time series. During recovery, heart rate decreased gradually but did not attain pre-exercise values within 30 min after exercise. On the other hand, HRV gradually increased, but did not regain rest values during the study period. Heart rate complexity was slightly reduced after exercise and attained rest values after 30-min recovery. The rate of cardiodeceleration did not correlate with pre-exercise HRV parameters, but positively correlated with HRV measures and sample entropy obtained from the early phases of recovery. In conclusion, the cardiodeceleration rate is independent of HRV measures during the rest period but it is related to early post-exercise recovery HRV measures, confirming a parasympathetic contribution to this phase.
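The study evaluates heart rate complexity with sample entropy; below is a minimal sketch of a common simplified form of the sample entropy (SampEn) calculation, assuming template length m = 2 and tolerance r = 0.2 times the series standard deviation (typical defaults, not parameters stated in the abstract):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Minimal sample entropy (SampEn) of a 1-D series.

    Counts matching template vectors of length m and m+1 within tolerance r
    (Chebyshev distance), excluding self-matches, and returns -ln(A/B).
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)  # assumed default tolerance
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1  # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example with a synthetic RR-interval-like series (values in ms)
rr = 800 + 50 * np.random.randn(300)
print(sample_entropy(rr, m=2))
```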

Relevance: 20.00%

Abstract:

Environmental threats are growing and have become global issues. People around the world try to face these issues in two ways: remediating environments that are already affected and protecting those that are not yet affected. This thesis describes the design, implementation, and evaluation of an online water quality monitoring system in Lake Saimaa, Finland. The water quality in Lake Saimaa needs to be monitored in order to provide the responsible bodies with valuable information that allows them to act quickly to prevent any negative impact on the lake's environment. The objectives were to design a suitable system, implement it in Lake Saimaa, and then evaluate the applicability and reliability of such systems for this environment. The needs for the system were first identified, followed by the design, the necessary modifications, and the construction of the system. The system was then tested in Lake Saimaa at two locations near the city of Mikkeli, and finally the whole system was evaluated. The main result was that applying online water quality monitoring systems in Lake Saimaa offers many advantages, such as reducing the required manpower, time, and running costs. However, the unreliability of the exact measured values of some parameters remains the drawback of such systems; this could be addressed by using more advanced equipment with more sophisticated features designed specifically for monitoring at the predefined location.

Relevance: 20.00%

Abstract:

The main objective of the study is to evaluate the impact of the Lean Innovation management philosophy on the creative potential of a large multinational enterprise. The theory of Lean Innovation indicates that a modern company in any industry can successfully combine a waste-reducing approach with the promotion, or at least preservation, of its innovative potential through the cultivation of creativity. The theoretical part of the work covers the main factors, pros, and cons of Lean thinking and innovation management separately, along with a general overview of new product development. While the modern international market is becoming more accessible to entrepreneurial initiatives, small enterprises, and start-ups, large international corporations are more likely to adopt the Lean Innovation approach in both operations and product development because of their extended resources and capabilities. Moreover, a multinational enterprise is a highly probable pioneer in Lean Innovation implementation. The empirical part of the thesis examines the case of a large European enterprise, operating in many markets around the globe, that is currently adjusting and implementing innovation management in product development, having already optimized its operational processes through Lean thinking. The goal of the work is to understand what kinds of difficulties and consequences a large international firm faces when adopting Lean Innovation to improve its own performance, and whether these lessons can be generalized into a broader approach.

Relevance: 20.00%

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots, leading to thermal problems. Additionally, radiation, which is frequent in space but also becomes an issue at ground level, can cause transient faults. This can eventually induce a faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform while maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.
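Purely as an illustrative sketch of the dynamic-reconfiguration idea described above (not the formally refined models or the VHDL derivation used in the thesis), the following hypothetical fragment shows an agent migrating tasks away from a core reported faulty; all names and structures here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    core_id: int
    healthy: bool = True
    tasks: list = field(default_factory=list)

class ReconfigurationAgent:
    """Hypothetical agent that remaps tasks away from faulty cores."""

    def __init__(self, cores):
        self.cores = cores

    def report_fault(self, core_id):
        """Mark a core faulty and migrate its tasks to healthy cores."""
        faulty = next(c for c in self.cores if c.core_id == core_id)
        faulty.healthy = False
        for task in faulty.tasks:
            # Pick the least-loaded healthy core as the migration target.
            target = min((c for c in self.cores if c.healthy),
                         key=lambda c: len(c.tasks))
            target.tasks.append(task)
        faulty.tasks.clear()

# Example: four cores, core 0 fails and its tasks are redistributed
cores = [Core(i) for i in range(4)]
cores[0].tasks = ["video_decode", "sensor_fusion"]
agent = ReconfigurationAgent(cores)
agent.report_fault(0)
print([(c.core_id, c.tasks) for c in cores])
```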

Relevance: 20.00%

Abstract:

The brain is a complex system, which produces emergent properties such as those associated with activity-dependent plasticity in processes of learning and memory. Therefore, understanding the integrated structures and functions of the brain is well beyond the scope of either superficial or extremely reductionistic approaches. Although a combination of zoom-in and zoom-out strategies is desirable when the brain is studied, constructing the appropriate interfaces to connect all levels of analysis is one of the most difficult challenges of contemporary neuroscience. Is it possible to build appropriate models of brain function and dysfunctions with computational tools? Among the best-known brain dysfunctions, epilepsies are neurological syndromes that reach a variety of networks, from widespread anatomical brain circuits to local molecular environments. One logical question would be: are those complex brain networks always producing maladaptive emergent properties compatible with epileptogenic substrates? The present review will deal with this question and will try to answer it by illustrating several points from the literature and from our laboratory data, with examples at the behavioral, electrophysiological, cellular and molecular levels. We conclude that, because the brain is a complex system compatible with the production of emergent properties, including plasticity, its functions should be approached using an integrated view. Concepts such as brain networks, graph theory, neuroinformatics, and e-neuroscience are discussed as new transdisciplinary approaches for dealing with the continuous growth of information about brain physiology and its dysfunctions. The epilepsies are discussed as neurobiological models of complex systems displaying maladaptive plasticity.

Relevance: 20.00%

Abstract:

Maintenance of thermal homeostasis in rats fed a high-fat diet (HFD) is associated with changes in their thermal balance. The thermodynamic relationship between heat dissipation and energy storage is altered by the ingestion of high-energy diet content. Observation of thermal registers of core temperature behavior, in humans and rodents, permits identification of some characteristics of time series, such as autoreference and stationarity that fit adequately to a stochastic analysis. To identify this change, we used, for the first time, a stochastic autoregressive model, the concepts of which match those associated with physiological systems involved and applied in male HFD rats compared with their appropriate standard food intake age-matched male controls (n=7 per group). By analyzing a recorded temperature time series, we were able to identify when thermal homeostasis would be affected by a new diet. The autoregressive time series model (AR model) was used to predict the occurrence of thermal homeostasis, and this model proved to be very effective in distinguishing such a physiological disorder. Thus, we infer from the results of our study that maximum entropy distribution as a means for stochastic characterization of temperature time series registers may be established as an important and early tool to aid in the diagnosis and prevention of metabolic diseases due to their ability to detect small variations in thermal profile.
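The study fits a stochastic autoregressive (AR) model to core temperature time series; the following minimal sketch shows how an AR(p) model can be fitted by ordinary least squares, with the model order and the synthetic series chosen purely for illustration (the study's own model selection and maximum entropy characterization are not reproduced here):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model: x[t] = c + sum_k a_k * x[t-k] + e[t]."""
    x = np.asarray(x, dtype=float)
    # Build the lagged design matrix: an intercept column plus one column per lag.
    rows = [x[t - p:t][::-1] for t in range(p, len(x))]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid  # coef[0] is the intercept, coef[1:] the AR coefficients

# Hypothetical example: a synthetic core-temperature-like series (degrees C)
rng = np.random.default_rng(0)
temp = 37.0 + 0.1 * np.cumsum(0.1 * rng.standard_normal(500))
coef, resid = fit_ar(temp, p=3)
print("AR coefficients:", coef[1:], "residual variance:", resid.var())
```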