77 results for Measurement-based quantum computing


Relevance: 30.00%

Abstract:

The objective of the study is to find out how sales performance should be measured and how sales should be steered in a multinational company. The beginning of the study concentrates on the literature regarding sales, performance measurement, sales performance measurement, and sales steering. The empirical part of the study is a case study in which the information was acquired from interviews with the key personnel of the company. The results of the interviews and the problems they revealed were analyzed, and possible solutions were compared. When measuring sales performance, it is important to discover the specific needs and objectives for such a system, and those needs should be highlighted in its design. The system should be versatile, and its structure should be in line with the organizational structure. The sales performance measurement system was seen to play an important role in supporting sales steering. However, personal management, and especially conversations, was seen as a critical issue in the steering. Sales performance measurement could be based on the following perspectives: financial, market, customer, people, and future. That way the sales department could react to environmental changes more rapidly.

Relevance: 30.00%

Abstract:

Resonance energy transfer (RET) is a non-radiative transfer of excitation energy from an initially excited luminescent donor to an acceptor. The requirements for resonance energy transfer are: i) spectral overlap between the donor emission spectrum and the acceptor absorption spectrum, ii) close proximity of the donor and the acceptor, and iii) suitable relative orientations of the donor emission and acceptor absorption transition dipoles. As a result of the RET process the donor luminescence intensity and the donor lifetime are decreased. If the acceptor is luminescent, a sensitized acceptor emission appears. The rate of RET depends strongly on the donor–acceptor distance (r) and is inversely proportional to r^6. The distance dependence of RET is utilized in binding assays. The proximity requirement and the selective detection of the RET-modified emission signal allow homogeneous, separation-free assays. The term lanthanide-based RET is used when luminescent lanthanide compounds are used as donors. The long luminescence lifetimes, the large Stokes' shifts and the intense, sharply spiked emission spectra of the lanthanide donors offer advantages over conventional organic donor molecules. Both organic lanthanide chelates and inorganic up-converting phosphor (UCP) particles have been used as donor labels in RET-based binding assays. In the present work lanthanide luminescence and lanthanide-based resonance energy transfer phenomena were studied. Luminescence lifetime measurements had an essential role in the research. Modular frequency-domain and time-domain luminometers were assembled and used successfully in the lifetime measurements. The frequency-domain luminometer operated in the low-frequency domain (below 100 kHz) and utilized a novel dual-phase lock-in detection of the luminescence. One of the studied phenomena was the recently discovered non-overlapping fluorescence resonance energy transfer (nFRET). The studied properties were the distance and temperature dependences of nFRET. The distance dependence was found to deviate from the Förster theory, and a clear temperature dependence was observed, whereas conventional RET was completely independent of temperature. Based on the experimental results, two thermally activated mechanisms were proposed for the nFRET process. The work with the UCP particles involved measuring the luminescence properties of UCP particles synthesized in our laboratory. The goal of the UCP particle research is to develop UCP donor labels for binding assays. In the present work the effects of the dopant concentrations and the core–shell structure on the total up-conversion luminescence intensity, the red–green emission ratio, and the luminescence lifetime were studied. The non-radiative nature of the energy transfer from the UCP particle donors to organic acceptors was also demonstrated for the first time in an aqueous environment and with a controlled donor–acceptor distance.
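
The r^6 distance dependence mentioned above is the textbook Förster relation. As a minimal, generic illustration (not code or parameters from the thesis), the sketch below evaluates the transfer efficiency E = 1 / (1 + (r/R0)^6) for a few donor–acceptor distances, assuming a hypothetical Förster radius R0 of 5 nm.

```python
# Minimal sketch of the textbook Förster (RET) distance dependence.
# R0 (the Förster radius, at which the transfer efficiency is 50%) is a
# hypothetical example value, not a parameter from the thesis.

def ret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Transfer efficiency E = 1 / (1 + (r / R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

if __name__ == "__main__":
    for r in (2.5, 5.0, 7.5, 10.0):
        print(f"r = {r:4.1f} nm  ->  E = {ret_efficiency(r):.3f}")
```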

Relevance: 30.00%

Abstract:

The present manuscript represents the completion of a research path carried out during my doctoral studies at the University of Turku. It contains information regarding my scientific contribution to the field of open quantum systems, accomplished in collaboration with other scientists. The main subject investigated in the thesis is the non-Markovian dynamics of open quantum systems, with a focus on continuous-variable quantum channels, e.g. quantum Brownian motion models. Non-Markovianity is here interpreted as a manifestation of the existence of a flow of information exchanged by the system and environment during the dynamical evolution. While in Markovian systems the flow is unidirectional, i.e. from the system to the environment, in non-Markovian systems there are time windows in which the flow is reversed and the quantum state of the system may regain coherence and correlations previously lost. Signatures of non-Markovian behavior have been studied in connection with the dynamics of quantum correlations such as entanglement or quantum discord. Moreover, in an attempt to recognise non-Markovianity as a resource for quantum technologies, it is proposed, for the first time, to consider its effects in practical quantum key distribution protocols. It is proven that the security of coherent-state protocols can be enhanced using the non-Markovian properties of the transmission channels. The thesis is divided into two parts: in the first part I introduce the reader to the world of continuous-variable open quantum systems and non-Markovian dynamics. The second part consists of a collection of five publications on the topic.

Relevance: 30.00%

Abstract:

In this Thesis I discuss the dynamics of the quantum Brownian motion model in a harmonic potential. This paradigmatic model has an exact solution, making it possible to treat the non-Markovian dynamics analytically as well. The issues covered in this Thesis are themed around decoherence. First, I consider decoherence as the mediator of the quantum-to-classical transition. I examine five different definitions for the nonclassicality of quantum states and show how each definition gives qualitatively different times for the onset of classicality. In particular, I have found that all characterizations of nonclassicality, apart from the one based on the interference term in the Wigner function, result in a finite, rather than asymptotic, time for the emergence of classicality. Second, I examine the diverse effects that coupling to a non-Markovian, structured reservoir has on the system. By comparing different types of Ohmic reservoirs, I derive some general conclusions on the role of the reservoir spectrum in both the short-time and the thermalization dynamics. Finally, I apply these results to two schemes for decoherence control. Both methods are based on the non-Markovian properties of the dynamics.

Relevance: 30.00%

Abstract:

Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is still unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to let memristors perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from a memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
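
The abstract refers to memristor modeling; a common starting point in the literature (not the specific models developed in the thesis) is the HP linear ion-drift model, sketched below with purely illustrative parameter values.

```python
# Minimal sketch of the HP linear ion-drift memristor model (Strukov et al., 2008),
# integrated with a simple Euler step. All parameter values are illustrative only
# and are not taken from the thesis.
import numpy as np

R_ON, R_OFF = 100.0, 16e3   # resistance limits (ohms), example values
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, x0=0.1):
    """Return the memristance over time for a given applied voltage waveform."""
    x = x0                                  # normalised doped-region width, 0..1
    memristance = []
    for v in voltage:
        m = R_ON * x + R_OFF * (1.0 - x)    # series combination of doped/undoped regions
        i = v / m                           # device current
        x += MU_V * R_ON / D**2 * i * dt    # linear ion drift moves the boundary
        x = min(max(x, 0.0), 1.0)           # hard window: keep the state in [0, 1]
        memristance.append(m)
    return np.array(memristance)

if __name__ == "__main__":
    t = np.arange(0.0, 2.0, 1e-4)
    v = 1.0 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz, 1 V sinusoidal drive
    m = simulate(v, dt=1e-4)
    print(f"memristance swings between {m.min():.0f} and {m.max():.0f} ohms")
```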

Relevance: 30.00%

Abstract:

This three-phase study was conducted to examine the effect of the Breast Cancer Patient's Pathway program (BCPP) on breast cancer patients' empowering process, from the viewpoint of the difference between knowledge expectations and perceptions of received knowledge, knowledge level, quality of life, anxiety, and treatment-related side effects during the breast cancer treatment process. The BCPP is an Internet-based patient education tool describing a flow chart of the patient pathway during the breast cancer treatment process, from diagnostic tests to the follow-up after treatments. The ultimate goal of this study was to evaluate the effect of the BCPP on the breast cancer patient's empowerment by using the patient pathway as a patient education tool. In phase I, a systematic literature review was carried out to chart the solutions and outcomes of Internet-based educational programs for breast cancer patients. In phase II, a Delphi study was conducted to evaluate the usability of the web pages and the adequacy of their content. In phase III, the BCPP program was piloted with 10 patients, and patients were randomised to an intervention group (n=50) and a control group (n=48). According to the results of this study, the Internet is an effective patient education tool for increasing knowledge, and the BCPP can be used as a patient education method supporting other education methods. However, breast cancer patients' perceptions of received knowledge were not fulfilled: their knowledge expectations exceeded the perceived amount of received knowledge. Although the control group patients' knowledge expectations were met better by the knowledge they received in hospital compared to the patients in the intervention group, no statistical differences were found between the groups in terms of quality of life, anxiety, or treatment-related side effects. However, anxiety decreased faster in the intervention group when looking at the internal differences between the groups at the different measurement times. In the intervention group the difference between knowledge expectations and perceptions of received knowledge correlated significantly with quality of life and anxiety. The intervention group's knowledge level was also significantly higher than that of the control group. These results support the theory that the empowering process requires the patient's awareness of knowledge expectations and perceptions of received knowledge. There is a need to develop patient education, including oral and written education and the BCPP, so that perceptions of received knowledge meet patients' knowledge expectations and facilitate the empowering process. Further research is needed on the process of cognitive empowerment with breast cancer patients, and new patient education methods are needed to increase breast cancer patients' awareness of knowing.

Relevance: 30.00%

Abstract:

In this doctoral thesis, methods are developed to estimate the expected power cycling life of power semiconductor modules based on chip temperature modeling. Frequency converters operate under dynamic loads in most electric drives. The varying loads cause thermal expansion and contraction, which stresses the internal boundaries between the material layers in the power module. Eventually, this stress wears out the semiconductor modules. The wear-out cannot be detected by traditional temperature or current measurements inside the frequency converter. Therefore, it is important to develop a method to predict the end of the converter lifetime. The thesis concentrates on power-cycling-related failures of insulated gate bipolar transistors (IGBTs). Two types of power modules are discussed: a direct bonded copper (DBC) sandwich structure with and without a baseplate. The most common failure mechanisms are reviewed, and methods to improve the power cycling lifetime of the power modules are presented. Power cycling curves are determined for a module with a lead-free solder by accelerated power cycling tests. A lifetime model is selected, and its parameters are updated based on the power cycling test results. According to the measurements, the power cycling lifetime of IGBT power modules has improved by a factor of more than 10 during the last decade. It is also observed that a 10 °C increase in the chip temperature cycle amplitude decreases the lifetime by 40%. A thermal model for chip temperature estimation is developed. The model is based on estimating the power loss of the chip from the output current of the frequency converter. The model is verified with purpose-built test equipment, which allows simultaneous measurement and simulation of the chip temperature with an arbitrary load waveform. The measurement system is shown to be convenient for studying the thermal behavior of the chip. The thermal model is found to estimate the temperature with an accuracy of 5 °C. The temperature cycles that the power semiconductor chip has experienced are counted by the rainflow algorithm. The counted cycles are compared with the experimentally verified power cycling curves to estimate the life consumption based on the mission profile of the drive. The methods are validated by the lifetime estimation of a power module in a direct-driven wind turbine. The estimated lifetime of the IGBT power module in a direct-driven wind turbine is 15 000 years, if the turbine is located in south-eastern Finland.
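
As a back-of-the-envelope illustration of the lifetime bookkeeping described above (not the thesis's fitted lifetime model), the sketch below combines rainflow-counted temperature cycles with Miner's linear damage rule, using the abstract's rule of thumb that a 10 °C increase in cycle amplitude reduces the cycles-to-failure by 40%. The reference point N_REF at DT_REF is a hypothetical value.

```python
# Minimal sketch: accumulate power-cycling damage from counted temperature cycles
# using Miner's rule. The lifetime curve below is a simple exponential fit to the
# "40% fewer cycles per +10 °C" rule of thumb quoted in the abstract; N_REF and
# DT_REF are hypothetical reference values, not the thesis's fitted parameters.

N_REF = 1e6     # cycles to failure at the reference amplitude (assumed)
DT_REF = 40.0   # reference temperature cycle amplitude in °C (assumed)

def cycles_to_failure(dt_amplitude: float) -> float:
    """Cycles to failure: lifetime drops by 40% for every +10 °C in amplitude."""
    return N_REF * 0.6 ** ((dt_amplitude - DT_REF) / 10.0)

def life_consumption(counted_cycles):
    """Miner's rule: sum n_i / N_i over rainflow-counted (amplitude, count) pairs."""
    return sum(n / cycles_to_failure(dt) for dt, n in counted_cycles)

if __name__ == "__main__":
    # (temperature cycle amplitude in °C, number of occurrences) from a mission profile
    profile = [(20.0, 5000), (40.0, 800), (60.0, 50)]
    print(f"consumed life fraction: {life_consumption(profile):.4f}")
```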

Relevance: 30.00%

Abstract:

Measurement is a tool for research. It is therefore important that the measuring process is carried out correctly, without distorting the signal or the measured event. Research on thermoelectric phenomena has, in recent decades, focused increasingly on transverse thermoelectric effects. The transverse Seebeck effect makes it possible to produce thinner and faster heat flux sensors than before. Studies of the transverse Seebeck effect have so far concentrated on materials, so this Master's Thesis studies the instrumentation of a heat flux sensor based on the transverse Seebeck effect. The thesis examines an equivalent circuit for transverse Seebeck effect heat flux sensors, their connection to electronics, and the selection and design of a suitable amplifier. The research is carried out as a case study involving Gradient Heat Flux Sensors and an electric motor. In this work, a general equivalent circuit is presented for a heat flux sensor based on the transverse Seebeck effect. An amplifier was designed for the sensor of the case study, and a solution was produced for measuring the local heat flux of the electric motor while improving electromagnetic compatibility.

Relevance: 30.00%

Abstract:

This Thesis discusses the phenomenology of the dynamics of open quantum systems marked by non-Markovian memory effects. Non-Markovian open quantum systems are the focal point of a flurry of recent research aiming to answer, e.g., the following questions: What is the characteristic trait of non-Markovian dynamical processes that distinguishes them from forgetful Markovian dynamics? What is the microscopic origin of memory in quantum dynamics, and how can it be controlled? Does the existence of memory effects open new avenues and enable accomplishments that cannot be achieved with Markovian processes? These questions are addressed in the publications forming the core of this Thesis with case studies of both prototypical and more exotic models of open quantum systems. In the first part of the Thesis several ways of characterizing and quantifying non-Markovian phenomena are introduced. Their differences are then explored using a driven, dissipative qubit model. The second part of the Thesis focuses on the dynamics of a purely dephasing qubit model, which is used to unveil the origin of non-Markovianity for a wide class of dynamical models. The emergence of memory is shown to be strongly intertwined with the structure of the spectral density function, as further demonstrated in a physical realization of the dephasing model using ultracold quantum gases. Finally, as an application of memory effects, it is shown that non-Markovian dynamical processes facilitate a novel phenomenon of time-invariant discord, where the total quantum correlations of a system are frozen to their initial value. Non-Markovianity can also be exploited in the detection of phase transitions using quantum information probes, as shown using the physically interesting models of the Ising chain in a transverse field and a Coulomb chain undergoing a structural phase transition.
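
One standard way to witness the information back-flow discussed above is to track the trace distance between two evolving states of the system: any increase signals memory effects. The sketch below (a generic illustration, not necessarily the quantifier used in the thesis) does this for a pure-dephasing qubit with a hypothetical, oscillating decoherence function.

```python
# Minimal sketch: witnessing memory effects via the trace distance between two
# qubit states undergoing pure dephasing. The decoherence function kappa(t) below
# is a hypothetical example chosen to show revivals, not a model from the thesis.
import numpy as np

def trace_distance(rho, sigma):
    """D(rho, sigma) = 0.5 * sum of singular values of (rho - sigma)."""
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

def dephase(rho, kappa):
    """Pure dephasing: off-diagonal elements are multiplied by kappa(t)."""
    out = rho.copy()
    out[0, 1] *= kappa
    out[1, 0] *= np.conj(kappa)
    return out

# An initial pair that is optimal for pure dephasing: |+> and |->
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

times = np.linspace(0.0, 5.0, 200)
kappa = np.exp(-times) * np.cos(3.0 * times)      # assumed decoherence function
D = np.array([trace_distance(dephase(plus, k), dephase(minus, k)) for k in kappa])

# Any interval where D increases signals information flowing back to the system.
backflow = np.sum(np.clip(np.diff(D), 0.0, None))
print(f"integrated trace-distance back-flow (BLP-type witness): {backflow:.3f}")
```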

Relevance: 30.00%

Abstract:

The JÄKÄLA algorithm (Jatkuvan Äänitehojakautuman algoritmi Käytävien Äänikenttien LAskentaan, a continuous sound power distribution algorithm for calculating the sound fields of corridors) and its NUMO and APPRO calculation equations are based on the symmetry of the image sources of a real sound source located in a corridor. NUMO is the calculation equation for the numerical solution of the algorithm and APPRO for the approximate solution. In deriving the algorithm it was assumed that the absorption material was distributed evenly on the sound-reflecting surfaces of the corridor. The transformation of the image-source plane of a rectangular corridor into a continuous sound power distribution involves three steps. First, the rectangular image-source plane is transformed into a square one. Next, the equal-valued image sources of the square image-source plane are moved onto a coordinate axis to form a discrete sequence of image sources. Finally, the image-source sequence is transformed into a continuous sound power distribution, so that the sound pressure level at a receiving point in the corridor can be calculated by integrating over the continuous sound power distribution. The validity of the JÄKÄLA algorithm was checked against the tested commercial AKURI program. The AKURI program also gave a good idea of how the values calculated with the NUMO and APPRO equations may differ from values measured in real corridors. The NUMO and APPRO equations of the JÄKÄLA algorithm were also tested by comparing their results with sound pressure level measurements in three different types of corridors. This study has shown that, on the basis of acoustic image-source theory, it is possible to derive a calculation algorithm that can be applied to the rapid on-site assessment of the sound fields of long corridors. Both theoretical calculations and practical sound pressure level measurements in real corridors showed that the predictive accuracy of the equations of the JÄKÄLA algorithm was excellent in ideal corridors and good in real corridors without sound-reflecting structures. The NUMO and APPRO equations appear to work well in corridors whose cross-section is nearly square and in which the largest absorption coefficient of the surfaces is at most ten times the smallest. The main shortcoming of the NUMO and APPRO equations is that they do not take into account different absorption coefficients of the surfaces or sounds reflected from objects. The NUMO and APPRO calculation equations deviated most from the measured values in corridors where the absorption coefficient of two opposite surfaces was very large and that of the other pair very small, and in corridors with massive sound-reflecting pillars and beams. Nevertheless, in the corridors studied, the NUMO and APPRO equations of the JÄKÄLA algorithm gave clearly more accurate values than Kuttruff's approximation equation and the basic equation of statistical room acoustics. The calculation accuracy of the JÄKÄLA algorithm has so far been tested in only four real corridors. To develop the algorithm further, the opposite surfaces of a corridor and their absorption coefficients should in future be treated in pairs in the calculation. To confirm the validity of the algorithm, further measurements are needed in corridors whose distributions of absorption material differ from one another.
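
For orientation, the sketch below illustrates the classical image-source sum on which such corridor algorithms build. It is the generic textbook method only, not the NUMO or APPRO equations of the JÄKÄLA algorithm; the geometry, absorption coefficient and source power are hypothetical example values.

```python
# Minimal sketch of the classical image-source sum for a long corridor with a
# rectangular cross-section: images are mirrored across the two wall pairs and
# their energy contributions are summed at the receiver.
import math

def corridor_spl(lw_db, alpha, width, height, src, rcv, axial_dist, order=30):
    """SPL (dB) at the receiver from a point source, summing image sources.

    lw_db: sound power level of the source (dB re 1 pW)
    alpha: average absorption coefficient of the reflecting surfaces
    src, rcv: (y, z) positions of source and receiver in the cross-section (m)
    axial_dist: distance between source and receiver along the corridor (m)
    """
    total = 0.0
    for m in range(-order, order + 1):
        for sy, ny in ((2*m*width + src[0], abs(2*m)), (2*m*width - src[0], abs(2*m - 1))):
            for n in range(-order, order + 1):
                for sz, nz in ((2*n*height + src[1], abs(2*n)), (2*n*height - src[1], abs(2*n - 1))):
                    r2 = axial_dist**2 + (sy - rcv[0])**2 + (sz - rcv[1])**2
                    total += (1.0 - alpha) ** (ny + nz) / (4.0 * math.pi * r2)
    return lw_db + 10.0 * math.log10(total)

if __name__ == "__main__":
    # 2.4 m x 2.6 m corridor cross-section, receiver 10 m down the corridor
    print(f"{corridor_spl(90.0, 0.1, 2.4, 2.6, (1.2, 1.5), (1.2, 1.5), 10.0):.1f} dB")
```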

Relevance: 30.00%

Abstract:

In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture that can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as the supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance, and energy efficiency.
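
As a generic illustration of why the voltage and frequency scaling mentioned above saves energy (a textbook first-order CMOS model, not the thesis's agent-based framework), the sketch below compares the energy of a fixed workload at two hypothetical operating points.

```python
# Minimal sketch of first-order CMOS energy scaling under DVFS.
# P_dyn = a * C * V^2 * f  (switching)   P_stat = V * I_leak  (leakage)
# All parameter values are hypothetical examples, not figures from the thesis.

def task_energy(cycles, volt, freq_hz, c_eff=1e-9, activity=0.2, i_leak=0.05):
    """Energy (J) to run `cycles` clock cycles at a given voltage/frequency point."""
    runtime = cycles / freq_hz
    p_dynamic = activity * c_eff * volt**2 * freq_hz
    p_static = volt * i_leak
    return (p_dynamic + p_static) * runtime

if __name__ == "__main__":
    work = 2e9  # cycles of work to complete
    for v, f in ((1.1, 2.0e9), (0.8, 1.0e9)):   # two hypothetical operating points
        print(f"V = {v:.1f} V, f = {f/1e9:.1f} GHz -> {task_energy(work, v, f):.3f} J")
```

Lowering the voltage and frequency lengthens the runtime but reduces the quadratic V^2 term, so the total energy for the same amount of work drops, which is the basic trade-off that run-time DVFS policies exploit.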

Relevance: 30.00%

Abstract:

In this Thesis various aspects of memory effects in the dynamics of open quantum systems are studied. We develop a general theoretical framework for open quantum systems beyond the Markov approximation, which allows us to investigate different sources of memory effects and to develop methods for harnessing them in order to realise controllable open quantum systems. In the first part of the Thesis a characterisation of non-Markovian dynamics in terms of information flow is developed and applied to study different sources of memory effects. Namely, we study nonlocal memory effects which arise due to initial correlations between two local environments, and further the memory effects induced by initial correlations between the open system and the environment. The last part focuses on two all-optical experiments in which the information flow between the system and the environment is controlled through selective preparation of the initial environment states. In the first experiment the system is driven from the Markovian to the non-Markovian regime and the degree of non-Markovianity is determined. In the second experiment we observe the nonlocal nature of the memory effects and provide a novel method to experimentally quantify frequency correlations in photonic environments via polarisation measurements.

Relevance: 30.00%

Abstract:

In this Master's thesis, agent-based modeling has been used to analyze maintenance strategy related phenomena. The main research question answered was: what does the agent-based model built for this study tell us about how different maintenance strategy decisions affect the profitability of equipment owners and maintenance service providers? Thus, the main outcome of this study is an analysis of how profitability can be increased in an industrial maintenance context. To answer that question, a literature review of maintenance strategy, agent-based modeling, and maintenance modeling and optimization was first conducted. This review provided the basis for building the agent-based model, and building the model followed a standard simulation modeling procedure. The research question was then answered with the simulation results from the agent-based model. Specifically, the results of the modeling and this study are: (1) optimizing the point at which a machine is maintained increases profitability for the owner of the machine, and under certain conditions also for the maintainer; (2) time-based pricing of maintenance services leads to a zero-sum game between the parties; (3) value-based pricing of maintenance services leads to a win-win game between the parties, if the owners of the machines share a substantial amount of their value with the maintainers; and (4) the error in machine condition measurement is a critical parameter in optimizing the maintenance strategy, and there is real systemic value in having more accurate machine condition measurement systems.
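
To illustrate result (1) in the simplest possible terms, the sketch below is a Monte Carlo comparison of condition thresholds for preventive maintenance. It is not the thesis's agent-based model; the degradation and cost parameters are hypothetical, chosen only to show that maintaining too early or too late both raise the owner's cost.

```python
# Minimal sketch (not the thesis's model): a machine's condition degrades by a
# random amount each period; maintaining before failure costs less than repairing
# a failure. Sweeping the maintenance threshold reveals a cost-optimal point.
import random

FAIL_LIMIT = 100.0   # condition level at which the machine fails
C_PREVENTIVE = 1.0   # cost of planned maintenance (assumed)
C_FAILURE = 5.0      # cost of run-to-failure repair (assumed)

def cost_per_period(threshold, periods=200_000, seed=1):
    rng = random.Random(seed)
    condition, total_cost = 0.0, 0.0
    for _ in range(periods):
        condition += rng.uniform(0.0, 4.0)          # random degradation this period
        if condition >= FAIL_LIMIT:                 # unplanned failure
            total_cost += C_FAILURE
            condition = 0.0
        elif condition >= threshold:                # planned maintenance
            total_cost += C_PREVENTIVE
            condition = 0.0
    return total_cost / periods

if __name__ == "__main__":
    for thr in (60.0, 80.0, 90.0, 95.0, 99.0):
        print(f"maintain at {thr:5.1f}: cost/period = {cost_per_period(thr):.4f}")
```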

Relevance: 30.00%

Abstract:

Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the type of the missingness mechanism, and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
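
As a generic illustration of the IPCW idea referred to above (not the study's actual estimation setup), the sketch below computes a weighted Kaplan-Meier estimate by hand, where each spell carries an inverse-probability-of-censoring weight.

```python
# Minimal sketch of a weighted Kaplan-Meier estimator, the building block behind
# the IPCW correction mentioned in the abstract. The durations, event indicators
# and weights below are made-up illustrative data; in an IPCW analysis the weights
# would come from a model of the censoring process.
import numpy as np

def weighted_km(durations, events, weights):
    """Return (unique event times, survival estimates) using weighted risk sets."""
    durations = np.asarray(durations, float)
    events = np.asarray(events, bool)
    weights = np.asarray(weights, float)
    times = np.unique(durations[events])
    surv, s = [], 1.0
    for t in times:
        at_risk = weights[durations >= t].sum()            # weighted number still at risk
        failed = weights[(durations == t) & events].sum()  # weighted events at time t
        s *= 1.0 - failed / at_risk
        surv.append(s)
    return times, np.array(surv)

if __name__ == "__main__":
    durations = [2, 3, 3, 5, 8, 8, 12, 15]
    events =    [1, 1, 0, 1, 0, 1,  1,  0]                 # 1 = spell ended, 0 = censored
    weights =   [1.0, 1.2, 0.9, 1.5, 1.0, 0.8, 1.1, 1.0]   # hypothetical IPC weights
    for t, s in zip(*weighted_km(durations, events, weights)):
        print(f"t = {t:4.0f}: S(t) = {s:.3f}")
```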

Relevance: 30.00%

Abstract:

This study concerns performance measurement and management in a collaborative network. Collaboration between companies has increased in recent years due to the turbulent operating environment. The literature shows that there is a need for more comprehensive research on performance measurement in networks and on the use of measurement information in their management. This study examines the development process and the uses of a performance measurement system supporting performance management in a collaborative network. There are two main research questions: how to design a performance measurement system for a collaborative network, and how to manage performance in a collaborative network. The work can be characterised as a qualitative single case study. The empirical data was collected in a Finnish collaborative network, which consists of a leading company and a reseller network. The work is based on five research articles applying various research methods. The research questions are examined at the network level and at the level of a single network partner. The study contributes to the earlier literature by producing a new and deeper understanding of network-level performance measurement and management. A three-step process model is presented to support the design of the performance measurement system. The process model has been tested in another collaborative network. The study also examines the factors affecting the design process of the measurement system. The results show that a participatory development style, the network culture, and outside facilitators have a positive effect on the design process. The study increases understanding of how to manage performance in a collaborative network and what kinds of uses of performance information can be identified in a collaborative network. The results show that a performance measurement system is an applicable tool for managing the performance of a network. The results reveal that trust and openness increased during the utilisation of the performance measurement system, and operations became more transparent. The study also presents a management model that evaluates the maturity of performance management in a collaborative network. The model is a practical tool that helps to analyse the current stage of performance management in a collaborative network and to develop it further.