931 results for cryptographic protocols
Abstract:
Nineteen studies met the inclusion criteria. A skin temperature reduction of 5–15 °C, in accordance with the recent PRICE (Protection, Rest, Ice, Compression and Elevation) guidelines, was achieved using cold air, ice massage, crushed ice, cryotherapy cuffs, ice packs, and cold water immersion. There is evidence supporting the use and effectiveness of thermal imaging to assess skin temperature following the application of cryotherapy. Thermal imaging is a safe and non-invasive method of collecting skin temperature data. Although further research is required to structure specific guidelines and protocols, thermal imaging appears to be an accurate and reliable method of collecting skin temperature data following cryotherapy. Currently, there is ambiguity regarding the optimal skin temperature reduction in a medical or sporting setting. However, this review highlights the ability of several different modalities of cryotherapy to reduce skin temperature.
Abstract:
Medical industries have brought Information Technology (IT) into their systems for both patients and medical staff because of the numerous benefits of IT we experience at present. Moreover, the Mobile Healthcare (M-health) system has been developed as the first step towards a Ubiquitous Health Environment (UHE). With its mobility and multiple functions, an M-health system will be able to provide more efficient and varied services for both doctors and patients. Because mobile signals are invisible, however, hackers have easier access to hospital networks than in wired network systems. This may result in several security incidents unless security protocols are well implemented. In this paper, user authentication and authorization procedures are applied as a featured component at each level of the M-health system in the hospital environment. Accordingly, the M-health system in the hospital will meet the optimal requirements as a countermeasure to its vulnerabilities.
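The abstract does not detail how the per-level authentication and authorization procedures work. The sketch below is a minimal assumed illustration of the idea: each request is authenticated, then checked against a role-to-level permission table before any hospital system level is reached. The roles, levels, and credential handling are placeholders, not the paper's actual procedures.

```python
# Hedged sketch: role-based authorization applied at each level of a
# hospital M-health system. Roles, levels and the credential check are
# illustrative placeholders, not the paper's actual procedures.
import hashlib
import hmac

# Which system levels each staff role may reach (assumed hierarchy).
PERMISSIONS = {
    "doctor": {"ward", "records", "prescriptions"},
    "nurse": {"ward", "records"},
    "patient": {"own_records"},
}

# Stored credentials: username -> (role, salted password hash). Demo values only.
USERS = {
    "alice": ("doctor", hashlib.sha256(b"salt" + b"alice-pass").hexdigest()),
    "bob": ("nurse", hashlib.sha256(b"salt" + b"bob-pass").hexdigest()),
}

def authenticate(username: str, password: str):
    """Return the user's role if the password matches, otherwise None."""
    entry = USERS.get(username)
    if entry is None:
        return None
    role, stored_hash = entry
    candidate = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return role if hmac.compare_digest(candidate, stored_hash) else None

def authorize(role: str, level: str) -> bool:
    """Check the role's permission for the requested system level."""
    return level in PERMISSIONS.get(role, set())

if __name__ == "__main__":
    role = authenticate("bob", "bob-pass")
    print(role, authorize(role, "ward"), authorize(role, "prescriptions"))
    # nurse True False
```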
Abstract:
Waitrose has a strong commitment to organic farming but also uses products from 'conventional' farms. At the production stage, Waitrose own-label products are fully traceable and GM-free, and all suppliers undergo a detailed assessment programme based on current best practice. Crop suppliers to Waitrose operate an authenticity programme to certify that each consignment is GM-free, and produce is screened for pesticide residues. Waitrose sources conventional crops grown under 'Integrated Crop Management Systems' (ICMS) using best horticultural practices. The 'Assured Product' scheme regulates all UK produce to ICMS standards, and these audits are being extended worldwide. Business is withdrawn from suppliers who fail the audit. In relation to this, Waitrose has increased its Fairtrade range as, in its view, 'Buying these products provides direct additional benefit to workers in the developing countries where they are produced and assists marginal producers by giving them access to markets they would not otherwise have'. Currently, Waitrose is developing its own sustainable timber assessment criteria. For livestock, protocols are in place to ensure that animals are reared under the 'most natural conditions possible', and free-range produce is offered where animals have access to open space, although some produce is not from free-range animals. Waitrose also uses a 'Hazard Analysis Critical Control Points' (HACCP) system to identify food safety hazards that may occur at any stage from production to point of sale and to ensure that full measures are in place to control them. In addition, mechanisms have been implemented to reduce fuel use, and hence CO2 emissions, in the transport of products and staff, and to increase the energy-use efficiency of refrigeration systems, which account for approximately 60% of Waitrose energy use.
Abstract:
A Cooperative Collision Warning System (CCWS) is an active safety technology for road vehicles that can potentially reduce traffic accidents. It provides a driver with situational awareness and early warnings of any possible collisions through an on-board unit. CCWS is still under active research, and one of the important technical problems is safety message dissemination. Safety messages are disseminated in a high-speed mobile environment using wireless communication technology such as Dedicated Short Range Communication (DSRC). The wireless communication in CCWS has limited bandwidth and can become unreliable when used inefficiently, particularly given the dynamic nature of road traffic conditions. Unreliable communication may significantly reduce the performance of CCWS in preventing collisions. There are two types of safety messages: Routine Safety Messages (RSMs) and Event Safety Messages (ESMs). An RSM contains the up-to-date state of a vehicle, and it must be disseminated repeatedly to its neighbouring vehicles. An ESM is a warning message that must be sent to all the endangered vehicles. Existing RSM and ESM dissemination schemes are inefficient, unscalable, and unable to give priority to the vehicles in most danger. Thus, this study investigates more efficient and scalable RSM and ESM dissemination schemes that can make use of the context information generated from a particular traffic scenario. The study therefore tackles three technical research problems: vehicular traffic scenario modelling and context information generation, context-aware RSM dissemination, and context-aware ESM dissemination. The most relevant context information in CCWS is the information about possible collisions among vehicles given the current vehicular traffic situation. To generate this context information, the study investigates techniques to model interactions among multiple vehicles based on their up-to-date motion state obtained via RSMs. To date, there is no existing model that can represent interactions among multiple vehicles in a specific region and at a particular time. The major outcome from the first problem is a new interaction graph model that can be used to easily identify the endangered vehicles and their danger severity. By identifying the endangered vehicles, RSM and ESM dissemination can be optimised while improving safety at the same time. The new model enables the development of context-aware RSM and ESM dissemination schemes. To disseminate RSMs efficiently, the study investigates a context-aware dissemination scheme that can optimise the RSM dissemination rate to improve safety across various vehicle densities. The major outcome from the second problem is a context-aware RSM dissemination protocol. The context-aware protocol can adaptively adjust the dissemination rate based on an estimated channel load and the danger severity of vehicle interactions given by the interaction graph model. Unlike existing RSM dissemination schemes, the proposed adaptive scheme can reduce channel congestion and improve safety by prioritising vehicles that are most likely to crash with other vehicles. The proposed RSM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed RSM protocol outperforms existing protocols in terms of efficiency, scalability and safety.
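The abstract describes an interaction graph whose edges carry a danger severity, and an RSM rate adapted to channel load and that severity, but it does not give the thesis's formulas. The sketch below is a minimal illustration under assumed definitions: severity is approximated by inverse time-to-collision for same-lane pairs, and the beacon rate is scaled between assumed minimum and maximum rates.

```python
# Hedged sketch: interaction graph + adaptive RSM (beacon) rate.
# The severity metric (inverse time-to-collision) and the rate bounds
# are illustrative assumptions, not the thesis's actual model.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    position: float   # metres along the road
    speed: float      # m/s
    lane: int

def danger_severity(follower: Vehicle, leader: Vehicle) -> float:
    """Inverse time-to-collision; 0 when the follower is not closing in."""
    gap = leader.position - follower.position
    closing_speed = follower.speed - leader.speed
    if gap <= 0 or closing_speed <= 0:
        return 0.0
    return closing_speed / gap  # 1 / TTC

def interaction_graph(vehicles):
    """Edges (follower, leader, severity) for same-lane pairs that may collide."""
    edges = []
    for a in vehicles:
        for b in vehicles:
            if a is not b and a.lane == b.lane and b.position > a.position:
                s = danger_severity(a, b)
                if s > 0:
                    edges.append((a.vid, b.vid, s))
    return edges

def rsm_rate(own_severity: float, channel_load: float,
             min_hz: float = 1.0, max_hz: float = 10.0) -> float:
    """Raise the beacon rate with danger, back off as the channel fills up."""
    target = min_hz + (max_hz - min_hz) * min(own_severity, 1.0)
    return max(min_hz, target * (1.0 - channel_load))

if __name__ == "__main__":
    cars = [Vehicle(1, 0.0, 30.0, 0), Vehicle(2, 40.0, 20.0, 0), Vehicle(3, 90.0, 25.0, 1)]
    graph = interaction_graph(cars)
    print(graph)                      # [(1, 2, 0.25)]
    print(rsm_rate(0.25, 0.3))        # vehicle 1's adapted beacon rate
```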
To disseminate ESMs efficiently, the study investigates a context-aware ESM dissemination scheme that can reduce unnecessary transmissions and deliver ESMs to endangered vehicles as fast as possible. The major outcome from the third problem is a context-aware ESM dissemination protocol that uses a multicast routing strategy. Existing ESM protocols use broadcast routing, which is not efficient because ESMs may be sent to a large number of vehicles in the area. Using multicast routing improves efficiency because ESMs are sent only to the endangered vehicles, which can be identified using the interaction graph model. The proposed ESM protocol has been implemented and evaluated by simulation. The simulation results have shown that the proposed ESM protocol is better at preventing potential accidents than existing ESM protocols. The context model and the RSM and ESM dissemination protocols can be implemented in any CCWS development to improve the communication and safety performance of CCWS. In effect, the outcomes contribute to the realisation of CCWS that will ultimately improve road safety and save lives.
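As a companion to the sketch above, the snippet below illustrates how the interaction graph could be used to pick ESM recipients: starting from the vehicle that detected the event, it collects every vehicle reachable through danger edges and multicasts only to that set. The traversal direction and the severity threshold are assumptions for illustration, not the protocol's actual rules.

```python
# Hedged sketch: multicast ESMs only to vehicles endangered by an event,
# found by traversing danger edges of the interaction graph upstream.
from collections import defaultdict, deque

def endangered_set(edges, event_vehicle, threshold=0.1):
    """Vehicles whose danger chain (severity >= threshold) leads to the event vehicle."""
    upstream = defaultdict(list)            # leader -> [followers at risk]
    for follower, leader, severity in edges:
        if severity >= threshold:
            upstream[leader].append(follower)
    endangered, queue = set(), deque([event_vehicle])
    while queue:
        v = queue.popleft()
        for f in upstream[v]:
            if f not in endangered:
                endangered.add(f)
                queue.append(f)
    return endangered

if __name__ == "__main__":
    # (follower, leader, severity) edges; vehicle 5 has detected a hazard.
    edges = [(1, 2, 0.25), (2, 5, 0.4), (3, 4, 0.05), (4, 5, 0.2)]
    print(endangered_set(edges, event_vehicle=5))   # {1, 2, 4}
```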
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warning and lane keeping, a Global Navigation Satellite System (GNSS) based vehicle positioning system has to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single-frequency GPS receiver can only provide road-level accuracy of 5-10 metres. The positioning accuracy can be improved to sub-metre or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate precise positioning system performance when operating in high-mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual-frequency GPS receiver; and ii) a low-cost single-frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies for reducing the load that correction data transmission places on data communication networks, which may be problematic for the future deployment of wide-area ITS services. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments was designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 metres in the static tests and 0.2 to 0.3 metres in the kinematic tests, while PPP achieved 0.5 to 1.5 metres in the static tests and 1 to 1.8 metres in the kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level-accuracy vehicle positioning. The professional-grade (dual-frequency) and mass-market-grade (single-frequency) GNSS receivers were tested for their performance using RTK in static and kinematic modes. The analysis showed that mass-market-grade receivers provide good solution continuity, although the overall positioning accuracy is worse than that of professional-grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, and so on. The overall network throughput and latency of UDP data transmission are 76.5% and 83.6% of those of TCP data transmission, while the overall accuracy of the positioning solutions remains at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests also showed that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by an appropriate setting of the Age of Differential. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 metres. The results showed that the position accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when using correction data transmitted at intervals of up to 20 seconds.
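The thesis compares TCP and UDP for shipping correction data, but the abstract does not describe its test harness. The sketch below is an assumed minimal illustration of the UDP side: corrections are pushed with a sequence number so the rover can count dropped packets, mirroring the packet-loss figures reported above. The host, port, and message framing are placeholders.

```python
# Hedged sketch: stream correction messages over UDP with a sequence
# number so the receiver can measure packet loss. Host/port and the
# message payloads are illustrative placeholders, not the thesis setup.
import socket
import struct

HOST, PORT = "127.0.0.1", 2101          # assumed test endpoint

def send_corrections(messages):
    """Base-station side: prefix each correction with a 4-byte sequence number."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, payload in enumerate(messages):
        sock.sendto(struct.pack("!I", seq) + payload, (HOST, PORT))
    sock.close()

def receive_corrections(expected, timeout=1.0):
    """Rover side: collect packets and report the loss rate from sequence gaps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    sock.settimeout(timeout)
    seen = set()
    try:
        while len(seen) < expected:
            data, _ = sock.recvfrom(4096)
            seq = struct.unpack("!I", data[:4])[0]
            seen.add(seq)               # data[4:] would be fed to the RTK engine
    except socket.timeout:
        pass
    sock.close()
    return 1.0 - len(seen) / expected   # fraction of packets lost
```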
Abstract:
Decellularized tissues can provide a unique biological environment for regenerative medicine applications, but only if minimal disruption of their microarchitecture is achieved during the decellularization process. The goal is to keep the structural integrity of such a construct as functional as the tissues from which it was derived. In this work, cartilage-on-bone laminates were decellularized through enzymatic, non-ionic and ionic protocols. This work investigated the effects of the decellularization process on the microarchitecture of the cartilaginous extracellular matrix, determining the extent to which each process deteriorated the structural organization of the network. High resolution microscopy was used to capture cross-sectional images of samples prior to and after treatment. The variation of the microarchitecture was then analysed using a well-defined fast Fourier image processing algorithm. Statistical analysis of the results revealed how significant the alterations among the aforementioned protocols were (p < 0.05). Ranking the treatments by their effectiveness in disrupting ECM integrity, they were ordered as: trypsin > SDS > Triton X-100.
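The abstract's Fourier-based analysis of microarchitecture is not spelled out. As a rough, assumed illustration, the snippet below computes the 2D power spectrum of a micrograph and a simple anisotropy ratio from it, the kind of metric that could distinguish an organised fibre network from a disrupted one; the metric itself is a placeholder, not the authors' algorithm.

```python
# Hedged sketch: quantify ECM organisation from a micrograph using the
# 2D FFT power spectrum. The anisotropy ratio used here is illustrative,
# not the paper's actual measure.
import numpy as np

def anisotropy_ratio(image: np.ndarray) -> float:
    """Ratio of spectral energy along the vertical vs horizontal frequency axes.

    Values far from 1 suggest aligned (organised) fibres; values near 1
    suggest an isotropic, possibly disrupted, network.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    cy, cx = np.array(spectrum.shape) // 2
    vertical = spectrum[:, cx].sum()      # energy along the vertical frequency axis
    horizontal = spectrum[cy, :].sum()    # energy along the horizontal frequency axis
    return float(vertical / horizontal)

if __name__ == "__main__":
    # Synthetic example: horizontal stripes mimic aligned fibres.
    y = np.arange(256)[:, None]
    striped = np.tile(np.sin(2 * np.pi * y / 16), (1, 256))
    noise = np.random.default_rng(0).normal(size=(256, 256))
    print(anisotropy_ratio(striped))   # >> 1, strongly anisotropic
    print(anisotropy_ratio(noise))     # ~ 1, isotropic
```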
Abstract:
Client puzzles are cryptographic problems that are neither easy nor hard to solve. Most puzzles are based on either number-theoretic or hash inversion problems. Hash-based puzzles are very efficient but so far have been shown secure only in the random oracle model; number-theoretic puzzles, while secure in the standard model, tend to be inefficient. In this paper, we solve the problem of constructing cryptographic puzzles that are secure in the standard model and are very efficient. We present an efficient number-theoretic puzzle that satisfies the puzzle security definition of Chen et al. (ASIACRYPT 2009). To prove the security of our puzzle, we introduce a new variant of the interval discrete logarithm assumption, which may be of independent interest, and show this new problem to be hard under reasonable assumptions. Our experimental results show that, for a 512-bit modulus, the solution verification time of our proposed puzzle can be up to 50x and 89x faster than the Karame-Capkun puzzle and Rivest et al.'s time-lock puzzle, respectively. In particular, the solution verification time of our puzzle is only 1.4x slower than that of Chen et al.'s efficient hash-based puzzle.
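The paper's own number-theoretic construction is not described in the abstract, so the sketch below instead illustrates the contrasting hash-based style of client puzzle that the abstract mentions: the server issues a random challenge and a difficulty, the client searches for a nonce whose hash has the required number of leading zero bits, and verification is a single hash. The parameters and encoding are assumptions.

```python
# Hedged sketch: a hash-based client puzzle (the style the abstract
# contrasts with its number-theoretic construction). Difficulty and
# encoding choices are illustrative.
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def generate_puzzle(difficulty: int = 20):
    """Server side: a fresh random challenge plus the required difficulty."""
    return os.urandom(16), difficulty

def solve_puzzle(challenge: bytes, difficulty: int) -> int:
    """Client side: brute-force a nonce; expected work grows as 2**difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify_solution(challenge: bytes, difficulty: int, nonce: int) -> bool:
    """Server side: one hash, regardless of how much work the client did."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

if __name__ == "__main__":
    challenge, difficulty = generate_puzzle(difficulty=16)   # keep the demo fast
    nonce = solve_puzzle(challenge, difficulty)
    print(verify_solution(challenge, difficulty, nonce))     # True
```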
Abstract:
Adopting a model of job enrichment, we report on a longitudinal case investigating the perceived impact of an Enterprise Resource Planning (ERP) system on user job design characteristics. Our results indicated that, in the context of an ERP geared towards centralisation and standardisation, the extent to which users perceived an increase or decrease in job enrichment was associated with aspects such as formal authority and the nature of their work role. Experienced operational employees proficient in the original legacy system perceived ERP system protocols to constrain their actions, limit training and increase dependence on others in the workflow. Conversely, managerial users reported a number of benefits relating to report availability, improved organisational transparency and increased overall job enrichment. These results supported our argument concerning the relationship between ERPs with a standardisation intent and positive job enrichment outcomes for managerial users and negative job-related outcomes for operational users.
Abstract:
Over the last twenty years, the use of open content licences has become increasingly and surprisingly popular. The use of such licences challenges the traditional incentive-based model of exclusive rights under copyright. Instead of providing a means to charge for the use of particular works, what seems important is mitigating potential personal harm to the author and, in some cases, preventing non-consensual commercial exploitation. It is interesting in this context to observe the primacy of what are essentially moral rights over the exclusionary economic rights. The core elements of common open content licences map somewhat closely to continental conceptions of the moral rights of authorship. Most obviously, almost all free software and free culture licences require attribution of authorship. More interestingly, there is a tension between the social norms developed in free software communities and those that have emerged in the creative arts over integrity and commercial exploitation. For programmers interested in free software, licence terms that prohibit commercial use or modification are almost completely inconsistent with the ideological and utilitarian values that underpin the movement. For those in the creative industries, on the other hand, non-commercial terms and, to a lesser extent, terms that prohibit all but verbatim distribution continue to play an extremely important role in the sharing of copyright material. While prohibitions on commercial use often serve an economic imperative, there is also a certain personal interest for many creators in avoiding harmful exploitation of their expression – an interest that has sometimes been recognised as forming a component of the moral right of integrity. One particular continental moral right – the right of withdrawal – is present neither in Australian law nor in any of the common open content licences. Despite some marked differences, both free software and free culture participants are using contractual methods to articulate the norms of permissible sharing. Legal enforcement is rare and often prohibitively expensive, and the various communities accordingly rely upon shared understandings of acceptable behaviour. The licences that are commonly used represent a formalised expression of these community norms and provide the theoretically enforceable legal baseline that lends them legitimacy. The core terms of these licences are designed primarily to alleviate risk and minimise transaction costs in sharing and using copyright expression. Importantly, however, the range of available licences reflects different optional balances in the norms of creating and sharing material. Generally, it is possible to see that, stemming particularly from the US, open content licences are fundamentally important in providing a set of normatively accepted copyright balances that reflect the interests sought to be protected through moral rights regimes. As the cost of creation, distribution, storage, and processing of expression continues to fall towards zero, there are increasing incentives to adopt open content licences to facilitate wide distribution and reuse of creative expression. Thinking of these protocols not only as reducing transaction costs but as setting normative principles of participation assists in conceptualising the role of open content licences and the continuing tensions that permeate modern copyright law.
Abstract:
Cloud computing has emerged as a major ICT trend and has been acknowledged as a key theme of industry by prominent ICT organisations. However, one of the major challenges facing the cloud computing concept and its global acceptance is how to secure and protect the data that is the property of the user. The geographic location of cloud data storage centres is an important issue for many organisations and individuals due to the regulations and laws that require data and operations to reside in specific geographic locations. Thus, data owners may need to ensure that their cloud providers do not violate the SLA contract by moving their data to another geographic location. This paper introduces an architecture for a new approach to geographic location assurance, which combines a proof-of-storage (POS) protocol with a distance-bounding protocol. This allows the client to check where their stored data is located, without relying on the word of the cloud provider. This architecture aims to achieve better security and more flexible geographic assurance within the cloud computing environment.
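The paper's exact protocol is not given in the abstract, so the following is a minimal assumed sketch of the combined idea: the verifier challenges the server to return a MAC over a fresh nonce and a randomly chosen block of the stored file, and bounds the server's distance from the measured round-trip time. The key handling, block size, and the use of a full local copy (a real POS scheme would rely on precomputed tags instead) are simplifications for illustration.

```python
# Hedged sketch of combining a storage proof with distance bounding:
# the prover must answer from the actual data, and the round-trip time
# upper-bounds its distance. All parameters are illustrative.
import hmac
import hashlib
import os
import time

SPEED_OF_LIGHT = 299_792_458.0          # m/s
BLOCK_SIZE = 4096

def prover_response(key: bytes, data: bytes, nonce: bytes, block_index: int) -> bytes:
    """Cloud side: MAC over the nonce and the requested block proves possession."""
    block = data[block_index * BLOCK_SIZE:(block_index + 1) * BLOCK_SIZE]
    return hmac.new(key, nonce + block, hashlib.sha256).digest()

def verify_location(key: bytes, local_copy: bytes, ask_prover, max_distance_m: float) -> bool:
    """Client side: check the storage proof and bound distance by round-trip time.

    Keeping a full local copy defeats the point of outsourcing storage; a real
    POS scheme would verify against compact precomputed tags instead.
    """
    nonce = os.urandom(16)
    block_index = int.from_bytes(os.urandom(4), "big") % (len(local_copy) // BLOCK_SIZE)
    start = time.perf_counter()
    response = ask_prover(nonce, block_index)            # network round trip in practice
    rtt = time.perf_counter() - start
    expected = prover_response(key, local_copy, nonce, block_index)
    distance_bound = SPEED_OF_LIGHT * rtt / 2
    return hmac.compare_digest(response, expected) and distance_bound <= max_distance_m

if __name__ == "__main__":
    key, data = os.urandom(32), os.urandom(BLOCK_SIZE * 64)
    honest = lambda nonce, idx: prover_response(key, data, nonce, idx)
    # Local call, so the "distance" is tiny; a remote server would add real latency.
    print(verify_location(key, data, honest, max_distance_m=500_000))   # True
```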
Abstract:
Background: Pre-participation screening is commonly used to measure and assess potential intrinsic injury risk. The single leg squat is one such clinical screening measure used to assess lumbopelvic stability and associated intrinsic injury risk. With the addition of a decline board, the single leg decline squat (SLDS) has been shown to reduce ankle dorsiflexion restrictions and allow greater sagittal plane movement of the hip and knee. On this basis, the SLDS has been employed in the Cricket Australia physiotherapy screening protocols as a measure of lumbopelvic control in place of the more traditional single leg flat squat (SLFS). Previous research has failed to demonstrate which squatting technique allows for a more comprehensive assessment of lumbopelvic stability. Tenuous links are drawn between kinematics and hip strength measures within the literature for the SLS. Formal evaluation of subjective screening methods has also been suggested within the literature. Purpose: This study had several focal points, namely: 1) to compare the kinematic differences between the two single leg squatting conditions, primarily the five key kinematic variables fundamental to subjectively assessing lumbopelvic stability; 2) to determine the effect ankle dorsiflexion range of motion has on squat kinematics in the two squat techniques; 3) to examine the association between key kinematics and subjective physiotherapists' assessment; and finally 4) to explore the association between key kinematics and hip strength. Methods: Nineteen (n=19) subjects performed five SLDS and five SLFS on each leg while being filmed by an 8-camera motion analysis system. Four hip strength measures (internal/external rotation and abduction/adduction) and ankle dorsiflexion range of motion were measured using a hand-held dynamometer and a goniometer, respectively, on 16 of these subjects. The same 16 participants were subjectively assessed by an experienced physiotherapist for lumbopelvic stability. Paired-samples t-tests were performed on the five predetermined kinematic variables to assess the differences between squat conditions. A Bonferroni correction for multiple comparisons was used, which adjusted the significance value to p = 0.005 for the paired t-tests. Linear regressions were used to assess the relationship between kinematics, ankle range of motion and hip strength measures. Bivariate correlations between hip strength measures, kinematics and pelvic obliquity were employed to investigate any possible relationships. Results: 1) Significant kinematic differences between squats were observed in dominant (D) and non-dominant (ND) end-of-range hip external rotation (ND p < 0.001; D p = 0.004) and hip adduction kinematics (ND p < 0.001; D p < 0.001). For the mean angle, significant differences were observed only in the non-dominant leg for hip adduction (p = 0.001) and hip external rotation (p < 0.001); 2) Significant linear relationships were observed between clinical measures of ankle dorsiflexion and sagittal plane kinematics, namely the SLFS dominant ankle (p = 0.006; R2 = .429), SLFS non-dominant knee (p = 0.015; R2 = .352) and SLFS non-dominant ankle (p = 0.027; R2 = .305) kinematics. Only the dominant ankle (p = 0.020; R2 = .331) was found to have a relationship with the decline squat. 3) Strength measures had tenuous associations with the subjective assessments of lumbopelvic stability, with no significant relationships being observed.
4) For the non-dominant leg, external rotation strength and abduction strength were found to be significantly correlated with hip rotation kinematics (Newtons: r = 0.458, p = 0.049; normalised for body weight: r = 0.469, p = 0.043) and pelvic obliquity (normalised for body weight: r = 0.498, p = 0.030), respectively, for the SLFS only. No significant relationships were observed in the dominant leg for either squat condition. Some elements of the hip strength screening protocols had linear relationships with kinematics of the lower limb, particularly the sagittal plane movements of the knee and ankle. Discussion: The key finding of this study was that kinematic differences can occur at the hip without significant kinematic differences at the knee as a result of the introduction of a decline board. Further observations reinforce the role of limited ankle dorsiflexion range of motion in sagittal plane movement of the hip and knee and, in turn, multiplanar kinematics of the lower limb. The kinematic differences between conditions have clinical implications for screening protocols that employ frontal plane movement of the knee as a guide for femoral adduction and rotation. Subjects who returned stronger hip strength measurements also appeared to squat deeper, as characterised by differences in sagittal plane kinematics of the knee and ankle. Despite these findings, the relationship between hip strength and lower limb kinematics remains largely tenuous in the assessment of lumbopelvic stability using the SLS. The association between kinematics and the subjective measures of lumbopelvic stability also remains tenuous between and within SLS screening protocols. More functional measures of hip strength are needed to further investigate these relationships. Conclusion: The type of SLS (flat or decline) should be taken into account when screening for lumbopelvic stability. Changes to lower limb kinematics, especially around the hip and pelvis, were observed with the introduction of a decline board despite no difference in frontal plane knee movements. Differences in passive ankle dorsiflexion range of motion yielded variations in knee and ankle kinematics during a self-selected single leg squatting task. Clinical implications of removing posterior ankle restraints and using the knee as a guide to illustrate changes at the hip may result in inaccurate screening of lumbopelvic stability. The relationship between sagittal plane lower limb kinematics and hip strength may indicate that self-selected squat depth is a useful predictor of lumbopelvic stability. Further research in this area is required.
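The statistical comparison described in the methods (paired t-tests over the key kinematic variables with a Bonferroni-adjusted significance threshold) can be reproduced in outline with standard tools. The snippet below is a generic illustration with made-up variable names and synthetic data, not the study's dataset or analysis code.

```python
# Hedged sketch: paired t-tests over several kinematic variables with a
# Bonferroni-corrected significance threshold, as described in the methods.
# Variable names and data are illustrative, not the study's.
import numpy as np
from scipy import stats

def compare_squat_conditions(flat: dict, decline: dict, alpha: float = 0.05):
    """flat/decline map variable name -> per-subject values (same subject order)."""
    adjusted_alpha = alpha / len(flat)          # Bonferroni correction
    results = {}
    for name in flat:
        t_stat, p_value = stats.ttest_rel(flat[name], decline[name])
        results[name] = (t_stat, p_value, p_value < adjusted_alpha)
    return adjusted_alpha, results

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    variables = ["hip_adduction", "hip_external_rotation", "knee_valgus",
                 "pelvic_obliquity", "trunk_lean"]
    flat = {v: rng.normal(10, 2, size=19) for v in variables}
    decline = {v: flat[v] + rng.normal(0.5, 1, size=19) for v in variables}
    alpha, results = compare_squat_conditions(flat, decline)
    print(f"Bonferroni-adjusted alpha: {alpha:.3f}")
    for name, (t, p, significant) in results.items():
        print(f"{name}: t = {t:.2f}, p = {p:.4f}, significant = {significant}")
```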
Abstract:
Road traffic accidents can be reduced by providing early warnings to drivers through wireless ad hoc networks. When a vehicle detects an event that may lead to an imminent accident, the vehicle disseminates emergency messages to alert other vehicles that may be endangered by the accident. In many existing broadcast-based dissemination schemes, emergency messages may be sent to a large number of vehicles in the area and can be propagated in only one direction. This paper presents a more efficient context-aware multicast protocol that disseminates messages only to endangered vehicles that may be affected by the emergency event. The endangered vehicles can be identified by calculating the interaction among vehicles based on their motion properties. To ensure fast delivery, the dissemination follows a routing path obtained by computing a minimum delay tree. The multicast protocol uses a generalized approach that can support any arbitrary road topology. The performance of the multicast protocol is compared with existing broadcast protocols by simulating chain collision accidents on a typical highway. Simulation results show that the multicast protocol outperforms the other protocols in terms of reliability, efficiency, and latency.
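The minimum delay tree mentioned above is not specified in detail in the abstract. A common way to realise one is a shortest-path tree over estimated per-hop link delays, as in the assumed sketch below (Dijkstra from the source vehicle, then pruning to the endangered recipients); the link delays and graph structure are illustrative.

```python
# Hedged sketch: build a minimum-delay dissemination tree as a Dijkstra
# shortest-path tree over per-hop link delays, then keep only the branches
# needed to reach the endangered vehicles. Delays are illustrative.
import heapq

def min_delay_tree(links, source, recipients):
    """links: {node: [(neighbour, delay_s), ...]}. Returns {node: parent} covering recipients."""
    dist, parent = {source: 0.0}, {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, delay in links.get(u, []):
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Prune the shortest-path tree to the branches that reach recipients.
    tree = {}
    for r in recipients:
        node = r
        while node is not None and node not in tree:
            tree[node] = parent.get(node)
            node = parent.get(node)
    return tree

if __name__ == "__main__":
    links = {
        "A": [("B", 0.02), ("C", 0.05)],
        "B": [("C", 0.01), ("D", 0.04)],
        "C": [("D", 0.02)],
    }
    # Event detected by A; only B and D are endangered.
    print(min_delay_tree(links, source="A", recipients={"B", "D"}))
```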
Abstract:
Exercise-induced muscle damage is an important topic in exercise physiology. However, several aspects of our understanding of how muscles respond to highly stressful exercise remain unclear. In the first section of this review we address the evidence that exercise can cause muscle damage and inflammation in otherwise healthy human skeletal muscles. We approach this concept by comparing changes in muscle function (i.e., the force-generating capacity) with the degree of leucocyte accumulation in muscle following exercise. In the second section, we explore the cytokine response to 'muscle-damaging exercise', primarily eccentric exercise. We review the evidence for the notion that the degree of muscle damage is related to the magnitude of the cytokine response. In the third and final section, we look at the satellite cell response to a single bout of eccentric exercise, as well as the role of the cyclooxygenase enzymes (COX-1 and COX-2). In summary, we propose that muscle damage as evaluated by changes in muscle function is related to leucocyte accumulation in the exercised muscles. 'Extreme' exercise protocols, encompassing unaccustomed maximal eccentric exercise across a large range of motion, generally inflict severe muscle damage, inflammation and prolonged recovery (> 1 week). By contrast, exercise resembling regular athletic training (resistance exercise and downhill running) typically causes mild muscle damage (myofibrillar disruptions), and full recovery normally occurs within a few days. Large variation in individual responses to a given exercise should, however, be expected. The link between cytokine and satellite cell responses and exercise-induced muscle damage is not so clear. The systemic cytokine response may be linked more closely to the metabolic demands of exercise than to muscle damage. With the exception of IL-6, the sources of systemic cytokines following exercise remain unclear. The satellite cell response to severe muscle damage is related to regeneration, whereas the biological significance of satellite cell proliferation after mild damage or non-damaging exercise remains uncertain. The COX enzymes regulate satellite cell activity, as demonstrated in animal models; however, the roles of the COX enzymes in human skeletal muscle need further investigation. We suggest using the term 'muscle damage' with care. Comparisons between studies and individuals must consider changes in, and recovery of, muscle force-generating capacity.
Abstract:
Fibroin extracted from silkworm cocoon silk provides an intriguing and potentially important biomaterial for corneal reconstruction. In the present chapter we outline our methods for producing a composite of two fibroin-based materials that supports the co-cultivation of human limbal epithelial (HLE) cells and human limbal stromal (HLS) cells. The resulting tissue substitute consists of a stratified epithelium overlying a three-dimensional arrangement of extracellular matrix components (principally ‘degummed’ fibroin fibers) and mesenchymal stromal cells. This tissue substitute is currently being evaluated as a tool for reconstructing the corneal limbus and corneal epithelium.
Abstract:
This paper describes observational research and verbal protocol methods, how these methods are applied and integrated within different contexts, and how they complement each other. The first case study focuses on nurses' interaction during the bandaging of patients' lower legs. To maintain research rigor, a triangulation approach was applied that links observations of current procedures, a 'talk-aloud' protocol during interaction, and a retrospective protocol. Maps of interaction demonstrated that some nurses bandage more intuitively than others. Nurses who bandage intuitively assemble long sequences of bandaging actions, while nurses who bandage less intuitively 'focus-shift' between bandaging actions. Thus, different levels of expertise have been identified. The second case study consists of two laboratory experiments. It focuses on analysing and comparing software and product design teams and how they approached a design problem, and is based on observational and verbal data analysis. The coding scheme applied evolved during the analysis of the activity of each team and is identical for all teams. The structure of the knowledge captured from the analysis of the design team maps of interaction is identified. The significance of this work lies in its methodological approach. The maps of interaction are instrumental for understanding the activities and interactions of the people observed. By examining the maps of interaction, it is possible to draw conclusions about interactions, the structure of knowledge captured, and the level of expertise. This research approach is transferable to other design domains. Designers will be able to transfer the interaction map outcomes to the systems and services they design.