943 results for multi-user setting
Abstract:
The theoretical foundation of this study comes from the significant recurrence throughout the leadership literature of two distinct behaviors, task orientation and relationship orientation. Task orientation and relationship orientation are assumed to be generic behaviors, which are universally observed and applied in organizations, even though they may be uniquely enacted in organizations across cultures. The lack of empirical evidence supporting these assumptions provided the impetus to hypothesize and empirically confirm the universal application of task orientation and relationship orientation and the generalizability of their measurement in a cross-cultural setting. Task orientation and relationship orientation are operationalized through consideration and initiation of structure, two well-established theoretical leadership constructs. Multiple-group mean and covariance structures (MACS) analyses are used to simultaneously validate the generalizability of the two hypothesized constructs across the 12 cultural groups and to assess whether the similarities and differences discovered are measurement and scaling artifacts or reflect true cross-cultural differences. The data were collected by the author and others as part of a larger international research project and comprise 2,341 managers from 12 countries/regions. The results provide compelling evidence that task orientation and relationship orientation, reliably and validly operationalized through consideration and initiation of structure, are generalizable across the countries/regions sampled. However, the results also reveal significant differences in the perception of these behaviors, suggesting that some aspects of task orientation and relationship orientation are strongly affected by cultural influences. These similarities and differences reflect directly interpretable, error-free effects among the constructs at the behavioral level. Thus, task orientation and relationship orientation can demonstrate different relations among cultures, yet still be defined equivalently across the cultures studied. The differences found in this study are true differences and may contain information about the cultural influences characterizing each cultural context (i.e., group). The nature of such influences should be examined before the results can be meaningfully interpreted. To examine the effects of cultural characteristics on the constructs, additional hypotheses on the constructs' latent parameters can be tested across groups. Construct-level tests are illustrated in hypothetical examples in light of the study's results. The study contributes significantly to the theoretical understanding of the nature and generalizability of psychological constructs. The theoretical and practical implications of embedding context into a unified theory of task-oriented and relationship-oriented leader behavior are proposed. Limitations and contributions are also discussed.
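For reference, the MACS approach mentioned above rests on the standard multiple-group confirmatory factor model; the sketch below uses textbook notation only and is not taken from the thesis itself.

```latex
% Multiple-group mean and covariance structures (MACS) model for group g:
\[
  \mathbf{x}_{ig} = \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g\,\boldsymbol{\xi}_{ig} + \boldsymbol{\delta}_{ig},
  \qquad
  \boldsymbol{\Sigma}_g = \boldsymbol{\Lambda}_g \boldsymbol{\Phi}_g \boldsymbol{\Lambda}_g^{\top} + \boldsymbol{\Theta}_g,
  \qquad
  \boldsymbol{\mu}_g = \boldsymbol{\tau}_g + \boldsymbol{\Lambda}_g \boldsymbol{\kappa}_g .
\]
% Measurement equivalence across the G cultural groups is supported when the
% loadings and intercepts can be constrained equal
% (\Lambda_1 = \dots = \Lambda_G and \tau_1 = \dots = \tau_G) without a
% substantial loss of fit; remaining group differences in the latent means
% \kappa_g and covariances \Phi_g can then be interpreted as true,
% error-free differences at the construct level.
```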
Abstract:
With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch, 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The design of the experiment also included an equation that determined the subjects' level of video game console expertise, which was used to divide users into two groups: casual users and experienced users. The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display than using the GamePad. Additional results are found in this dissertation.
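The dissertation's Yield algorithm is not reproduced here; the sketch below is a hypothetical illustration of the general problem it addresses (choosing one gesture when the initial classification reports several candidates), with invented names and a simple confidence/priority rule.

```python
# Hypothetical sketch: resolving ambiguity when a multi-touch classifier
# (e.g., GestureWorks) reports several candidate gestures for the same touch
# input. The names and the confidence/priority rule are illustrative
# assumptions, not the Yield algorithm as defined in the dissertation.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str          # e.g. "pinch", "rotate", "two-finger-drag"
    confidence: float  # classifier score in [0, 1]
    priority: int      # application-defined precedence for near-ties

def resolve_ambiguity(candidates: list[Candidate],
                      min_confidence: float = 0.5) -> Candidate | None:
    """Keep only sufficiently confident candidates, then pick the one with
    the highest (confidence, priority) pair; return None if nothing passes."""
    viable = [c for c in candidates if c.confidence >= min_confidence]
    if not viable:
        return None
    return max(viable, key=lambda c: (c.confidence, c.priority))

if __name__ == "__main__":
    reported = [Candidate("pinch", 0.72, 2),
                Candidate("rotate", 0.70, 3),
                Candidate("two-finger-drag", 0.41, 1)]
    winner = resolve_ambiguity(reported)
    print(winner.name if winner else "no gesture")   # -> pinch
```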
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. The use of multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e., the application's dependency on a particular cloud platform, which is harmful in the case of degradation or failure of platform services, or even price increases for service usage; and (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or even due to the failure of any service. In a multi-cloud scenario, it is possible to replace a failing service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms that are able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in the development of such applications include: (i) the choice of which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability, and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect the user-defined requirements. This PhD thesis proposes an approach for the dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration meets them more efficiently. Thus, this work proposes a strategy composed of two phases. The first phase consists of modeling the application, exploring the capacity to represent similarities and variability proposed in the context of the Software Product Line (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (similarities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified through properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation. In this work, we implement the adaptation strategy using several programming techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed phases, we sought to assess the following: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques present the best trade-off between development/modularity effort and performance.
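As a minimal illustration of the selection step of such a MAPE-K loop, the sketch below scores candidate multi-cloud configurations against user-defined requirements; the providers, metrics, weights, and objective are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch of the Analyze/Plan step of a MAPE-K loop that picks a
# multi-cloud configuration. Providers, metrics, and thresholds are
# illustrative assumptions.
from itertools import product

# Monitored (dynamic) information per candidate provider of each service.
candidates = {
    "storage":  [{"provider": "cloudA", "response_ms": 40, "availability": 0.999, "price": 0.10},
                 {"provider": "cloudB", "response_ms": 65, "availability": 0.995, "price": 0.07}],
    "database": [{"provider": "cloudA", "response_ms": 30, "availability": 0.999, "price": 0.20},
                 {"provider": "cloudC", "response_ms": 55, "availability": 0.990, "price": 0.12}],
}

# User-defined non-functional requirements (hard constraints).
requirements = {"response_ms": 70, "availability": 0.99}

def feasible(cfg):
    return all(s["response_ms"] <= requirements["response_ms"] and
               s["availability"] >= requirements["availability"] for s in cfg)

def cost(cfg):
    # Simple objective: total price plus a small penalty for latency.
    return sum(s["price"] + 0.001 * s["response_ms"] for s in cfg)

def plan():
    """Enumerate configurations (one provider per service), keep the feasible
    ones, and return the cheapest; the Execute step would then re-bind the
    application's services to the chosen providers."""
    configs = [cfg for cfg in product(*candidates.values()) if feasible(cfg)]
    return min(configs, key=cost) if configs else None

best = plan()
print([s["provider"] for s in best] if best else "no feasible configuration")
```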
Abstract:
The multiuser selection scheduling concept has recently been proposed in the literature in order to increase the multiuser diversity gain and overcome the significant feedback requirements of opportunistic scheduling schemes. The main idea is that reducing the feedback overhead saves per-user power that could potentially be added to the data transmission. In this work, the authors propose to integrate the principle of multiuser selection with the proportional fair scheduling scheme. This is aimed especially at power-limited, multi-device systems in non-identically distributed fading channels. For the performance analysis, they derive closed-form expressions for the outage probabilities and the average system rate of delay-sensitive and delay-tolerant systems, respectively, and compare them with full feedback multiuser diversity schemes. The discrete rate region is analytically presented, where the maximum average system rate can be obtained by properly choosing the number of partial devices. They jointly optimise the number of partial devices and the per-device power saving in order to maximise the average system rate under the power requirement. Through the authors' results, they finally demonstrate that the proposed scheme, which leverages the saved feedback power for the data transmission, can outperform full feedback multiuser diversity in non-identical Rayleigh fading of the devices' channels.
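A simplified simulation sketch of the scheme described above is given below: only the K strongest devices feed back in each slot, and a proportional fair rule picks among them. All parameter values and the rate model are illustrative assumptions, not the paper's analytical setup.

```python
# Simplified simulation sketch: multiuser selection (only the K strongest
# devices feed back) combined with proportional fair (PF) scheduling over
# non-identically distributed Rayleigh fading. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_devices, K, n_slots, t_c = 8, 3, 10_000, 100.0
mean_snr = np.linspace(2.0, 10.0, n_devices)   # non-identical average SNRs
avg_rate = np.full(n_devices, 1e-3)            # PF throughput averages
total_rate = 0.0

for _ in range(n_slots):
    snr = rng.exponential(mean_snr)            # Rayleigh fading -> exponential SNR
    rate = np.log2(1.0 + snr)                  # instantaneous achievable rates
    fed_back = np.argsort(rate)[-K:]           # multiuser selection: best K feed back
    chosen = fed_back[np.argmax(rate[fed_back] / avg_rate[fed_back])]  # PF rule
    served = np.where(np.arange(n_devices) == chosen, rate, 0.0)
    avg_rate = (1.0 - 1.0 / t_c) * avg_rate + served / t_c             # PF averaging
    total_rate += rate[chosen]

print(f"average system rate: {total_rate / n_slots:.2f} bit/s/Hz")
```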
Physical Layer Security with Threshold-Based Multiuser Scheduling in Multi-antenna Wireless Networks
Abstract:
In this paper, we consider a multiuser downlink wiretap network consisting of one base station (BS) equipped with A_A antennas, N_B single-antenna legitimate users, and N_E single-antenna eavesdroppers over Nakagami-m fading channels. In particular, we introduce a joint secure transmission scheme that adopts transmit antenna selection (TAS) at the BS and explores threshold-based selection diversity (tSD) scheduling over the legitimate users to achieve good secrecy performance while maintaining low implementation complexity. More specifically, in an effort to quantify the secrecy performance of the considered system, two practical scenarios are investigated, i.e., Scenario I: the eavesdropper's channel state information (CSI) is unavailable at the BS, and Scenario II: the eavesdropper's CSI is available at the BS. For Scenario I, novel exact closed-form expressions of the secrecy outage probability are derived, which are valid for general networks with an arbitrary number of legitimate users, antenna configurations, number of eavesdroppers, and switched threshold. For Scenario II, we take the ergodic secrecy rate as the principal performance metric and derive novel closed-form expressions of the exact ergodic secrecy rate. Additionally, we also provide simple and asymptotic expressions for the secrecy outage probability and ergodic secrecy rate under two distinct cases, i.e., Case I: the legitimate user is located close to the BS, and Case II: both the legitimate user and the eavesdropper are located close to the BS. Our important findings reveal that the secrecy diversity order is A_A m_A and the slope of the secrecy rate is one under Case I, while the secrecy diversity order and the slope of the secrecy rate collapse to zero under Case II, where a secrecy performance floor occurs. Finally, when the switched threshold is carefully selected, the considered scheduling scheme outperforms other well-known existing schemes in terms of the secrecy performance and complexity tradeoff.
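The closed-form analysis is not reproduced here; the sketch below is a rough Monte Carlo estimate of the Scenario I secrecy outage probability under simplifying assumptions (Rayleigh rather than general Nakagami-m fading, unit average SNRs, per-user antenna selection, and a sequential-probing reading of the switched threshold), with all values illustrative.

```python
# Rough Monte Carlo sketch of Scenario I: secrecy outage probability with
# transmit antenna selection (TAS) at the BS and threshold-based selection
# diversity (tSD) scheduling over the legitimate users. Simplifying
# assumptions: Rayleigh fading, unit average SNRs, illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
A_A, N_B, N_E = 4, 3, 2        # BS antennas, legitimate users, eavesdroppers
gamma_th = 2.0                 # switched threshold on the legitimate SNR
R_s = 1.0                      # target secrecy rate (bit/s/Hz)
trials = 100_000
outages = 0

for _ in range(trials):
    # TAS: for each user, take the best of the A_A transmit antennas.
    g_users = rng.exponential(1.0, (N_B, A_A)).max(axis=1)
    # tSD: probe users in order and serve the first one whose SNR exceeds the
    # threshold; if none qualifies, fall back to the strongest user.
    above = np.nonzero(g_users >= gamma_th)[0]
    served = above[0] if above.size else np.argmax(g_users)
    g_b = g_users[served]
    g_e = rng.exponential(1.0, N_E).max()      # strongest eavesdropper SNR
    c_s = max(np.log2(1.0 + g_b) - np.log2(1.0 + g_e), 0.0)
    outages += c_s < R_s

print(f"estimated secrecy outage probability: {outages / trials:.4f}")
```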
Abstract:
Previous research has highlighted the importance of positive physical activity (PA) behaviours during childhood to promote sustained active lifestyles throughout the lifespan (Telama et al. 2005; 2014). It is in this context that the role of schools and teachers in facilitating PA education is promoted. Research suggests that teachers play an important role in shaping children's attitudes towards PA (Figley 1985) and that schools may be an efficient vehicle for PA provision and promotion (McGinnis, Kanner and DeGraw, 1991; Wechsler, Deveraux, Davis and Collins, 2000). Yet despite consensus that schools represent an ideal setting from which to 'reach' young people (Department of Health and Human Services, UK, 2012), there remains conceptual (e.g. multi-component intervention) and methodological (e.g. duration, intensity, family involvement) ambiguity regarding the mechanisms of change claimed by PA intervention programmes. This may, in part, contribute to research findings suggesting that PA interventions have had limited impact on children's overall activity levels and thereby limited impact in improving children's metabolic health (Metcalf, Henley & Wilkin, 2012). A marked criticism of the health promotion field has been its focus on behavioural change while failing to acknowledge the impact of context in influencing health outcomes (Golden & Earp, 2011). For years, the trans-theoretical model of behaviour change has been 'the dominant model for health behaviour change' (Armitage, 2009); this model focuses primarily on the individual and the psychology of the change process. Arguably, this model is limited by the individual's decision-making ability and degree of self-efficacy needed to achieve sustained behavioural change, and it does not take account of external factors that may hinder the ability to realise change. Similar to the trans-theoretical model, socio-ecological models place the individual at the focal point of change but also emphasise the importance of connecting multiple impacting variables, in particular the connections between the social environment, the physical environment and public policy in facilitating behavioural change (REF). In this research, a social-ecological framework was used to connect the ways a PA intervention programme had an impact (or not) on participants, and to make explicit the foundational features of the programme that facilitated positive change. In this study, we examined the evaluation of a multi-agency approach to a PA intervention programme which aimed to increase physical activity, and awareness of its importance, among key stage 2 (age 7-12) pupils in three UK primary schools. The agencies involved were the local health authority, a community-based charitable organisation, a local health administrative agency, and the city school district. In examining the impact of the intervention, we adopted a process evaluation model in order to better understand the mechanisms and context that facilitated change. Therefore, the aim of this evaluation was to describe the provision, process and impact of the intervention by 1) assessing changes in physical activity levels, 2) assessing changes in the students' attitudes towards physical activity, 3) examining students' perceptions of the child-sized fitness equipment in school and their likelihood of using the equipment outside of school, and 4) exploring staff perceptions, specifically the challenges and benefits, of facilitating equipment-based exercise sessions in the school environment.
Methodology, Methods, Research Instruments or Sources Used: Evaluation of the intervention was designed as a matched-control study and was undertaken over a seven-month period. The school-based intervention involved 3 intervention schools (n = 436; 224 boys) and one control school (n = 123; 70 boys) in a low-socioeconomic, multicultural urban setting. The PA intervention was separated into two phases: a motivational DVD and 10 days of circuit-based exercise sessions (Phase 1), followed by a maintenance phase (Phase 2) that incorporated a PA reward programme and the use of specialist kids' gym equipment located at each school for a period of 4 weeks. Outcome measures were taken at baseline (January) and endpoint (July; end of the academic school year) using reliable and valid self-report measures. The children's attitudes towards PA were assessed using the Children's Attitudes towards Physical Activity (CATPA) questionnaire. The Physical Activity Questionnaire for Children (PAQ-C), a 7-day recall questionnaire, was used to assess PA levels over a school week. A standardised test battery (Fitnessgram®) was used to assess cardiovascular fitness, body composition, muscular strength and endurance, and flexibility. After the 4-week period, similar kids' equipment was available for general access at local community facilities. The control school did not receive any of the interventions. All physical fitness tests and PA questionnaires were administered and collected prior to the start of the intervention (January) and following the intervention period (July) by an independent evaluation team. Evaluation testing took place at the individual schools over 2-3 consecutive days (depending on the number of children to be tested at the school). Staff (n = 19) and student (n = 436) perceptions of the child-sized fitness equipment were assessed via questionnaires post-intervention. Students completed a questionnaire assessing enjoyment, usage, ease of use, and equipment access and usage in the community. A separate questionnaire assessed staff perceptions of the delivery of the exercise sessions, classroom engagement and student perceptions.
Conclusions, Expected Outcomes or Findings: Findings showed that both the intervention (16.4%) and control groups increased their PAQ-C score by post-intervention (p < 0.05), with the intervention (17.8%) and control (21.3%) boys showing the greatest increase in physical activity levels. At post-intervention, there was a 5.5% decline in the intervention girls' attitudes towards PA in the aesthetic subdomain (p = 0.009), whereas the control boys showed an increase in positive attitudes in the health domain (p = 0.003). No significant differences in attitudes towards physical activity were observed in any other domain for either group at post-intervention (p > 0.05). In the equipment questionnaire, 96% of the children stated they enjoyed using the equipment and would like to use it again in the future; however, at post-intervention only 27% reported using the equipment outside of school in the last 7 days. Students identified the ski walker (34%) and cycle (32%) as their favourite pieces of equipment, and single-joint exercises such as the leg extension and bicep/tricep machines (<3%) as their least favourite. Key themes from staff were that the equipment sessions were enjoyable, a novel activity, that children felt very grown-up, and that the activity was linked to a real fitness experience.
They also expressed the need for more support to deliver the sessions and more time for each session. Findings from this study suggest that a more integrated approach across the various agencies is required, particularly more support to increase teachers' pedagogical content knowledge in age-appropriate physical activity instruction. Future recommendations for successful implementation include a sufficient time period for all students to access and engage with the equipment, increased access to and marketing of facilities to parents within the local community, and professional teacher support strategies to facilitate the exercise sessions.
Abstract:
Recent paradigms in wireless communication architectures describe environments where nodes present highly dynamic behaviour (e.g., User Centric Networks). In such environments, routing is still performed based on the regular packet-switched behaviour of store-and-forward. Albeit sufficient to compute at least an adequate path between a source and a destination, such routing behaviour cannot adequately sustain the highly nomadic lifestyle that Internet users are experiencing today. This thesis aims to analyse the impact of node mobility on routing scenarios. It also aims at the development of forwarding concepts that help message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work involved the analysis of the impact of mobility on routing; we found that node mobility can significantly affect routing performance, and that this impact depends on link length, distance, and the mobility patterns of nodes. The study of current mobility parameters showed that they only partially capture mobility. A routing protocol's robustness to node mobility depends on the sensitivity of its routing metric to node mobility. As such, mobility-aware routing metrics were devised to increase routing robustness to node mobility. The two categories of routing metrics proposed are time-based and spatial-correlation-based metrics. For the validation of the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool using two widely used multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared to the benchmark metric, meaning that more stable nodes were used to route data. The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted variation in the performance of the metrics, including the benchmark metric, under different mobility models, due to differences in the rules governing node mobility in each model.
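One plausible shape of a time-based, mobility-aware link metric is sketched below: an exponentially smoothed estimate of how long a link has stayed up, used to make stable links cheaper for route computation. This illustrates the idea only and is not the thesis's exact metric definition.

```python
# Sketch of a time-based, mobility-aware link metric: links that have been
# continuously observed for longer are treated as more stable and therefore
# cheaper for routing. The smoothing rule and cost formula are illustrative
# assumptions, not the exact metrics defined in the thesis.
import time


class LinkStability:
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                       # smoothing factor
        self.first_seen: dict[tuple, float] = {}
        self.stability: dict[tuple, float] = {}

    def observe(self, link: tuple, now: float | None = None) -> None:
        """Record that `link` (a, b) was seen up in the latest HELLO exchange."""
        now = time.time() if now is None else now
        age = now - self.first_seen.setdefault(link, now)
        prev = self.stability.get(link, 0.0)
        self.stability[link] = (1 - self.alpha) * prev + self.alpha * age

    def cost(self, link: tuple) -> float:
        """Routing cost: long-lived links approach cost 1, freshly appeared
        links are penalised so that paths over them tend to be avoided."""
        return 1.0 + 1.0 / (1.0 + self.stability.get(link, 0.0))


metric = LinkStability()
for t in (0, 1, 2, 3, 4):                 # link (A, B) observed for 4 seconds
    metric.observe(("A", "B"), now=t)
metric.observe(("A", "C"), now=4)         # link (A, C) just appeared
print(metric.cost(("A", "B")), metric.cost(("A", "C")))
```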
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grained (i.e., role) and fine-grained (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. The tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems, such as mobile devices, and for the analysis of network traffic.
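A toy sketch of the underlying idea follows: per-user n-gram counts are built over action sequences extracted from web logs, and sessions whose likelihood under the user's own model is unusually low are flagged. The smoothing, the threshold, and the action names are illustrative assumptions, not Intruder Detector's actual implementation.

```python
# Toy sketch of n-gram-based continuous authentication: a per-user bigram
# model over web-log actions scores new sessions, and sessions that deviate
# strongly from the user's "normal behavior" are flagged.
import math
from collections import Counter

def train_bigrams(actions: list[str]) -> tuple[Counter, Counter]:
    """Count bigrams and unigrams of user actions extracted from web logs."""
    return Counter(zip(actions, actions[1:])), Counter(actions)

def avg_log_likelihood(actions, bigrams, unigrams, vocab_size, k=1.0):
    """Average log-probability per transition, with add-k smoothing."""
    total = 0.0
    for prev, cur in zip(actions, actions[1:]):
        p = (bigrams[(prev, cur)] + k) / (unigrams[prev] + k * vocab_size)
        total += math.log(p)
    return total / max(len(actions) - 1, 1)

# Training history for one user (hypothetical web-log action names).
history = ["login", "search", "view", "search", "view", "download", "logout"] * 50
bigrams, unigrams = train_bigrams(history)
vocab = len(set(history))

typical = ["login", "search", "view", "download", "logout"]
unusual = ["login", "export", "export", "delete", "export", "logout"]
threshold = -1.0   # illustrative cut-off on the average log-likelihood

for label, session in (("typical session", typical), ("unusual session", unusual)):
    score = avg_log_likelihood(session, bigrams, unigrams, vocab)
    verdict = "flag for re-authentication" if score < threshold else "consistent with user"
    print(f"{label}: {score:.2f} -> {verdict}")
```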
Abstract:
Choosing a single similarity threshold for cutting dendrograms is not sufficient for performing hierarchical clustering analysis of heterogeneous data sets. In addition, alternative automated or semi-automated methods that cut dendrograms in multiple levels make assumptions about the data in hand. In an attempt to help the user to find patterns in the data and resolve ambiguities in cluster assignments, we developed MLCut: a tool that provides visual support for exploring dendrograms of heterogeneous data sets in different levels of detail. The interactive exploration of the dendrogram is coordinated with a representation of the original data, shown as parallel coordinates. The tool supports three analysis steps. Firstly, a single-height similarity threshold can be applied using a dynamic slider to identify the main clusters. Secondly, a distinctiveness threshold can be applied using a second dynamic slider to identify “weak-edges” that indicate heterogeneity within clusters. Thirdly, the user can drill-down to further explore the dendrogram structure - always in relation to the original data - and cut the branches of the tree at multiple levels. Interactive drill-down is supported using mouse events such as hovering, pointing and clicking on elements of the dendrogram. Two prototypes of this tool have been developed in collaboration with a group of biologists for analysing their own data sets. We found that enabling the users to cut the tree at multiple levels, while viewing the effect in the original data, is a promising method for clustering which could lead to scientific discoveries.
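MLCut itself is interactive; the sketch below is a non-interactive illustration of the first two analysis steps (a single-height cut, then flagging "weak edges") using SciPy's hierarchical clustering, with a simple stand-in for the distinctiveness measure.

```python
# Non-interactive sketch of the first two MLCut steps using SciPy:
# (1) a single-height similarity threshold cut, and (2) a crude
# "distinctiveness" check that flags merges joining two already tight
# subtrees at a much larger height ("weak edges"). The distinctiveness
# measure and thresholds are illustrative stand-ins, not MLCut's own.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
# Heterogeneous toy data: two tight clusters and one loose one.
data = np.vstack([rng.normal(0, 0.2, (20, 4)),
                  rng.normal(3, 0.2, (20, 4)),
                  rng.normal(6, 1.5, (20, 4))])

Z = linkage(data, method="average")          # dendrogram as a merge table

# Step 1: single-height cut, as with MLCut's first slider.
labels = fcluster(Z, t=2.0, criterion="distance")
print("clusters at height 2.0:", len(set(labels)))

# Step 2: flag "weak edges": merges whose height greatly exceeds the heights
# of the two subtrees being joined.
heights = Z[:, 2]
for i, (a, b, h, _) in enumerate(Z):
    child_h = max(heights[int(a) - len(data)] if a >= len(data) else 0.0,
                  heights[int(b) - len(data)] if b >= len(data) else 0.0)
    if child_h > 0 and h > 3.0 * child_h:
        print(f"weak edge at merge {i}: height {h:.2f} vs subtrees {child_h:.2f}")
```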
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. Agent-Based Simulation (ABS), one way of using intelligent agents, carries great potential for progressing our understanding of management practices and how they link to retail performance. We have developed simulation models based on research by a multi-disciplinary team of economists, work psychologists and computer scientists. We will discuss our experiences of implementing these concepts working with a well-known retail department store. There is no doubt that management practices are linked to the performance of an organisation (Reynolds et al., 2005; Wall & Wood, 2005). Best practices have been developed, but when it comes down to the actual application of these guidelines considerable ambiguity remains regarding their effectiveness within particular contexts (Siebers et al., forthcoming a). Most Operational Research (OR) methods can only be used as analysis tools once management practices have been implemented. Often they are not very useful for giving answers to speculative ‘what-if’ questions, particularly when one is interested in the development of the system over time rather than just the state of the system at a certain point in time. Simulation can be used to analyse the operation of dynamic and stochastic systems. ABS is particularly useful when complex interactions between system entities exist, such as autonomous decision making or negotiation. In an ABS model the researcher explicitly describes the decision process of simulated actors at the micro level. Structures emerge at the macro level as a result of the actions of the agents and their interactions with other agents and the environment. We will show how ABS experiments can deal with testing and optimising management practices such as training, empowerment or teamwork. Hence, questions such as “will staff setting their own break times improve performance?” can be investigated.
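A minimal sketch of the kind of "what-if" experiment described above is given below: staff agents either follow a fixed break rota or choose their own breaks, and the macro-level outcome (customers served and lost) emerges from those micro-level decisions. All rules and numbers are illustrative assumptions, not the department-store model discussed here.

```python
# Minimal agent-based sketch of the "will staff setting their own break times
# improve performance?" question. Micro-level decisions (break choices) drive
# the macro-level outcome (customers served/lost). Illustrative only.
import random

def simulate(self_set_breaks: bool, minutes: int = 8 * 60, n_staff: int = 5,
             seed: int = 42) -> tuple[int, int]:
    rng = random.Random(seed)
    if self_set_breaks:
        breaks = [rng.randrange(0, minutes - 30) for _ in range(n_staff)]  # staggered
    else:
        breaks = [minutes // 2] * n_staff                                  # all at once
    served = lost = queue = 0
    for minute in range(minutes):
        if rng.random() < 0.9:                       # a customer arrives
            if queue < 10:
                queue += 1
            else:
                lost += 1                            # walks out if the queue is long
        on_duty = sum(not (b <= minute < b + 30) for b in breaks)
        can_serve = min(queue, (on_duty + 1) // 3)   # roughly 3 minutes per sale
        served += can_serve
        queue -= can_serve
    return served, lost

for label, mode in (("fixed rota", False), ("self-chosen breaks", True)):
    served, lost = simulate(mode)
    print(f"{label}: served={served}, lost={lost}")
```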
Abstract:
The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility by constructing a simulation model, step by step, from its elementary building blocks, so that the operation of the real system can be reproduced as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that one could use to find an "optimal" or "best" product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations that influence the behaviour of the entire system occur during operation. The result of the simulation, the "best" batch schedule, is the one that minimizes the operational costs in the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. The key factor determining the performance of the simulation model is the way time is represented. In our model, an event-based discrete-time representation is selected as most appropriate for our purposes. This means that the time horizon is divided into intervals of unequal length, based on events that change the state of the system. These events are the arrivals/departures of the tanker ships, the openings and closures of the loading/unloading valves of the storage tanks at both terminals, and the arrivals/departures of trains/trucks at the Delivery Terminal. In the feasibility study we analyse the system's operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluated the effect of different magnitudes of tanker ship delay on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
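The event-based time representation described above can be sketched as a priority queue of timestamped events whose handlers advance the clock in unequal steps; the event types and handler below are simplified placeholders, not the actual pipeline model.

```python
# Sketch of the event-based discrete-time representation: the horizon is not
# sliced uniformly; instead the clock jumps between timestamped events
# (tanker arrivals/departures, valve openings/closures, train/truck arrivals).
# Event types and handler bodies are simplified placeholders.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float
    kind: str = field(compare=False)      # e.g. "tanker_arrival", "valve_open"
    data: dict = field(compare=False, default_factory=dict)

class PipelineSimulation:
    def __init__(self):
        self.clock = 0.0
        self.events: list[Event] = []

    def schedule(self, ev: Event) -> None:
        heapq.heappush(self.events, ev)

    def run(self, horizon: float) -> None:
        while self.events and self.events[0].time <= horizon:
            ev = heapq.heappop(self.events)
            self.clock = ev.time          # time advances in unequal steps
            self.handle(ev)

    def handle(self, ev: Event) -> None:
        # Real handlers would update tank levels, batch positions and costs,
        # and schedule follow-up events (e.g. valve closure after unloading).
        print(f"t={self.clock:6.1f}h  {ev.kind}  {ev.data}")
        if ev.kind == "tanker_arrival":
            self.schedule(Event(ev.time + 1.0, "valve_open", {"tank": ev.data["berth"]}))

sim = PipelineSimulation()
sim.schedule(Event(0.0, "tanker_arrival", {"berth": "T1"}))
sim.schedule(Event(36.5, "train_arrival", {"terminal": "Delivery"}))
sim.run(horizon=8760.0)   # one-year horizon in hours
```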