Abstract:
Entrepreneurship, understood as the autonomous, effective pursuit of opportunities regardless of resources, is currently subject to a multitude of interests, expectations, and facilitation efforts. On the one hand, such entrepreneurial agency has broad appeal to individuals in Western market democracies and resonates with their longing for an autonomous, personally tailored, meaningful, and materially rewarding way of life. On the other hand, entrepreneurship represents a tempting and increasingly popular means of governance and policy making, and thus a model for the re-organization of a variety of societal sectors. This study focuses on the diffusion and reception of entrepreneurship discourse in the context of farming and agriculture, where pressures to adopt entrepreneurial orientations have become increasingly pronounced, even though farming has historically enjoyed state protection and adhered to principles that seem at odds with aspects of individualistic entrepreneurship discourse. The study presents an interpretation of the psychologically and politically appealing uses of the notion of entrepreneurial agency, reviews the historical and political background of the current situation of farming and agriculture with regard to entrepreneurship, and examines their relationships in four empirical studies. The study follows and develops a social psychological, situated relational approach that guides the qualitative analyses and interpretations of the empirical studies. Interviews with agents from the farm sector aim to stimulate evaluative responses and comments on the idea of entrepreneurship on farms. Analysis of the interview talk, in turn, detects the variety of evaluative responses and argumentative contexts with which the interviewees relate themselves to the entrepreneurship discourse and adopt, use, resist, or reject it. The study shows that despite the pressures towards entrepreneurialism, the diffusion of entrepreneurship discourse and the construction of entrepreneurial agency in the farm context encounter many obstacles. These obstacles can be variably related to aspects of the individual agent, the action situation, the characteristics of the action itself, or the broader social, institutional, and cultural context. Many aspects of entrepreneurial agency, such as autonomy, personal initiative, and achievement orientation, are nevertheless familiar to farmers and are eagerly related to one's own farming activities. The idea of entrepreneurship is thus rarely rejected outright. The findings highlight the relational and situational preconditions for the construction of entrepreneurial agency in the farm context: when agents demonstrate entrepreneurial agency, they do so by drawing on available and accessed relational resources characteristic of their action context. Likewise, when agents fail or are reluctant to demonstrate entrepreneurial agency, they nevertheless actively account for their situation and demonstrate personal agency by drawing on the relational resources available to them.
Abstract:
We introduce a variation density function that profiles the relationship between multiple scalar fields over isosurfaces of a given scalar field. This profile serves as a valuable tool for multifield data exploration because it provides the user with cues to identify interesting isovalues of scalar fields. Existing isosurface-based techniques for scalar data exploration, such as Reeb graphs, contour spectra, and isosurface statistics, study a scalar field in isolation. We argue that the identification of interesting isovalues in a multifield data set should necessarily be based on the interaction between the different fields. We demonstrate the effectiveness of our approach by applying it to explore data from a wide variety of applications.
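As a rough illustration of the idea, the following sketch (Python/NumPy; all names are ours, and the band-based approximation of an isosurface by nearby voxels is a simplifying assumption, not the paper's formulation) profiles how a second field g is distributed over the voxels near each isosurface of a field f:

```python
import numpy as np

def variation_density_profile(f, g, n_isovalues=64, n_bins=64, band=None):
    """For each isovalue of f, estimate the distribution of a second field g
    over the voxels lying near the corresponding isosurface of f.
    A minimal illustration of isovalue-vs-second-field profiling only."""
    f_vals = np.linspace(f.min(), f.max(), n_isovalues)
    if band is None:
        band = (f.max() - f.min()) / (2.0 * n_isovalues)  # half-width of the isovalue band
    g_edges = np.linspace(g.min(), g.max(), n_bins + 1)
    profile = np.zeros((n_isovalues, n_bins))
    for i, c in enumerate(f_vals):
        mask = np.abs(f - c) < band          # voxels near the isosurface f = c
        if mask.any():
            hist, _ = np.histogram(g[mask], bins=g_edges, density=True)
            profile[i] = hist
    return f_vals, g_edges, profile

# Example: two synthetic fields on a 64^3 grid
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 64)] * 3, indexing="ij")
f = x**2 + y**2 + z**2               # spherical field
g = np.sin(4 * x) * np.cos(4 * y)
isovalues, edges, prof = variation_density_profile(f, g)
# Rows of `prof` whose distribution changes sharply hint at isovalues
# where the interaction between f and g is interesting.
```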
Abstract:
Although previous research has recognised adaptation as a central aspect of relationships, the adaptation of the sales process to the buying process has not been studied. Furthermore, the linking of relationship orientation as a mindset with adaptation as a strategy and means has not been elaborated upon in previous research. Adaptation in the context of relationships has mostly been studied in relationship marketing. In sales and sales management research, adaptation has been studied with reference to personal selling. This study focuses on adapting the sales process to strategically match it to the buyer's mindset and buying process. The purpose of this study is to develop a framework for strategic adaptation of the seller's sales process to match the buyer's buying process in a business-to-business context, in order to make sales processes more relationship oriented. In order to arrive at a holistic view of adaptation of the sales process during relationship initiation, both the seller and the buyer are included in the extensive case analysed in the study. However, the selected perspective is primarily that of the seller, and the level focused on is that of the sales process. The epistemological perspective adopted is constructivism. The study is a qualitative one applying a retrospective case study, where the main sources of information are in-depth semi-structured interviews with key informants representing the counterparts at the seller and the buyer in the software development and telecommunications industries. The main theoretical contribution of this research is to target a new area at the crossroads of relationship marketing, sales and sales management, and buying and purchasing by studying adaptation in a business-to-business context from a new perspective. Primarily, this study contributes to research in sales and sales management with reference to relationship orientation and strategic sales process adaptation. This research fills three research gaps: firstly, it links the relationship orientation mindset with adaptation as a strategy; secondly, it extends adaptation in sales from adaptation in selling to strategic adaptation of the sales process; and thirdly, it extends adaptation to include the facilitation of adaptation. The approach applied in the study, systematic combining, is characterised by continuously moving back and forth between theory and empirical data. The framework that emerges, in which linking mindset with strategy and means forms a central aspect, includes three layers: purchasing portfolio, seller-buyer relationship orientation, and strategic sales process adaptation. Linking the three layers enables an analysis of where sales process adaptation can make a contribution. Furthermore, implications for managerial use are demonstrated, for example how sellers can avoid the 'trap' of ad-hoc adaptation. This includes involving the company, embracing the buyer's purchasing portfolio, understanding the current position that the seller has in this portfolio, and possibly educating the buyer about the advantages of adopting a relationship-oriented approach.
Abstract:
Many next-generation distributed applications, such as grid computing, require a single source to communicate with a group of destinations. Traditionally, such applications are implemented using multicast communication. A typical multicast session requires creating the shortest-path tree to a fixed number of destinations. The fundamental issue in multicasting data to a fixed set of destinations is receiver blocking: if one of the destinations is not reachable, the entire multicast request (say, a grid task request) may fail. Manycasting is a generalized variation of multicasting that provides the freedom to choose the best subset of destinations from a larger set of candidate destinations. We propose an impairment-aware algorithm to provide manycasting service in the optical layer, specifically over optical burst switching (OBS). We compare the performance of our proposed manycasting algorithm with traditional multicasting and multicast with over-provisioning. Our results show a significant improvement in the blocking probability by implementing optical-layer manycasting.
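The core selection step can be pictured with a small sketch (Python; the ranking metric and all names are illustrative assumptions, not the paper's impairment model): choose the k best destinations out of m candidates, so the request succeeds as long as any k are usable.

```python
def select_manycast_destinations(candidates, k, path_quality):
    """Pick the k 'best' destinations out of a larger candidate set --
    the defining freedom of manycasting.  `path_quality` maps a destination
    to an impairment-aware score (e.g., estimated signal quality along the
    path); here it is just a lookup supplied by the caller."""
    ranked = sorted(candidates, key=path_quality, reverse=True)
    if len(ranked) < k:
        raise RuntimeError("manycast request blocked: fewer than k reachable destinations")
    return ranked[:k]

# Multicast to a fixed set fails if any member is unreachable; manycast
# succeeds whenever any k of the m candidates remain usable.
quality = {"d1": 0.9, "d2": 0.4, "d3": 0.7, "d4": 0.0}  # 0.0 ~ unreachable
print(select_manycast_destinations(list(quality), k=2, path_quality=quality.get))
# -> ['d1', 'd3']
```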
Abstract:
Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to a register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a high number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components in clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split decoder design and propose a new energy-aware instruction scheduling algorithm that provides 14.5% and 17.3% average reductions in decoder power consumption over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines, respectively. In the case of register files, we propose two new scheduling algorithms that exploit limited register snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with performance improvements of 4.81% and 5.34% (9.39% and 11.16%) over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine.
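To make the cluster-assignment trade-off concrete, here is a toy sketch (Python; the greedy heuristic and all names are our simplifying assumptions, not the paper's scheduler): each instruction is placed on the cluster that already holds most of its operands, reducing inter-cluster copies and hence register-file and decoder activity.

```python
def assign_clusters(instructions, deps, n_clusters=2):
    """Greedy cluster assignment for a clustered VLIW machine: prefer the
    cluster holding most of an instruction's operands, breaking ties toward
    the least-loaded cluster to preserve parallelism.  Toy sketch only."""
    placement = {}
    load = [0] * n_clusters
    for inst in instructions:            # assume topological (dependence) order
        votes = [0] * n_clusters
        for src in deps.get(inst, []):
            votes[placement[src]] += 1   # operand already lives on this cluster
        best = max(range(n_clusters), key=lambda c: (votes[c], -load[c]))
        placement[inst] = best
        load[best] += 1
    return placement

deps = {"c": ["a", "b"], "d": ["c"], "e": ["a"]}
print(assign_clusters(["a", "b", "c", "d", "e"], deps))
# Fewer cross-cluster operand moves means fewer extra register accesses.
```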
Abstract:
Increasing network lifetime is important in wireless sensor and ad-hoc networks. In this paper, we are concerned with algorithms that increase network lifetime, and the amount of data delivered during that lifetime, by deploying multiple mobile base stations in the sensor network field. Specifically, we allow multiple mobile base stations to be deployed along the periphery of the sensor network field and develop algorithms to dynamically choose the locations of these base stations so as to improve network lifetime. We propose energy-efficient, low-complexity algorithms to determine the locations of the base stations: i) the Top-K-max algorithm, ii) the maximizing the minimum residual energy (Max-Min-RE) algorithm, and iii) the minimizing the residual energy difference (MinDiff-RE) algorithm. We show that the proposed base station placement algorithms provide increased network lifetimes and amounts of data delivered during the network lifetime compared to the single-base-station scenario as well as the multiple-static-base-stations scenario, and close to those obtained by solving an integer linear program (ILP) to determine the locations of the mobile base stations. We also investigate the lifetime gain when an energy-aware routing protocol is employed along with multiple base stations.
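A minimal sketch of the Max-Min-RE flavour of placement (Python; the distance-based energy model, the greedy search, and all names are assumptions for illustration, not the paper's exact formulation): among candidate peripheral sites, pick base-station locations so that the minimum residual energy after one data-gathering round is as high as possible.

```python
import math

def max_min_re(sensors, energies, candidate_sites, n_bs, cost_per_unit=1.0):
    """Greedily choose n_bs base-station sites maximizing the minimum
    residual sensor energy after one round.  Each sensor is assumed to
    transmit to its nearest base station at a cost proportional to distance."""
    def round_cost(sensor, sites):
        return cost_per_unit * min(math.dist(sensor, s) for s in sites)

    chosen = []
    for _ in range(n_bs):
        def min_residual(site):
            sites = chosen + [site]
            return min(e - round_cost(p, sites) for p, e in zip(sensors, energies))
        chosen.append(max((s for s in candidate_sites if s not in chosen),
                          key=min_residual))
    return chosen

sensors = [(2, 3), (8, 1), (5, 9)]
energies = [10.0, 8.0, 9.0]
periphery = [(0, 0), (10, 0), (10, 10), (0, 10)]   # sites on the field boundary
print(max_min_re(sensors, energies, periphery, n_bs=2))
```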
Abstract:
The decision to patent a technology is a difficult one for the top management of any organization. The expected value that the patent might deliver in the market is an important factor in this judgement. Earlier researchers have suggested that patent prices are better indicators of a patent's value and that auction prices are the best way of determining value. However, the lack of public data on pricing has prevented research on the dynamics of patent pricing. Our paper uses singleton patent auction price data from Ocean Tomo LLC to study the prices of patents. We describe the price characteristics of these patents. The price of these patents was correlated with their age, and a significant correlation was found. A price-age matrix was developed, and we describe the price characteristics of patents using the four quadrants of the matrix, namely young and old patents with low and high prices. We also found that patents owned by small firms are transacted more often, and that inventor-owned patents attracted better prices than assignee-owned patents.
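The quadrant construction can be pictured with a small sketch (Python; the median-split rule and all names are our assumptions, as the paper does not specify its cutoffs here): each patent is placed in one of the four cells of the price-age matrix.

```python
from statistics import median

def price_age_quadrants(patents):
    """Classify patents into the four quadrants of a price-age matrix
    (young/old x low/high price) using median splits of the sample."""
    p_med = median(p["price"] for p in patents)
    a_med = median(p["age"] for p in patents)
    for p in patents:
        p["quadrant"] = ("old" if p["age"] >= a_med else "young",
                         "high" if p["price"] >= p_med else "low")
    return patents

sample = [{"price": 120_000, "age": 3}, {"price": 30_000, "age": 12},
          {"price": 250_000, "age": 15}, {"price": 15_000, "age": 2}]
for p in price_age_quadrants(sample):
    print(p["quadrant"], p["price"], p["age"])
```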
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process in order to offer services of interest to the users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. Firstly, the context information must have an unambiguous representation. Secondly, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Thirdly, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation, and Belief) based service discovery model that deals with the above challenges by processing the context information and formulating beliefs based on the observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum guide application over different cases. Simulation results show the method to be time-efficient and are quite encouraging.
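A toy sketch of the pipeline's flavour (Python; the class, the promotion threshold, and the museum-guide event names are illustrative assumptions, and the paper's model is more elaborate): raw context events become observations, repeated observations harden into beliefs, and beliefs drive service matching.

```python
from collections import Counter

class CIOBMatcher:
    """Context events -> observations -> beliefs -> matched services."""
    def __init__(self, belief_threshold=3):
        self.observations = Counter()
        self.belief_threshold = belief_threshold

    def observe(self, context_event):
        # e.g. ("user_near", "renaissance_wing"), from sensors or devices
        self.observations[context_event] += 1

    def beliefs(self):
        # an observation seen often enough is promoted to a belief
        return {e for e, n in self.observations.items()
                if n >= self.belief_threshold}

    def match_services(self, services):
        # offer services whose requirements are satisfied by current beliefs
        held = self.beliefs()
        return [name for name, needs in services.items() if needs <= held]

m = CIOBMatcher()
for _ in range(3):
    m.observe(("user_near", "renaissance_wing"))
services = {"renaissance_audio_guide": {("user_near", "renaissance_wing")}}
print(m.match_services(services))   # -> ['renaissance_audio_guide']
```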
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory in such a way that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC can be reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a very impressive gain of 12% to 21% in throughput for static file serving and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
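NABC's own system calls are custom, but the copy-avoidance idea it builds on can be illustrated with the standard sendfile(2) path (Python sketch; function name and error handling are ours): the kernel streams file bytes to the socket without bouncing them through user space, which is the class of saving NABC pushes further with its unified cache and metadata-only IPC.

```python
import os
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> int:
    """Send a static file over a connected socket with no user-space data
    copy, via os.sendfile (POSIX sendfile(2)).  Minimal sketch; partial-send
    retry is handled, other error handling elided."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # kernel-to-kernel transfer: file cache -> socket buffer directly
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    return sent
```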
Abstract:
The Java Memory Model (JMM) provides a semantics of Java multithreading for any implementation platform. The JMM is defined in a declarative fashion, with an allowed program execution being defined in terms of the existence of "commit sequences" (roughly, the order in which actions in the execution are committed). In this work, we develop OpMM, an operational under-approximation of the JMM. The immediate motivation of this work lies in integrating a formal specification of the JMM with software model checkers. We show how our operational memory model description can be integrated into a Java PathFinder (JPF) style model checker for Java programs.
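The benefit of an operational model is that a checker can simply enumerate executions step by step. The generic sketch below (Python; it illustrates interleaving-based state exploration of the kind a JPF-style checker performs, not OpMM's commit-sequence machinery) enumerates all schedules of two threads while preserving each thread's program order:

```python
def interleavings(thread_a, thread_b):
    """Yield every interleaving of two threads' action lists that respects
    each thread's own program order -- the state space a model checker
    walks when the memory model is given operationally."""
    def merge(a, b):
        if not a:
            yield list(b); return
        if not b:
            yield list(a); return
        for rest in merge(a[1:], b):
            yield [a[0]] + rest
        for rest in merge(a, b[1:]):
            yield [b[0]] + rest
    yield from merge(thread_a, thread_b)

# Two racy threads over a shared variable x
for schedule in interleavings(["write x=1"], ["read x", "read x"]):
    print(schedule)
```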
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable, flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointers and objects and between pairs of pointers) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only reduces precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27MB, it achieves precision of up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation. We find that the NoModRef percentage is within 2% of the exact analysis while requiring 4MB (maximum 15MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows trading off precision for memory usage of the analysis.
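The key soundness property is easy to see in a sketch (Python; a plain single-bit-array Bloom filter with our own names, not the paper's multi-dimensional layout): a Bloom filter can report false positives (spurious points-to facts, i.e., reduced precision) but never false negatives, so a may-points-to client remains correct.

```python
import hashlib

class BloomPointsTo:
    """Bloom-filter-backed may-points-to store over (pointer, context,
    object) triples.  Queries may over-approximate, never under-approximate."""
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.bits = bytearray(n_bits // 8)
        self.n_bits, self.n_hashes = n_bits, n_hashes

    def _positions(self, ptr, ctx, obj):
        for i in range(self.n_hashes):
            h = hashlib.blake2b(f"{i}|{ptr}|{ctx}|{obj}".encode(),
                                digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.n_bits

    def add(self, ptr, ctx, obj):
        for p in self._positions(ptr, ctx, obj):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_point_to(self, ptr, ctx, obj):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(ptr, ctx, obj))

pts = BloomPointsTo()
pts.add("p", "main->foo", "heap_obj_3")
print(pts.may_point_to("p", "main->foo", "heap_obj_3"))  # True (always)
print(pts.may_point_to("q", "main->bar", "heap_obj_3"))  # False w.h.p.
```

Shrinking `n_bits` trades memory for a higher false-positive rate, which is exactly the precision-for-memory knob the abstract describes.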