946 results for web publishing
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points in time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
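To make the cooperative coevolutionary idea concrete, the minimal sketch below evolves two subpopulations, one per half of the task list, and evaluates each individual by joining it with the best collaborator from the other subpopulation. The task durations, resource costs, deadline and penalty weight are invented toy values; the actual CCGA's encoding, operators and scheduling model are more elaborate than this.

```python
import random

random.seed(0)

TASKS = [3, 2, 4, 1, 5, 2]    # task durations (invented toy data)
RESOURCES = [1.0, 2.5, 4.0]   # cost per time unit of each resource
DEADLINE = 8
HALF = len(TASKS) // 2

def fitness(assignment):
    """Total cost plus a penalty for exceeding the deadline; lower is better."""
    busy = [0] * len(RESOURCES)
    cost = 0.0
    for dur, res in zip(TASKS, assignment):
        busy[res] += dur
        cost += dur * RESOURCES[res]
    return cost + 1000 * max(0, max(busy) - DEADLINE)

def evolve(pop, collaborator, left):
    """One generation of one species: rank individuals by joining them with
    the other species' best collaborator, keep the better half, and refill
    the population with point-mutated copies of the survivors."""
    def joint(ind):
        return ind + collaborator if left else collaborator + ind
    survivors = sorted(pop, key=lambda ind: fitness(joint(ind)))[: len(pop) // 2]
    children = []
    for parent in survivors:
        child = parent[:]
        child[random.randrange(HALF)] = random.randrange(len(RESOURCES))
        children.append(child)
    return survivors + children

pop_a = [[random.randrange(len(RESOURCES)) for _ in range(HALF)] for _ in range(20)]
pop_b = [[random.randrange(len(RESOURCES)) for _ in range(HALF)] for _ in range(20)]
for _ in range(50):
    best_a, best_b = pop_a[0], pop_b[0]
    pop_a = evolve(pop_a, best_b, left=True)
    pop_b = evolve(pop_b, best_a, left=False)

# Final answer: the cheapest pairing of individuals from the two species.
best = min((a + b for a in pop_a for b in pop_b), key=fitness)
print("assignment:", best, "fitness:", fitness(best))
```

Selection keeps the cheaper half of each subpopulation, mutation flips one assignment per offspring, and the deadline enters the objective only through the penalty term.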
Abstract:
In this paper, we propose a search-based approach to joining two tables in the absence of clean join attributes. Non-structured documents from the web are used to express the correlations between a given query and a reference list. A major challenge in implementing this approach is how to efficiently determine the number of times, and the locations at which, each clean reference from the reference list is approximately mentioned in the retrieved documents. We formalize this as the Approximate Membership Localization (AML) problem and propose an efficient partial pruning algorithm to solve it. A study using real-world data sets demonstrates the effectiveness of our search-based approach and the efficiency of our AML algorithm.
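As a rough illustration of approximate membership localization, the sketch below slides a window of the reference's token length over a document, prunes windows with too little token overlap (a cheap filter in the spirit of partial pruning), and confirms the survivors with a string-similarity ratio. The thresholds, the pruning rule and the example text are illustrative assumptions, not the paper's algorithm.

```python
from difflib import SequenceMatcher

def approx_mentions(reference, document, threshold=0.8):
    """Locate approximate mentions of `reference` in `document`.

    Returns a list of (token_offset, similarity) pairs for windows that
    pass a cheap token-overlap prune and then a similarity threshold.
    """
    ref_tokens = reference.lower().split()
    doc_tokens = document.lower().split()
    n = len(ref_tokens)
    hits = []
    for i in range(len(doc_tokens) - n + 1):
        window = doc_tokens[i:i + n]
        # cheap prune: require at least half the reference tokens to appear
        if len(set(window) & set(ref_tokens)) * 2 < n:
            continue
        score = SequenceMatcher(None, " ".join(window),
                                " ".join(ref_tokens)).ratio()
        if score >= threshold:
            hits.append((i, score))
    return hits

# "Technlogy" is deliberately misspelled: an exact match would miss it.
doc = "We thank the Queensland University of Technlogy for hosting the workshop"
print(approx_mentions("Queensland University of Technology", doc))
```

The prune discards most windows before the comparatively expensive similarity computation runs, which is the essential efficiency idea.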
Abstract:
Highlights ► Provides a review of the history and development of locative media. ► Outlines different human-computer interaction techniques applied in locative media. ► Discusses how locative media applications have changed interaction affordances in and of physical spaces. ► Discusses practices of people in urban settings that evolved through these new affordances. ► Provides an overview of methods to investigate and elaborate design principles for future locative media.
Abstract:
Manually constructing domain-specific sentiment lexicons is extremely time-consuming, and may not even be feasible for domains where linguistic expertise is unavailable. Research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons can improve polarity prediction tasks at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
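The bootstrap-from-ratings idea can be sketched very simply: treat star ratings as pseudo labels for documents (pass one), then score each term by the mean label of the documents containing it (pass two). The toy reviews, rating cutoffs and scoring rule below are illustrative stand-ins for the paper's method, which additionally uses shallow parsing and term-to-term relations.

```python
from collections import defaultdict

# Toy review corpus: (text, star rating). Ratings stand in for
# user-contributed document labels.
reviews = [
    ("the battery life is superb and the screen is gorgeous", 5),
    ("superb camera, gorgeous photos", 5),
    ("the battery died quickly, truly awful", 1),
    ("awful screen, terrible battery", 1),
]

def build_lexicon(docs, pos_cutoff=4, neg_cutoff=2):
    """Pass 1: pseudo-label documents from their ratings (+1 / 0 / -1).
    Pass 2: score each term by the mean pseudo-label of its documents."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, rating in docs:
        label = 1.0 if rating >= pos_cutoff else (-1.0 if rating <= neg_cutoff else 0.0)
        for term in set(text.split()):   # each term counted once per document
            totals[term] += label
            counts[term] += 1
    return {t: totals[t] / counts[t] for t in totals}

lexicon = build_lexicon(reviews)
print(lexicon["superb"], lexicon["awful"], lexicon["battery"])
```

Terms that occur in both positive and negative documents (like "battery") end up near zero, which is the behaviour one would want from a domain-neutral term.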
Abstract:
Measuring the business value that Internet technologies deliver for organisations has proven to be a difficult and elusive task, given their complexity and increased embeddedness within the value chain. Yet, despite the lack of empirical evidence that links the adoption of Information Technology (IT) with increased financial performance, many organisations continue to adopt new technologies at a rapid rate. This is evident in the widespread adoption of Web 2.0 online Social Networking Services (SNSs) such as Facebook, Twitter and YouTube. These new Internet based technologies, widely used for social purposes, are being employed by organisations to enhance their business communication processes. However, their use is yet to be correlated with an increase in business performance. Owing to the conflicting empirical evidence that links prior IT applications with increased business performance, IT, Information Systems (IS), and E-Business Model (EBM) research has increasingly looked to broader social and environmental factors as a means for examining and understanding the broader influences shaping IT, IS and E-Business (EB) adoption behaviour. Findings from these studies suggest that organisations adopt new technologies as a result of strong external pressures, rather than a clear measure of enhanced business value. In order to ascertain if this is the case with the adoption of SNSs, this study explores how organisations are creating value (and measuring that value) with the use of SNSs for business purposes, and the external pressures influencing their adoption. In doing so, it seeks to address two research questions: 1. What are the external pressures influencing organisations to adopt SNSs for business communication purposes? 2. Are SNSs providing increased business value for organisations, and if so, how is that value being captured and measured? 
Informed by the background literature fields of IT, IS, EBM, and Web 2.0, a three-tiered theoretical framework is developed that combines macro-societal, social and technological perspectives as possible causal mechanisms influencing the SNS adoption event. The macro-societal view draws on Castells' (1996) concept of the network society and the behaviour of crowds, herds and swarms, to formulate a new explanatory concept of the network vortex. The social perspective draws on key components of institutional theory (DiMaggio & Powell, 1983, 1991), and the technical view draws from the organising vision concept developed by Swanson and Ramiller (1997). The study takes a critical realist approach, and conducts four stages of data collection and one stage of data coding and analysis. Stage 1 consisted of content analysis of the websites and SNSs of many organisations, to identify the types of business purposes SNSs are being used for. Stage 2 also involved content analysis of organisational websites, in order to identify suitable sample organisations in which to conduct telephone interviews. Stage 3 consisted of 18 in-depth, semi-structured telephone interviews within eight Australian organisations from the Media/Publishing and Galleries, Libraries, Archives and Museums (GLAM) industries. These sample organisations were considered leaders in the use of SNS technologies. Stage 4 involved an SNS activity count of the organisations interviewed in Stage 3, in order to rate them as either Advanced Innovator (AI) organisations or Learning Focussed (LF) organisations. A fifth stage of data coding and analysis of all four data collection stages was conducted, based on the theoretical framework developed for the study, and using QSR NVivo 8 software. The findings from this study reveal that SNSs have been adopted by organisations for the purpose of increasing business value, and as a result of strong social and macro-societal pressures. 
SNSs offer organisations a wide range of value-enhancing opportunities that have broader benefits for customers and society. However, measuring the increased business value is difficult with traditional Return On Investment (ROI) mechanisms, highlighting the need for new value capture and measurement rationales to support the accountability of SNS adoption practices. The study also identified the presence of technical, social and macro-societal pressures, all of which influenced SNS adoption by organisations. These findings contribute important theoretical insight into the increased complexity of pressures influencing technology adoption rationales by organisations, and have important implications for practice, by reflecting the expanded global online networks in which organisations now operate. The limitations of the study include the small number of sample organisations in which interviews were conducted, its limited generalisability, and the small range of SNSs selected for the study. However, these were compensated for in part by the expertise of the interviewees, and the global significance of the SNSs that were chosen. Future research could replicate the study with a larger sample from different industries, sectors and countries. It could also explore the life cycle of SNSs in a longitudinal study, and map how the technical, social and macro-societal pressures are emphasised through stages of the life cycle. The theoretical framework could also be applied to other social fad technology adoption studies.
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition — by composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements. These include customer-focused elements such as response time, price, throughput and reliability, as well as how best to provide QoS results for the composites. This in turn best fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, or addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are characterised as complex, large-scale, highly constrained and multi-objective problems. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. 
Then, we present novel GAs to address these problems. We also conduct experiments to evaluate the performance of the new GAs. Finally, verification experiments are performed to show the correctness of the GAs. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and whose neglect might lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability. The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
Abstract:
Originally launched in 2005 with a focus on user-generated content, YouTube has become the dominant platform for online video worldwide, and an important location for some of the most significant trends and controversies in the contemporary new-media environment. Throughout its very short history, it has also intersected with and been the focus of scholarly debates related to the politics, economics, and cultures of the new media—in particular, the “participatory turn” associated with “Web 2.0” business models’ partial reliance on amateur content and social networking. Given the slow pace of traditional scholarly publishing, the body of media and cultural studies literature substantively dedicated to describing and critically understanding YouTube’s texts, practices, and politics is still small, but it is growing steadily. At the same time, since its inception scholars from a wide range of disciplines and critical perspectives have found YouTube useful as a source of examples and case studies, some of which are included here; others have experimented directly with the scholarly and educational potential of the platform itself. For these reasons, although primarily based around the traditional publishing outlets for media, Internet, and cultural studies, this bibliography draws eclectically on a wide range of sources—including sources very closely associated with the web business literature and with the YouTube community itself.
Abstract:
This thesis explores the business environment for self-publishing musicians at the end of the 20th century and the start of the 21st century from theoretical and empirical standpoints. The exploration begins by asking three research questions: what are the factors affecting the sustainability of an Independent music business; how many of those factors can be directly influenced by an Independent musician in the day-to-day operations of their musical enterprise; and how can those factors be best manipulated to maximise the benefit generated from digital music assets? It answers these questions by considering the nature of value in the music business in light of theories of political economy, then quantitative and qualitative examinations of the nature of participation in the music business, and then auto-ethnographic approaches to the application of two technologically enabled tools available to Independent musicians. By analyzing the results of five different examinations of the topic it answers each research question with reference to four sets of recurring issues that affect the operations of a 21st century music business: the musicians' personal characteristics; their ability to address their business's informational needs; their ability to manage the relationships upon which their business depends; and their ability to resolve the remaining technological problems that confront them. It discusses ways in which Independent self-publishing musicians can and cannot deal with these four issues on a day-to-day basis and highlights aspects for which technological solutions do not exist as well as ways in which technology is not as effective as has been claimed. It then presents a self-critique and proposes some directions for further study before concluding by suggesting some common features of 21st century Independent music businesses. This thesis makes three contributions to knowledge. 
First, it provides a new understanding of the sources of musical value, shows how this explains changes in the music industries over the past 30 years, and provides a framework for predicting future developments in those industries. Second, it shows how the technological discontinuity that has occurred around the start of the 21st century has and has not affected the production and distribution of digital cultural artefacts and thus the attitudes, approaches, and business prospects of Independent musicians. Third, it argues for new understandings of two methods by which self-publishing musicians can grow a business using production methods that are only beginning to be more broadly understood: home studio recording and fan-sourced production. Developed from the perspective of working musicians themselves, this thesis identifies four sets of issues that determine the probable success of musicians’ efforts to adopt new technologies to capture the value of the musicians’ creativity and thereby foster growth that will sustain an Independent music business in the 21st century.
Abstract:
Emerging from the challenge to reduce energy consumption in buildings is the need for energy simulation to be used more effectively to support integrated decision making in early design. As a critical response to a Green Star case study, we present DEEPA, a parametric modeling framework that enables architects and engineers to work at the same semantic level to generate shared models for energy simulation. A cloud-based toolkit provides web and data services for parametric design software that automate the process of simulating and tracking design alternatives, by linking building geometry more directly to analysis inputs. Data, semantics, models and simulation results can be shared on the fly. This allows the complex relationships between architecture, building services and energy consumption to be explored in an integrated manner, and decisions to be made collaboratively.
Abstract:
With the growth of the Web, E-commerce activities are also becoming popular. Product recommendation is an effective way of marketing a product to potential customers. Based on a user's previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Further, too many irrelevant recommendations worsen the information overload problem for a user. This happens because such models, based on vectors and matrices, are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process, and usually involves comparing the searches a user has made. In most cases, traditional vector- and matrix-based methods are used to find the prominent features searched by a user. In this research we employ tensors to find the relevant features in users' searches. Such relevant features are then used for making recommendations. Evaluation on real datasets shows the effectiveness of such recommendations over vector- and matrix-based methods.
Abstract:
Data processing for information extraction is of growing importance for Web databases. Due to the sheer size and volume of these databases, retrieving the relevant information needed by users has become a cumbersome process. Information seekers face information overload: too many results are returned for their queries, while too few or no results are returned if a very specific query is asked. This paper proposes a ranking algorithm that gives higher preference to a user's current search and also utilizes profile information in order to obtain the relevant results for a user's query.
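A minimal sketch of such a preference-weighted ranking might score each result by its overlap with the current query (weighted higher) plus its overlap with stored profile terms. The weights, the overlap measure and the car-search data below are invented for illustration; the paper's algorithm is not specified at this level of detail.

```python
def rank(results, query_terms, profile, w_query=0.7, w_profile=0.3):
    """Order results by weighted overlap with the current query and with
    the user's profile; the current query dominates via its larger weight."""
    def overlap(terms, doc):
        return len(set(terms) & set(doc)) / max(len(set(terms)), 1)
    scored = [
        (overlap(query_terms, doc) * w_query + overlap(profile, doc) * w_profile, doc)
        for doc in results
    ]
    return [doc for _, doc in sorted(scored, key=lambda p: p[0], reverse=True)]

# Hypothetical stored profile and candidate results for a car search.
profile = ["sedan", "automatic", "low", "mileage"]
results = [
    ["red", "coupe", "manual"],
    ["blue", "sedan", "automatic", "low", "mileage"],
    ["blue", "hatchback", "automatic"],
]
print(rank(results, ["blue", "sedan"], profile))
```

A result matching both the live query and the profile outranks one matching the profile alone, which captures the "higher preference to the current search" behaviour described above.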
Abstract:
Search log data is multi-dimensional data consisting of the searches of multiple users along many searched parameters. This data can be used to identify a user's interest in an item or object being searched for. Identifying the strongest interests of a Web user from search log data is a complex process. Based on a user's previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Two-dimensional data models, when used to mine knowledge from such multi-dimensional data, may not give good mappings between users and their searches. The major problem with such models is that they are unable to find the latent relationships that exist between the different searched dimensions. In this research work, we utilize tensors to model the various searches made by a user. This high-dimensional data model is then used to extract the relationships between the various dimensions and to find the prominent searched components. To achieve this, we use the popular tensor decomposition methods PARAFAC, Tucker and HOSVD. All experiments and evaluations are performed on real datasets, and clearly show the effectiveness of tensor models in finding prominent searched components in comparison to widely used two-dimensional data models. The top-rated searched components are then given as recommendations to users.
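As a minimal illustration of mode-wise decomposition, the sketch below computes an HOSVD of a toy search-log tensor with plain NumPy: each mode's factor matrix is taken as the left singular vectors of that mode's unfolding, and the core tensor is obtained by projecting onto those bases (PARAFAC and Tucker would be applied analogously via a tensor library). The shapes and data are invented, not from the paper's datasets.

```python
import numpy as np

# Toy search-log tensor: users x queried categories x sessions, with counts.
rng = np.random.default_rng(0)
T = rng.poisson(2.0, size=(4, 5, 3)).astype(float)

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# One factor matrix per mode: left singular vectors of each unfolding.
factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]

# Core tensor: contract T with each factor's transpose (Tucker-style).
core = T
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# Without truncation, multiplying the core back by the factors recovers T.
recon = core
for m, U in enumerate(factors):
    recon = np.moveaxis(np.tensordot(U, np.moveaxis(recon, m, 0), axes=1), 0, m)
print(np.allclose(recon, T))
```

Truncating each factor matrix to its leading columns would give the compressed "prominent components" used for recommendation; the untruncated version above simply verifies the decomposition is lossless.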
Abstract:
We propose to use Tensor Space Modeling (TSM) to represent and analyze a user's web log data, which encodes multiple interests and spans multiple dimensions. Further, we propose to use the decomposition factors of the tensors for clustering users based on the similarity of their search behaviour. Preliminary results show that the proposed method outperforms traditional Vector Space Model (VSM) based clustering.
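A sketch of factor-based clustering, under invented data and shapes: unfold a toy user × page × time tensor along the user mode, take the leading singular components as user factors, and run a plain k-means on the factor rows. The planted two-group structure and the hand-rolled clustering step are illustrative stand-ins for the paper's TSM pipeline, not its actual method.

```python
import numpy as np

# Toy log tensor: 6 users x 4 pages x 2 time slots, with two planted
# behavioural groups (heavy on pages 0-1 vs heavy on pages 2-3).
rng = np.random.default_rng(1)
group_a = rng.poisson([[5, 5, 0, 0]], size=(3, 4))
group_b = rng.poisson([[0, 0, 5, 5]], size=(3, 4))
pages = np.vstack([group_a, group_b]).astype(float)
T = np.stack([pages, pages * 0.5], axis=2)        # add a time mode

unfolded = T.reshape(T.shape[0], -1)              # mode-0 (user) unfolding
U, s, _ = np.linalg.svd(unfolded, full_matrices=False)
user_factors = U[:, :2] * s[:2]                   # 2 leading components per user

# Minimal k-means (k=2) on the factor rows, seeded with two distant users.
centroids = user_factors[[0, -1]]
for _ in range(10):
    dists = ((user_factors[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)
    centroids = np.array([user_factors[labels == k].mean(0) for k in range(2)])
print(labels)
```

Clustering in the low-dimensional factor space, rather than on the raw log vectors, is what lets shared latent behaviour dominate the grouping.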
Abstract:
Web 2.0 technology and concepts are being used increasingly by organisations to enhance knowledge, efficiency, engagement and reputation. Understanding the concepts of Web 2.0, its characteristics, and how the technology and concepts can be adopted is essential to successfully reap the potential benefits. There is debate about using the 'Web 2.0' idiom to refer to the concept behind it; however, the term is widely used in the literature as well as in industry. In this paper, the definition of Web 2.0 technology and its characteristics and attributes will be presented. In addition, the adoption of such technology is further explored through two separate case examples of Web 2.0 being used: to enhance an enterprise; and to enhance university teaching. The similarities between these implementations are identified and discussed, including how the findings point to generic principles of adoption.
Abstract:
The high-pressure, cross-cultural, cross-factional and frequently cross-national nature of contemporary negotiation means that there are a number of elements potentially hampering efforts to achieve successful negotiation outcomes from face-to-face interactions. These hindrances include: resource scarcity (for example, finances, technology and facilities), time scarcity, geographical separation, lack of a common language and an inability to maintain a consistent ongoing dialogue.