Abstract:
Increasingly, scientists are using collections of software tools in their research. These tools are typically used in concert, often necessitating laborious and error-prone manual data reformatting and transfer. We present an intuitive workflow environment to support scientists with their research. The environment, GPFlow, wraps legacy tools, presenting a high-level, interactive web-based front end to scientists. The workflow backend is realized by a commercial-grade workflow engine (Windows Workflow Foundation). The workflow model is inspired by spreadsheets and is novel in its support for an intuitive method of interaction, enabling the experimentation required by many scientists, e.g. bioinformaticians. We apply GPFlow to two bioinformatics experiments and demonstrate its flexibility and simplicity.
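GPFlow itself is not reproduced here, but as a hedged sketch of the general pattern the abstract describes, the snippet below wraps a legacy command-line tool as a reusable workflow step whose output feeds the next step directly, avoiding manual reformatting and transfer. The tool names in the usage comment are hypothetical.

```python
import subprocess
import tempfile
from pathlib import Path

def run_legacy_tool(tool, input_text, args=None):
    """Run a legacy command-line tool as one workflow step.

    The upstream step's output is written to a temporary file, the tool is
    invoked on it, and its stdout is returned so the next step can consume
    it directly.
    """
    args = args or []
    with tempfile.NamedTemporaryFile("w", suffix=".in", delete=False) as f:
        f.write(input_text)
        in_path = Path(f.name)
    result = subprocess.run(
        [tool, *args, str(in_path)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical two-step pipeline: each wrapped tool feeds the next.
# step1_out = run_legacy_tool("toolA", raw_data)
# step2_out = run_legacy_tool("toolB", step1_out, ["--some-flag"])
```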
Abstract:
In cloud computing, resource allocation and scheduling for multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where some low-cost resources may be available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points in time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
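As a rough illustration of the cooperative coevolutionary idea, the sketch below coevolves two subpopulations, one encoding the task-to-resource assignment and one encoding the task order, each evaluated against the best collaborator from the other subpopulation. The toy cost model, deadline and parameters are all invented and much simpler than the paper's QoS model.

```python
import random

random.seed(0)
N_TASKS, N_RESOURCES, POP, GENS = 8, 3, 30, 50
DURATION = [random.uniform(1, 5) for _ in range(N_TASKS)]  # task run times
PRICE = [1.0, 2.0, 4.0]       # e.g. private-cloud vs public-cloud rates
DEADLINE = 12.0

def evaluate(assignment, order):
    """Cost of running tasks in `order` on the resources in `assignment`,
    with a large penalty if the schedule misses the deadline."""
    finish = [0.0] * N_RESOURCES
    makespan, cost = 0.0, 0.0
    for task in order:
        r = assignment[task]
        finish[r] += DURATION[task]   # tasks sharing a resource run in sequence
        makespan = max(makespan, finish[r])
        cost += DURATION[task] * PRICE[r]
    return cost + (1000.0 if makespan > DEADLINE else 0.0)

def mutate_assignment(a):
    a = a[:]
    a[random.randrange(N_TASKS)] = random.randrange(N_RESOURCES)
    return a

def mutate_order(o):
    o = o[:]
    i, j = random.sample(range(N_TASKS), 2)
    o[i], o[j] = o[j], o[i]
    return o

assigns = [[random.randrange(N_RESOURCES) for _ in range(N_TASKS)] for _ in range(POP)]
orders = [random.sample(range(N_TASKS), N_TASKS) for _ in range(POP)]
best_a, best_o = assigns[0], orders[0]

for _ in range(GENS):
    # Evaluate each subpopulation against the best collaborator from the other.
    assigns.sort(key=lambda a: evaluate(a, best_o))
    orders.sort(key=lambda o: evaluate(best_a, o))
    best_a, best_o = assigns[0], orders[0]
    keep = POP // 2
    assigns = assigns[:keep] + [mutate_assignment(random.choice(assigns[:keep])) for _ in range(POP - keep)]
    orders = orders[:keep] + [mutate_order(random.choice(orders[:keep])) for _ in range(POP - keep)]

print("best total cost:", round(evaluate(best_a, best_o), 2))
```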
Abstract:
In this paper, we propose a search-based approach to joining two tables in the absence of clean join attributes. Non-structured documents from the web are used to express the correlations between a given query and a reference list. A major challenge in implementing this approach is how to efficiently determine the number of times, and the locations at which, each clean reference from the reference list is approximately mentioned in the retrieved documents. We formalize this as the Approximate Membership Localization (AML) problem and propose an efficient partial pruning algorithm to solve it. A study using real-world data sets demonstrates the effectiveness of our search-based approach and the efficiency of our AML algorithm.
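To make the AML task concrete, the following sketch locates approximate mentions of a clean reference in a tokenized document, using a cheap token-overlap filter to prune candidate windows before computing a costlier similarity score. This is only an illustration of the problem, not the paper's partial pruning algorithm, which is more refined.

```python
from difflib import SequenceMatcher

def approximate_mentions(doc_tokens, reference, threshold=0.8):
    """Return (start, end, similarity) spans where `reference` is
    approximately mentioned in `doc_tokens`."""
    ref_tokens = reference.lower().split()
    n = len(ref_tokens)
    ref_str = " ".join(ref_tokens)
    hits = []
    # Only consider windows whose length is close to the reference's length.
    for size in range(max(1, n - 1), n + 2):
        for i in range(len(doc_tokens) - size + 1):
            window = doc_tokens[i:i + size]
            # Pruning step: skip windows sharing no token with the reference.
            if not set(w.lower() for w in window) & set(ref_tokens):
                continue
            sim = SequenceMatcher(None, " ".join(window).lower(), ref_str).ratio()
            if sim >= threshold:
                hits.append((i, i + size, sim))
    return hits

doc = "The paper by J. Smith and A. Jones appeared in VLDB 2011".split()
print(approximate_mentions(doc, "J Smith"))
```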
Abstract:
Manually constructing domain-specific sentiment lexicons is extremely time consuming, and it may not even be feasible for domains where linguistic expertise is not available; research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons improve polarity prediction at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
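The sketch below illustrates the general bootstrapping idea, not the paper's exact two-pass algorithm: user ratings pseudo-label the documents, and candidate terms are then scored by how their relative frequencies differ between positive and negative documents. The data and scoring function are invented for illustration; the paper additionally applies shallow linguistic parsing to select candidate terms.

```python
from collections import Counter

reviews = [
    ("the battery life is excellent and the screen is sharp", 5),
    ("terrible battery, dull screen, very disappointing", 1),
    ("excellent camera, sharp photos", 4),
    ("disappointing build quality, terrible support", 2),
]

pos_counts, neg_counts = Counter(), Counter()
for text, rating in reviews:
    # Pass 1: pseudo-label each document from its user rating.
    target = pos_counts if rating >= 4 else neg_counts
    target.update(text.split())

pos_total = sum(pos_counts.values()) or 1
neg_total = sum(neg_counts.values()) or 1

lexicon = {}
for term in set(pos_counts) | set(neg_counts):
    # Pass 2: signed polarity score in [-1, 1] from relative frequencies.
    p = pos_counts[term] / pos_total
    n = neg_counts[term] / neg_total
    lexicon[term] = (p - n) / (p + n)

for term, score in sorted(lexicon.items(), key=lambda kv: kv[1]):
    print(f"{term:15s} {score:+.2f}")
```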
Abstract:
This ALTC Teaching Fellowship aimed to establish Guiding Principles for Library and Information Science Education 2.0. The aim was achieved by (i) identifying the current and anticipated skills and knowledge required by successful library and information science (LIS) professionals in the age of web 2.0 (and beyond), (ii) establishing the current state of LIS education in Australia in supporting the development of librarian 2.0, and, in doing so, (iii) identifying models of best practice.
The fellowship has contributed to curriculum renewal in the LIS profession. It has helped to ensure that LIS education in Australia continues to meet the changing skills and knowledge requirements of the profession it supports. It has also provided a vehicle through which LIS professionals and LIS educators may find opportunities for greater collaboration and more open communication. This will help bridge the gap between LIS theory and practice and will foster more authentic engagement between LIS education and other parts of the LIS industry in the education of the next generation of professionals. Through this fellowship the LIS discipline has become a role model for other disciplines that will face similar issues in the coming years.
Eighty-one members of the Australian LIS profession participated in a series of focus groups exploring the current and anticipated skills and knowledge needed by the LIS professional in the web 2.0 world and beyond. Whilst each focus group tended to draw on specific themes of interest to that particular group of people, there was a great deal of common ground. Eight key themes emerged: technology, learning and education, research or evidence-based practice, communication, collaboration and teamwork, user focus, business savvy and personal traits.
It was acknowledged that the need for successful LIS professionals to possess transferable skills and interpersonal attributes was not new. It was noted, however, that the speed with which things are changing in the web 2.0 world was having a significant impact, and that this faster pace is placing a new and unexpected emphasis on transferable skills and knowledge. It was also acknowledged that all librarians need to possess these skills, knowledge and attributes, not just the one or two role models who lead the way.
The most interesting finding, however, was that web 2.0, library 2.0 and librarian 2.0 represented a ‘watershed’ for the LIS profession. Almost all the focus groups spoke about how they are seeing and experiencing a culture change in the profession: librarian 2.0 requires a ‘different mindset or attitude’. The Levels of Perspective model by Daniel Kim provides one lens through which to view this finding. The focus group findings suggest that we are witnessing a re-awakening of the Australian LIS profession as it begins to move towards the higher levels of Kim’s model (i.e. mental models, vision).
Thirty-six LIS educators participated in telephone interviews aimed at exploring the current state of LIS education in supporting the development of librarian 2.0. The skills and knowledge of LIS professionals in a web 2.0 world identified and discussed by the LIS educators mirrored those highlighted in the focus group discussions with LIS professionals. Similarly, it was noted that librarian 2.0 needed a focus less on skills and knowledge and more on attitude. However, whilst LIS professionals felt that there was a paradigm shift within the profession, LIS educators did not speak with one voice on this matter, with quite a number of the educators suggesting that this might be ‘overstating it a bit’. This study provides evidence of the “disparate viewpoints” (Hallam, 2007) between LIS educators and LIS professionals, which can have significant implications not just for LIS professional education specifically but for the profession generally.
A core part of the interviews invited the LIS academics to discuss how their teaching and learning activities support the development of librarian 2.0. The strategies used and the challenges faced by LIS educators in developing their teaching and learning approaches to support the formation of librarian 2.0 are identified and discussed. The fellowship also identified best practice examples of how LIS educators were developing librarian 2.0. Twelve best practice examples were identified, and each educator was recorded discussing his or her approach to teaching and learning. Videos of these interviews are available via the Fellowship blog at
Abstract:
It is important to promote a sustainable development approach to ensure that economic, environmental and social developments are kept in balance. Sustainable development and its implications are not just a global concern; they also affect Australia. In particular, rural Australian communities are facing various economic, environmental and social challenges, so the need for sustainable development in rural regions is becoming increasingly important. To promote sustainable development, proper frameworks, along with associated tools optimised for the specific regions, need to be developed. This will ensure that decisions made for sustainable development are evidence based rather than subjective opinions. To address these issues, Queensland University of Technology (QUT), through an Australian Research Council (ARC) linkage grant, has initiated research into the development of a Rural Statistical Sustainability Framework (RSSF) to aid sustainable decision making in rural Queensland. This particular branch of the research developed a decision support tool that will become the integrating component of the RSSF. The tool is web-based to allow easy dissemination and quick maintenance and to minimise compatibility issues. It is built on MapGuide Open Source and follows a three-tier architecture: client tier, web tier and server tier. The tool is interactive and behaves similarly to a familiar desktop application. It can handle and display vector-based spatial data and can give further visual outputs using charts and tables. The data used in the tool were obtained from the QUT research team. Overall, the tool implements four tasks to support the decision-making process: Locality Classification, Trend Display, Impact Assessment, and Data Entry and Update. It is built from open source, freely available software, which supports easy extensibility and long-term sustainability.
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its interoperability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition: composing existing individual web services in accordance with the business process of the application, so that the application is provided to customers in the form of a value-added composite web service. An important and challenging issue in web service composition is how to meet Quality-of-Service (QoS) requirements, which include customer-focused attributes such as response time, price, throughput and reliability. Fulfilling these QoS requirements, i.e. addressing the QoS-aware web service composition problem, is the focus of this project, as doing so best fulfils customers' expectations and achieves their satisfaction.
From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems, characterised as complex, large-scale, highly constrained and multi-objective. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict; QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds; and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. We then present novel GAs to address these problems, conduct experiments to evaluate their performance, and perform verification experiments to show their correctness.
The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and that can lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The second problem yielded two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of both algorithms; in particular, the CCGA scales well as the number of composite services in a problem increases, while no other algorithm demonstrates this ability. The third problem resulted in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high quality composite web services.
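As a rough illustration of the penalty-based constraint handling described above, the sketch below applies a plain GA with a penalty fitness to a toy service selection problem: a candidate picks one concrete service per abstract task, and violated dependence or conflict constraints add a penalty instead of being repaired. The QoS model, constraint encodings and GA parameters are all invented; the thesis's actual algorithms are more elaborate.

```python
import random

random.seed(1)
N_TASKS, N_CANDIDATES = 5, 4
# COST[t][s]: price of candidate service s for task t (lower is better).
COST = [[random.uniform(1, 10) for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]
# Dependence: choosing service 1 for task 0 requires service 3 for task 2.
DEPENDS = [((0, 1), (2, 3))]
# Conflict: service 0 for task 1 cannot co-occur with service 2 for task 3.
CONFLICTS = [((1, 0), (3, 2))]
PENALTY = 100.0

def fitness(selection):
    total = sum(COST[t][s] for t, s in enumerate(selection))
    for (t1, s1), (t2, s2) in DEPENDS:
        if selection[t1] == s1 and selection[t2] != s2:
            total += PENALTY
    for (t1, s1), (t2, s2) in CONFLICTS:
        if selection[t1] == s1 and selection[t2] == s2:
            total += PENALTY
    return total

# Plain generational GA with truncation selection over service selections.
pop = [[random.randrange(N_CANDIDATES) for _ in range(N_TASKS)] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:20]
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_TASKS)
        child = a[:cut] + b[cut:]                 # one-point crossover
        if random.random() < 0.2:                 # mutation
            child[random.randrange(N_TASKS)] = random.randrange(N_CANDIDATES)
        children.append(child)
    pop = parents + children

pop.sort(key=fitness)
print("best selection:", pop[0], "cost:", round(fitness(pop[0]), 2))
```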
Abstract:
This study explores the relationship between new venture team composition and new venture persistence and performance over time. We examine the team characteristics and performance of 202 new venture teams in a 5-year panel study. Our study makes two contributions. First, we extend earlier research on homophily theories of the prevalence of homogeneous teams; using structural event analysis, we demonstrate that team members’ start-up experience is important in this context. Second, we attempt to reconcile conflicting evidence concerning the influence of team homogeneity on performance by considering the element of time. We hypothesize that higher team homogeneity is positively related to short-term outcomes but is less effective in the longer term. Our results confirm a difference over time: we find that more homogeneous teams are less likely to be higher performing in the long term, but we find no relationship between team homogeneity and short-term performance outcomes.
Abstract:
With the growth of the Web, e-commerce activities are also becoming popular. Product recommendation is an effective way of marketing a product to potential customers. Based on a user’s previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Moreover, too many irrelevant recommendations worsen the information overload problem for the user. This happens because such vector- and matrix-based models are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process that usually involves comparing the searches a user has made, and in most cases traditional vector- and matrix-based methods are used to find the prominent features searched by a user. In this research we employ tensors to find the relevant features searched by users; these features are then used for making recommendations. Evaluation on real datasets shows the effectiveness of such recommendations over vector- and matrix-based methods.
Abstract:
Data processing for information extraction is of growing importance for Web databases. Due to the sheer size and volume of these databases, retrieving the information a user needs has become a cumbersome process. Information seekers face information overload: too many results are returned for broad queries, while too few or none are returned for very specific ones. This paper proposes a ranking algorithm that gives higher preference to a user’s current search and also utilizes profile information in order to obtain relevant results for the user’s query.
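The following sketch illustrates the general idea of such a ranking, not the paper's algorithm: results are scored by a weighted mix of similarity to the current query and similarity to the user's stored profile, with the current search weighted more heavily. All names, data and the weighting scheme are illustrative.

```python
def jaccard(a, b):
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank(results, query_terms, profile_terms, alpha=0.7):
    """Higher alpha gives higher preference to the current search."""
    def score(item_terms):
        return (alpha * jaccard(item_terms, query_terms)
                + (1 - alpha) * jaccard(item_terms, profile_terms))
    return sorted(results, key=lambda item: score(item[1]), reverse=True)

results = [
    ("red sedan, low mileage", {"red", "sedan", "low", "mileage"}),
    ("blue suv, diesel", {"blue", "suv", "diesel"}),
    ("red suv, sunroof", {"red", "suv", "sunroof"}),
]
query = {"red", "suv"}
profile = {"diesel", "suv"}    # accumulated from the user's earlier searches
for title, _ in rank(results, query, profile):
    print(title)
```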
Abstract:
Search log data is multi-dimensional data consisting of the many searches of multiple users over many searched parameters. This data can be used to identify a user’s interest in an item or object being searched. Identifying a Web user’s strongest interests from search log data is a complex process. Based on a user’s previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Two-dimensional data models, when used to mine knowledge from such multi-dimensional data, may not give good mappings between users and their searches. The major problem with such models is that they are unable to find the latent relationships that exist between the different searched dimensions. In this research work, we utilize tensors to model the various searches made by a user. This high-dimensional data model is then used to extract the relationships between the various dimensions and to find the prominent searched components. To achieve this, we use popular tensor decomposition methods such as PARAFAC, Tucker and HOSVD. All experiments and evaluations are performed on real datasets, and the results clearly show the effectiveness of tensor models in finding prominent searched components in comparison to widely used two-dimensional data models. Such top-rated searched components are then given as recommendations to users.
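As a concrete illustration, the sketch below implements HOSVD in plain NumPy on an invented (user × term × item) search log tensor; PARAFAC and Tucker would typically be run through a library such as TensorLy. The leading columns of each factor matrix expose the most prominent components along that mode.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: per-mode factor matrices plus core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])              # top-r left singular vectors
    core = tensor
    for mode, u in enumerate(factors):        # project: core = T x_n U_n^T
        core = np.moveaxis(np.tensordot(core, u.T, axes=([mode], [1])), -1, mode)
    return core, factors

rng = np.random.default_rng(0)
log = rng.random((6, 5, 4))                   # users x terms x items (toy data)
core, (users, terms, items) = hosvd(log, ranks=(2, 2, 2))
print("core shape:", core.shape)              # (2, 2, 2)
print("user factors:\n", users)
```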
Abstract:
We propose to use Tensor Space Modeling (TSM) to represent and analyze users’ web log data, which captures multiple interests and spans multiple dimensions. Further, we propose to use the factors of the tensor decomposition to cluster users based on similarity of search behaviour. Preliminary results show that the proposed method outperforms traditional Vector Space Model (VSM) based clustering.
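A minimal sketch of clustering on decomposition factors, with an invented factor matrix standing in for real decomposition output: each row of the user-mode factor matrix summarises one user's search behaviour in a few dimensions, so clustering those rows groups users with similar behaviour. In practice the matrix would come from decomposing the log tensor, e.g. with the HOSVD sketch above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic behaviour groups in a rank-3 factor space (toy stand-in
# for a user-mode factor matrix produced by a tensor decomposition).
user_factors = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(5, 3)),
    rng.normal(loc=1.0, scale=0.1, size=(5, 3)),
])

# Cluster users by the similarity of their factor-space representations.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(user_factors)
print("cluster of each user:", labels)
```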