355 results for Web technologies, reengineering, enterprise software
Abstract:
This paper attempts to develop a theoretical acceptance model for measuring Web personalization success. Key factors impacting Web personalization acceptance are identified from a detailed literature review. The final model is then cast in a structural equation modeling (SEM) framework comprising nineteen manifest variables, which are grouped into three focal behaviors of Web users. These variables could provide a framework for better understanding the numerous factors that contribute to the success measures of Web personalization technology, especially those concerning the quality of personalized features and how personalized information can be delivered to the user through a personalized Website. The interrelationship between success constructs is also explained. Empirical validation of this theoretical model is expected in future research.
Abstract:
The INEX 2010 Focused Relevance Feedback track offered a refined approach to the evaluation of Focused Relevance Feedback algorithms through simulated exhaustive user feedback. As in traditional approaches we simulated a user-in-the-loop by re-using the assessments of ad-hoc retrieval obtained from real users who assess focused ad-hoc retrieval submissions. The evaluation was extended in several ways: the use of exhaustive relevance feedback over entire runs; the evaluation of focused retrieval where both the retrieval results and the feedback are focused; the evaluation was performed over a closed set of documents and complete focused assessments; the evaluation was performed over executable implementations of relevance feedback algorithms; and finally, the entire evaluation platform is reusable. We present the evaluation methodology, its implementation, and experimental results obtained for nine submissions from three participating organisations.
Abstract:
This article looks at a Chinese Web 2.0 original literature site, Qidian, in order to show the coevolution of market and non-market initiatives. The analytic framework of social network markets (Potts et al., 2008) is employed to analyse the motivations of publishing original literature works online and to understand the support mechanisms of the site, which encourage readers’ willingness to pay for user-generated content. The co-existence of socio-cultural and commercial economies and their impact on the successful business model of the site are illustrated in this case. This article extends the concept of social network markets by proposing the existence of a ripple effect of social network markets through convergence between PC and mobile internet, traditional and internet publishing, and between publishing and other cultural industries. It also examines the side effects of social network markets, and the role of market and non-market strategies in addressing the issues.
Abstract:
This paper presents the details of experimental studies on the shear strength of a recently developed, cold-formed steel beam known as LiteSteel Beam (LSB) with web openings. The innovative LSB sections have the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication processes from a single strip of high strength steel. They combine the stability of hot-rolled steel sections with the high strength to weight ratio of conventional cold-formed steel sections. The LSB sections are commonly used as flexural members in the building industry. Current practice in flooring systems is to include openings in the web element of floor joists or bearers so that building services can be located within them. The shear behaviour of LSBs with web openings is more complicated, and their shear strengths are considerably reduced by the presence of the openings. However, limited research has been undertaken on the shear behaviour and strength of LSBs with web openings. Therefore a detailed experimental study involving 26 shear tests was undertaken to investigate the shear behaviour and strength of different LSB sections. Simply supported test specimens of LSBs with an aspect ratio of 1.5 were loaded at midspan until failure. This paper presents the details of this experimental study and the results. Experimental results showed that the current design rules in cold-formed steel structures design codes (AS/NZS 4600) [1] are very conservative for the shear design of LSBs with web openings. Improved design equations have been proposed for the shear strength of LSBs with web openings based on experimental results from this study.
Abstract:
This paper presents the details of numerical studies on the shear strength of a recently developed, cold-formed steel channel beam known as LiteSteel Beam (LSB) with web openings. The LSB sections are commonly used as floor joists and bearers in residential, industrial and commercial buildings. In these applications they often include web openings for the purpose of locating services. This has raised concerns over the shear capacity of LSB floor joists and bearers. Therefore experimental and numerical studies were undertaken to investigate the shear behavior and strength of LSBs with web openings. In this research, finite element models of LSBs with web openings in shear were developed to simulate the shear behavior of LSBs. It was found that currently available design equations are conservative or unconservative for the shear design of LSBs with web openings. Improved design equations have been proposed for the shear capacity of LSBs with web openings based on both experimental and numerical study results.
Abstract:
The aim of the exercise presented in this paper was to develop a Simulink Matlab control system of a heavy vehicle suspension. The objective facilitated by this outcome was a working model of a heavy vehicle (HV) suspension that could be used for future research. A working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck; it presents less risk should something go wrong and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed validation of its output against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the noted errors when using the computer models in this future work.
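The kind of second-order linear time-invariant model described above can be sketched as a minimal discrete-time simulation. The mass, damping and stiffness values below are illustrative assumptions, not the paper's identified parameters, and the integration scheme is a simple stand-in for the Simulink solver:

```python
def simulate_2nd_order(force, m=400.0, c=2000.0, k=20000.0, dt=0.001):
    """Semi-implicit Euler integration of m*x'' + c*x' + k*x = F(t).

    force: sequence of input force samples F(t), one per time step dt.
    Returns the displacement x at each step. All parameter values are
    illustrative, not taken from the paper.
    """
    x, v = 0.0, 0.0
    out = []
    for F in force:
        a = (F - c * v - k * x) / m  # acceleration from the equation of motion
        v += a * dt                  # update velocity first (semi-implicit)
        x += v * dt                  # then position
        out.append(x)
    return out

# A constant 200 N step input should settle at x = F/k = 200/20000 = 0.01 m.
response = simulate_2nd_order([200.0] * 5000)
```

Feeding recorded input signals through such a model and comparing its output against the recorded outputs is the validation pattern the abstract describes.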
Abstract:
Stigmergy is a biological term used when discussing insect or swarm behaviour; it describes a model of communication through the environment, separate from the artefacts or agents themselves. This phenomenon is demonstrated in the behaviour of ants following pheromone trails during food gathering, or similarly by termites in their mound-building process. What is interesting about this mechanism is that highly organized societies are achieved without any apparent management structure. Stigmergic behaviour is implicit in the Web, where the volume of users provides self-organizing and self-contextualization of content in sites which facilitate collaboration. However, the majority of content is generated by a minority of Web participants. A significant contribution from this research would be a model of Web stigmergy, identifying virtual pheromones and their importance in the collaborative process. This paper explores how exploiting stigmergy could provide a valuable mechanism for identifying and analyzing online user behaviour, recording actionable knowledge otherwise lost in existing Web interaction dynamics. Ultimately this might assist in building better collaborative Web sites.
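The deposit-and-evaporate dynamic behind pheromone trails can be sketched in a few lines. This is an illustrative toy, not the paper's model of Web stigmergy: each user interaction deposits "virtual pheromone" on a page, and all levels decay over time, so persistently popular content stands out while stale signals fade:

```python
class PheromoneMap:
    """Toy stigmergic scoring of web content (illustrative assumption)."""

    def __init__(self, evaporation=0.1):
        self.evaporation = evaporation  # fraction lost per time step
        self.levels = {}                # page id -> pheromone level

    def deposit(self, page, amount=1.0):
        """A user interaction reinforces the trail to this page."""
        self.levels[page] = self.levels.get(page, 0.0) + amount

    def evaporate(self):
        """Decay all trails; unreinforced pages gradually vanish."""
        for page in self.levels:
            self.levels[page] *= (1.0 - self.evaporation)

    def strongest(self):
        """The page the 'colony' of users currently points at."""
        return max(self.levels, key=self.levels.get)
```

The interplay of reinforcement and evaporation is what lets a leaderless population converge on shared structure, which is the property the paper proposes to exploit.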
Abstract:
know personally. They also communicate with other members of the network who are friends of their friends, and who may become friends within their own network. They share experiences and opinions within the social network about an item, which may be a product or a service. The user faces the problem of evaluating trust in a service or service provider before making a choice. Opinions, reputations and recommendations will influence users' choice and usage of online resources. Recommendations may be received through a chain of friends of friends, so the problem for the user is to be able to evaluate various types of trust recommendations and reputations. Such opinions and recommendations strongly influence whether other members of the community choose to use the item. Users share information on the level of trust they explicitly assign to other users. This trust can be used when deciding whether to act on a recommendation. When there is no direct connection to the recommending user, propagated trust can be useful.
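Propagated trust along a chain of friends can be sketched as a path search. The multiplicative combination rule below is a common convention and an assumption here, not necessarily this paper's model: trust along a chain is the product of per-edge trust values, and the strongest chain wins.

```python
def propagated_trust(trust, source, target):
    """Highest product of trust values over any acyclic path.

    trust: dict mapping (user, friend) -> explicit trust in [0, 1].
    The multiplicative propagation rule is an illustrative assumption.
    """
    best = 0.0
    stack = [(source, 1.0, {source})]  # (current user, trust so far, visited)
    while stack:
        node, acc, seen = stack.pop()
        for (u, v), t in trust.items():
            if u == node and v not in seen:
                if v == target:
                    best = max(best, acc * t)   # complete chain found
                else:
                    stack.append((v, acc * t, seen | {v}))
    return best

# a trusts b (0.9), b trusts c (0.8), a trusts c directly only 0.5:
# the friend-of-friend chain a -> b -> c yields 0.72, beating the direct link.
chain = {('a', 'b'): 0.9, ('b', 'c'): 0.8, ('a', 'c'): 0.5}
```

Note that the indirect chain can outweigh a weak direct opinion, which is exactly the situation the abstract highlights when no strong direct connection to the recommender exists.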
Abstract:
This paper is a summary of a PhD thesis proposal. It will explore how the Web 2.0 platform could be applied to enable and facilitate the large-scale participation, deliberation and collaboration of both governmental and non-governmental actors in an ICT supported policy process. The paper will introduce a new democratic theory and a Web 2.0 based e-democracy platform, and demonstrate how different actors would use the platform to develop and justify policy issues.
Abstract:
This paper considers the problem of building a software architecture for a human-robot team. The objective of the team is to build a multi-attribute map of the world by performing information fusion. A decentralized approach to information fusion is adopted to achieve the system properties of scalability and survivability. Decentralization imposes constraints on the design of the architecture and its implementation. We show how a Component-Based Software Engineering approach can address these constraints. The architecture is implemented using Orca – a component-based software framework for robotic systems. Experimental results from a deployed system comprised of an unmanned air vehicle, a ground vehicle, and two human operators are presented. A section on the lessons learned is included which may be applicable to other distributed systems with complex algorithms. We also compare Orca to the Player software framework in the context of distributed systems.
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault prone and not fault prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data, Naive Bayes and the Support Vector Machine, and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
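The rank-sum idea described above can be sketched as follows. This is a hedged reading of the one-sentence description, not the thesis's implementation: for each software metric, per-class bin densities are reduced to ranks, and the class with the lowest total rank over all features wins. The binning scheme and tie handling are illustrative assumptions.

```python
import bisect

def rank_sum_classify(train, labels, point, bins=5):
    """Sketch of a rank-sum classifier over binned feature densities.

    train: list of metric vectors; labels: 0/1 class per vector;
    point: metric vector to classify. Binning and ties are assumptions.
    """
    totals = {0: 0.0, 1: 0.0}
    for f in range(len(point)):
        values = [x[f] for x in train]
        lo, hi = min(values), max(values)
        edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
        b = bisect.bisect(edges, point[f])  # bin the test point falls into
        density = {}
        for cls in (0, 1):
            members = [x[f] for x, y in zip(train, labels) if y == cls]
            in_bin = sum(1 for v in members if bisect.bisect(edges, v) == b)
            density[cls] = in_bin / max(len(members), 1)
        # denser class gets rank 1, the other rank 2; ties split evenly
        if density[0] == density[1]:
            totals[0] += 1.5
            totals[1] += 1.5
        else:
            win = 0 if density[0] > density[1] else 1
            totals[win] += 1
            totals[1 - win] += 2
    return min(totals, key=totals.get)
```

Summing ranks rather than raw densities makes each feature contribute equally regardless of scale, which is a plausible motivation for the abstraction the abstract names.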
Abstract:
Countless factors affect the inner workings of a city, so in an attempt to gain an understanding of place and to make sound decisions, planners need to utilize decision support systems (DSS) or planning support systems (PSS). PSS were originally developed as DSS in academia for experimental purposes, but like many other technologies, they became one of the most innovative technologies in parallel to rapid developments in software engineering as well as developments and advances in networks and hardware. Particularly in the last decade, awareness of PSS has been dramatically heightened by the increasing demand for a better, more reliable and more transparent decision-making process (Klosterman, Siebert, Hoque, Kim, & Parveen, 2003). Urban planning as an activity has quite a different perspective from the PSS point of view: the unique nature of planning requires that the spatial dimension be considered within the context of PSS. Additionally, the rapid changes in socio-economic structure cannot be easily monitored or controlled without an effective PSS.
Abstract:
Increasingly, almost everything we do in our daily lives is being influenced by information and communications technologies (ICTs) including the Internet. The task of governance is no exception with an increasing number of national, state, and local governments utilizing ICTs to support government operations, engage citizens, and provide government services. As with other things, the process of governance is now being prefixed with an “e”. E-governance can range from simple Web sites that convey basic information to complex sites that transform the customary ways of delivering all sorts of government services. In this respect local e-government is the form of e-governance that specifically focuses on the online delivery of suitable local services by local authorities. In practice local e-government reflects four dimensions, each one dealing with the functions of government itself. The four are: (a) e-services, the electronic delivery of government information, programs, and services often over the Internet; (b) e-management, the use of information technology to improve the management of government. This might range from streamlining business processes to improving the flow of information within government departments; (c) e-democracy the use of electronic communication vehicles, such as e-mail and the Internet, to increase citizen participation in the public decision-making process; (d) e-commerce, the exchange of money for goods and services over the Internet which might include citizens paying taxes and utility bills, renewing vehicle registrations, and paying for recreation programs, or government buying office supplies and auctioning surplus equipment (Cook, LaVigne, Pagano, Dawes, & Pardo, 2002). Commensurate with the rapid increase in the process of developing e-governance tools, there has been an increased interest in benchmarking the process of local e-governance. 
This benchmarking, which includes the processes involved in e-governance as well as the extent of e-governance adoption or take-up, is important as it allows for improved processes and enables government agencies to move towards world best practice. It is within this context that this article discusses benchmarking local e-government. It brings together a number of discussions regarding the significance of benchmarking, best practices and actions for local e-government, and key elements of a successful local e-government project.
Abstract:
Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem. Recommender systems are one popular personalisation tool for helping users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affect the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinions, and has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy based user profiling approach, a taxonomy based user profiling approach, and a hybrid user profiling approach based on folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user and item based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to effectively using the wisdom of crowds and experts to help users overcome information overload through more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
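A folksonomy-based user profile of the kind discussed above can be sketched simply: a user's profile is the bag of tags they applied, and user similarity is the cosine between tag-count vectors. The raw-count weighting and cosine measure are common illustrative choices, not the thesis's exact method:

```python
import math

def tag_profile(annotations, user):
    """Build a tag-frequency profile from (user, item, tag) triples."""
    profile = {}
    for u, _item, tag in annotations:
        if u == user:
            profile[tag] = profile.get(tag, 0) + 1
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse tag-count profiles."""
    dot = sum(p[t] * q[t] for t in p if t in q)
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Users with overlapping tag vocabularies come out similar, so a user-based
# collaborative filter could recommend items tagged by their nearest peers.
triples = [('u1', 'i1', 'python'), ('u1', 'i2', 'web'),
           ('u2', 'i3', 'python'), ('u2', 'i4', 'web'),
           ('u3', 'i5', 'cooking')]
```

The tag-noise problems the abstract lists (synonyms, ambiguity, personal tags) show up directly in this sketch: 'web' and 'www' would count as unrelated dimensions, which is what the proposed taxonomy-assisted profiling aims to repair.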