355 results for: web technologies, re-engineering, enterprise software
Abstract:
John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture. Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster - a post he held while writing An Evolutionary Architecture in 1995 - and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984. In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria. He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form. Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results dependent on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
Abstract:
The majority of the world’s citizens now live in cities. Although urban planning can thus be thought of as a field with significant ramifications for the human condition, many practitioners feel that it has reached a crossroads in thought leadership between traditional practice and a new, more participatory and open approach. Conventional ways to engage people in participatory planning exercises are limited in reach and scope. At the same time, socio-cultural trends and technology innovation offer opportunities to re-think the status quo in urban planning. Neogeography introduces tools and services that allow non-geographers to use advanced geographical information systems. Similarly, is there potential for the emergence of a neo-planning paradigm in which urban planning is carried out through active civic engagement aided by Web 2.0 and new media technologies, thus redefining the role of practising planners? This paper traces a number of evolving links between urban planning, neogeography and information and communication technology. Two significant trends – participation and visualisation – with direct implications for urban planning are discussed. Combining advanced participation and visualisation features, the popular virtual reality environment Second Life is then introduced as a test bed to explore a planning workshop and an integrated software event framework to assist narrative generation. We discuss an approach to harness and analyse narratives using virtual reality logging to make transparent how users understand and interpret proposed urban designs.
Abstract:
Purpose – This paper compares the experiential consumption values that motivate consumer choice to purchase online for both male and female purchasers and non-purchasers. Design/methodology/approach – Using the theory of consumption value, the study examines gendered perceptions of the functional, social and conditional value of using a virtual consumption setting for purchasing. Data was collected through an online survey and analysed using multiple discriminant analysis to determine meaningful differences between male and female purchasers and non-purchasers. Findings – The findings show that male online purchasers are discriminated from female purchasers by social value and from male non-purchasers by conditional value. Female purchasers are discriminated from male purchasers by functional value and from female non-purchasers by social value. Female non-purchasers are discriminated from female purchasers by conditional value. Male non-purchasers are discriminated from male purchasers by functional and social value. Research limitations/implications – Limitations include using an Internet survey and an Australian sample, which may impact the generalisability of the findings to a wider population of Internet users. Future research should involve replication of the study in a country more or less developed in terms of gender composition of Internet users to extend the generalisability of the findings. Additionally, researchers should examine whether other dimensions of consumption value, such as social influence through on- and off-line communication networks, may influence consumer choice to purchase online. Practical implications – The study provides practical implications for marketers to leverage consumption values that influence male and female consumers’ choice to purchase online and then drive their behaviour online through integrated marketing campaigns that involve both on- and offline strategies. Originality/value – The research makes an original contribution to the consumer behaviour literature as, to date, no research has been found that undertakes such a comprehensive gender-based comparison of the perceived value of using a virtual consumption setting for purchasing.
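As a rough illustration of the kind of discriminant analysis described above (a sketch only: the toy ratings, column names and group labels below are invented, not the study's survey data), the gender-and-purchaser groups could be separated on the three consumption-value dimensions with scikit-learn:

    import pandas as pd
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Invented 1-7 ratings standing in for the survey responses.
    survey = pd.DataFrame({
        "functional_value":  [6, 5, 2, 3, 6, 2, 5, 3],
        "social_value":      [5, 6, 3, 2, 2, 5, 6, 2],
        "conditional_value": [4, 5, 2, 2, 5, 3, 4, 2],
        "group": ["male_purchaser", "female_purchaser",
                  "male_non_purchaser", "female_non_purchaser",
                  "male_purchaser", "female_purchaser",
                  "male_non_purchaser", "female_non_purchaser"],
    })

    features = ["functional_value", "social_value", "conditional_value"]
    lda = LinearDiscriminantAnalysis()
    lda.fit(survey[features], survey["group"])

    # The loadings of each discriminant function hint at which value dimension
    # separates which groups (cf. social vs. functional vs. conditional value).
    print(pd.DataFrame(lda.scalings_, index=features))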
Abstract:
SCOOT is a hybrid event combining the web, mobile devices, public displays and cultural artifacts across multiple public parks and museums in an effort to increase the perceived and actual access to cultural participation by everyday people. The research field is locative game design and the context was the re-invigoration of public sites as a means for exposing the underlying histories of sites and events. The key question was how to use game play technologies and processes within everyday places in ways that best promote playful and culturally meaningful experiences whilst shifting the loci of control away from commercial and governmental powers. The research methodology was primarily practice-led, underpinned by ethnographic and action research methods. In 2004 SCOOT established itself as a national leader in the field by demonstrating innovative methods for stimulating rich interactions across diverse urban places using technically augmented game play. Despite creating a sophisticated range of software and communication tools, SCOOT most dramatically highlighted the role of the ubiquitous mobile phone in facilitating socially beneficial experiences. Through working closely with the SCOOT team, collaborating organisations developed important new knowledge around the potential of new technologies and processes for motivating, sustaining and reinvigorating public engagement. Since 2004, SCOOT has been awarded $600,000 in competitive and community funding as well as extensive in-kind support from partner organisations such as Arts Victoria, National Gallery of Victoria, Melbourne Museum, Australian Centre for the Moving Image, Federation Square, Art Centre of Victoria, The State Library of Victoria, Brisbane River Festival, State Library of Queensland, Brisbane Maritime Museum, Queensland University of Technology, and Victoria University.
Abstract:
This paper explores how game authoring tools can teach processes that transform everyday places into engaging learning spaces. It discusses the motivation inherent in playing games and creating games for others, and how this stimulates an iterative process of creation and reflection and evokes a natural desire to engage in learning. The use of MiLK at the Adelaide Botanic Gardens is offered as a case in point. MiLK is an authoring tool that allows students and teachers to create and share SMS games for mobile phones. A group of South Australian high school students used MiLK to play a game, create their own games and play each other’s games during a day at the gardens. This paper details the learning processes involved in these activities and how the students, without prompting, reflected on their learning, conducted peer assessment, and engaged in a two-way discussion with their teacher about new technologies and their implications for learning. The paper concludes with a discussion of the needs and requirements of 21st century learners and how MiLK can support constructivist and connectivist teaching methods that engage learners and will produce an appropriately skilled future workforce.
Abstract:
The role of sustainability in urban design is becoming increasingly important as Australia’s cities continue to grow, putting pressure on existing infrastructure such as water, energy and transport. To optimise an urban design, many different aspects, such as water, energy, transport and cost, need to be taken into account in an integrated way. Integrated software applications that assess urban designs on a large variety of aspects are scarcely available. With the upcoming next generation of the Internet, often referred to as the Semantic Web, data can become more machine-interpretable through the development of ontologies that can support integrated software systems. Software systems can use these ontologies to perform an intelligent task such as assessing an urban design on a particular aspect. When ontologies of different applications are aligned, they can share information, resulting in interoperability. Inference, such as compliance checks and classification, can support aligning the ontologies. A proof-of-concept implementation has been made to demonstrate and validate the usefulness of machine-interpretable ontologies for urban designs.
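By way of illustration only (the vocabulary, property names and threshold below are assumptions, not the project's actual ontology), a machine-interpretable urban-design aspect and a simple compliance-style check might look like this with rdflib:

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import XSD

    URBAN = Namespace("http://example.org/urban#")  # illustrative namespace
    g = Graph()
    g.add((URBAN.PrecinctA, RDF.type, URBAN.UrbanDesign))
    g.add((URBAN.PrecinctA, URBAN.annualWaterDemand,
           Literal(120000, datatype=XSD.integer)))

    # Compliance-style check: flag designs whose water demand exceeds a threshold.
    query = """
        PREFIX urban: <http://example.org/urban#>
        SELECT ?design ?demand WHERE {
            ?design a urban:UrbanDesign ;
                    urban:annualWaterDemand ?demand .
            FILTER (?demand > 100000)
        }"""
    for design, demand in g.query(query):
        print(f"{design} exceeds the assumed water-demand threshold: {demand}")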
Abstract:
This research investigates the prevalence of sports-related terms among the Web sites of the world’s leading companies, the Fortune Global 500. An automated process copied about four gigabytes of textual data, around 70 million words, from their sites. The subsequent analysis revealed regional and industry differences in the distribution of sports-related terms, the popularity of tennis stars and few references to sports stars, especially in Asia.
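A minimal sketch of this kind of term counting (the site texts and term list below are invented; the actual study crawled roughly 70 million words of company web content):

    import re
    from collections import Counter

    site_texts = {
        "CompanyA": "Our brand ambassadors include a leading tennis star and a golf champion.",
        "CompanyB": "Quarterly results, logistics updates and procurement notices.",
    }
    SPORT_TERMS = ["tennis", "golf", "football", "olympic", "sponsorship"]

    counts = {name: Counter() for name in site_texts}
    for name, text in site_texts.items():
        lowered = text.lower()
        for term in SPORT_TERMS:
            counts[name][term] = len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))

    for name, c in counts.items():
        print(name, c.most_common(3))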
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, in order to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or estimations of the size of the collections. In addition, collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, which commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection) using the standard R-value metric, with encouraging results. ReDDE is the current state-of-the-art collection selection method, which relies on collection size estimation. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as PubMed and that of the U.S. Department of Agriculture, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
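A hedged sketch of the underlying collection-selection idea: describe each engine by subject coverage rather than term statistics, then rank engines for a query mapped onto those subjects (the engine names, subjects and coverage figures below are invented for illustration):

    # Fraction of sampled content classified under each ontology subject.
    engine_subjects = {
        "EngineA": {"medicine": 0.40, "agriculture": 0.05, "computing": 0.20},
        "EngineB": {"medicine": 0.05, "agriculture": 0.50, "computing": 0.10},
    }

    def rank_engines(query_subjects, engines):
        """Score each engine by how well it covers the query's subjects."""
        scores = {
            name: sum(coverage.get(s, 0.0) for s in query_subjects)
            for name, coverage in engines.items()
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(rank_engines({"medicine", "computing"}, engine_subjects))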
Abstract:
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service that meets the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user’s interest. Considering the semantic relationships of the words used in describing the services, as well as the use of input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web Service Description Language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, which is the system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
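The all-pairs shortest-path step of the link-analysis phase can be pictured with a small Floyd-Warshall pass over an assumed service graph (the service names and link costs below are illustrative, not taken from the thesis):

    INF = float("inf")
    services = ["S1", "S2", "S3", "S4"]
    # cost[i][j]: cost of chaining service i into service j (INF = not linkable).
    cost = [
        [0,   2,   INF, 7],
        [INF, 0,   3,   INF],
        [INF, INF, 0,   1],
        [INF, INF, INF, 0],
    ]

    n = len(services)
    dist = [row[:] for row in cost]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]

    # Cheapest composition S1 -> S4 goes via S2 and S3 (2 + 3 + 1 = 6, not 7).
    print("Minimum composition cost S1 -> S4:", dist[0][3])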
Abstract:
Despite the increased offering of online communication channels to support web-based retail systems, there is limited marketing research that investigates how these channels act singly, or in combination with offline channels, to influence an individual's intention to purchase online. If the marketer's strategy is to encourage online transactions, this requires a focus on consumer acceptance of the web-based transaction technology, rather than the purchase of the products per se. The exploratory study reported in this paper examines normative influences from referent groups in an individual's on and offline social communication networks that might affect their intention to use online transaction facilities. The findings suggest that for non-adopters, there is no normative influence from referents in either network. For adopters, one online and one offline referent norm positively influenced this group's intentions to use online transaction facilities. The implications of these findings are discussed together with future research directions.
Abstract:
Business Service Management describes the emerging discipline dedicated to the IT-enabled management of services as corporate assets. Business Service Management deals with the service orientation of the organisation and the provisioning and use of business services. The term business service describes an autonomous transformational capability that is offered to and consumed by external or internal customers for their benefit. The prefix ‘business’ stresses that such a service has a market value, requires the ability to be managed internally as a corporate asset, and that its implementation is technology-agnostic. While business services (or so-called capabilities) have attracted the attention of many vendors and organisations, a lack of understanding of the activities required for the successful management of such business services remains a critical issue. In order to fill this gap, a framework consisting of Service Lifecycle Management, Service Value Management, Service Relationship Management and Service Enablement is proposed. This framework has the potential to provide organisations with the much-needed guidance in their attempts to convert current IT-driven service initiatives into successful service-centric business models.
Abstract:
Although the benefits of service orientation are prevalent in literature, a review, analysis, and evaluation of the 30 existing service analysis approaches presented in this paper have shown that a comprehensive approach to the identification and analysis of both business and supporting software services is missing. Based on this evaluation of existing approaches and additional sources, we close this gap by proposing an integrated, consolidated approach to business and software service analysis that combines and extends the strengths of the examined methodologies.
Abstract:
Although the service-oriented paradigm has been well established in the technical domain for quite some time now, service governance is still considered a research gap. To ensure adequate governance, there is a necessity to manage services as first-class assets throughout the lifecycle. Now that the concept of service orientation is also increasingly applied on the business level to structure an organisation’s capabilities, the problem has become an even bigger challenge. This paper presents a generic business and software service lifecycle and aligns it with the common management layers in organisations. Using service analysis as an example, it moreover illustrates how activities in the service lifecycle may vary on lower levels of granularity depending on the focus on business or software services.
Abstract:
This report demonstrates:
• the development of software agents for data mining;
• linking data mining to the building model in virtual environments;
• linking knowledge development with the building model in virtual environments;
• a demonstration of the software agents for data mining;
• population of the model with maintenance data.
Abstract:
This report presents the demonstration of a software agent prototype system for improving maintenance management [AIMM], including:
• developing and implementing a user-focused approach for mining the maintenance data of buildings;
• refining the development of a multi-agent system for data mining in virtual environments (Active Worlds) by developing and implementing a filtering agent that operates on the results obtained from applying data mining techniques to the maintenance data;
• integrating the filtering agent within the multi-agent system in an interactive, networked, multi-user 3D virtual environment;
• populating the maintenance data and discovering new rules of knowledge.
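As a rough, hypothetical sketch of the mining-plus-filtering idea described above (the maintenance records and confidence threshold are invented; this is not the AIMM prototype itself):

    from collections import Counter
    from itertools import combinations

    # Toy maintenance records: sets of co-occurring attributes per work order.
    records = [
        {"air_handling_unit", "filter_blocked", "summer"},
        {"air_handling_unit", "filter_blocked", "winter"},
        {"lift", "door_fault", "summer"},
    ]

    item_counts = Counter()
    pair_counts = Counter()
    for record in records:
        item_counts.update(record)
        pair_counts.update(combinations(sorted(record), 2))

    def filtering_agent(min_confidence=0.8):
        """Keep only rules 'lhs -> rhs' whose confidence meets the threshold."""
        for (a, b), n_ab in pair_counts.items():
            for lhs, rhs in ((a, b), (b, a)):
                confidence = n_ab / item_counts[lhs]
                if confidence >= min_confidence:
                    yield lhs, rhs, confidence

    for lhs, rhs, confidence in filtering_agent():
        print(f"{lhs} -> {rhs} (confidence {confidence:.2f})")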