355 results for Web technologies, reengineering, enterprise software
Abstract:
Qualitative research methods require transparency to ensure the ‘trustworthiness’ of the data analysis. The intricate processes of organizing, coding and analyzing the data are often rendered invisible in the presentation of the research findings, which requires a ‘leap of faith’ from the reader. Computer-assisted data analysis software can be used to make the research process more transparent, without sacrificing rich, interpretive analysis by the researcher. This article describes in detail how one software package was used in a poststructural study to link and code multiple forms of data to four research questions for fine-grained analysis. This description will be useful for researchers seeking to use qualitative data analysis software as an analytic tool.
Abstract:
Current software tools for documenting and developing models of buildings focus on supporting a single user who is a specialist in the specific software used within their own discipline. Extensions to these tools for use by teams maintain the single-discipline view and focus on version and file management. There is a perceived need in industry for tools that specifically support collaboration among individuals from multiple disciplines with both a graphical representation of the design and a persistent data model. This project involves the development of a prototype of such a software tool. We have identified multi-user 3D virtual worlds as an appropriate software base for the development of a collaborative design tool. These worlds are inherently multi-user and therefore directly support collaboration through a sense of awareness of others in the virtual world and of their location within it, and they provide various channels for direct and indirect communication. Such software platforms also provide a 3D building and modelling environment that can be adapted to the needs of the building and construction industry. DesignWorld is a prototype system for collaborative design developed by augmenting the Second Life (SL) commercial software platform with a collection of web-based tools for communication and design. Agents manage communication between the 3D virtual world and the web-based tools. In addition, agents maintain a persistent external model of designs in the 3D world which can be augmented with data such as relationships, disciplines and versions not usually associated with 3D virtual worlds but required in design scenarios.
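To make the agent role concrete, the Python sketch below mirrors the idea of an agent that keeps a persistent, versioned external model of objects edited in the 3D world and enriches it with disciplines and relationships. All class, method and object names are illustrative assumptions and do not reflect the actual DesignWorld or Second Life APIs.

# Hypothetical sketch of agent-mediated synchronisation: the agent watches for
# object changes in the 3D world and mirrors them into a persistent design
# model enriched with discipline and version metadata.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DesignElement:
    element_id: str          # identifier of the object in the 3D world
    geometry: dict           # raw geometry exported from the virtual world
    discipline: str          # e.g. "architecture", "structural", "services"
    version: int = 1
    relationships: List[str] = field(default_factory=list)

class PersistenceAgent:
    """Mirrors 3D-world objects into an external, versioned design model."""

    def __init__(self):
        self.model: Dict[str, DesignElement] = {}

    def on_object_changed(self, element_id: str, geometry: dict, discipline: str):
        # Called whenever the 3D world reports a create/update event.
        if element_id in self.model:
            existing = self.model[element_id]
            existing.geometry = geometry
            existing.version += 1          # keep a simple version history
        else:
            self.model[element_id] = DesignElement(element_id, geometry, discipline)

    def link(self, a: str, b: str):
        # Record a relationship (e.g. "wall supports slab") not natively
        # representable in the virtual world.
        self.model[a].relationships.append(b)

# Example: two disciplines editing the same shared model.
agent = PersistenceAgent()
agent.on_object_changed("wall-01", {"type": "box", "h": 3.0}, "architecture")
agent.on_object_changed("slab-02", {"type": "plate", "t": 0.2}, "structural")
agent.link("wall-01", "slab-02")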
Abstract:
Durability issues of reinforced concrete construction cost millions of dollars in repair or demolition. Identification of the causes of degradation and prediction of service life based on experience, judgement and local knowledge have limitations in addressing all the associated issues. The objective of this CRC CI research project is to develop a tool that will assist in the interpretation of the symptoms of degradation of concrete structures, estimate residual capacity and recommend cost-effective solutions. This report documents the research undertaken in connection with this project. The primary focus of this research is the case studies provided by the Queensland Department of Main Roads (QDMR) and Brisbane City Council (BCC). These organisations are responsible for managing a huge volume of bridge infrastructure in the state of Queensland, Australia. The main issue to be addressed in managing these structures is the deterioration of bridge stock leading to a reduction in service life. Other issues such as political backlash, public inconvenience, and approach land acquisitions are crucial but are not within the scope of this project. It is to be noted that deterioration is accentuated by aggressive environments such as salt water and acidic or sodic soils. Carse (2005) has noted that road authorities need to invest their first dollars in understanding their local concretes and optimising the durability performance of structures, and then look at potential remedial strategies.
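As a purely illustrative aside, the Python fragment below sketches the kind of symptom-to-mechanism interpretation such a tool performs, using a handful of rule-of-thumb associations. The rules and vocabulary are invented placeholders, not the project's diagnostic knowledge base.

# Illustrative sketch only: simple rules mapping observed symptoms and exposure
# conditions to likely degradation mechanisms. The rules below are placeholders,
# not the project's actual diagnostic rules.
def likely_mechanisms(symptoms, environment):
    findings = []
    if "rust staining" in symptoms and "salt water" in environment:
        findings.append("chloride-induced reinforcement corrosion")
    if "map cracking" in symptoms:
        findings.append("alkali-silica reaction")
    if "surface scaling" in symptoms and "acidic soil" in environment:
        findings.append("acid/sulfate attack on the cement matrix")
    return findings or ["no mechanism matched; further inspection data needed"]

print(likely_mechanisms({"rust staining", "map cracking"}, {"salt water"}))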
Abstract:
Computational biology increasingly demands the sharing of sophisticated data and annotations between research groups. Web 2.0 style sharing and publication requires that biological systems be described in well-defined, yet flexible and extensible formats which enhance exchange and re-use. In contrast to many of the standards for exchange in the genomic sciences, descriptions of biological sequences show a great diversity in format and function, impeding the definition and exchange of sequence patterns. In this presentation, we introduce BioPatML, an XML-based pattern description language that supports a wide range of patterns and allows the construction of complex, hierarchically structured patterns and pattern libraries. BioPatML unifies the diversity of current pattern description languages and fills a gap in the set of XML-based description languages for biological systems. We discuss the structure and elements of the language, and demonstrate its advantages on a series of applications, showing lightweight integration between the BioPatML parser and search engine, and the SilverGene genome browser. We conclude by describing our web site for large-scale pattern sharing, and our efforts to seed this repository.
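Since the abstract does not reproduce the schema itself, the following Python sketch only illustrates the general idea of an XML pattern description driving a sequence search: a hypothetical motif element with a mismatch tolerance is parsed and applied with a naive sliding-window scan. Element and attribute names are assumptions, not the actual BioPatML vocabulary.

# Minimal sketch of consuming an XML pattern description in the spirit of
# BioPatML. The element and attribute names are assumptions for illustration.
import xml.etree.ElementTree as ET

pattern_xml = """
<Pattern name="promoter-like">
    <Motif sequence="TATAAT" maxMismatches="1"/>
</Pattern>
"""

def load_motif(xml_text):
    root = ET.fromstring(xml_text)
    motif = root.find("Motif")
    return motif.get("sequence"), int(motif.get("maxMismatches"))

def search(sequence, motif, max_mismatches):
    """Return start positions where the motif matches with at most
    max_mismatches substitutions (a naive sliding-window scan)."""
    hits = []
    for i in range(len(sequence) - len(motif) + 1):
        window = sequence[i:i + len(motif)]
        mismatches = sum(1 for a, b in zip(window, motif) if a != b)
        if mismatches <= max_mismatches:
            hits.append(i)
    return hits

motif, k = load_motif(pattern_xml)
print(search("GCGTATATTACGTTATAATGC", motif, k))   # -> [3, 13]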
Abstract:
The requirement to monitor the rapid pace of environmental change due to global warming and to human development is producing large volumes of data but placing much stress on the capacity of ecologists to store, analyse and visualise that data. To date, much of the data has been provided by low level sensors monitoring soil moisture, dissolved nutrients, light intensity, gas composition and the like. However, a significant part of an ecologist’s work is to obtain information about species diversity, distributions and relationships. This task typically requires the physical presence of an ecologist in the field, listening and watching for species of interest. It is an extremely difficult task to automate because of the higher order difficulties in bandwidth, data management and intelligent analysis if one wishes to emulate the highly trained eyes and ears of an ecologist. This paper is concerned with just one part of the bigger challenge of environmental monitoring – the acquisition and analysis of acoustic recordings of the environment. Our intention is to provide helpful tools to ecologists – tools that apply information technologies and computational technologies to all aspects of the acoustic environment. The on-line system which we are building in conjunction with ecologists offers an integrated approach to recording, data management and analysis. The ecologists we work with have different requirements and therefore we have adopted the toolbox approach, that is, we offer a number of different web services that can be concatenated according to need. In particular, one group of ecologists is concerned with identifying the presence or absence of species and their distributions in time and space. Another group, motivated by legislative requirements for measuring habitat condition, is interested in summary indices of environmental health. In both cases, the key issues are scalability and automation.
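As an illustration of the "summary index" style of analysis mentioned for the second group, the Python sketch below computes a crude normalised spectral-entropy index over an audio signal. The index definition and the synthetic test signal are stand-ins chosen for illustration, not the project's actual indices.

# Crude acoustic index sketch: average normalised spectral entropy per frame.
import numpy as np

def spectral_entropy_index(samples, frame_len=1024):
    """Average normalised spectral entropy across frames (near 0 for a pure
    tone, near 1 for broadband noise); a rough proxy for acoustic evenness."""
    entropies = []
    for start in range(0, len(samples) - frame_len, frame_len):
        frame = samples[start:start + frame_len]
        power = np.abs(np.fft.rfft(frame)) ** 2
        p = power / power.sum()
        entropy = -np.sum(p * np.log2(p + 1e-12))
        entropies.append(entropy / np.log2(len(p)))   # normalise to [0, 1]
    return float(np.mean(entropies))

# Synthetic test signal: a 2 kHz tone (stand-in for a call) buried in noise.
rate = 22050
t = np.arange(0, 5.0, 1.0 / rate)
signal = 0.3 * np.sin(2 * np.pi * 2000 * t) + 0.1 * np.random.randn(len(t))
print(round(spectral_entropy_index(signal), 3))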
Abstract:
Alvin Toffler’s image of the prosumer (1970, 1980, 1990) continues to significantly influence our understanding of the user-led, collaborative processes of content creation which are today labelled “social media” or “Web 2.0”. A closer look at Toffler’s own description of his prosumer model reveals, however, that it remains firmly grounded in the mass media age: the prosumer is clearly not the self-motivated creative originator and developer of new content which can today be observed in projects ranging from open source software through Wikipedia to Second Life, but simply a particularly well-informed, and therefore both particularly critical and particularly active, consumer. The highly specialised, high-end consumers which exist in areas such as hi-fi or car culture are far more representative of the ideal prosumer than the participants in non-commercial (or as yet non-commercial) collaborative projects. Expecting Toffler’s 1970s model of the prosumer to describe these 21st-century phenomena was, of course, always unrealistic. To describe the creative and collaborative participation which today characterises user-led projects such as Wikipedia, terms such as ‘production’ and ‘consumption’ are no longer particularly useful – even in laboured constructions such as ‘commons-based peer-production’ (Benkler 2006) or ‘p2p production’ (Bauwens 2005). In the user communities participating in such forms of content creation, roles as consumers and users have long begun to be inextricably interwoven with those of producer and creator: users are always already also able to be producers of the shared information collection, regardless of whether they are aware of that fact – they have taken on a new, hybrid role which may be best described as that of a produser (Bruns 2008). Projects which build on such produsage can be found in areas from open source software development through citizen journalism to Wikipedia, and beyond this also in multi-user online computer games, filesharing, and even in communities collaborating on the design of material goods. While addressing a range of different challenges, they nonetheless build on a small number of universal key principles. This paper documents these principles and indicates the possible implications of this transition from production and prosumption to produsage.
Abstract:
The road and transport industry in Australia and overseas has come a long way towards understanding the impact of road traffic noise on the urban environment. Most road authorities now have guidelines to help assess and manage the impact of road traffic noise on noise-sensitive areas and development. While several economic studies across Australia and overseas have tried to value the impact of noise on property prices, decision-makers investing in road traffic noise management strategies have relatively limited historic data and case studies to go on. The perceived success of a noise management strategy currently relies largely on community expectations at a given time, and is not necessarily based on the analysis of the costs and benefits, or the long-term viability and value to the community of the proposed treatment options. With changing trends in urban design, it is essential that the 'whole-of-life' costs and benefits of noise ameliorative treatment options and strategies be identified and made available to decision-makers in future investment considerations. For this reason, the CRC for Construction Innovation Australia funded a research project, Noise Management in Urban Environments, to help decision-makers with future road traffic noise management investment decisions. RMIT University and the Queensland Department of Main Roads (QDMR) have conducted the research work, in collaboration with the Queensland Department of Public Works, ARUP Pty Ltd, and the Queensland University of Technology. The research has formed the basis for the development of a decision-support software tool, and helped collate technical and costing data for known noise amelioration treatment options. We intend that the decision support software tool (DST) should help an investment decision-maker to be better informed about suitable noise ameliorative treatment options on a project-by-project basis and identify the likely costs and benefits associated with each of those options. This handbook has been prepared as a procedural guide for conducting a comparative assessment of noise ameliorative options. The handbook outlines the methodology and assumptions adopted in the decision-support framework for the investment decision-maker and user of the DST. The DST has been developed to provide an integrated, user-friendly interface between road traffic noise modelling software, the relevant assessment criteria and the options analysis process. A user guide for the DST is incorporated in this handbook.
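The Python fragment below sketches the kind of whole-of-life comparison the DST is intended to support: discounting capital and maintenance costs of candidate treatments over an analysis period and ranking the options. The option names, costs, analysis period and discount rate are invented for illustration and are not drawn from the project data.

# Whole-of-life cost comparison sketch (illustrative figures only).
def whole_of_life_cost(capital, annual_maintenance, years, discount_rate):
    """Present value of capital plus discounted annual maintenance."""
    pv_maintenance = sum(
        annual_maintenance / (1 + discount_rate) ** year
        for year in range(1, years + 1)
    )
    return capital + pv_maintenance

options = {
    "noise barrier":      whole_of_life_cost(450_000, 6_000, 30, 0.05),
    "low-noise pavement": whole_of_life_cost(250_000, 18_000, 30, 0.05),
    "facade treatment":   whole_of_life_cost(300_000, 2_000, 30, 0.05),
}

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:20s}  PV of whole-of-life cost: ${cost:,.0f}")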
Abstract:
Maps have been published on the world wide web since its inception (Cartwright, 1999) and are still accessed and viewed by millions of users today (Peterson, 2003). While early web-based GIS products lacked a complete set of cartographic capabilities, the functionality within such systems has significantly increased over recent years. Functionality once found only in desktop GIS products, for example data entry, basic editing, and analysis, is now available in web-based GIS applications. Applications based on web-GIS are becoming more widespread and the web-based GIS environment is replacing the traditional desktop GIS platforms in many organizations. Therefore, development of new cartographic methods for web-based GIS is vital. The broad aim of this project is to examine and discuss the challenges and opportunities of innovative cartography methods for web-based GIS platforms. The work introduces a recently developed cartographic methodology, which is based on a web-based GIS portal by the Survey of Israel (SOI). The work discusses the prospects and constraints of such methods in improving web-GIS interfaces and usability for the end user. The work also presents the preliminary findings of the initial implementation of the web-based GIS cartographic method within the portal of the Survey of Israel, as well as the applicability of those methods elsewhere.
Abstract:
Decision support systems (DSS) have evolved rapidly during the last decade from stand-alone or limited networked solutions to online participatory solutions. One of the major enablers of this change is one of the fastest-growing areas of geographical information system (GIS) technology development: the use of the Internet as a means to access, display, and analyze geospatial data remotely. Worldwide, many federal, state, and particularly local governments are building interactive Internet map servers to facilitate data sharing. This new generation of DSS, or planning support systems (PSS), built on interactive Internet map servers, delivers dynamic maps, GIS data and services via the world-wide Web, and provides public participatory GIS (PPGIS) opportunities to a wider community (Carver, 2001; Jankowski & Nyerges, 2001). It provides a highly scalable framework for GIS Web publishing, Web-based public participatory GIS (WPPGIS), which meets the needs of corporate intranets and the demands of worldwide Internet access (Craig, 2002). The establishment of WPPGIS provides spatial data access through a support centre or a GIS portal to facilitate efficient access to and sharing of related geospatial data (Yigitcanlar, Baum, & Stimson, 2003). As more and more public and private entities adopt WPPGIS technology, the importance and complexity of facilitating geospatial data sharing are growing rapidly (Carver, 2003). Therefore, this article focuses on the online public participation dimension of GIS technology. The article provides an overview of recent literature on GIS and WPPGIS, and includes a discussion of the potential use of these technologies in providing a democratic platform for the public in decision-making.
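As a minimal sketch of the WPPGIS pattern described above, the Python fragment below exposes a spatial layer as GeoJSON and accepts location-referenced public comments over the web (here using Flask). The endpoint names, the hard-coded feature and the comment model are illustrative assumptions, not any particular map server's API.

# Minimal public-participation web GIS sketch: serve a layer, collect comments.
from flask import Flask, jsonify, request

app = Flask(__name__)
comments = []   # in-memory store of public-participation comments

@app.route("/layers/proposed-development")
def proposed_development():
    # Serve a (hard-coded) planning layer that a browser map client can draw.
    return jsonify({
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [153.02, -27.47]},
            "properties": {"name": "Proposed development site"},
        }],
    })

@app.route("/comments", methods=["POST"])
def add_comment():
    # Accept a citizen comment referenced to a map location.
    payload = request.get_json()
    comments.append({"lon": payload["lon"], "lat": payload["lat"], "text": payload["text"]})
    return jsonify({"stored": len(comments)})

if __name__ == "__main__":
    app.run()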
Abstract:
Alvin Toffler’s image of the prosumer continues to significantly shape our understanding of many of the user-led, collaborative processes of content creation that are today described as “social media” or “Web 2.0”. A closer look at Toffler’s own description of his prosumer model reveals, however, that it remains firmly anchored in the age of mass-media dominance: the prosumer is precisely not the self-motivated, creative originator and further developer of new content as found today in projects ranging from open source software through Wikipedia to Second Life, but merely a particularly well-informed consumer who is, in his consumption behaviour, both especially critical and especially active. Highly specialised, high-end consumers in areas such as hi-fi or car culture represent the ideal of the prosumer far better than do the participants in user-led collaborative projects that are often precisely not (or at least not yet) commercially captured. To expect this of a model Toffler developed in the 1970s is, in any case, certainly asking too much. The problem therefore lies not with Toffler himself, but rather with the conceptions prevailing in the industrial age of a process divided fairly clearly into production, distribution, and consumption. This tripartite division was certainly necessary for the creation of material as well as immaterial goods; it even holds true for the conventional mass media, in which content production was, for commercial reasons, concentrated in a few institutions, just as was the case for the production of consumer goods. In the emerging information age, dominated by decentralised media networks and widely available and affordable means of production, the situation is different. What happens when distribution occurs automatically, and when almost every consumer can also be a producer, instead of a small band of commercially supported producers assisted at best by perhaps a handful of near-professional prosumers? What happens when the number of consumers active as producers, described by Eric von Hippel as ‘lead users’, expands massively – when, as Wikipedia’s slogan puts it, ‘anyone can edit’, that is, when potentially every user can actively take part in content creation? To describe the creative and collaborative participation that today characterises user-led projects such as Wikipedia, terms such as ‘production’ and ‘consumption’ are of only limited use – even in constructions such as ‘user-led production’ or ‘p2p production’. In the user communities participating in such forms of content creation, roles as consumers and users have long since become irretrievably intertwined with those of producers: users are always, inevitably, also producers of the shared information collection, regardless of whether they are aware of it; they have taken on a new, hybrid role which may best be described as that of a produser (German: ‘Produtzer’). Projects that build on such produsage can be found in areas from open source software through citizen journalism to Wikipedia, and beyond that increasingly also in computer games, filesharing, and even in the design of material goods. Although different in their orientation, they nonetheless build on a small number of universal key principles. This talk documents these principles and indicates the possible implications of this transition from production (and prosumption) to produsage.
Abstract:
Security-critical communications devices must be evaluated to the highest possible standards before they can be deployed. This process includes tracing potential information flow through the device's electronic circuitry, for each of the device's operating modes. Increasingly, however, security functionality is being entrusted to embedded software running on microprocessors within such devices, so new strategies are needed for integrating information flow analyses of embedded program code with hardware analyses. Here we show how standard compiler principles can augment high-integrity security evaluations to allow seamless tracing of information flow through both the hardware and software of embedded systems. This is done by unifying input/output statements in embedded program execution paths with the hardware pins they access, and by associating significant software states with corresponding operating modes of the surrounding electronic circuitry.
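A toy illustration of the unification idea, under the assumption of a very simplified path representation: each read or write statement on an execution path is bound to a named hardware pin, and taint is propagated so that flows from classified input pins to unclassified output pins are flagged. The pin names and tuple format are invented for illustration; the evaluated toolchain itself is not shown here.

# Toy information-flow trace over an execution path with pin-bound I/O.
CLASSIFIED_PINS = {"PIN_KEY_IN"}

# An execution path as a list of (operation, destination, source) tuples.
path = [
    ("read",   "key",           "PIN_KEY_IN"),   # input statement bound to a pin
    ("assign", "buf",           "key"),
    ("write",  "PIN_RADIO_OUT", "buf"),          # output statement bound to a pin
]

def trace(path):
    tainted = set(CLASSIFIED_PINS)
    leaks = []
    for op, dst, src in path:
        if op in ("read", "assign") and src in tainted:
            tainted.add(dst)                     # propagate information flow
        if op == "write" and src in tainted and dst not in CLASSIFIED_PINS:
            leaks.append((src, dst))             # classified data reaches an output pin
    return leaks

print(trace(path))   # -> [('buf', 'PIN_RADIO_OUT')]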
Abstract:
The management of main material prices for provincial highway project quotas suffers from lag and a lack of foresight. First, a web-based framework for a provincial highway project quota management information system (MIS) and a main material price data warehouse was established. Concrete procedures for forecasting provincial highway project main material prices were then put forward based on the back-propagation (BP) neural network algorithm. After that, the standard BP algorithm, BP with an additional momentum term, and BP with a self-adaptive learning rate were compared in predicting highway project main material prices. The results indicate that it is feasible to predict highway main material prices using a BP neural network, and that the self-adaptive learning rate variant performs best.
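A minimal back-propagation sketch in the spirit of the comparison reported above: a one-hidden-layer network trained with a self-adaptive learning rate that grows while the error falls and shrinks when it rises. The toy data stands in for historical material price indices and is invented; this is not the paper's implementation.

# Back-propagation with a self-adaptive learning rate (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: 3 lagged, normalised price indices -> next-period price index.
X = rng.uniform(0.2, 0.8, size=(40, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (3, 6)); b1 = np.zeros((1, 6))
W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, prev_loss = 0.1, np.inf
for epoch in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y
    loss = float(np.mean(err ** 2))

    # Self-adaptive learning rate: accelerate on improvement, damp otherwise.
    lr = lr * 1.05 if loss < prev_loss else lr * 0.7
    prev_loss = loss

    # Backward pass (standard BP gradients for the MSE loss).
    grad_out = 2 * err / len(X)
    gW2 = h.T @ grad_out
    gb2 = grad_out.sum(0, keepdims=True)
    grad_h = grad_out @ W2.T * h * (1 - h)
    gW1 = X.T @ grad_h
    gb1 = grad_h.sum(0, keepdims=True)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final training MSE: {prev_loss:.5f}")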
Abstract:
For many organizations, maintaining and upgrading enterprise resource planning (ERP) systems (large packaged application software) is often far more costly than the initial implementation. Systematic planning and knowledge of the fundamental maintenance processes and maintenance-related management data are required in order to effectively and efficiently administer maintenance activities. This paper reports a revelatory case study of Government Services Provider (GSP), a high-performing ERP service provider to government agencies in Australia. GSP ERP maintenance-process and maintenance-data standards are compared with the IEEE/EIA 12207 software engineering standard for custom software, also drawing upon published research, to identify how practices in the ERP context diverge from the IEEE standard. While the results show that many best practices reflected in the IEEE standard have broad relevance to software generally, divergent practices in the ERP context necessitate a shift in management focus, additional responsibilities, and different maintenance decision criteria. Study findings may provide useful guidance to practitioners, as well as input to the IEEE and other related standards.
Abstract:
The Internet theoretically enables marketers to personalize a Web site for an individual consumer. This article examines optimal Web-site design from the perspective of personality-trait theory and resource-matching theory. The influence of two traits relevant to Internet Web-site processing—sensation seeking and need for cognition—was studied in the context of resource matching and different levels of Web-site complexity. Data were collected at two points in time: personality-trait data first, followed by a laboratory experiment using constructed Web sites. Results reveal that (a) subjects prefer Web sites of a medium level of complexity, rather than high or low complexity; (b) high sensation seekers prefer complex visual designs, and low sensation seekers simple visual designs, both in Web sites of medium complexity; and (c) high need-for-cognition subjects evaluated Web sites with high verbal and low visual complexity more favourably.