994 results for Collaborative Software
Abstract:
The spread of cloud services has pushed the IDE world in the same direction. Recently, IDEs have been moving from desktop environments to the Web. This shift is decisive for collaboration, because it makes it possible to exploit all the advantages of the cloud to equip these systems with chat, social network integration, shared editing tools and many other collaborative features. These IDEs are called browser-based because the services they provide are accessible over the Web through a browser. They come in different types with very different characteristics. Some are simple platforms on which code can be tested or tutorials can be followed to learn new programming languages; others are complete development environments offering the most common features of a desktop IDE, in addition to Web-specific ones. The study of these new-generation development environments showed that few of them provide a complete collaboration system and that not all of them exploit the new technologies the Web makes available. For example, some include collaborative editors but offer no chat service to collaborators; others provide a chat and support for simultaneous code writing but lack display-sharing facilities. After analysing the strengths and weaknesses of the collaboration offered by the tools under consideration, I decided to implement collaborative features within the context of a browser-based IDE called InDe RT, developed by the company Pro Gamma SpA.
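As a purely illustrative aside (not part of InDe RT or any of the surveyed IDEs; all names below are hypothetical), the following minimal Python sketch shows the core idea behind simultaneous code writing: collaborators' edits are serialized into an ordered operation log that every client replays against a shared buffer.

```python
from dataclasses import dataclass

@dataclass
class Op:
    """One edit operation sent by a collaborator to the shared session."""
    author: str
    kind: str        # "insert" or "delete"
    pos: int         # character offset in the shared buffer
    text: str = ""   # payload for inserts
    length: int = 0  # number of characters removed by a delete

class SharedBuffer:
    """Toy shared document kept as an ordered log of operations.

    Real collaborative editors also rebase concurrent edits (operational
    transformation or CRDTs); this sketch only shows the ordered-log idea.
    """
    def __init__(self) -> None:
        self.text = ""
        self.log: list[Op] = []

    def apply(self, op: Op) -> str:
        if op.kind == "insert":
            self.text = self.text[:op.pos] + op.text + self.text[op.pos:]
        elif op.kind == "delete":
            self.text = self.text[:op.pos] + self.text[op.pos + op.length:]
        self.log.append(op)
        return self.text

if __name__ == "__main__":
    doc = SharedBuffer()
    doc.apply(Op(author="alice", kind="insert", pos=0, text="print('hello')"))
    doc.apply(Op(author="bob", kind="insert", pos=6, text="'hi ', "))
    print(doc.text)  # print('hi ', 'hello')
```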
Abstract:
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. We detail some of the design decisions, software paradigms and operational strategies that have allowed a small number of researchers to provide a wide variety of innovative, extensible software solutions in a relatively short time. The use of an object-oriented programming paradigm, the adoption and development of a software package system, design by contract, distributed development and collaboration with other projects are elements of this project's success. Individually, each of these concepts is useful and important, but when combined they have provided a strong basis for the rapid development and deployment of innovative and flexible research software for scientific computation. A primary objective of this initiative is the achievement of total remote reproducibility of novel algorithmic research results.
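The "design by contract" element mentioned above can be illustrated with a short, generic sketch (plain Python, not Bioconductor code, which is written in R): preconditions state the caller's obligations and postconditions state the implementer's.

```python
def normalize(values: list[float]) -> list[float]:
    """Scale values so they sum to 1, with an explicit contract."""
    # Preconditions: what the caller must guarantee.
    assert len(values) > 0, "precondition: values must be non-empty"
    assert all(v >= 0 for v in values), "precondition: values must be non-negative"
    total = sum(values)
    assert total > 0, "precondition: at least one value must be positive"

    result = [v / total for v in values]

    # Postconditions: what the implementation guarantees in return.
    assert len(result) == len(values), "postcondition: length preserved"
    assert abs(sum(result) - 1.0) < 1e-9, "postcondition: result sums to 1"
    return result

if __name__ == "__main__":
    print(normalize([2.0, 3.0, 5.0]))  # [0.2, 0.3, 0.5]
```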
Abstract:
The goal of this paper is to show the results of an ongoing experience in teaching project management to degree students by following a development scheme of management-related competencies on an individual basis. In order to achieve that goal, the students are organized in teams that must solve a problem and manage the development of a feasible solution to satisfy the needs of a client. The innovative component advocated in this paper is the formal introduction of negotiation and virtual team management aspects, as different teams from different universities at different locations, comprising students with different backgrounds, must collaborate and compete with one another. The different learning aspects are identified and the improvement levels are reflected in a rubric designed ad hoc for this experience. Finally, the effort frameworks for the student and the instructor have been established according to the requirements of the Bologna paradigms. This experience is supported by a software-based system allowing blended learning for the theoretical and individual work aspects (blogs, wikis, etc.), as well as Web-based project management tools that allow monitoring not only of the expected deliverables and the achievement of the goals, but also of the progress made in learning as established in the defined rubric.
Abstract:
Social software tools have become an integral part of students' personal lives and their primary communication medium. Likewise, these tools are increasingly entering the enterprise world (within the recent trend known as Enterprise 2.0) and becoming a part of everyday work routines. Aiming to keep pace with job requirements and to position learning as an integral part of students' lives, the field of education is challenged to embrace social software. Personal Learning Environments (PLEs) emerged as a concept that makes use of social software to facilitate collaboration, knowledge sharing, group formation around common interests, active participation and reflective thinking in online learning settings. Furthermore, social software allows for establishing and maintaining one's presence in the online world. By being aware of a student's online presence, a PLE is better able to personalize the learning settings, e.g., through recommendation of content to use or people to collaborate with. Aiming to explore the potential of online presence for the provision of recommendations in PLEs, within the scope of the OP4L project, we have developed a software solution based on a synergy of Semantic Web technologies, online presence and socially oriented learning theories. In this paper we present the current results of this research work.
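As a rough illustration of presence-aware recommendation (a plain-Python toy, not the OP4L solution or its Semantic Web stack; all names are made up), a PLE could restrict collaborator suggestions to learners who are currently online and rank them by shared interests:

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    name: str
    online: bool                              # current online-presence status
    interests: set[str] = field(default_factory=set)

def recommend_collaborators(me: Learner, others: list[Learner], k: int = 3) -> list[str]:
    """Suggest up to k online learners, ranked by shared interests."""
    candidates = [o for o in others if o.online and o.name != me.name]
    ranked = sorted(candidates,
                    key=lambda o: len(o.interests & me.interests),
                    reverse=True)
    return [o.name for o in ranked[:k] if o.interests & me.interests]

if __name__ == "__main__":
    me = Learner("ana", True, {"semantic web", "python"})
    peers = [
        Learner("bo", True, {"semantic web", "statistics"}),
        Learner("cy", False, {"semantic web", "python"}),  # offline: never suggested
        Learner("di", True, {"history"}),
    ]
    print(recommend_collaborators(me, peers))  # ['bo']
```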
Abstract:
Security intrusion in large systems is a problem because current IDS-based approaches do not scale. This paper describes the RECLAMO project, in which an architecture for an Automated Intrusion Response System (AIRS) is being proposed. This system infers the most appropriate response to a given attack, taking into account the attack type, context information, and the trust and reputation of the reporting IDSs. RECLAMO proposes a novel approach: diverting the attack to a specific honeynet that has been built dynamically from the attack information. Among all the components forming the RECLAMO architecture, this paper focuses mainly on defining a trust and reputation management model, which is essential for recognizing whether IDSs are behaving honestly so that their alerts can be accepted as true. Experimental results confirm that our model helps to encourage or discourage the launch of the automatic reaction process.
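One common way to formalize such trust scores, shown below purely as an illustration and not as the RECLAMO model itself, is a beta-reputation scheme in which each reporting IDS accumulates confirmed and refuted alerts:

```python
class IDSReputation:
    """Beta-reputation score for one reporting IDS.

    Expected trust is (true + 1) / (true + false + 2), the mean of a
    Beta(true + 1, false + 1) distribution over alert correctness.
    """
    def __init__(self) -> None:
        self.true_alerts = 0
        self.false_alerts = 0

    def record(self, alert_was_correct: bool) -> None:
        if alert_was_correct:
            self.true_alerts += 1
        else:
            self.false_alerts += 1

    @property
    def trust(self) -> float:
        return (self.true_alerts + 1) / (self.true_alerts + self.false_alerts + 2)

def accept_alert(sensor: IDSReputation, threshold: float = 0.6) -> bool:
    """Only launch the automated response if the reporting IDS is trusted enough."""
    return sensor.trust >= threshold

if __name__ == "__main__":
    ids = IDSReputation()
    for outcome in [True, True, True, False, True]:
        ids.record(outcome)
    print(round(ids.trust, 2), accept_alert(ids))  # 0.71 True
```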
Abstract:
One of the objectives of the European Higher Education Area is the promotion of collaborative and informal learning through the implementation of educational practices. 3D virtual environments are an ideal space for such activities. On the other hand, the funding problems of Spanish universities have led to the search for new ways to optimize available resources. The Technical University of Madrid requires the use of laboratories which, due to their hazards, duration or the control required over the processes involved, are difficult to carry out in real life. For this reason, we have developed several 3D laboratories in a virtual environment. The laboratories are built on the open source platform OpenSim. This paper presents the use of the OpenSim platform for these new teaching experiences and the new design of the software architecture. This architecture requires adapting the platform to the needs of the users and of the different laboratories of our University. We explain the structure of the implemented architecture and the process of creating and configuring it. The proposed architecture is decentralized: each laboratory is hosted at a different educational centre. The architecture adds several services, among others the automated creation and management of users and communication between external services and platforms written in different programming languages. As a result, we improve the user experience and extend the functionality of the laboratories.
Abstract:
In the last two decades, the importance of knowledge acquisition and dissemination processes within companies has been highlighted; consequently, the study of these processes and the implementation of technologies that facilitate them have attracted growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved towards flatter configurations, with more agile network structures that reduce dependence on a centralized authority and are oriented towards teamwork. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed rapidly. These collaboration tools are characterized by a strong social component and reach their full potential when they are deployed in flat organizational structures. Web 2.0, based on the participation of the users themselves, arose as a concept opposed to the website-centred technologies of the late 1990s. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted these tools immediately, although there is no unanimity about their real usefulness or how to measure it. This is partly because the factors that lead employees to adopt them are not well understood, which has caused implementation failures due to the existence of certain barriers. Given this situation, and in view of the theoretical advantages that these Web 2.0 collaboration tools offer companies, managers and the scientific community show a growing interest in answering the question: which factors contribute to employees adopting these Web 2.0 tools for collaboration? The answer is complex, since these are relatively new tools in the business context, through which knowledge management, rather than mere information handling, can be carried out. The approach taken in this work to answer the question is the application of technology adoption models, which are based on individuals' perceptions of different aspects of technology use. Under this approach, the main objective of this work is to study the factors that influence the adoption of blogs and wikis in companies, by means of a unified, theoretical, predictive model of technology adoption, built with a holistic approach from the literature on technology adoption models and from the particularities of the tools under study and of the specific context. This theoretical model makes it possible to determine the factors that predict the intention to use these tools and their actual use. The research is structured in five parts: introduction to the research topic, development of the theoretical framework, design of the research work, empirical analysis, and conclusions. These five parts are developed sequentially over seven chapters: part one corresponds to chapter 1, part two to chapters 2 and 3, part three to chapters 4 and 5, part four to chapter 6, and part five to chapter 7.
Chapter 1 focuses on the statement of the research problem and on the main and secondary objectives pursued throughout the work. It also introduces the concept of collaboration and its relation to the Web 2.0 collaborative tools considered in the research, together with an introduction to technology adoption models, followed by the justification of the research, its objectives and the work plan. Chapter 2 reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), describing their foundations and the factors they employ. Building on these models, chapter 3 studies the same factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis. To make the final model easier to understand, the factors are grouped into four types: technological, control, socio-normative, and other factors specific to collaborative tools. Chapter 4 identifies the factors most appropriate for studying the adoption of collaborative tools and defines a model specifying the relationships between them; these relationships become the working hypotheses to be tested in the empirical study. Chapter 5 specifies the characteristics of the empirical work carried out to test the hypotheses stated in chapter 4. The research is social and exploratory in nature and is based on a quantitative empirical study analysed with multivariate techniques. The chapter describes the construction of the scales of the measurement instrument and the data collection methodology, presents a detailed analysis of the sample, and checks for bias attributable to the measurement method (Common Method Bias). Chapter 6 presents the analysis of results, preceded by a description of the statistical technique employed, PLS-SEM, as a multivariate analysis tool with predictive capability, the methodology used to validate the measurement model and the structural model, the requirements the sample must meet, and the thresholds of the parameters considered. The second part of chapter 6 carries out the empirical analysis of the data for the two samples, one for blogs and one for wikis, in order to validate the research hypotheses stated in chapter 4. Finally, chapter 7 reviews the degree of fulfilment of the objectives set out in chapter 1 and presents the theoretical, methodological and practical contributions of the work, followed by general conclusions and detailed conclusions for each group of factors, together with practical recommendations for guiding the implementation of these tools in real settings.
The final part of the chapter covers the limitations of the study and suggests a number of possible lines of future work, together with the partial research results obtained during the course of the research.
Abstract:
The technologization that society has experienced in recent decades has brought a profusion of computerized machines and their operating systems. In this period the industry developed sophisticated and expensive proprietary application software for the full use of these machines, which placed a large part of the social market in the hands of a few multinational companies, Microsoft among them. However, the libertarian spirit of members of the scientific and hacker communities fostered the development of free and open source software, which can be used as a broader social good and, above all, can evolve in the best collaborative spirit. The present work studies the two models of software production and compares them in order to highlight the qualities of each, their costs, performance and possibilities of adoption. It considers the possibility that degree programmes in the field of communication could migrate to the free software model, given the full capabilities of this system, the radical reduction in costs, and the finding that large segments of audiovisual production are adopting it. To this end, it compares experiences with both systems in two communication degree courses, in their Radio and Television specialization.
Abstract:
The Journal Retention and Needs Listing (JRNL) program allows libraries to: 1) expose lists of print journals for which they have made retention commitments; 2) express needs (or gaps) in their holdings; and 3) communicate offers to fill gaps in other participating libraries' holdings. Multiple library consortia and their member libraries use JRNL to facilitate communication between library staff in order to identify holding commitments, fill gaps, and guide deselection decisions. JRNL is developed and governed jointly by the participating consortia. Currently, those consortia are the Florida Academic Repository (FLARE), the Association of Southeastern Research Libraries (ASERL)/Washington Research Library Consortium (WRLC), and the Western Regional Storage Trust (WEST).
Abstract:
Camera traps have become a widely used technique for conducting biological inventories, generating a large number of database records of great interest. The main aim of this paper is to describe a new free and open source software (FOSS) application, developed to facilitate the management of camera-trap data originating from a protected Mediterranean area (SE Spain). In the last decade, some other useful alternatives have been proposed, but ours focuses especially on a collaborative approach and on the importance of the spatial information underpinning common camera trap studies. This FOSS application, namely "Camera Trap Manager" (CTM), has been designed to expedite the processing of pictures on the .NET platform. CTM offers a very intuitive user interface, automatic extraction of some image metadata (date, time, moon phase, location, temperature and atmospheric pressure, among others), analytical capabilities (Geographical Information Systems, statistics and charts, among others), and reporting capabilities (ESRI Shapefiles, Microsoft Excel spreadsheets and PDF reports, among others). Using this application, we have achieved very simple management, fast analysis, and a significant reduction of costs. While we were able to classify an average of 55 pictures per hour manually, CTM has made it possible to process over 1000 photographs per hour, consequently retrieving a greater amount of data.
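CTM itself is a .NET application; the following Python sketch (using the Pillow library, an assumption not tied to CTM) merely illustrates the kind of automatic EXIF metadata extraction described, limited here to the standard date/time and camera-model tags:

```python
from pathlib import Path
from PIL import Image, ExifTags  # Pillow

def camera_trap_metadata(path: str) -> dict:
    """Read basic EXIF metadata from one camera-trap picture.

    Fields such as moon phase or temperature are camera-specific and, as in
    the paper, would need extra logic; this sketch only pulls standard tags.
    """
    with Image.open(path) as img:
        exif = img.getexif()
        named = {ExifTags.TAGS.get(tag_id, tag_id): value
                 for tag_id, value in exif.items()}
    return {
        "file": Path(path).name,
        "datetime": named.get("DateTime"),
        "camera": named.get("Model"),
    }

if __name__ == "__main__":
    # Hypothetical folder of camera-trap pictures.
    for picture in Path("camera_trap_photos").glob("*.jpg"):
        print(camera_trap_metadata(str(picture)))
```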
Abstract:
There have been many models developed by scientists to assist decision-makers in making socio-economic and environmental decisions. It is now recognised that the dominant paradigm is shifting towards making decisions with stakeholders, rather than making decisions for stakeholders. Our paper investigates two case studies in which group model building has been undertaken for maintaining biodiversity in Australia. The first case study focuses on the preservation and management of green spaces and biodiversity in metropolitan Melbourne under the umbrella of the Melbourne 2030 planning strategy. A geographical information system is used to collate a number of spatial datasets encompassing a range of cultural and natural asset data layers, including existing open spaces, waterways, threatened fauna and flora, ecological vegetation covers, registered cultural heritage sites, and existing land parcel zoning. Group model building is incorporated into the study by eliciting weightings and ratings of importance for each dataset from urban planners in order to formulate different urban green system scenarios. The second case study focuses on modelling ecoregions from spatial datasets for the state of Queensland. The modelling combines collaborative expert knowledge and a vast amount of environmental data to build biogeographical classifications of regions. An information elicitation process is used to capture expert knowledge of ecoregions as geographical descriptions, and to transform this into prior probability distributions that characterise regions in terms of environmental variables. This prior information is combined with measured data on the environmental variables within a Bayesian modelling technique to produce the final classified regions. We describe how linked views between descriptive information, mapping and statistical plots are used to decide upon representative regions that satisfy a number of criteria for biodiversity and conservation. This paper discusses the advantages and problems encountered when undertaking group model building. Future research will extend the group model building approach to include interested individuals and community groups.
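The prior-plus-data combination used in the second case study can be sketched with hypothetical numbers (these values and region names stand in for the elicited Queensland descriptions, not the actual data): an expert prior over ecoregions is multiplied by a Gaussian likelihood for one environmental variable and renormalized to give a posterior classification.

```python
import math

# Expert-elicited prior probability that a site belongs to each ecoregion (hypothetical).
PRIOR = {"wet_tropics": 0.2, "savanna": 0.5, "arid_interior": 0.3}

# Hypothetical expert beliefs about annual rainfall (mm) per region: (mean, std dev).
RAINFALL_MODEL = {
    "wet_tropics": (2000.0, 400.0),
    "savanna": (900.0, 250.0),
    "arid_interior": (300.0, 150.0),
}

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def classify_site(rainfall_mm: float) -> dict:
    """Posterior over ecoregions: prior times likelihood, renormalized."""
    unnorm = {region: PRIOR[region] * gaussian_pdf(rainfall_mm, *RAINFALL_MODEL[region])
              for region in PRIOR}
    total = sum(unnorm.values())
    return {region: value / total for region, value in unnorm.items()}

if __name__ == "__main__":
    posterior = classify_site(rainfall_mm=1100.0)
    print(max(posterior, key=posterior.get),
          {region: round(p, 3) for region, p in posterior.items()})
```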
Abstract:
The CancerGrid consortium is developing open-standards cancer informatics to address the challenges posed by modern cancer clinical trials. This paper presents the service-oriented software paradigm implemented in CancerGrid to derive clinical trial information management systems for collaborative cancer research across multiple institutions. Our proposal is founded on a combination of a clinical trial (meta)model and WSRF (Web Services Resource Framework), and is currently being evaluated for use in early phase trials. Although primarily targeted at cancer research, our approach is readily applicable to other areas for which a similar information model is available.
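As a loose illustration only (the names and fields below are hypothetical and far simpler than the CancerGrid metamodel, and no WSRF machinery is shown), a declarative trial model can be used to drive validation of submitted case report data:

```python
from dataclasses import dataclass

@dataclass
class CaseReportItem:
    """One element of a hypothetical trial metamodel: a named, typed field."""
    name: str
    kind: str            # "int", "float" or "enum"
    allowed: tuple = ()  # (min, max) for numbers, value set for enums

# Hypothetical early-phase trial model; the real metamodel is exposed through
# WSRF services rather than in-process objects.
TRIAL_MODEL = [
    CaseReportItem("age", "int", (18, 90)),
    CaseReportItem("ecog_status", "enum", ("0", "1", "2")),
    CaseReportItem("tumour_size_mm", "float", (0.0, 200.0)),
]

def validate_submission(record: dict) -> list[str]:
    """Return the validation errors for one submitted case report form."""
    errors = []
    for item in TRIAL_MODEL:
        if item.name not in record:
            errors.append(f"missing field: {item.name}")
            continue
        value = record[item.name]
        if item.kind == "enum" and value not in item.allowed:
            errors.append(f"{item.name}: {value!r} not in {item.allowed}")
        elif item.kind in ("int", "float") and not (item.allowed[0] <= value <= item.allowed[1]):
            errors.append(f"{item.name}: {value} outside {item.allowed}")
    return errors

if __name__ == "__main__":
    print(validate_submission({"age": 17, "ecog_status": "1", "tumour_size_mm": 42.0}))
```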
Abstract:
The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. An architecture for a support environment for SSD, the Intellipse system, is proposed, based on the integration of KBS and non-KBS tools for individual design tasks within SSD. The Intellipse system has two modes of operation - Advisor and Designer. The design, implementation and user evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers. Some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed - the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
Abstract:
Objectives: To develop a decision support system (DSS), myGRaCE, that integrates service user (SU) and practitioner expertise about mental health and associated risks of suicide, self-harm, harm to others, self-neglect, and vulnerability. The intention is to help SUs assess and manage their own mental health collaboratively with practitioners. Methods: An iterative process involving interviews, focus groups, and agile software development with 115 SUs, to elicit and implement myGRaCE requirements. Results: Findings highlight shared understanding of mental health risk between SUs and practitioners that can be integrated within a single model. However, important differences were revealed in SUs' preferred process of assessing risks and safety, which are reflected in the distinctive interface, navigation, tool functionality and language developed for myGRaCE. A challenge was how to provide flexible access without overwhelming and confusing users. Conclusion: The methods show that practitioner expertise can be reformulated in a format that simultaneously captures SU expertise, to provide a tool highly valued by SUs. A stepped process adds necessary structure to the assessment, each step with its own feedback and guidance. Practice Implications: The GRiST web-based DSS (www.egrist.org) links and integrates myGRaCE self-assessments with GRiST practitioner assessments for supporting collaborative and self-managed healthcare.
Abstract:
The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, shared-nothing architecture and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research, with an enhanced client-server scheme, inherent scalability and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories according to the objects it interacts with: computation agent, information agent and interface agent. The problem domain and the deployment of the computation agent and the information agent are presented, together with the analysis, design and implementation of experimental systems in high performance Internet computing and in scalable Web searching. In the computation agent study, high performance Internet computing can be achieved with our proposed Java massive computation agent (JAM) model. We analysed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable and scalable solution to deal with the growth of the Web and of the information on the Web. Our research reveals that, with the deployment of distributed software agents in Internet computing, we can take a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
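The brute-force decryption prototype suggests the flavour of the computation-agent scheme; the sketch below is a hypothetical, single-machine stand-in that splits a numeric key space across multiprocessing workers rather than across Java agents on Internet hosts:

```python
import hashlib
from multiprocessing import Pool

# Hypothetical target: the SHA-256 digest of an unknown numeric key. The JAM
# prototype instead distributed ciphertext decryption across agents on many hosts.
SECRET_KEY = 48_731
TARGET = hashlib.sha256(str(SECRET_KEY).encode()).hexdigest()

def search_range(bounds: tuple[int, int]) -> int | None:
    """One worker ('agent') exhaustively tries its slice of the key space."""
    lo, hi = bounds
    for key in range(lo, hi):
        if hashlib.sha256(str(key).encode()).hexdigest() == TARGET:
            return key
    return None

if __name__ == "__main__":
    key_space, workers = 100_000, 4
    step = key_space // workers
    slices = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        hits = [k for k in pool.map(search_range, slices) if k is not None]
    print("recovered key:", hits[0] if hits else "not found")
```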