914 results for GUI legacy Windows Form web-application


Relevance:

40.00%

Publisher:

Abstract:

Web services are now a key ingredient of the software services offered by software enterprises, and many standardized web services are available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services and, more recently, volume discounts are also on offer, based on the number of web service executions requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts. We first develop efficient polynomial-time algorithms for the case where the end-to-end service involves a linear workflow of web services, and then for the case where it involves a tree workflow.
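For the linear-workflow case, the flavour of the problem can be illustrated with a small dynamic program. The sketch below is only indicative, not the paper's algorithm: it assumes each (hypothetical) offer covers a contiguous block of workflow stages at a fixed price, and it ignores volume discounts.

```python
# Illustrative sketch only: cheapest cover of a linear workflow by offers,
# where each offer covers a contiguous block of stages at a fixed price.
# Volume discounts and the paper's actual algorithms are not modelled.

def min_cost_linear(n_stages, offers):
    """offers: list of (first_stage, last_stage, price); stages are 0-based."""
    INF = float("inf")
    best = [0.0] + [INF] * n_stages  # best[i] = cheapest cover of stages < i
    for i in range(n_stages):
        if best[i] == INF:
            continue
        for first, last, price in offers:
            if first == i and best[i] + price < best[last + 1]:
                best[last + 1] = best[i] + price
    return best[n_stages]

# Two singleton offers, one bundle for stages 0-1, one offer for stage 2.
print(min_cost_linear(3, [(0, 0, 5.0), (1, 1, 5.0), (0, 1, 7.0), (2, 2, 4.0)]))  # 11.0
```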

Relevance:

40.00%

Publisher:

Abstract:

(Document pdf contains 193 pages)
Executive Summary (pdf, < 0.1 Mb)
1. Introduction (pdf, 0.2 Mb)
   1.1 Data sharing, international boundaries and large marine ecosystems
2. Objectives (pdf, 0.3 Mb)
3. Background (pdf, < 0.1 Mb)
   3.1 North Pacific Ecosystem Metadatabase
   3.2 First federation effort: NPEM and the Korea Oceanographic Data Center
   3.3 Continuing effort: Adding Japan's Marine Information Research Center
4. Metadata Standards (pdf, < 0.1 Mb)
   4.1 Directory Interchange Format
   4.2 Ecological Metadata Language
   4.3 Dublin Core
      4.3.1 Elements of DC
   4.4 Federal Geographic Data Committee
   4.5 The ISO 19115 Metadata Standard
   4.6 Metadata stylesheets
   4.7 Crosswalks
   4.8 Tools for creating metadata
5. Communication Protocols (pdf, < 0.1 Mb)
   5.1 Z39.50
      5.1.1 What does Z39.50 do?
      5.1.2 Isite
6. Clearinghouses (pdf, < 0.1 Mb)
7. Methodology (pdf, 0.2 Mb)
   7.1 FGDC metadata
      7.1.1 Main sections
      7.1.2 Supporting sections
      7.1.3 Metadata validation
   7.2 Getting a copy of Isite
   7.3 NSDI Clearinghouse
8. Server Configuration and Technical Issues (pdf, 0.4 Mb)
   8.1 Hardware recommendations
   8.2 Operating system – Red Hat Linux Fedora
   8.3 Web services – Apache HTTP Server version 2.2.3
   8.4 Create and validate FGDC-compliant metadata in XML format
   8.5 Obtaining, installing and configuring Isite for UNIX/Linux
      8.5.1 Download the appropriate Isite software
      8.5.2 Untar the file
      8.5.3 Name your database
      8.5.4 The zserver.ini file
      8.5.5 The sapi.ini file
      8.5.6 Indexing metadata
      8.5.7 Start the Clearinghouse Server process
      8.5.8 Testing the zserver installation
   8.6 Registering with NSDI Clearinghouse
   8.7 Security issues
9. Search Tutorial and Examples (pdf, 1 Mb)
   9.1 Legacy NSDI Clearinghouse search interface
   9.2 New GeoNetwork search interface
10. Challenges (pdf, < 0.1 Mb)
11. Emerging Standards (pdf, < 0.1 Mb)
12. Future Activity (pdf, < 0.1 Mb)
13. Acknowledgments (pdf, < 0.1 Mb)
14. References (pdf, < 0.1 Mb)
15. Acronyms (pdf, < 0.1 Mb)
16. Appendices
   16.1 KODC-NPEM meeting agendas and minutes (pdf, < 0.1 Mb)
      16.1.1 Seattle meeting agenda, August 22–23, 2005
      16.1.2 Seattle meeting minutes, August 22–23, 2005
      16.1.3 Busan meeting agenda, October 10–11, 2005
      16.1.4 Busan meeting minutes, October 10–11, 2005
   16.2 MIRC-NPEM meeting agendas and minutes (pdf, < 0.1 Mb)
      16.2.1 Seattle meeting agenda, August 14–15, 2006
      16.2.2 Seattle meeting minutes, August 14–15, 2006
      16.2.3 Tokyo meeting agenda, October 19–20, 2006
      16.2.4 Tokyo meeting minutes, October 19–20, 2006
   16.3 XML stylesheet conversion crosswalks (pdf, < 0.1 Mb)
      16.3.1 FGDCI to DIF stylesheet converter
      16.3.2 DIF to FGDCI stylesheet converter
      16.3.3 String-modified stylesheet
   16.4 FGDC Metadata Standard (pdf, 0.1 Mb)
      16.4.1 Overall structure
      16.4.2 Section 1: Identification information
      16.4.3 Section 2: Data quality information
      16.4.4 Section 3: Spatial data organization information
      16.4.5 Section 4: Spatial reference information
      16.4.6 Section 5: Entity and attribute information
      16.4.7 Section 6: Distribution information
      16.4.8 Section 7: Metadata reference information
      16.4.9 Sections 8, 9 and 10: Citation information, time period information, and contact information
   16.5 Images of the Isite server directory structure and the files contained in each subdirectory after Isite installation (pdf, 0.2 Mb)
   16.6 Listing of NPEM's Isite configuration files (pdf, < 0.1 Mb)
      16.6.1 zserver.ini
      16.6.2 sapi.ini
   16.7 Java program to extract records from the NPEM metadatabase and write one XML file for each record (pdf, < 0.1 Mb)
   16.8 Java program to execute the metadata extraction program (pdf, < 0.1 Mb)
A1 Addendum 1: Instructions for Isite for Windows (pdf, 0.6 Mb)
A2 Addendum 2: Instructions for Isite for Windows ADHOST (pdf, 0.3 Mb)

Relevance:

40.00%

Publisher:

Abstract:

Predicting and averting the spread of invasive species is a core focus of resource managers in all ecosystems. Patterns of invasion are difficult to forecast, a problem compounded by the lack of user-friendly species distribution model (SDM) tools to help managers focus control efforts. This paper presents a web-based cellular automata hybrid modeling tool developed to study the invasion pattern of lionfish (Pterois volitans/miles) in the western Atlantic, a natural extension of our previous lionfish study. Our goal is to make this hybrid SDM tool publicly available and to demonstrate both a test case (P. volitans/miles) and a use case (Caulerpa taxifolia). The software derived from the model, titled Invasionsoft, is unique in its ability to examine multiple default or user-defined parameters and their relation to invasion patterns, and is presented in a rich web browser-based GUI with an integrated results viewer. The beta version is not species-specific and includes a default parameter set tailored to the marine habitat. Invasionsoft is provided as copyright-protected freeware at http://www.invasionsoft.com.
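As a rough illustration of the cellular-automata component of such a tool (this is not Invasionsoft's actual model; the neighbourhood rule and the p_base parameter are invented for the sketch):

```python
import numpy as np

def spread_step(occupied, suitability, p_base=0.3, rng=None):
    """One synchronous CA step: empty cells next to an occupied cell become
    occupied with probability p_base * habitat suitability (sketch only)."""
    rng = np.random.default_rng(0) if rng is None else rng
    occ = occupied.astype(bool)
    # Von Neumann neighbourhood; np.roll wraps at the grid edges for brevity.
    nbr = (np.roll(occ, 1, axis=0) | np.roll(occ, -1, axis=0) |
           np.roll(occ, 1, axis=1) | np.roll(occ, -1, axis=1))
    colonize = nbr & ~occ & (rng.random(occ.shape) < p_base * suitability)
    return occ | colonize
```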

Relevance:

40.00%

Publisher:

Abstract:

Web services can be seen as a newly emerging research area for Service-oriented Computing and its implementation in Service-oriented Architectures. Web services are self-contained, self-describing modular applications or components that provide services. They may be dynamically aggregated, composed, and enacted as Web service workflows, which requires frameworks and interaction protocols for their co-ordination and transaction support. In a Service-oriented Computing setting, transactions are more complex: they involve multiple parties (roles), span many organizations, and may be long-running, involving highly decentralized service partners and performed by autonomous entities. A Service-oriented Transaction Model has to provide comprehensive support for long-running propositions, including negotiations, conversations, commitments, contracts, tracking, payments, and exception handling. Current transaction models and mechanisms, including their protocols and primitives, do not sufficiently cater for quality-aware and long-running transactions comprising loosely-coupled (federated) service partners and resources. Web services transactions require the co-ordination behaviour provided by a traditional transaction mechanism to control the operations and outcome of an application. Furthermore, they require the capability to handle the co-ordination of processing outcomes or results from multiple services in a more flexible manner. This calls for more relaxed forms of transactions, ones that do not strictly have to abide by the ACID properties, such as loosely-coupled collaborations and workflows. There is also a need to group Web services into applications that require some form of correlation but do not necessarily require transactional behaviour. The purpose of this paper is to provide a state-of-the-art review and overview of some proposed standards surrounding Web services composition, co-ordination, and transaction. In particular, the Business Process Execution Language for Web Services (BPEL4WS) and its co-ordination and transaction frameworks (WS-Coordination and WS-Transaction) are discussed.
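The relaxed transaction style referred to here is commonly realized through compensation rather than rollback: each completed step has a semantic undo that is invoked if a later step fails. A minimal sketch of that pattern (not an implementation of WS-Transaction):

```python
def run_with_compensation(steps):
    """steps: list of (action, compensation) pairs of callables. On failure,
    run the compensations of completed steps in reverse order; this is a
    best-effort semantic undo, not an ACID rollback."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()
        raise
```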

Relevance:

40.00%

Publisher:

Abstract:

Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, typically manually by network administrators, and the development and management of these enforcement policies is a difficult and error-prone task. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. The tool gives the system administrator a textual interface and the vendor-specific low-level languages they trust and are familiar with, backed by an off-line compiler tool. It was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE), which provided an object environment for building a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI was provided for more experienced users, but the tool was also designed to be easily integrated into GUI-based applications for non-expert users. The system was evaluated from a custom-built GUI application that can create test firewall rule sets containing synthetic rules, to supply a variety of experimental conditions and record various performance metrics. The validation tool was designed pragmatically, around the needs of the network administrator. Modularity was important because the network device languages being processed change quickly, so an object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to serve different types of users: system administrators want low-level, CLI-based tools that they can trust and use easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard with a gentler learning curve. Built around these ideas, the tool proved to be a usable and complementary addition to the many network policy-based systems currently available. It has a flexible design and comprehensive functionality, in contrast to tools that work across multiple vendor languages but do not implement a deep range of options for any of them, and it complements existing systems such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance; the tool correctly processed large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed. It is based around a vendor-independent XML repository of device configurations, which could be used to bring together existing policy management and analysis systems.
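One family of validation checks such a tool performs is anomaly detection, for example flagging a rule that is completely shadowed by an earlier rule with a different action. The sketch below uses an invented four-tuple rule representation and is not the thesis's C# implementation:

```python
from ipaddress import ip_network

def is_shadowed(earlier, later):
    """earlier/later: (src_net, dst_net, (port_lo, port_hi), action).
    'later' is shadowed if every packet it matches is already matched by
    'earlier' with a different action, so 'later' can never take effect."""
    e_src, e_dst, e_ports, e_act = earlier
    l_src, l_dst, l_ports, l_act = later
    return (ip_network(l_src).subnet_of(ip_network(e_src)) and
            ip_network(l_dst).subnet_of(ip_network(e_dst)) and
            e_ports[0] <= l_ports[0] and l_ports[1] <= e_ports[1] and
            e_act != l_act)

# A broad deny placed first shadows a narrower accept placed after it.
deny_all = ("0.0.0.0/0", "10.0.0.0/8", (0, 65535), "deny")
accept_web = ("192.168.1.0/24", "10.1.0.0/16", (80, 80), "accept")
print(is_shadowed(deny_all, accept_web))  # True
```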

Relevance:

40.00%

Publisher:

Abstract:

R. Jensen and Q. Shen, 'Fuzzy-Rough Attribute Reduction with Application to Web Categorization,' Fuzzy Sets and Systems, vol. 141, no. 3, pp. 469-485, 2004.

Relevance:

40.00%

Publisher:

Abstract:

This paper tells the story of how a set of university lectures developed over the last six years. The idea is to show how (1) content, (2) communication and (3) assessment have evolved in steps, named "generations of web learning". The reader is offered a stepwise description of both the didactic foundations of the lectures and their practical implementation on a widely available web platform. The relative weight of directive elements has gradually decreased through the "three generations", whereas self-responsibility and self-guided learning have gained in importance.
- Content was initially presented and expected to be learned, but in later phases expected to be constructed, for example in case studies.
- Communication in early phases meant delivering assignments to the lecturer, but later on forming teams, exchanging standpoints and reviewing one another's work.
- Assessment initially consisted of marks devised and added up by the lecturer, but was later enriched by peer review, mutual grading and voting procedures.
How much "added value" can the web provide for teaching, training and learning? Six years of experience suggest: mainly insofar as new (collaborative and self-directed) didactic scenarios are implemented! (DIPF/Orig.)

Relevance:

40.00%

Publisher:

Abstract:

The Continuous Plankton Recorder (CPR) survey provides a unique multi-decadal dataset on the abundance of plankton in the North Sea and North Atlantic and is one of only a few monitoring programmes operating at a large spatio-temporal scale. The results of all samples analysed from the survey since 1946 are stored on an Access Database at the Sir Alister Hardy Foundation for Ocean Science (SAHFOS) in Plymouth. The database is large, containing more than two million records (~80 million data points, if zero results are added) for more than 450 taxonomic entities. An open data policy is operated by SAHFOS. However, the data are not on-line and so access by scientists and others wishing to use the results is not interactive. Requests for data are dealt with by the Database Manager. To facilitate access to the data from the North Sea, which is an area of high research interest, a selected set of data for key phytoplankton and zooplankton species has been processed in a form that makes them readily available on CD for research and other applications. A set of MATLAB tools has been developed to provide an interpolated spatio-temporal description of plankton sampled by the CPR in the North Sea, as well as easy and fast access to users in the form of a browser. Using geostatistical techniques, plankton abundance values have been interpolated on a regular grid covering the North Sea. The grid is established on centres of 1 degree longitude x 0.5 degree latitude (~32 x 30 nautical miles). Based on a monthly temporal resolution over a fifty-year period (1948-1997), 600 distribution maps have been produced for 54 zooplankton species, and 480 distribution maps for 57 phytoplankton species over the shorter period 1958-1997. The gridded database has been developed in a user-friendly form and incorporates, as a package on a CD, a set of options for visualisation and interpretation, including the facility to plot maps for selected species by month, year, groups of months or years, long-term means or as time series and contour plots. This study constitutes the first application of an easily accessed and interactive gridded database of plankton abundance in the North Sea. As a further development, the MATLAB browser is being converted to a user-friendly Windows-compatible format (WinCPR) for release on CD and via the Web in 2003.
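The gridding step can be illustrated with a simple inverse-distance-weighted interpolation onto cell centres; the study itself used geostatistical techniques (kriging), so this is only a stand-in sketch with invented parameter names:

```python
import numpy as np

def idw_grid(sample_lon, sample_lat, values, grid_lon, grid_lat, power=2.0):
    """Inverse-distance-weighted plankton abundance on a regular grid;
    a simple stand-in for the kriging actually used by the study."""
    out = np.empty((len(grid_lat), len(grid_lon)))
    for i, glat in enumerate(grid_lat):
        for j, glon in enumerate(grid_lon):
            d2 = (sample_lon - glon) ** 2 + (sample_lat - glat) ** 2
            w = 1.0 / np.maximum(d2, 1e-9) ** (power / 2.0)
            out[i, j] = np.sum(w * values) / np.sum(w)
    return out

# Cell centres at 1 degree longitude x 0.5 degree latitude over the North Sea.
grid_lon = np.arange(-4.0, 9.0, 1.0)
grid_lat = np.arange(51.0, 61.0, 0.5)
```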

Relevance:

40.00%

Publisher:

Abstract:

A Web-service based approach is presented which enables geographically dispersed users to share software resources over the Internet. A service-oriented software sharing system has been developed, consisting of shared applications, client applications and three types of services: an application proxy service, a proxy implementation service and an application manager service. With the aid of these services, the client applications interact with the shared applications to carry out a software sharing task. The approach satisfies the requirements of copyright protection and reuse of legacy codes. In this paper, the role of the Web services and the architecture of the system are presented first, followed by a case study to illustrate the approach developed.
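A hedged sketch of how the three services might fit together (all names are invented; the paper's actual interfaces are not reproduced here): the client never obtains the shared binary, it only calls the proxy, which forwards requests to the application running on the provider's host.

```python
class ProxyImplementationService:
    """Provider side: executes operations against the shared application,
    which never leaves the provider's host (copyright protection)."""
    def __init__(self, shared_app):
        self._app = shared_app

    def execute(self, operation, payload):
        return getattr(self._app, operation)(payload)

class ApplicationProxyService:
    """Client-facing proxy: forwards calls to the provider-side service."""
    def __init__(self, impl):
        self._impl = impl

    def invoke(self, operation, payload):
        return self._impl.execute(operation, payload)

class ApplicationManagerService:
    """Registry that lets clients discover proxies for shared applications."""
    def __init__(self):
        self._registry = {}

    def publish(self, name, proxy):
        self._registry[name] = proxy

    def lookup(self, name):
        return self._registry[name]
```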

Relevance:

40.00%

Publisher:

Abstract:

In this paper, support for legacy applications, one of the important advantages of Grid computing, is presented. The ability to reuse existing codes/applications in combination with other Web/Internet technologies, such as Java, makes Grid computing a good choice for developers who want to wrap existing applications for access over an intranet or the Internet. The approach developed can be used for migrating legacy applications into Grid services, which speeds up the popularization of Grid technology. The approach is illustrated using a case study, with a detailed step-by-step description of its implementation. The Globus Toolkit is utilized to develop the system.
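The generic wrapping pattern, exposing a legacy command-line application through a service operation, can be sketched as below; this is plain Python around subprocess, not the Globus Toolkit's actual Grid service APIs:

```python
import subprocess

def legacy_operation(executable, args, stdin_data=""):
    """Wrap a legacy command-line application as a callable service operation.
    A real Grid service would expose this through the Globus Toolkit hosting
    environment; the wrapping idea is the same."""
    result = subprocess.run(
        [executable, *args],
        input=stdin_data,
        capture_output=True,
        text=True,
        check=True,  # raise if the legacy code exits with an error
    )
    return result.stdout
```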

Relevance:

40.00%

Publisher:

Abstract:

Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade, repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define the folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins for further exploration of their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed superior accuracy compared with non-specific statistical potentials in (1) discriminating correct from incorrect models and (2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and can therefore serve as a novel tool for quickly prioritizing designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins.
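Distance-dependent statistical potentials are typically derived with the inverse Boltzmann relation, E(d) = -kT ln(p_obs(d) / p_ref(d)); the sketch below shows that derivation for one residue-pair type and is not the StaRProtein implementation:

```python
import numpy as np

def statistical_potential(obs_counts, ref_counts, kT=1.0, pseudo=1.0):
    """Inverse-Boltzmann energy per distance bin for one residue-pair type.
    obs_counts: contacts observed in a (repeat-protein) training set;
    ref_counts: contacts expected under a reference state."""
    p_obs = (obs_counts + pseudo) / (obs_counts.sum() + pseudo * obs_counts.size)
    p_ref = (ref_counts + pseudo) / (ref_counts.sum() + pseudo * ref_counts.size)
    return -kT * np.log(p_obs / p_ref)

def score_model(pair_distances, potentials, bin_width=0.5):
    """Sum bin energies over all residue pairs of a model; lower = more stable."""
    total = 0.0
    for pair_type, d in pair_distances:
        b = min(int(d / bin_width), potentials[pair_type].size - 1)
        total += potentials[pair_type][b]
    return total
```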

Relevance:

40.00%

Publisher:

Abstract:

The development of Web 2.0 tools has been driving significant changes in how Internet users interact. In education, these tools can enrich pedagogical practice and promote activities involving active participation, collaboration, cooperation and knowledge sharing. In a teaching and learning context in which postgraduate students in Education Sciences are assumed to show weaknesses in critical thinking, the pedagogical use of Web 2.0 tools can therefore be regarded as a factor promoting critical thinking. Along these lines, the main objective of the present study is to contribute to a deeper understanding of the use of Web 2.0 technologies as a potential factor in promoting the development of critical thinking at the Universidade Eduardo Mondlane (UEM), through the application and analysis of pedagogical strategies based on blogs and wikis. Given this objective, the empirical part was conducted as action research and comprised two cycles. Participants were selected by convenience. The first research cycle involved fourteen participants enrolled in the 2009/2010 academic year in the module Professional Development and Lifelong Learning, taught in the specialization phase of the master's programme in Adult Education. The second cycle comprised eighteen participants also enrolled in the same module, but in the 2010/2011 academic year. Data were collected through observation, semi-structured interviews, a logbook, questionnaires, argumentative essays and documentary research. An analysis model adapted from Ennis's (1987) typology of critical thinking was used in data collection and analysis, and an interpretative analysis of the data was carried out with the Nvivo8 software. The results show that it is possible to promote students' critical-thinking abilities and dispositions through pedagogical strategies that draw on Web 2.0 tools such as a discussion blog, group blogs and a class wiki. Despite the various difficulties the students faced during the module, participants in both cycles acknowledged that Web 2.0 tools have great potential for promoting critical thinking and that their use is strongly recommended for the teaching and learning process. The study also concluded that the analysis model adapted from Ennis (1987) that guided the research proved fundamental for observing the occurrence of critical-thinking abilities and dispositions in the discussion blog, the group blogs and the class wiki.

Relevance:

40.00%

Publisher:

Abstract:

The aim of this work is to develop a concept for representing the Personennamendatei (PND) in the languages Resource Description Framework (RDF), Resource Description Framework Schema Language (RDFS) and Web Ontology Language (OWL). Following the premise of the Semantic Web that data should be represented and stored in a form that is both human-readable and machine-processable, a structure for personal data is created, starting from the existing data and structure situation in the Pica format. Beyond that, the extensibility and adaptability of the model must be guaranteed with regard to future applications and structural changes that may not yet be foreseeable. The modelling is oriented towards existing standards such as Dublin Core, Friend Of A Friend (FOAF), Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD) and Resource Description and Access (RDA).
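A minimal sketch of such a person record, built with the rdflib library using FOAF and Dublin Core terms (the namespace and identifier are chosen for illustration and do not reproduce the thesis's actual PND model):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, FOAF, RDF

g = Graph()
GND = Namespace("http://d-nb.info/gnd/")  # illustrative namespace
person = GND["118540238"]                 # illustrative identifier

# Type the resource as a person and attach name and identifier literals.
g.add((person, RDF.type, FOAF.Person))
g.add((person, FOAF.name, Literal("Goethe, Johann Wolfgang von")))
g.add((person, DC.identifier, Literal("118540238")))

print(g.serialize(format="turtle"))
```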

Relevance:

40.00%

Publisher:

Abstract:

Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.