971 results for Web-Solution
Abstract:
Since Law No. 6,938 of August 31, 1981, which established the National Environment System, created the National Environment Council, and instituted the Federal Technical Register of Environmental Defense Activities and Instruments, public environmental management has gained an ever larger space in municipal administrations, as the implementation of environmental management instruments gives municipalities the possibility of effective actions that contribute to a better quality of life for the population. This work proposes a municipal classification method that indicates the level of a municipality's environmental management, examining the number of environmental management instruments established and the number of environmental problems reported by the local manager in each municipality in the years 2006/2008, as well as the influence of the HDI both on the implementation of such instruments and on the occurrence of environmental problems. The classification is intended to verify whether a municipality is well equipped with respect to environmental management, supporting future decisions in local environmental policy. The focus of this work is the municipalities of the states of Minas Gerais, Piauí and Rio de Janeiro. The results are processed in MATLAB using fuzzy logic and presented on a website built with JSP, HTML and JavaScript, hosted on a Tomcat server; they are presented as alphanumeric values in tables and spatially through thematic maps in a GIS-web solution. The data are stored in a PostgreSQL database management system with its spatial extension PostGIS, and the maps are accessed through the MapServer map server.
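The fuzzy classification step can be sketched in miniature. This is an illustrative Python stand-in for the MATLAB fuzzy-logic processing; the input ranges, rules, and output scores are assumptions for illustration, not the study's actual model.

```python
# Toy fuzzy rating of municipal environmental management (assumed rules).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(instruments, problems):
    """Rate management on a 0-10 scale from two counts (hypothetical ranges)."""
    inst_low, inst_high = tri(instruments, -1, 0, 10), tri(instruments, 0, 10, 21)
    prob_low, prob_high = tri(problems, -1, 0, 10), tri(problems, 0, 10, 21)
    # Mamdani-style rules: many instruments and few problems -> good management.
    good = min(inst_high, prob_low)
    fair = max(min(inst_high, prob_high), min(inst_low, prob_low))
    poor = min(inst_low, prob_high)
    # Weighted-average defuzzification with singleton outputs 9, 5, 1.
    den = good + fair + poor
    return (9 * good + 5 * fair + 1 * poor) / den if den else 5.0
```

A municipality with many instruments and no reported problems rates near the top of the scale; the inverse case rates near the bottom.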
Abstract:
Contemporary web-based software solutions are usually composed of many interoperating applications. The classical approach is for the different applications of the solution to be created within a single technology/platform, e.g. Java technology, .NET technology, etc. Widespread technologies/platforms practically discourage (and sometimes deliberately make impossible) cooperation with elements of concurrent technologies/platforms. To make it possible to use attractive features of one technology/platform in another, some “cross-technology” approach is necessary. The paper discusses the possibility of combining two existing instruments – interoperability protocols and “lifting” of procedures – in order to obtain such a cross-technology approach.
Abstract:
The University of São Paulo has been experiencing an increase in content in electronic and digital formats, distributed by different suppliers and hosted remotely or in clouds, and is faced with increasing difficulties in facilitating its users' access to this digital collection while coexisting with the traditional world of physical collections. A possible solution was identified in the new generation of systems called Web Scale Discovery, which allow better management, data integration and agility of search. Aiming to identify if and how such a system would meet USP's demands and expectations and, if so, what the analysis criteria for such a tool would be, an analytical study with an essentially documental basis was structured, based on a review of the literature and on data available on official websites and those of libraries using this kind of resource. The conceptual basis of the study was defined after identifying the software assessment methods already available, generating a standard with 40 analysis criteria, ranging from details of the single access interface to information contents, web 2.0 characteristics, intuitive interface, and facet navigation, among others. Details of the studies conducted on four of the major systems currently available in this software category are presented, providing support for the decision-making of other libraries interested in such systems.
Abstract:
Nowadays video and web conferencing systems have become effective tools for communication and collaboration inside organizations. However, although these systems have evolved and now provide very nice features (e.g. sharing multimedia and documents), they are still too focused on the moment the meeting takes place. The existing systems provide very few facilities to organize the meeting and they do not take advantage of the possibilities the generated content offers once the meeting is finished. In this paper, we analyze the life cycle of a web conference and how existing systems monitor these conferences. Finally we present our solution, based on our know-how in videoconference management and our experience with these existing systems.
Abstract:
Decision support systems (DSS) have evolved rapidly during the last decade from stand-alone or limited networked solutions to online participatory solutions. One of the major enablers of this change is one of the fastest growing areas of geographical information system (GIS) technology development: the use of the Internet as a means to access, display, and analyze geospatial data remotely. World-wide, many federal, state, and particularly local governments are designing systems to facilitate data sharing using interactive Internet map servers. This new generation of DSS or planning support systems (PSS), the interactive Internet map server, is the solution for delivering dynamic maps and GIS data and services via the world-wide Web, and for providing public participatory GIS (PPGIS) opportunities to a wider community (Carver, 2001; Jankowski & Nyerges, 2001). It provides a highly scalable framework for GIS Web publishing, Web-based public participatory GIS (WPPGIS), which meets the needs of corporate intranets and the demands of worldwide Internet access (Craig, 2002). The establishment of WPPGIS provides spatial data access through a support centre or a GIS portal to facilitate efficient access to and sharing of related geospatial data (Yigitcanlar, Baum, & Stimson, 2003). As more and more public and private entities adopt WPPGIS technology, the importance and complexity of facilitating geospatial data sharing is growing rapidly (Carver, 2003). Therefore, this article focuses on the online public participation dimension of GIS technology. The article provides an overview of recent literature on GIS and WPPGIS, and includes a discussion of the potential use of these technologies in providing a democratic platform for the public in decision-making.
Abstract:
The interoperable and loosely-coupled web services architecture, while beneficial, can be resource-intensive, and is thus susceptible to denial of service (DoS) attacks in which an attacker can use a relatively insignificant amount of resources to exhaust the computational resources of a web service. We investigate the effectiveness of defending web services from DoS attacks using client puzzles, a cryptographic countermeasure which provides a form of gradual authentication by requiring the client to solve some computationally difficult problems before access is granted. In particular, we describe a mechanism for integrating a hash-based puzzle into existing web services frameworks and analyze the effectiveness of the countermeasure using a variety of scenarios on a network testbed. Client puzzles are an effective defence against flooding attacks. They can also mitigate certain types of semantic-based attacks, although they may not be the optimal solution.
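A generic hash-based client puzzle of the kind described can be sketched as follows; the SHA-256 construction and the parameter choices here are assumptions for illustration, not the paper's exact scheme.

```python
# Hash-based client puzzle: the client searches for a counter whose hash has
# a required number of leading zero bits; the server verifies with one hash.
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zeros inside the first non-zero byte
        break
    return bits

def make_challenge() -> bytes:
    return os.urandom(16)  # server-chosen value bound to the request

def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: expected work grows as 2**difficulty."""
    counter = 0
    while True:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return counter
        counter += 1

def verify(challenge: bytes, counter: int, difficulty: int) -> bool:
    """Server side: a single cheap hash check per request."""
    digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty
```

The asymmetry is the point: raising the difficulty during a suspected flood makes each request expensive for clients while verification stays constant-time for the service.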
Abstract:
Collecting regular personal reflections from first year teachers in rural and remote schools is challenging as they are busily absorbed in their practice, and separated from each other and the researchers by thousands of kilometres. In response, an innovative web-based solution was designed to both collect data and be a responsive support system for early career teachers as they came to terms with their new professional identities within rural and remote school settings. Using an emailed link to a web-based application named goingok.com, the participants are charting their first year plotlines using a sliding scale from ‘distressed’, ‘ok’ to ‘soaring’ and describing their self-assessment in short descriptive posts. These reflections are visible to the participants as a developing online journal, while the collections of de-identified developing plotlines are visible to the research team, alongside numerical data. This paper explores important aspects of the design process, together with the challenges and opportunities encountered in its implementation. A number of the key considerations for choosing to develop a web application for data collection are initially identified, and the resultant application features and scope are then examined. Examples are then provided about how a responsive software development approach can be part of a supportive feedback loop for participants while being an effective data collection process. Opportunities for further development are also suggested with projected implications for future research.
Abstract:
Clustering is an important technique in organising and categorising web scale documents. The main challenges faced in clustering the billions of documents available on the web are the processing power required and the sheer size of the datasets available. More importantly, it is nigh impossible to generate the labels for a general web document collection containing billions of documents and a vast taxonomy of topics. However, document clusters are most commonly evaluated by comparison to a ground truth set of labels for documents. This paper presents a clustering and labeling solution where the Wikipedia is clustered and hundreds of millions of web documents in ClueWeb12 are mapped on to those clusters. This solution is based on the assumption that the Wikipedia contains such a wide range of diverse topics that it represents a small scale web. We found that it was possible to perform the web scale document clustering and labeling process on one desktop computer under a couple of days for the Wikipedia clustering solution containing about 1000 clusters. It takes longer to execute a solution with finer granularity clusters such as 10,000 or 50,000. These results were evaluated using a set of external data.
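The mapping step, assigning each web document to its nearest precomputed Wikipedia cluster, can be sketched with cosine similarity over sparse term vectors. The centroids and documents below are toy assumptions, not ClueWeb12 data.

```python
# Assign documents to the closest cluster centroid by cosine similarity.
import math

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign(doc_vec: dict, centroids: dict) -> str:
    """Label a document with the name of its nearest cluster centroid."""
    return max(centroids, key=lambda c: cosine(doc_vec, centroids[c]))

# Hypothetical centroids standing in for clustered Wikipedia topics.
centroids = {
    "sports": {"match": 1.0, "team": 0.8, "goal": 0.6},
    "finance": {"market": 1.0, "stock": 0.9, "bank": 0.5},
}
label = assign({"team": 1.0, "goal": 1.0}, centroids)
```

Because each document only needs comparisons against the fixed set of centroids, the mapping is embarrassingly parallel, which is what makes the single-desktop scale reported above plausible.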
Abstract:
Business processes and application functionality are becoming available as internal web services inside enterprise boundaries as well as becoming available as commercial web services from enterprise solution vendors and web services marketplaces. Typically there are multiple web service providers offering services capable of fulfilling a particular functionality, although with different Quality of Service (QoS). Dynamic creation of business processes requires composing an appropriate set of web services that best suit the current need. This paper presents a novel combinatorial auction approach to QoS aware dynamic web services composition. Such an approach would enable not only stand-alone web services but also composite web services to be a part of a business process. The combinatorial auction leads to an integer programming formulation for the web services composition problem. An important feature of the model is the incorporation of service level agreements. We describe a software tool QWESC for QoS-aware web services composition based on the proposed approach.
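The core selection problem can be illustrated in miniature: pick one provider per task so as to minimize total cost subject to an end-to-end QoS bound, here by brute force rather than the paper's integer-programming formulation. The provider names and numbers are invented.

```python
# QoS-aware composition as constrained selection (toy stand-in for the ILP).
from itertools import product

tasks = {
    "payment":  [{"name": "P1", "cost": 5, "time": 40},
                 {"name": "P2", "cost": 3, "time": 90}],
    "shipping": [{"name": "S1", "cost": 4, "time": 30},
                 {"name": "S2", "cost": 1, "time": 80}],
}

def compose(tasks, max_time):
    """Cheapest provider combination whose total response time meets the SLA."""
    best = None
    for combo in product(*tasks.values()):
        if sum(p["time"] for p in combo) > max_time:
            continue  # violates the service-level agreement
        cost = sum(p["cost"] for p in combo)
        if best is None or cost < best[0]:
            best = (cost, [p["name"] for p in combo])
    return best
```

The integer program in the paper plays the same role as this exhaustive search but scales to many tasks and bids; the SLA appears here as the `max_time` constraint.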
Abstract:
Different public and private organizations collect and make available a mass of data on the socio-economic reality of different nations. Today the Brazilian government has a declared interest in disseminating a varied range of information to the most diverse user profiles. However, a series of limitations to more massive and democratic dissemination persists, among them the heterogeneity of the data sources, their dispersion, and their unfriendly presentation formats. Owing to the inherent complexity of the geographic information involved, which produces incompatibility at several levels, data interchange between geographic information systems is not a trivial problem. For web applications, one solution is Web Services, which allow new applications to interact with existing ones and make systems developed on different platforms compatible. In this sense, the objective of this work is to show the possibilities of building portals using free software, Web Services technology and the Open Geospatial Consortium (OGC) standards for the dissemination of spatial data. To evaluate and test the selected technologies and prove their effectiveness, an example portal of socio-economic data was developed, comprising information from a local server and from remote servers. The contributions of this work are the provision of dynamic maps, the generation of maps by composing maps made available on remote and local servers, and the use of the OGC WMC standard. Analyzing the portal prototype built, however, one verifies that locating and requesting Web Services are not easy tasks for a typical Internet user. In this direction, future work in the domain of geographic information portals could adopt Representational State Transfer (REST) technology.
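Map composition in such a portal relies on standard OGC requests; a typical WMS GetMap URL can be built as below. The base URL and layer name are placeholders, not the portal's actual services.

```python
# Build an OGC WMS 1.1.1 GetMap request URL from its standard parameters.
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, size, fmt="image/png"):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),       # comma-separated layer names
        "SRS": "EPSG:4326",               # lat/lon coordinate system
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)

url = getmap_url("http://example.org/wms", ["municipalities"],
                 (-44.0, -23.0, -41.0, -20.5), (600, 400))
```

Because the request is just a parameterized URL, the same client code can compose layers from local and remote servers alike, which is exactly what makes cross-server map composition feasible.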
Abstract:
With long-term marine surveys and research, and especially with the development of new marine environment monitoring technologies, prodigious amounts of complex marine environmental data are generated and continue to increase rapidly. These data are characterized by massive volume, widespread distribution, multiple sources, heterogeneity, and multi-dimensional, dynamic structure in space and time. The present study recommends an integrative visualization solution for these data, to enhance the visual display of data and data archives, and to promote the joint use of data distributed among different organizations or communities. This study also analyses web services technologies and defines the concept of the marine information grid, then focuses on spatiotemporal visualization and proposes a process-oriented spatiotemporal visualization method. We discuss how marine environmental data can be organized based on this method, and how the organized data are represented for use with web services and stored in a reusable fashion. In addition, we provide an original visualization architecture that is integrative and based on the explored technologies. Finally, we propose a prototype system for marine environmental data of the South China Sea, with visualizations of Argo floats, sea surface temperature fields, sea current fields, salinity, in-situ investigation data, and ocean stations. The integrative visualization architecture is illustrated on the prototype system, which highlights the process-oriented spatiotemporal visualization method and demonstrates the benefits of the architecture and methods described in this study.
Abstract:
This paper investigates how people return to information in a dynamic information environment. For example, a person might want to return to Web content via a link encountered earlier on a Web page, only to learn that the link has since been removed. Changes can benefit users by providing new information, but they hinder returning to previously viewed information. The observational study presented here analyzed instances, collected via a Web search, where people expressed difficulty re-finding information because of changes to the information or its environment. A number of interesting observations arose from this analysis, including that the path originally taken to get to the information target appeared important in its re-retrieval, whereas, surprisingly, the temporal aspects of when the information was seen before were not. While people expressed frustration when problems arose, an explanation of why the change had occurred was often sufficient to allay that frustration, even in the absence of a solution. The implications of these observations for systems that support re-finding in dynamic environments are discussed.
Abstract:
We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performances of some providers may degrade while the performances of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying the names of these providers. Risk profiles offer a means of analysing the behaviour of broker agents which allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider – thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
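A brute-force check for pure Nash equilibria in a tiny provider-allocation game gives a feel for the model: each user picks a provider, latency grows with load, and a risk profile marks some providers as stressed. The payoff structure below is illustrative, not from the paper.

```python
# Enumerate pure Nash equilibria of a small congestion-style brokering game.
from itertools import product

def latency(provider, load, stressed):
    base = load  # congestion: latency equals number of users on the provider
    return base * 2 if provider in stressed else base

def cost(profile, user, stressed):
    p = profile[user]
    load = sum(1 for q in profile if q == p)
    return latency(p, load, stressed)

def is_nash(profile, providers, stressed):
    """No user can strictly lower their cost by unilaterally switching."""
    for user in range(len(profile)):
        current = cost(profile, user, stressed)
        for alt in providers:
            if alt == profile[user]:
                continue
            deviated = list(profile)
            deviated[user] = alt
            if cost(tuple(deviated), user, stressed) < current:
                return False
    return True

providers = ("A", "B")
# Risk profile: one provider is stressed, doubling its latency.
equilibria = [p for p in product(providers, repeat=2)
              if is_nash(p, providers, stressed={"B"})]
```

Even in this two-user game the stressed profile reshapes the equilibrium set: both users piling onto the degraded provider is never stable, while the other allocations are.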
Abstract:
Purpose
– Traditionally, most studies focus on institutionalized management-driven actors to understand technology management innovation. The purpose of this paper is to argue that there is a need for research to study the nature and role of dissident non-institutionalized actors (i.e. outsourced web designers and rapid application software developers). The authors propose that through online social knowledge sharing, non-institutionalized actors’ solution-finding tensions enable technology management innovation.
Design/methodology/approach
– A synthesis of the literature and an analysis of the data (21 interviews) provided insights into three areas of solution-finding tensions enabling management innovation. The authors frame the analysis on peripherally deviant work and the ways in which dissident non-institutionalized actors deviate from their clients’ (understood as the firm’s) originally contracted objectives.
Findings
– The findings provide insights into the productive role of solution-finding tensions in enabling opportunities for management service innovation. Furthermore, deviant practices that leverage non-institutionalized actors’ online social knowledge to fulfill customers’ requirements are not interpreted negatively, but as a positive willingness to proactively explore alternative paths.
Research limitations/implications
– The findings demonstrate the importance of dissident non-institutionalized actors in technology management innovation. However, this work is based on a single country (USA) and additional research is needed to validate and generalize the findings in other cultural and institutional settings.
Originality/value
– This paper provides new insights into the perceptions of dissident non-institutionalized actors in the practice of IT managerial decision making. The work departs from, but also extends, the previous literature, demonstrating that peripherally deviant work in solution-finding practice creates tensions, enabling management innovation between IT providers and users.
Abstract:
Textual problem-solution repositories are available today in various forms, most commonly as problem-solution pairs from community question answering systems. Modern search engines that operate on the web can suggest possible completions in real-time for users as they type in queries. We study the problem of generating intelligent query suggestions for users of customized search systems that enable querying over problem-solution repositories. Due to the small scale and specialized nature of such systems, we often do not have the luxury of depending on query logs for finding query suggestions. We propose a retrieval model for generating query suggestions for search on a set of problem-solution pairs. We harness the problem-solution partition inherent in such repositories to improve upon traditional query suggestion mechanisms designed for systems that search over general textual corpora. We evaluate our technique over real problem-solution datasets and illustrate that it provides large and statistically significant improvements.
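A minimal version of suggestion over such a repository can be sketched as prefix retrieval restricted to the problem side of each pair; the repository contents below are invented, and this stand-in omits the paper's actual retrieval model.

```python
# Prefix-based query suggestion that exploits the problem/solution partition:
# candidates are drawn only from problem texts, never from solution texts.
pairs = [
    ("printer not responding over wifi", "restart the print spooler"),
    ("printer not printing color", "replace the color cartridge"),
    ("laptop battery drains fast", "disable background apps"),
]

def suggest(prefix, pairs, k=2):
    """Return up to k problem texts that extend the typed prefix."""
    matches = [prob for prob, _ in pairs if prob.startswith(prefix)]
    return sorted(matches)[:k]
```

Because the repository is small, no query log is needed: the problems themselves serve as the suggestion vocabulary, which is the situation the abstract describes for specialized search systems.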