998 results for web handling
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of this paper is to discuss the importance of training teachers to use Web 2.0 in the classroom. Its intention was to find out whether students are familiar with the main Web 2.0 resources and know how to exploit their potential in the development of teaching activities. In addition to the literature review, we conducted an exploratory-descriptive field study. The research was carried out at a private university located in the city of Bauru (São Paulo State, Brazil). We selected 213 students enrolled in the “Supervised Training III” course, which is part of the teacher training curriculum taken by students in the second year of their undergraduate course. The results show that the students surveyed have access to computers and the Internet, are relatively skilled in handling the available tools, and recognize the importance of including them in the teaching and learning process. However, the students have difficulty using the web in a didactic manner, particularly Web 2.0, which involves a focus on users and collaboration. The article therefore points to the need to rethink teacher training courses in order to include practical activities aimed at the use of technology as a teaching resource.
Abstract:
User interfaces are key properties of Business-to-Consumer (B2C) systems, and Web-based reservation systems are an important class of B2C systems. In this paper we show that these systems use a surprisingly broad spectrum of different approaches to handling temporal data in their Web interfaces. Based on these observations and on a literature analysis, we develop a Morphological Box to present the main options for handling temporal data and give examples. The results indicate that the present state of developing and maintaining B2C systems has not been much influenced by modern Web Engineering concepts and that there is considerable potential for improvement.
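The abstract does not reproduce the Morphological Box itself, but one recurring option for handling temporal data in reservation interfaces is server-side validation of submitted date fields. A minimal sketch in Python, where the function name and the specific rules are illustrative assumptions rather than options taken from the paper:

```python
from datetime import date, datetime

def parse_reservation_dates(check_in: str, check_out: str) -> tuple[date, date]:
    """Parse ISO-formatted form fields and enforce basic temporal constraints.

    Hypothetical helper; not from the paper's Morphological Box.
    """
    start = datetime.strptime(check_in, "%Y-%m-%d").date()
    end = datetime.strptime(check_out, "%Y-%m-%d").date()
    if start < date.today():
        raise ValueError("check-in date lies in the past")
    if end <= start:
        raise ValueError("check-out must be after check-in")
    return start, end

# Example: a two-night stay submitted through a web form.
print(parse_reservation_dates("2030-05-01", "2030-05-03"))
```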
Abstract:
Web 2.0 opens up new ways for researchers to handle knowledge and information: searching for information and sources, exchanging knowledge with others, managing resources, and creating one's own content on the Web have become simple and inexpensive. This article discusses the significance of Web 2.0 for handling knowledge and information and shows how the cooperation of many individuals makes the creation of new knowledge and of innovations possible. The influence of Web 2.0 on science and the possible advantages and disadvantages of its use are discussed. In addition, a brief overview is given of studies that examine the use of Web 2.0 in the general population. The empirical part of the article presents the method and results of the survey study "Wissenschaftliches Arbeiten im Web 2.0" ("Scholarly Work in Web 2.0"), in which early-career researchers in Germany were asked about their use of Web 2.0 for their own scientific work. The results show that Wikipedia in particular is used intensively to very intensively by a large share of respondents as an entry point for literature research. Active use of Web 2.0, for example by writing one's own blog or contributing to the online encyclopedia Wikipedia, is still rare. Many services are unknown or are viewed rather skeptically, and the local desktop computer has not yet been displaced by the Web as the central storage location.
Abstract:
For the main part, electronic government (or e-government for short) aims to make digital public services available to citizens, companies, and organizations. To that end, e-government comprises the application of Information and Communications Technology (ICT) to support government operations and provide better governmental services than are possible with traditional means (Fraga, 2002). Accordingly, e-government services go further than traditional governmental services and aim to fundamentally alter the processes by which public services are generated and delivered, thereby transforming the entire spectrum of relationships between public bodies and citizens, businesses, and other government agencies (Leitner, 2003). One of the most important points in implementing this transformation is to inform citizens, businesses, and/or other government agencies faithfully and in an accessible way. This allows all participants in governmental affairs to move from passive information access to active participation (Palvia and Sharma, 2007). In addition, by appropriate handling of the participants' data, a personalization towards these participants may even be accomplished. For instance, by creating meaningful user profiles as a kind of tailored knowledge structure, a better-quality governmental service (i.e., an individualized one) may be provided. To create such knowledge structures, known information (e.g., a social security number) can be enriched with vague information that may be accurate only to a certain degree. Hence, fuzzy knowledge structures can be generated, which help improve the government-participant relationship. The Web KnowARR framework (Portmann and Thiessen, 2013; Portmann and Pedrycz, 2014; Portmann and Kaltenrieder, 2014), which I introduce in my presentation, allows all these participants to be automatically informed about changes of Web content regarding a respective governmental action. The name Web KnowARR stands for a self-acting entity (i.e., instantiated from the conceptual framework) that knows or apprehends the Web. In this talk, the framework's three main components from artificial intelligence research (i.e., knowledge aggregation, representation, and reasoning), as well as its specific use in electronic government, will be briefly introduced and discussed.
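The aggregation, representation, and reasoning components of Web KnowARR are not specified in this summary. As a loose illustration of only the change-monitoring idea (informing participants when Web content changes), one might fingerprint a page and compare fingerprints across polls; all names below are hypothetical and are not the framework's API:

```python
import hashlib
import urllib.request

def content_fingerprint(url: str) -> str:
    """Fetch a page and return a hash of its body, used to detect changes."""
    with urllib.request.urlopen(url) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def notify_if_changed(url: str, last_seen: str | None) -> str:
    """Compare the current fingerprint against the last one seen and report.

    A real system would notify subscribed participants instead of printing.
    """
    current = content_fingerprint(url)
    if last_seen is not None and current != last_seen:
        print(f"Content at {url} has changed; participants would be notified.")
    return current
```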
Abstract:
This final degree project covers the development of a web application for managing personal expenses, from its inception through to full operation. Applications of this kind form an emerging, fast-growing market, which means that competition between them is very high. The design of the application developed in this work has therefore been treated with particular care. It is a meticulous process that gives each part of the application unique features, translated into functionality for the user, such as adding one's own monthly expenses and income, generating charts of the main expenses, and obtaining advice from an external source. These unique features, together with more general ones such as a graphic design in a wide range of colors, make the application easy and intuitive to use. Notably, to optimize its use, the application is responsive: it adapts its interface to the screen size of the device from which it is accessed. The application is built with MEAN.JS, one of the newest and most talked-about technology stacks on the market. Applying this technology poses varied challenges, from structuring the project folders and the entire backend to designing the frontend. Once development and deployment are complete, possible improvements are analyzed in order to refine the application further.
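The application itself is built on MEAN.JS; purely to illustrate the core domain logic described here (recording income and expense entries and aggregating them for charts), the following is a minimal sketch in Python with invented names, not code from the project:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Entry:
    category: str
    amount: float  # positive = income, negative = expense

class Ledger:
    """Toy model of the app's core feature: monthly income/expense tracking."""

    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def add(self, category: str, amount: float) -> None:
        self.entries.append(Entry(category, amount))

    def totals_by_category(self) -> dict[str, float]:
        """The aggregation that would back the app's expense charts."""
        totals: dict[str, float] = defaultdict(float)
        for e in self.entries:
            totals[e.category] += e.amount
        return dict(totals)

ledger = Ledger()
ledger.add("salary", 1500.0)
ledger.add("rent", -600.0)
ledger.add("groceries", -180.5)
print(ledger.totals_by_category())
```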
Abstract:
The number of interoperable research infrastructures has increased significantly with the growing awareness of the efforts made by the Global Earth Observation System of Systems (GEOSS). One of the Societal Benefit Areas (SBA) that is benefiting most from GEOSS is biodiversity, given the costs of monitoring the environment and managing complex information, from space observations to species records including their genetic characteristics. But GEOSS goes beyond simple data sharing to encourage the publishing and combination of models, an approach which can ease the handling of complex multi-disciplinary questions. The purpose of this paper is to illustrate these concepts by presenting eHabitat, a basic Web Processing Service (WPS) for computing the likelihood of finding ecosystems with properties equal to those specified by a user. When chained with other services providing data on climate change, eHabitat can be used for ecological forecasting and becomes a useful tool for decision-makers assessing different strategies when selecting new areas to protect. eHabitat can use virtually any kind of thematic data that can be considered useful when defining ecosystems and their future persistence under different climatic or development scenarios. The paper presents the architecture and illustrates the concepts through case studies which forecast the impact of climate change on protected areas or on the ecological niche of an African bird.
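The abstract does not state how eHabitat scores the "likelihood of finding ecosystems with properties equal to those specified by a user". One plausible reading is a distance in scaled descriptor space mapped to a score between 0 and 1; the sketch below assumes exactly that, with invented descriptor values, and is not eHabitat's actual algorithm:

```python
import math

def similarity(reference: list[float], candidate: list[float],
               scales: list[float]) -> float:
    """Map a scaled Euclidean distance between habitat descriptors
    (e.g. rainfall, temperature) to a 0..1 score; 1.0 = identical properties.

    Illustrative metric only; the paper's own formulation is not given here.
    """
    d2 = sum(((r - c) / s) ** 2
             for r, c, s in zip(reference, candidate, scales))
    return math.exp(-0.5 * d2)

# Reference habitat vs. a candidate cell under a climate scenario.
print(similarity([1200.0, 24.0], [1100.0, 26.0], scales=[300.0, 3.0]))
```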
Abstract:
The paper gives an overview of the ongoing FP6-IST INFRAWEBS project and describes the main layers and software components embedded in an application-oriented realisation framework. An important part of INFRAWEBS is the Semantic Web Unit (SWU), a collaboration platform and interoperable middleware for ontology-based handling and maintenance of Semantic Web Services (SWS). The framework provides knowledge about a specific domain and relies on ontologies to structure and exchange this knowledge with semantic service development modules. The INFRAWEBS Designer and Composer are sub-modules of the SWU responsible for creating Semantic Web Services using a Case-Based Reasoning approach. The Service Access Middleware (SAM) is responsible for building up the communication channels between users and the various other modules, and serves as a generic middleware for the deployment of Semantic Web Services. This software toolset provides a development framework for creating and maintaining Semantic Web Services over their full life-cycle, with specific application support.
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed Data Extractor, a data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system; with its help, database queries can be distributed over both local and Web data sources within the MSemODB framework. Data Extractor treats Web sites as data sources, controlling query execution and data retrieval, and works as an intermediary between the applications and the sites. It utilizes a twofold “custom wrapper” approach to information retrieval: wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing, and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis, and processing. Data Extractor is designed to act as a data retrieval server as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user, which allows Data Extractor to distribute and scale well. This study confirms the feasibility of building custom wrappers for Web sites: the approach provides accuracy of data retrieval, together with power and flexibility in handling complex cases.
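Neither the thesis's scripting language nor its wrapper library is shown in the abstract. As a minimal illustration of the custom-wrapper idea only (a hypothetical class, not Data Extractor's API), a wrapper can fetch a page and expose extracted records as query results:

```python
import re
import urllib.request

class TitleWrapper:
    """Minimal 'custom wrapper': treats one site as a queryable data source.

    Illustrative stand-in; real wrappers would extract richer record sets.
    """
    PATTERN = re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)

    def __init__(self, url: str) -> None:
        self.url = url

    def query(self) -> list[str]:
        """Fetch the page and return the extracted records."""
        with urllib.request.urlopen(self.url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return [m.strip() for m in self.PATTERN.findall(html)]
```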
Abstract:
High dependability, availability, and fault-tolerance are open problems in Service-Oriented Architecture (SOA). The possibility of generating software applications by integrating services from heterogeneous domains in a reliable way makes it worthwhile to face the challenges inherent to this paradigm. In order to ensure quality in service compositions, some research efforts propose the adoption of verification techniques to identify and correct errors. In this context, exception handling is a powerful mechanism to increase SOA quality. Several research works are concerned with mechanisms for exception propagation in web services, implemented in many languages and frameworks. However, to the extent of our knowledge, no existing work evaluates these mechanisms in SOA with regard to the .NET framework. The main contribution of this paper is to evaluate and propose exception propagation mechanisms in SOA for applications developed within the .NET framework. In this direction, this work (i) extends a previous study, showing the need to propose a solution for exception propagation in SOA for applications developed in .NET, and (ii) presents a solution, based on a model obtained from the results found in (i), which will be applied to real cases through fault injection and AOP techniques.
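The .NET mechanisms the paper evaluates are not detailed in the abstract. The underlying pattern, converting an internal exception into a structured fault that a service consumer can deserialize and act on, can be sketched independently of .NET; all names below are illustrative:

```python
import json

class ServiceFault(Exception):
    """Typed fault carrying enough context to be re-raised at the caller."""
    def __init__(self, code: str, detail: str) -> None:
        super().__init__(f"{code}: {detail}")
        self.code, self.detail = code, detail

def service_operation(payload: dict) -> str:
    if "user" not in payload:
        raise ServiceFault("MISSING_FIELD", "payload lacks 'user'")
    return json.dumps({"ok": True})

def invoke(payload: dict) -> str:
    """Service boundary that converts internal errors into serializable faults."""
    try:
        return service_operation(payload)
    except ServiceFault as fault:
        # Instead of leaking a stack trace across the boundary, propagate a
        # structured fault the consumer can map back to a typed exception.
        return json.dumps({"ok": False,
                           "fault": {"code": fault.code,
                                     "detail": fault.detail}})

print(invoke({}))
```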
Abstract:
Pancreatic β-cells are highly sensitive to suboptimal or excess nutrients, as occurs in protein-malnutrition and obesity. Taurine (Tau) improves insulin secretion in response to nutrients and depolarizing agents. Here, we assessed the expression and function of Cav and KATP channels in islets from malnourished mice fed on a high-fat diet (HFD) and supplemented with Tau. Weaned mice received a normal (C) or a low-protein diet (R) for 6 weeks. Half of each group were fed a HFD for 8 weeks without (CH, RH) or with 5% Tau since weaning (CHT, RHT). Isolated islets from R mice showed lower insulin release with glucose and depolarizing stimuli. In CH islets, insulin secretion was increased and this was associated with enhanced KATP inhibition and Cav activity. RH islets secreted less insulin at high K(+) concentration and showed enhanced KATP activity. Tau supplementation normalized K(+)-induced secretion and enhanced glucose-induced Ca(2+) influx in RHT islets. R islets presented lower Ca(2+) influx in response to tolbutamide, and higher protein content and activity of the Kir6.2 subunit of the KATP. Tau increased the protein content of the α1.2 subunit of the Cav channels and the SNARE proteins SNAP-25 and Synt-1 in CHT islets, whereas in RHT, Kir6.2 and Synt-1 proteins were increased. In conclusion, impaired islet function in R islets is related to higher content and activity of the KATP channels. Tau treatment enhanced RHT islet secretory capacity by improving the protein expression and inhibition of the KATP channels and enhancing Synt-1 islet content.
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or be visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
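XGMML, the format mentioned above, is an XML dialect that Cytoscape can import. A minimal sketch of serializing a two-node interaction network in that shape (the exact attributes IIS emits are not specified in the abstract, so this only illustrates the format):

```python
import xml.etree.ElementTree as ET

def to_xgmml(nodes: dict[str, str], edges: list[tuple[str, str]]) -> str:
    """Serialize a small interaction network as XGMML (Cytoscape-importable)."""
    graph = ET.Element("graph", {"label": "interactome",
                                 "xmlns": "http://www.cs.rpi.edu/XGMML"})
    for node_id, label in nodes.items():
        ET.SubElement(graph, "node", {"id": node_id, "label": label})
    for source, target in edges:
        ET.SubElement(graph, "edge", {"source": source, "target": target})
    return ET.tostring(graph, encoding="unicode")

# Two proteins and one physical interaction between them (toy data).
print(to_xgmml({"1": "SNAP25", "2": "STX1A"}, [("1", "2")]))
```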
Abstract:
Stingless bees collect plant resins and make them into propolis, and they have a wider range of uses for this material than do honey bees (Apis spp.). Plebeia spp. workers employ propolis mixed with wax (cerumen) for constructing and sealing nest structures, while they use viscous (sticky) propolis for defense by applying it onto their enemies. Isolated viscous propolis deposits are permanently maintained in the interior of their colonies, as also seen in other Meliponini species. Newly emerged Plebeia emerina (Friese) workers were observed stuck to and unable to escape these viscous propolis stores. We examined the division of labor involved in propolis manipulation by observing marked bees of known age in four colonies of P. emerina from southern Brazil. Activities on brood combs, the nest involucrum and food pots were observed from the first day of life of the marked bees. However, work on viscous propolis deposits did not begin until the 13th day of age and continued until the 56th day (the maximum lifespan in our sample). Although worker bees begin to manipulate cerumen early, they seem to be unable to handle viscous propolis until they are older.
Abstract:
The use of the web to provide information and services from government bodies to citizens has become increasingly significant. Guaranteeing that this content and these services are accessible to every citizen, regardless of special needs or any other barriers, is therefore essential. In Brazil, Decree-Law 5.296/2004 required all government bodies to adapt their websites to accessibility criteria by December 2005. To track how accessibility evolved over the years and assess the impact of this legislation, this article analyzes the accessibility of Brazilian state government websites using samples collected between 1996 and 2007. The analyses were based on metrics obtained from evaluations with automatic tools. The results indicate that the legislation had little impact on the actual improvement of website accessibility over the period studied, with an improvement appearing only in 2007. More effective public policies are needed so that people with special needs have their rights of access to public information and services on the web more broadly assured.
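The abstract does not name the automatic evaluation tools or the metrics used. As an example of the kind of check such tools typically perform, counting img elements that lack an alt attribute is a classic accessibility metric; a minimal sketch, not the study's actual tooling:

```python
from html.parser import HTMLParser

class MissingAltCounter(HTMLParser):
    """One toy accessibility metric: <img> elements lacking an alt attribute."""

    def __init__(self) -> None:
        super().__init__()
        self.images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            if "alt" not in dict(attrs):
                self.missing_alt += 1

counter = MissingAltCounter()
counter.feed('<img src="a.png"><img src="b.png" alt="Chart of results">')
print(counter.missing_alt, "of", counter.images, "images lack alt text")
```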