847 results for "Combined Web crippling and Flange Crushing"


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Considering the context in which the internet and information converge, this paper analyzes the content of the juridical portal Migalhas, specifically the daily newsletter sent to its readers. Starting with an overview of the internet, cyberculture, web journalism, and concepts of news production, it describes and evaluates general aspects, strategies, and samples of the bulletin through the content analysis method proposed by Laurence Bardin, also presenting some of the portal's history and its main journalistic and news features. The paper discusses how these criteria, news values, and tools are chosen and used to fulfill the proposal of delivering specific, fast information to readers. Questions regarding the opinionated character of the content are also addressed as a way to evaluate its expressiveness.

Relevance:

100.00%

Publisher:

Abstract:

This work, entitled Websislapam: People Rating System Based on Web Technologies, supports the creation of questionnaires and the organization of the entities and people who participate in evaluations. Entities collect data from people with the help of features that reduce typing mistakes. Websislapam maintains a database and provides graphical reports that enable analysis of those evaluated. It was developed with web technologies such as PHP, JavaScript, and CSS, following the object-oriented programming paradigm and using the MySQL DBMS. The theoretical grounding came from research in database systems, web technologies, and web engineering, covering the evaluation process and web-based systems and applications. The technologies applied in the implementation of Websislapam are described, and a separate chapter presents the main features and artifacts used in its development. A case study demonstrates the practical use of the system.
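A core feature above is data collection "with features that reduce typing mistakes." As a rough illustration of that idea, here is a minimal server-side validation sketch; the real system is written in PHP with MySQL, so this Python version and its field names are purely hypothetical:

```python
# Illustrative sketch (not Websislapam's actual code): server-side
# validation of questionnaire answers, the kind of check that catches
# typing mistakes before they reach the database. Field names invented.

QUESTION_SCHEMA = {
    "age":    {"type": int, "min": 0, "max": 120},
    "score":  {"type": float, "min": 0.0, "max": 10.0},
    "sector": {"type": str, "options": {"public", "private"}},
}

def validate_answer(field: str, raw: str):
    """Coerce and range-check a raw form value; raise ValueError on a typo."""
    rule = QUESTION_SCHEMA[field]
    value = rule["type"](raw)  # rejects non-numeric strings for int/float
    if "min" in rule and not (rule["min"] <= value <= rule["max"]):
        raise ValueError(f"{field}={value!r} outside {rule['min']}..{rule['max']}")
    if "options" in rule and value not in rule["options"]:
        raise ValueError(f"{field}={value!r} not one of {sorted(rule['options'])}")
    return value

print(validate_answer("age", "42"))   # 42
# validate_answer("age", "4z2")       # -> ValueError (typing mistake caught)
```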

Relevance:

100.00%

Publisher:

Abstract:

The work of biochemists and molecular biologists often depends on, or is greatly aided by, preliminary computational analysis, so the development of an efficient and friendly computational tool is very important. In this work, we developed a package of programs in the JavaScript language that can be used online or locally. The programs depend exclusively on web browsers and are compatible with Internet Explorer, Opera, Mozilla Firefox, and Google Chrome. With the EBiAn package one can perform the main analyses and manipulations of DNA, RNA, protein, and peptide sequences. The programs can be freely accessed and adapted or modified to generate new programs.
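A minimal sketch of two such routine sequence manipulations, written in Python for illustration (EBiAn itself is JavaScript, and this is not its code):

```python
# Illustrative sequence utilities of the kind the abstract describes.

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(dna: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return dna.translate(COMPLEMENT)[::-1]

def transcribe(dna: str) -> str:
    """Transcribe the coding strand of DNA into mRNA (T -> U)."""
    return dna.upper().replace("T", "U")

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in the sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

seq = "ATGGCCATTGTAATGGGCCGC"
print(reverse_complement(seq))    # GCGGCCCATTACAATGGCCAT
print(transcribe(seq))            # AUGGCCAUUGUAAUGGGCCGC
print(f"GC = {gc_content(seq):.2%}")
```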

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

High-throughput sequencing has made the process of assembling a transcriptome easier, whether or not a reference genome is available. But the quality of a transcriptome assembly must be good enough to capture the most comprehensive catalog of transcripts and their variations and to support further transcriptomic experiments. There is currently no consensus on which of the many sequencing technologies and assembly tools are the most effective, and many non-model organisms lack a reference genome to guide transcriptome assembly. One question, therefore, is whether a reference-based assembly gives better results than de novo assembly. The blood-sucking insect Rhodnius prolixus, a vector of Chagas disease, has a reference genome, so it is a good model on which to compare reference-based and de novo transcriptome assemblies. In this study, we compared de novo and reference-based assembly strategies using three datasets (454, Illumina, and 454 combined with Illumina) and various assembly software packages. We developed criteria to compare the resulting assemblies: the size distribution and number of transcripts, the proportion of potentially chimeric transcripts, and completeness (evaluated both with the CEGMA software and as the fraction of the R. prolixus proteome retrieved). Moreover, we looked for the presence of two chemosensory gene families (Odorant-Binding Proteins and Chemosensory Proteins) to validate assembly quality. The reference-based assemblies obtained after genome annotation were clearly better than those generated by de novo strategies alone. Reference-based strategies revealed new transcripts, including new isoforms unpredicted by automatic genome annotation. However, a combination of both de novo and reference-based strategies gave the best result and allowed us to assemble fragmented transcripts.
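One of the comparison criteria above is the size distribution and number of transcripts; a common summary statistic for this is the N50. A minimal sketch of its computation, with invented transcript lengths (the paper does not publish this code):

```python
# N50: the length L such that transcripts of length >= L together cover
# at least half of the total assembled bases.

def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

transcript_lengths = [2400, 1800, 1500, 900, 700, 400, 300]  # hypothetical
print(len(transcript_lengths), "transcripts,",
      sum(transcript_lengths), "bp total, N50 =", n50(transcript_lengths))
# 7 transcripts, 8000 bp total, N50 = 1800
```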

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Amperometry coupled to flow injection analysis (FIA) and to batch injection analysis (BIA) was used for the rapid and precise quantification of ciclopirox olamine in pharmaceutical products. The favourable hydrodynamic conditions provided by both techniques allowed a very high throughput (more than 300 injections per hour) with a good linear range (2.0–200 µmol L⁻¹) and low limits of detection (below 1.0 µmol L⁻¹). The results were compared with those obtained by the titration method recommended by the American Pharmacopoeia and by capillary electrophoresis. Good agreement between all results was achieved, demonstrating the good performance of amperometry combined with FIA and BIA.
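The linear range and limit of detection quoted above come from a calibration curve. A minimal sketch of that arithmetic, assuming the common LOD = 3s/slope convention; all numbers below are invented for illustration and are not the paper's data:

```python
# Calibration sketch: fit current vs. concentration, then estimate the
# limit of detection as 3 * (blank standard deviation) / slope.

import numpy as np

conc = np.array([2.0, 20.0, 50.0, 100.0, 200.0])    # µmol L-1, hypothetical
current = np.array([0.41, 4.1, 10.2, 20.5, 40.8])   # µA, hypothetical

slope, intercept = np.polyfit(conc, current, 1)     # linear calibration
blank_sd = 0.05                                     # µA, hypothetical
lod = 3 * blank_sd / slope

print(f"slope = {slope:.4f} µA per µmol L-1, LOD ≈ {lod:.2f} µmol L-1")
```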

Relevance:

100.00%

Publisher:

Abstract:

Mach number and thermal effects on the mechanisms of sound generation and propagation are investigated in spatially evolving two-dimensional isothermal and non-isothermal mixing layers at Mach number ranging from 0.2 to 0.4 and Reynolds number of 400. A characteristic-based formulation is used to solve by direct numerical simulation the compressible Navier-Stokes equations using high-order schemes. The radiated sound is directly computed in a domain that includes both the near-field aerodynamic source region and the far-field sound propagation. In the isothermal mixing layer, Mach number effects may be identified in the acoustic field through an increase of the directivity associated with the non-compactness of the acoustic sources. Baroclinic instability effects may be recognized in the non-isothermal mixing layer, as the presence of counter-rotating vorticity layers, the resulting acoustic sources being found less efficient. An analysis based on the acoustic analogy shows that the directivity increase with the Mach number can be associated with the emergence of density fluctuations of weak amplitude but very efficient in terms of noise generation at shallow angle. This influence, combined with convection and refraction effects, is found to shape the acoustic wavefront pattern depending on the Mach number.
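The "acoustic analogy" invoked above is Lighthill's; in its standard classical form (quoted from the textbook formulation, not from this paper), density fluctuations enter the source term through the entropy component of the stress tensor:

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
    = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \bigl(p' - c_0^2\,\rho'\bigr)\delta_{ij} - \tau_{ij},
```

where ρ′ and p′ are the density and pressure fluctuations, c₀ the ambient speed of sound, and τᵢⱼ the viscous stress. The (p′ − c₀²ρ′) term is the channel through which the weak but radiatively efficient density fluctuations mentioned above contribute to shallow-angle noise.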

Relevance:

100.00%

Publisher:

Abstract:

Ubiquitous computing promises seamless access to a wide range of applications and internet-based services from anywhere, at any time, using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware constraints. In particular, because of its interactive nature, the interface content of web applications must adapt to a large diversity of devices and contexts. To overcome these obstacles, this work introduces a methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation, the implementation of static web interfaces, with dynamic adaptation, the alteration at execution time of static interfaces so that they fit different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies. Along these lines, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces and to faster and less costly software development. We believe the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
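The hybrid static-plus-dynamic idea can be pictured with a small sketch; the structure and names below are hypothetical and are not UbiCon's actual API:

```python
# Hypothetical illustration of hybrid adaptation: start from a variant
# authored statically at development time, then patch it at runtime
# according to the current context of use.

STATIC_VARIANTS = {           # static adaptation: authored in advance
    "desktop": {"columns": 3, "font_px": 14, "images": "full"},
    "mobile":  {"columns": 1, "font_px": 16, "images": "thumb"},
}

def adapt(device: str, context: dict) -> dict:
    """Pick a static variant, then apply dynamic, context-driven tweaks."""
    ui = dict(STATIC_VARIANTS[device])
    if context.get("low_bandwidth"):     # dynamic adaptation at runtime
        ui["images"] = "none"
    if context.get("low_vision"):
        ui["font_px"] += 4
    return ui

print(adapt("mobile", {"low_bandwidth": True, "low_vision": True}))
# {'columns': 1, 'font_px': 20, 'images': 'none'}
```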

Relevance:

100.00%

Publisher:

Abstract:

Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic, and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform for storing biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: a data level, to store the data; a semantic level, to integrate and standardize the data by the use of ontologies; an application level, to manage clinical databases, ontologies, and the data integration process; and a web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also propose a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system, the Clinical Module was implemented, and the framework was loaded with data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model describing the legacy clinical data; the Extraction, Transformation and Load (ETL) process that extracts the data from the source clinical database and loads it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer that adapts the web tool to Chado, as well as other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. Such a framework should present flexibility, compression, and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
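The Clinical Module is built on the Entity-Attribute-Value (EAV) model, whose appeal for clinical data is that new variables require no schema change. A minimal, self-contained EAV sketch (illustrative table and column names, not Chado's actual schema):

```python
# EAV in miniature: one row per (entity, attribute, value) triple instead
# of one column per clinical variable.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient   (patient_id INTEGER PRIMARY KEY);
CREATE TABLE attribute (attribute_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE eav (
    patient_id   INTEGER REFERENCES patient,
    attribute_id INTEGER REFERENCES attribute,
    value        TEXT
);
""")
db.execute("INSERT INTO patient VALUES (1)")
db.executemany("INSERT INTO attribute(name) VALUES (?)",
               [("tumor_site",), ("smoking_status",)])
db.executemany("INSERT INTO eav VALUES (1, ?, ?)",
               [(1, "larynx"), (2, "former")])

# Adding a new clinical variable needs no ALTER TABLE -- just a new
# attribute row.
for name, value in db.execute("""SELECT a.name, e.value FROM eav e
                                 JOIN attribute a USING (attribute_id)
                                 WHERE e.patient_id = 1"""):
    print(name, "=", value)
```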

Relevance:

100.00%

Publisher:

Abstract:

This final degree project, Monitor Web de Expresiones Regulares (MWRegEx), is a tool based on web technologies, developed using the Visual Studio environment. The main goal of the application is to support the teaching of regular expressions, within the framework of teaching string handling in the programming courses of the degree in Computer Engineering. The application can draw the automaton of a regular expression, making the expression easier to understand; it also allows the expression to be applied to different strings, showing the matches found, and offers a version of the expression adapted for use in string literals of languages such as Java and others. The tool is implemented in two parts: a web service, written in C#, which performs all the analysis of the regular expressions and of the strings to be matched; and a web client, implemented with asp.net technology using JavaScript and jQuery, which manages the user interface and displays the results. This separation allows the web service to be reused by other client applications. The automaton representing a regular expression is drawn using the Raphaël JavaScript library, which handles SVG elements; each element of the regular expression has a different, unique drawing to distinguish it. The whole graphical user interface is internationalized so that it can be adapted to different languages and regions without engineering or code changes. Both the web service and the client are structured so that new modifications can be added without causing a ripple effect across the existing classes.
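The two client-facing operations described, applying an expression to strings and emitting a string-literal-ready version, look roughly like the following; the sketch is in Python rather than the tool's C#, and the pattern is invented:

```python
# Sketch of MWRegEx-style behavior: report matches of a pattern against
# several test strings, then print the pattern escaped for a string literal.

import re

pattern = r"\d{2}-\d{4}"   # hypothetical teaching example
for text in ["order 12-3456 shipped", "no code here"]:
    matches = re.findall(pattern, text)
    print(f"{text!r}: {matches or 'no match'}")

# Literal-escaped version (backslashes doubled), analogous to the
# Java-ready form the tool displays:
print('"' + pattern.replace("\\", "\\\\") + '"')   # "\\d{2}-\\d{4}"
```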

Relevance:

100.00%

Publisher:

Abstract:

To understand a city and its urban structure, it is necessary to study its history. This becomes feasible through GIS (Geographical Information Systems) and its web-based by-products: starting from a cartographic view, they allow an initial understanding of, and comparison between, present and past data, together with easy and intuitive access to database information. The research carried out led to the creation of a GIS for the city of Bologna, based on varied data such as historical maps and vector and alphanumeric historical data. After building the GIS, we set out to spread and share the collected data on the web, studying two solutions available on the market: web mapping and WebGIS. In this study we discuss the stages of the work, beginning with the development of the Historical GIS of Bologna, which led to the making of an open-source WebGIS (MapServer and Chameleon) and of web mapping services (Google Earth, Google Maps and OpenLayers).
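A WebGIS such as the MapServer-based one described above typically serves map images through the OGC WMS protocol; a client request can be sketched as below (the endpoint and layer name are hypothetical, not the project's actual service):

```python
# Building an OGC WMS 1.1.1 GetMap request, the kind of call a WebGIS
# client (or an OpenLayers/Google Maps overlay) issues to MapServer.

from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "bologna_1884",            # hypothetical historical layer
    "SRS": "EPSG:4326",
    "BBOX": "11.30,44.47,11.38,44.52",   # approx. Bologna extent, lon/lat
    "WIDTH": 800, "HEIGHT": 500,
    "FORMAT": "image/png",
}
url = "https://example.org/cgi-bin/mapserv?" + urlencode(params)
print(url)  # paste into a browser, or use as an overlay tile source
```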