986 results for libreria, Software, Database, ORM, transazionalità


Relevance: 30.00%

Publisher:

Abstract:

For many years, psychological research on facial expression of emotion has relied heavily on a recognition paradigm based on posed static photographs. There is growing evidence that there may be fundamental differences between the expressions depicted in such stimuli and the emotional expressions present in everyday life. Affective computing, with its pragmatic emphasis on realism, needs examples of natural emotion. This paper describes a unique database containing recordings of mild to moderate emotionally coloured responses to a series of laboratory-based emotion induction tasks. The recordings are accompanied by self-reports of emotion and intensity, continuous trace-style ratings of valence and intensity, the sex of the participant, the sex of the experimenter, and the active or passive nature of the induction task, and the database gives researchers the opportunity to compare expressions from people of more than one culture.

Relevance: 30.00%

Publisher:

Abstract:

Background: Popular approaches in human tissue-based biomarker discovery include tissue microarrays (TMAs) and DNA microarrays (DMAs) for protein and gene expression profiling, respectively. The data generated by these analytic platforms, together with the associated image, clinical and pathological data, currently reside on widely different information platforms, making searching and cross-platform analysis difficult. Consequently, there is a strong need to develop a single coherent database capable of correlating all available data types.

Method: This study presents TMAX, a database system designed to facilitate biomarker discovery tasks. TMAX organises a variety of biomarker discovery-related data in a single database. Both TMA and DMA experimental data are integrated in TMAX and connected through common DNA/protein biomarkers. Patient clinical data (including tissue pathology data), computer-assisted tissue images and the associated analytic data are also included in TMAX, enabling truly high-throughput processing of ultra-large digital slides for both TMAs and whole-slide tissue images. A comprehensive web front-end was built with embedded XML parser software and predefined SQL queries to enable rapid data exchange in the form of standard XML files.
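
As an illustration of how such a predefined query might link TMA and DMA records through a shared biomarker and serialise the result as XML, the following is a minimal sketch; the table and column names (biomarker, tma_core, dma_probe, etc.) are hypothetical and are not taken from the TMAX schema.

```python
# Minimal sketch of a predefined cross-platform query with XML export.
# All table and column names are hypothetical; the real TMAX schema is not described here.
import sqlite3
import xml.etree.ElementTree as ET

QUERY = """
SELECT b.symbol, t.core_id, t.protein_score, d.probe_id, d.expression_value
FROM biomarker b
JOIN tma_core  t ON t.biomarker_id = b.id
JOIN dma_probe d ON d.biomarker_id = b.id
WHERE b.symbol = ?
"""

def export_biomarker_xml(db_path, symbol):
    """Join TMA and DMA records on a shared biomarker and serialise them as XML."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(QUERY, (symbol,)).fetchall()
    root = ET.Element("biomarker", attrib={"symbol": symbol})
    for _, core_id, protein_score, probe_id, expression in rows:
        rec = ET.SubElement(root, "record")
        ET.SubElement(rec, "tma_core", id=str(core_id)).text = str(protein_score)
        ET.SubElement(rec, "dma_probe", id=str(probe_id)).text = str(expression)
    return ET.tostring(root, encoding="utf-8")
```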

Results & Conclusion: TMAX represents one of the first attempts to integrate TMA data with public gene expression experiment data. Experiments suggest that TMAX is robust in managing large quantities of data from different sources (clinical, TMA, DMA and image analysis). Its web front-end is user friendly and, most importantly, allows rapid and easy exchange of biomarker discovery-related data. In conclusion, TMAX is a robust biomarker discovery data repository and research tool that opens up opportunities for biomarker discovery and further integromics research.

Relevance: 30.00%

Publisher:

Abstract:

We propose a novel admission control policy for database queries. Our methodology uses system measurements of CPU utilization and query backlogs to determine the interference between queries executing on the same database server. Query interference may arise from concurrent access to hardware and software resources and can affect performance in positive and negative ways. Specifically, our admission control considers the mix of jobs in service and prioritizes the query classes that consume CPU resources most efficiently. The policy ignores I/O subsystems and is therefore highly appropriate for in-memory databases. We validate our approach in trace-driven simulation and show improvements in query slowdown and throughput compared to first-come first-served and shortest-expected-processing-time-first scheduling. The simulation experiments are parameterized from system traces of a SAP HANA in-memory database installation with TPC-H type workloads. © 2012 IEEE.
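
A minimal sketch of the kind of mix-aware admission decision described above; the class names, efficiency estimates and utilization threshold are illustrative assumptions, not the policy's actual parameters.

```python
# Illustrative admission-control sketch: prefer the waiting query class that is
# estimated to use CPU most efficiently, and admit nothing when the server is
# saturated. Efficiency figures and the utilization cap are made-up assumptions.
from collections import deque

CPU_CAP = 0.85                                     # assumed utilization threshold
EFFICIENCY = {"Q1": 0.9, "Q5": 0.6, "Q18": 0.3}    # work done per unit CPU (assumed)

def admit_next(backlogs, cpu_util, in_service):
    """Return the class of the next query to admit, or None to keep all waiting."""
    if cpu_util >= CPU_CAP:
        return None                                # server saturated: admit nothing
    waiting = [c for c, q in backlogs.items() if q]
    if not waiting:
        return None
    # Prioritize classes that consume CPU more efficiently in the current mix.
    best = max(waiting, key=lambda c: EFFICIENCY.get(c, 0.0))
    backlogs[best].popleft()
    in_service.append(best)
    return best

backlogs = {"Q1": deque(["q1-a"]), "Q18": deque(["q18-a", "q18-b"])}
print(admit_next(backlogs, cpu_util=0.4, in_service=[]))   # -> 'Q1'
```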

Relevance: 30.00%

Publisher:

Abstract:

The newly updated inventory of palaeoecological research in Latin America offers an important overview of sites available for multi-proxy and multi-site studies. From the literature supporting this inventory, we compiled all available age-model metadata to create a chronological database of 5116 control points (e.g. 14C, tephra, fission track, OSL, 210Pb) from 1097 pollen records. Based on this literature review, we present a summary of chronological dating and reporting in the Neotropics, and discuss difficulties and recommendations for chronology reporting. Furthermore, for 234 pollen records in northwest South America, a classification system for age uncertainties is implemented based on chronologies generated with updated calibration curves. With these outcomes, age models are produced for sites without an existing chronology, alternative age models are provided for researchers interested in comparing the effects of different calibration curves and age–depth modelling software, and the importance of uncertainty assessments of chronologies is highlighted. Sample resolution and the temporal uncertainty of ages are discussed for different time windows, focusing on events relevant to research on centennial- to millennial-scale climate variability. All age models and the developed R scripts are publicly available through figshare, including a manual for using the scripts.
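
The chronologies themselves are built with dedicated R scripts and calibrated ages; purely as an illustration of the underlying age–depth idea, here is a toy linear-interpolation sketch in Python. The control points are invented, and real chronologies use calibrated age distributions with uncertainty rather than point estimates.

```python
# Toy age-depth model: linear interpolation between dated control points.
# Depths in cm, ages in calibrated years BP; the values are invented examples.
import numpy as np

control_depths = np.array([0.0, 55.0, 120.0, 240.0])       # e.g. surface, 210Pb, two 14C dates
control_ages   = np.array([-60.0, 450.0, 2100.0, 5600.0])  # point estimates only

def interpolate_ages(sample_depths):
    """Assign an age to each sampled depth by linear interpolation."""
    return np.interp(sample_depths, control_depths, control_ages)

print(interpolate_ages([10.0, 100.0, 200.0]))
```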

Relevance: 30.00%

Publisher:

Abstract:

The goal of this work is the development of automated software testing frameworks. This type of testing is normally associated with the evolutionary model and with agile software development methodologies, whereas manual testing is related to the waterfall model and traditional methodologies. A comparative study of the existing methodologies and types of tests was therefore carried out to decide which best suited the project and to answer the question "Is it really worth performing (automated) tests?". Once the study was completed, two frameworks were developed: the first for implementing functional and unit tests without dependencies, to be used by LabOrders' curricular interns, and the second for implementing unit tests with external database and service dependencies, to be used by the company's employees. Over the last two decades agile software development methodologies have not stopped evolving, yet automation tools have not kept pace with this progress. Many areas are not covered by the tests, so some still have to be performed manually. For this reason, several innovative features were created to increase test coverage and make the frameworks as intuitive as possible, namely: 1. Automatic file download through Internet Explorer 9 (and later versions). 2. Analysis of the content of .pdf files (from within the tests). 3. Retrieval of web elements and their attributes through jQuery code, using the WebDriver API with PHP bindings. 4. Display of custom error messages when a given element cannot be found. The implemented frameworks are also prepared for the creation of other types of tests (load, integration, regression) that may become necessary in the future. They were tested in a working context by the employees and clients of the company where the master's project was carried out, and the results support the conclusion that adopting a software development methodology with automated tests can increase productivity, reduce failures and help organisations meet project budgets and deadlines.
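
The frameworks described above use the WebDriver API with PHP bindings; purely as an illustration of the jQuery-based element retrieval and custom error messages of items 3 and 4, here is a minimal sketch using the Python Selenium bindings. The URL, selector and error message are invented, and jQuery is assumed to be already loaded on the page.

```python
# Sketch: fetch an element attribute via jQuery through WebDriver and fail with
# a descriptive message when the element is missing. URL and selector are examples.
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://example.com/login")

href = driver.execute_script(
    "var el = jQuery('#login-button');"          # assumes the page ships jQuery
    "return el.length ? el.attr('href') : null;"
)
if href is None:
    raise AssertionError(
        "Element '#login-button' was not found on the page; "
        "check that the login view rendered correctly."
    )
print("login link:", href)
driver.quit()
```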

Relevance: 30.00%

Publisher:

Abstract:

This user manual for the data visualisation software "Ocean Data View" (ODV) describes the exploration, analysis and visualisation of oceanographic data in the format of the "World Ocean Database" (WOD), the world ocean database collection. The manual comprises six practical exercises that describe, step by step, the creation of metavariables, the import of data and its visualisation through latitude/longitude maps, scatter plots, vertical sections and time series. The extensive use of ODV for the visualisation of oceanographic data by IMARPE's scientific staff is encouraged.

Relevance: 30.00%

Publisher:

Abstract:

The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract meaning from the plain query, and the information from the database is returned to the user in his or her native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India. The system can be used effectively in application areas such as e-governance, agriculture, rural health, education, national resource planning, disaster management and information kiosks, where people from all walks of life are involved.
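
As a toy illustration of the overall pipeline (a parsed query intent is mapped to a database query and the answer is rendered in the user's language), here is a minimal sketch; the intents, schema, answer templates and phrasing are invented, and the actual system uses Open Agent Architecture agents rather than a lookup table.

```python
# Illustrative pipeline only: map an already-parsed query intent to SQL and
# render the answer with a per-language template. Everything here is invented.
import sqlite3

SQL_TEMPLATES = {
    "crop_price": "SELECT price FROM crop_prices WHERE crop = ? AND district = ?",
}
ANSWER_TEMPLATES = {
    "ml": "{crop} വില: {price}",    # Malayalam answer template (example phrasing)
    "hi": "{crop} मूल्य: {price}",  # Hindi answer template (example phrasing)
}

def answer(intent, slots, lang, db):
    """Run the query for `intent` and format the result in the requested language."""
    row = db.execute(SQL_TEMPLATES[intent], (slots["crop"], slots["district"])).fetchone()
    price = row[0] if row else "not found"
    return ANSWER_TEMPLATES[lang].format(crop=slots["crop"], price=price)
```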

Relevance: 30.00%

Publisher:

Abstract:

In today's complicated computing environment, managing data has become a primary concern of all industries. Information security is the greatest challenge, and it has become essential to secure enterprise system resources such as databases and operating systems from attacks by unknown outsiders. Our approach plays a major role in detecting and managing vulnerabilities in complex computing systems. As a vulnerability scanner tool, it allows enterprises to assess the two primary tiers through a single interface, providing a secure system that is also compatible with the security compliance requirements of the industry. It provides an overall view of the vulnerabilities in the database by scanning them automatically with minimum overhead, and gives a detailed view of the risks involved and their corresponding ratings. Based on these priorities, an appropriate mitigation process can be implemented to ensure a secure system. The results show that our approach can effectively optimize the time and cost involved when compared to existing systems.
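
As a toy illustration of the prioritisation step (order scan findings by their risk rating so the mitigation process can start with the most severe), here is a small sketch; the finding structure and scores are invented and are not the output of the described tool.

```python
# Toy prioritisation of scan findings by risk rating (higher score = more severe).
# The findings below are invented examples, not output of the described scanner.
from dataclasses import dataclass

@dataclass
class Finding:
    tier: str      # "database" or "operating system"
    issue: str
    risk: float    # e.g. a CVSS-like score from 0 to 10

findings = [
    Finding("database", "default admin account enabled", 9.1),
    Finding("operating system", "outdated kernel", 7.4),
    Finding("database", "verbose error messages", 4.3),
]

for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"[{f.risk:4.1f}] {f.tier}: {f.issue}")
```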

Relevance: 30.00%

Publisher:

Abstract:

Presentation at the 1997 Dagstuhl Seminar "Evaluation of Multimedia Information Retrieval", Norbert Fuhr, Keith van Rijsbergen, Alan F. Smeaton (eds.), Dagstuhl Seminar Report 175, 14.04.-18.04.1997 (9716). Abstract: This presentation will introduce ESCHER, a database editor which supports visualization in non-standard applications in engineering, science, tourism and the entertainment industry. It was originally based on the extended nested relational data model and is currently being extended to include object-relational properties such as inheritance, object types, integrity constraints and methods. It serves as a research platform for areas such as multimedia and visual information systems, QBE-like queries, computer-supported cooperative work (CSCW) and novel storage techniques. In its role as a visual information system, a database editor must support browsing and navigation. ESCHER provides this access to data by means of so-called fingers, which generalize the cursor paradigm of graphical and text editors. On the graphical display, a finger is reflected by a colored area corresponding to the object the finger is currently pointing at. In a table, more than one finger may point to objects; one of them is the active finger and is used for navigating through the table. The talk will mostly concentrate on giving examples of this type of navigation and will discuss some of the architectural needs for fast object traversal and display. ESCHER is available as public domain software from our ftp site in Kassel. The portable C source can easily be compiled for any machine running UNIX and OSF/Motif, in particular our working environments, IBM RS/6000 and Intel-based LINUX systems. A port to Tcl/Tk is under way.
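
To give a flavour of the finger idea (a generalised cursor pointing at an object in a nested table, with one active finger driving navigation), here is a minimal sketch; the classes and toy table are invented for illustration and do not reflect ESCHER's C implementation.

```python
# Toy model of ESCHER-style "fingers": several cursors may point into a nested
# table, and one active finger is used for navigation. Invented for illustration.
class Finger:
    def __init__(self, table, row=0, column=0):
        self.table, self.row, self.column = table, row, column

    def target(self):
        """Return the object this finger currently points at."""
        return self.table[self.row][self.column]

    def move(self, d_row=0, d_col=0):
        """Move within the table, clamping to its bounds."""
        self.row = max(0, min(len(self.table) - 1, self.row + d_row))
        self.column = max(0, min(len(self.table[self.row]) - 1, self.column + d_col))

table = [["trip", "hotel"], ["museum", ["opening", "hours"]]]  # toy nested relation
fingers = [Finger(table), Finger(table, 1, 1)]
active = fingers[0]          # the active finger drives navigation
active.move(d_row=1)
print(active.target())       # -> 'museum'
```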

Relevance: 30.00%

Publisher:

Abstract:

This paper describes the use of free software tools, essentially GRASS and R, to obtain a series of land-cover maps (1976-2006) from Landsat MSS and Landsat TM satellite images. The project was funded for a single year, so a methodology was required that allowed the analysis to be carried out quickly and simply while still applying advanced classification techniques. Given the complexity of the work and the time constraints, a large part of the work has been automated through various BASH and R scripts. (...)
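
The paper automates the workflow with BASH and R scripts; as a hedged illustration of the same kind of automation, here is a minimal sketch using GRASS's Python scripting interface. The module names are standard GRASS modules, but the file paths and map names are invented, it is assumed to run inside a GRASS session, and the actual workflow in the paper may differ.

```python
# Sketch of automating a GRASS workflow from a script: import a Landsat band,
# then run an unsupervised classification. Map names and paths are invented.
import grass.script as gs

gs.run_command("r.in.gdal", input="/data/landsat/1976_band4.tif",
               output="mss1976_b4", overwrite=True)
gs.run_command("i.group", group="mss1976", subgroup="mss1976",
               input="mss1976_b4")
gs.run_command("i.cluster", group="mss1976", subgroup="mss1976",
               signaturefile="sig1976", classes=6, overwrite=True)
gs.run_command("i.maxlik", group="mss1976", subgroup="mss1976",
               signaturefile="sig1976", output="landcover_1976", overwrite=True)
```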

Relevance: 30.00%

Publisher:

Abstract:

This article shows how a multimodal trip-planning system can be developed at low cost and low risk, based on an open-source approach and de facto standards. A completely open-source solution for a door-to-door public transport information system based on de facto standards has been developed. Route calculation is performed with Graphserver, while the cartography is based on OpenStreetMap. It is also shown how a real public transport timetable database, such as that of the operator ETM (Empresa de Transporte Metropolitano de València), can be exported to the Google Transit specification, so that routes can be calculated both from our prototype and from Google Transit.
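
As an illustration of what an export to the Google Transit (GTFS) specification involves, here is a minimal sketch that writes a stops.txt file from a hypothetical timetable table; the source schema is invented, and a real export such as ETM's also produces agency.txt, routes.txt, trips.txt, stop_times.txt and calendar.txt.

```python
# Minimal sketch: export a hypothetical stops table to GTFS stops.txt.
# The source table name and columns are invented; only the GTFS column names
# (stop_id, stop_name, stop_lat, stop_lon) come from the specification.
import csv
import sqlite3

def export_stops(db_path, out_path="stops.txt"):
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT stop_code, name, latitude, longitude FROM timetable_stops"
        ).fetchall()
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["stop_id", "stop_name", "stop_lat", "stop_lon"])
        writer.writerows(rows)
```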

Relevance: 30.00%

Publisher:

Abstract:

Parametric software effort estimation models consisting of a single mathematical relationship suffer from poor fit and poor predictive performance when the historical database considered contains data coming from projects of a heterogeneous nature. Segmenting the input domain according to clusters obtained from the database of historical projects is a tool for building more realistic models that use several local estimation relationships. Nonetheless, it may be hypothesized that applying clustering algorithms without first considering the influence of well-known project attributes misses the opportunity to obtain more realistic segments. In this paper, we describe the results of an empirical study, using the ISBSG-8 database and the EM clustering algorithm, of the influence of two process-related attributes as drivers of the clustering process: the use of engineering methodologies and the use of CASE tools. The results provide evidence that taking these attributes into account significantly conditions the final model obtained, even though the resulting predictive quality is of a similar magnitude.
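
As a toy illustration of the segment-then-estimate idea (EM-style clustering of projects followed by one local estimation relationship per cluster, with a process attribute included as a driver), here is a sketch using scikit-learn's GaussianMixture; the features and data are synthetic inventions and the study's actual attribute handling over ISBSG-8 differs.

```python
# Toy "segment then estimate" sketch: EM-style clustering of projects, then one
# local effort model per cluster. Features and data are invented examples.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
size = rng.uniform(50, 2000, 200)                    # functional size (invented)
uses_case = rng.integers(0, 2, 200)                  # "uses CASE tools" flag (invented driver)
effort = size * (4 + 3 * uses_case) * rng.lognormal(0.0, 0.2, 200)

X = np.column_stack([np.log(size), uses_case])       # include the driver attribute
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

models = {}
for k in np.unique(labels):
    m = labels == k
    models[k] = LinearRegression().fit(np.log(size[m]).reshape(-1, 1), np.log(effort[m]))
    print(f"cluster {k}: {m.sum()} projects")
```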

Relevance: 30.00%

Publisher:

Abstract:

The Genomic Threading Database currently contains structural annotations for the genomes of over 100 recently sequenced organisms. Annotations are carried out using our modified GenTHREADER software and grid technology.

Relevance: 30.00%

Publisher:

Abstract:

A Petri net is usually applied as an RFID modelling tool. This paper, however, presents a different approach to Petri nets for RFID systems. This approach, called elementary Petri net inside an RFID distributed database, or PNRD, is a first step towards improving the integration of RFID and control systems; it is based on a formal data structure that identifies and updates the product state during real-time process execution, allowing automatic discovery of unexpected events during tag data capture. There are two main features in this approach: the use of RFID tags as the database of the expected process and of the last identified product state; and the application of Petri net analysis to automatically update the last product state record during reader data capture. RFID reader data capture can be viewed, in Petri net terms, as a direct analysis of the locality of a specific transition that fires in a specific workflow. Following this direction, RFID readers store the list of Petri net control vectors related to each tag id they are expected to perceive. This paper presents the PNRD cornerstones and a PNRD implementation example in software called DEMIS (Distributed Environment in Manufacturing Information Systems).
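
As a toy illustration of the kind of update PNRD performs (check whether a read event corresponds to a transition that is enabled in the tag's current marking and, if so, fire it, otherwise flag an unexpected event), here is a small sketch; the incidence matrices and markings are invented and do not come from the paper.

```python
# Toy PNRD-style update: when a reader captures a tag, check whether the
# transition associated with that reader is enabled in the tag's current
# marking and fire it; otherwise flag an unexpected event. Invented example net.
import numpy as np

PRE  = np.array([[1, 0],     # tokens consumed by transitions t0, t1 from place p0
                 [0, 1],     # ... from place p1
                 [0, 0]])    # ... from place p2
POST = np.array([[0, 0],
                 [1, 0],
                 [0, 1]])    # tokens produced into places p0..p2

def on_tag_read(marking, transition):
    """Fire `transition` if enabled; raise on an unexpected event."""
    if np.all(marking >= PRE[:, transition]):
        return marking - PRE[:, transition] + POST[:, transition]
    raise ValueError(f"unexpected event: transition t{transition} not enabled")

tag_state = np.array([1, 0, 0])        # product currently at place p0
tag_state = on_tag_read(tag_state, 0)  # reader 0 captures the tag -> move to p1
print(tag_state)                       # [0 1 0]
```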

Relevance: 30.00%

Publisher:

Abstract:

Bergkvist Insjön AB is a sawmill yard capable of producing 350,000 cubic metres of timber every year, which requires a large amount of internal resources. Sawmill operations can be classified as unloading, sorting, storage and production of timber. Trucks arrive at the yard at random and have to be unloaded and sent back as soon as possible to avoid queues of trucks, which are a problem for the truck owners. The sawmill yard operates with two log stackers that perform several tasks: transporting the logs from the trucks to the measurement station, where the logs are sorted into classes and dropped into pockets; from the pockets to the sorted timber yard, where they are stored; and finally from there to the sawmill for final processing. The main issue to be addressed is the queue of trucks waiting to be unloaded, which is a problem for both the sawmill and the truck owners, and given the huge production volume it is clear that the handling of resources is a top priority. A key challenge is the unloading of trucks and finding a way to optimise the internal resources.

To address this problem, different ways of using the internal resources were explored through a set of designed cases. In case 1, both log stackers serve the sawmill and the measurement station; the objective of this case is to keep the sawmill and the measurement station working all the time. In case 2, the work is divided between the two log stackers: one serves the sawmill and pocket_control, while the second serves the measurement station and the trucks. In case 3, a single log stacker serves all the agents; this case was designed to reduce the cost of production. Since the experiments cannot be carried out in real time because of the operational cost, simulation is used. A preliminary investigation of the simulation results suggests that case 2 is the best option, as it considerably reduced the waiting time of the trucks compared with the other cases and showed a 50% improvement in the use of the internal resources.
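
As a toy illustration of the kind of discrete-event model behind such a study (trucks arriving at random and queuing for a shared log stacker), here is a minimal sketch using SimPy; the arrival and unloading times are invented, and the model is far simpler than the multi-agent simulation described above.

```python
# Toy discrete-event sketch: trucks queue for a single shared log stacker.
# Built with SimPy; arrival and unloading times are invented assumptions.
import random
import simpy

UNLOAD_MIN = 8          # minutes to unload one truck (assumed)
MEAN_ARRIVAL = 12       # mean minutes between truck arrivals (assumed)

def truck(env, stacker, waits):
    arrival = env.now
    with stacker.request() as req:        # queue for the log stacker
        yield req
        waits.append(env.now - arrival)   # time spent waiting in the queue
        yield env.timeout(UNLOAD_MIN)     # unloading

def generator(env, stacker, waits):
    while True:
        yield env.timeout(random.expovariate(1 / MEAN_ARRIVAL))
        env.process(truck(env, stacker, waits))

random.seed(1)
env = simpy.Environment()
stacker = simpy.Resource(env, capacity=1)   # one log stacker serving the trucks
waits = []
env.process(generator(env, stacker, waits))
env.run(until=8 * 60)                        # simulate one 8-hour shift
print(f"trucks served: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")
```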