Abstract:
This work covers the creation of a free software project, carried out from its inception through to its transformation into a project supported by a community of users. Specifically, the goal of the resulting free software project is to create an application capable of guiding the user graphically, quickly and intuitively through the process of building SQL queries for the PostgreSQL database.
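The abstract gives no implementation details; the sketch below, in Python, illustrates the kind of structured query assembly such a graphical wizard might perform behind the scenes. All table, column and condition names are hypothetical, and the %s placeholder style assumes a PostgreSQL driver such as psycopg2.

```python
# Minimal sketch of programmatic SELECT construction, as a graphical
# query builder might do behind the scenes. All identifiers below
# (tables, columns, conditions) are hypothetical examples.

def build_select(table, columns, conditions=None, order_by=None):
    """Assemble a parameterised SELECT statement from user choices."""
    sql = "SELECT {} FROM {}".format(", ".join(columns), table)
    params = []
    if conditions:
        clauses = []
        for column, op, value in conditions:
            clauses.append("{} {} %s".format(column, op))
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    if order_by:
        sql += " ORDER BY " + order_by
    return sql, params

# Example: the selections a user might make in the GUI.
sql, params = build_select(
    table="customers",
    columns=["name", "city"],
    conditions=[("city", "=", "Barcelona"), ("active", "=", True)],
    order_by="name",
)
print(sql)     # SELECT name, city FROM customers WHERE city = %s AND active = %s ORDER BY name
print(params)  # ['Barcelona', True]
# The (sql, params) pair can then be passed to a PostgreSQL driver,
# e.g. psycopg2: cur.execute(sql, params)
```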
Abstract:
The project consists of the development of a Live distribution containing the collaborative software Teambox. This distribution can be installed on a physical machine and put into production.
Abstract:
Nowadays many companies, whether large or small, are considering deploying an information system in their organisation to support decision-making. One current trend is to deploy a BI (Business Intelligence) system. Many organisations deploy this kind of system without a prior study of the company's needs and problems, the benefits the system is expected to deliver, user acceptance and training, the available BI tools and vendors, and a long list of other situations and conditions that should be taken into account when introducing this class of system in any kind of organisation, large or small. This article synthesises the wealth of existing information on BI systems, provides a clarifying definition, and presents their current state, benefits, risk factors, the steps towards a successful deployment, and the components and open source tools of BI systems. First, the main open source tools on the market are presented, and the Pentaho open source BI suite is examined in greater depth.
Abstract:
A medium-sized company embarks on a project to migrate its telephone system, which is based on a traditional model and suffers from multiple problems in terms of both functionality and cost, and decides to adopt a solution based on free software, with Linux and Asterisk as the cornerstones of the project design.
Abstract:
Master's degree final project.
Abstract:
This work describes the design principles and essential components of a hypothetical computer program whose purpose is to facilitate the self-help process and which can also be used as a tool for personal development and motivation. First, the author reviews existing methods, from the success of twentieth-century self-help books to the expansion of interactivity driven by the development of information technologies. This review reveals the poor adoption of new technologies as popular self-help instruments, and argues for the creation and use of flexible, general-purpose computer programs as a means of psychological support.
Abstract:
The decision to publish educational materials openly and under free licenses raises the challenge of doing so in a sustainable way. Some lessons can be learned from the business models for the production, maintenance and distribution of Free and Open Source Software. The Free Technology Academy (FTA) has taken on these challenges and has implemented some of these models. We briefly review the FTA educational programme, methodologies and organisation, and assess to what extent these models are proving successful in the case of the FTA.
Abstract:
This case study introduces our ongoing work to enhance the virtual classroom in order to provide faculty and students with an environment open to their needs, compliant with learning standards (and therefore compatible with other e-learning environments), and based on open source software. The result is a modular, sustainable and interoperable learning environment that can be adapted to different teaching and learning situations by incorporating the LMS's integrated tools as well as wikis, blogs, forums and Moodle activities, among others.
Abstract:
Background: A software-based tool (Optem) has been developed to automate the recommendations of the Canadian Multiple Sclerosis Working Group for optimizing MS treatment, so as to avoid subjective interpretation. Methods: Treatment Optimization Recommendations (TORs) were applied to our database of patients treated with IFN beta1a IM. Patient data were assessed during year 1 for disease activity, and patients were assigned to two groups according to TOR: "change treatment" (CH) and "no change treatment" (NCH). These assessments were then compared with observed clinical outcomes for disease activity over the following years. Results: We have data on 55 patients. "Change treatment" status was assigned to 22 patients and "no change treatment" to 33. The estimated sensitivity and specificity according to last-visit status were 73.9% and 84.4%. During the following years, the relapse rate was always higher in the "change treatment" group than in the "no change treatment" group (5 y: CH 0.7 vs. NCH 0.07, p < 0.001; 12 m to last visit: CH 0.536 vs. NCH 0.34). We obtained the same results with the EDSS (4 y: CH 3.53 vs. NCH 2.55; annual progression rate from 12 m to last visit: CH 0.29 vs. NCH 0.13). Conclusion: Applying TOR at the first year of therapy allowed accurate prediction of continued disease activity in relapses and disability progression.
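For context, the sensitivity and specificity reported above come from comparing the TOR assignment with the observed outcome at last visit. Below is a minimal sketch in Python; the contingency counts are reconstructed to be consistent with the reported figures (55 patients, 22 CH, 33 NCH) and are not the study's raw data.

```python
# Sketch of sensitivity/specificity computation for a binary rule such
# as TOR "change treatment" vs. observed disease activity at last visit.
# The example counts are reconstructed illustrations, not raw study data.

def sensitivity_specificity(pairs):
    """pairs: iterable of (predicted_active, observed_active) booleans."""
    tp = sum(1 for p, o in pairs if p and o)          # change predicted, disease active
    fn = sum(1 for p, o in pairs if not p and o)      # no change predicted, disease active
    tn = sum(1 for p, o in pairs if not p and not o)  # no change predicted, stable
    fp = sum(1 for p, o in pairs if p and not o)      # change predicted, stable
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data: (TOR predicted activity, observed activity),
# chosen to match the reported group sizes and accuracy figures.
example = ([(True, True)] * 17 + [(True, False)] * 5
           + [(False, True)] * 6 + [(False, False)] * 27)
sens, spec = sensitivity_specificity(example)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")  # 73.9% / 84.4%
```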
Abstract:
Background: The 'database search problem', that is, the strengthening of a case (in terms of probative value) against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching, and also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity. Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing, but exclusively formulaic, approaches. Conclusions: The proposed Bayesian networks make it possible to capture and analyze the currently best-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
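As a numerical companion to the formulaic treatment the paper compares against, here is a minimal sketch in Python of the classical database search posterior. The population size, database size and random match probability are illustrative assumptions, not values from the paper.

```python
# Sketch of the classical "database search" posterior: after searching a
# database of n profiles and finding exactly one match, how probable is
# it that the matching individual is the source of the crime stain?
# All numbers below are illustrative assumptions.

def posterior_source(N, n, gamma):
    """
    N: size of the population of potential sources (uniform prior 1/N)
    n: number of individuals in the searched database
    gamma: random match probability of the analytical characteristic
    Returns P(matching individual is the source | one match, n-1 exclusions).
    """
    # If the matchee is the source, the n-1 other profiles are excluded
    # with probability (1-gamma)^(n-1); the same factor appears whether
    # the source is the matchee or someone outside the database, so it
    # cancels. What remains: the matchee as source (prior 1/N) against
    # the N-n candidates outside the database, each of whom would leave
    # the matchee matching only by coincidence (probability gamma).
    return 1.0 / (1.0 + (N - n) * gamma)

print(posterior_source(N=1_000_000, n=100_000, gamma=1e-6))  # ~0.526
print(posterior_source(N=1_000_000, n=999_999, gamma=1e-6))  # ~1.0, nearly everyone searched
```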
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation. The proposed system is completely distributed and automates the mechanisms for adapting the logical network to the offered load. The system can dynamically manage a logical network, such as a virtual path network in ATM or a label switched path network in MPLS or GMPLS. The system design and implementation are based on a multi-agent system (MAS) which makes the decisions about when and how to change a logical path. Despite the lack of a centralised global network view, results show that the MAS manages the network resources effectively, reducing the connection blocking probability and therefore achieving better utilisation of network resources. We also include details of the system's architecture and implementation.
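The paper's agent protocol is not reproduced here; the following Python sketch only illustrates the kind of threshold-based local decision a per-path agent might take, under hypothetical thresholds and step sizes.

```python
# Sketch of a per-path agent's local decision rule for dynamic bandwidth
# reallocation in a logical network (e.g. ATM virtual paths or MPLS
# label switched paths). Thresholds and step sizes are hypothetical;
# the actual MAS protocol in the paper is more elaborate.

class PathAgent:
    def __init__(self, path_id, capacity, high=0.9, low=0.4, step=0.1):
        self.path_id = path_id
        self.capacity = capacity   # currently reserved bandwidth
        self.high, self.low = high, low
        self.step = step           # fraction of capacity to adjust by

    def decide(self, offered_load):
        """Return a reconfiguration request based on local utilisation only."""
        utilisation = offered_load / self.capacity
        if utilisation > self.high:
            # Risk of blocking: ask neighbouring agents for more bandwidth.
            return ("request_increase", self.capacity * self.step)
        if utilisation < self.low:
            # Over-provisioned: release bandwidth so other paths can use it.
            return ("offer_release", self.capacity * self.step)
        return ("no_change", 0.0)

agent = PathAgent("LSP-7", capacity=100.0)
print(agent.decide(offered_load=95.0))  # ('request_increase', 10.0)
print(agent.decide(offered_load=30.0))  # ('offer_release', 10.0)
```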
Abstract:
Teaching and research are organised differently between subject domains; attempts to construct typologies of higher education institutions, however, often do not include quantitative indicators concerning subject mix that would allow systematic comparisons of large numbers of higher education institutions across different countries, as the availability of data for such indicators is limited. In this paper, we present an exploratory approach to the construction of such indicators. The database constructed in the AQUAMETH project, which also includes data disaggregated at the disciplinary level, is explored with the aim of understanding patterns of subject mix. For six European countries, an exploratory and descriptive analysis of staff composition divided into four large domains (medical sciences; engineering and technology; natural sciences; social sciences and humanities) is performed, which leads to a classification distinguishing between specialist and generalist institutions. Among the latter, a further distinction is made based on the presence or absence of a medical department. Preliminary exploration of this classification and its comparison with other indicators show the influence of long-term dynamics on the subject mix of individual higher education institutions, but also underline disciplinary differences, for example regarding student-to-staff ratios, as well as national patterns, for example regarding the number of PhD degrees per 100 undergraduate students. Despite its many limitations, this exploratory approach allows us to define a classification of higher education institutions that accounts for a large share of the differences between the analysed institutions.
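A minimal sketch in Python of the specialist/generalist distinction described above, based on staff shares across the four domains; the 40% dominance threshold and the example headcounts are hypothetical assumptions, not the paper's actual criterion.

```python
# Sketch of a specialist/generalist classification from staff composition
# across four domains, in the spirit of the analysis described above.
# The 40% dominance threshold and all example figures are hypothetical.

DOMAINS = ("medical", "engineering", "natural", "social_humanities")

def classify(staff_counts, dominance=0.40):
    """staff_counts: dict mapping domain -> staff headcount."""
    total = sum(staff_counts.values())
    shares = {d: staff_counts.get(d, 0) / total for d in DOMAINS}
    if max(shares.values()) >= dominance:
        return "specialist"
    # Generalists are further split by presence of a medical department.
    if staff_counts.get("medical", 0) > 0:
        return "generalist (with medicine)"
    return "generalist (without medicine)"

print(classify({"medical": 20, "engineering": 520, "natural": 80, "social_humanities": 60}))
# specialist
print(classify({"medical": 300, "engineering": 250, "natural": 280, "social_humanities": 270}))
# generalist (with medicine)
print(classify({"medical": 0, "engineering": 300, "natural": 280, "social_humanities": 290}))
# generalist (without medicine)
```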
Abstract:
This study is part of an ongoing collaborative effort between the medical and signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques to the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment, and effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and non-nasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
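A minimal sketch in Python of two-class GMM scoring on spectral features, in the spirit of the detection pipeline described above. The feature dimensionality, mixture size and synthetic data are illustrative; scikit-learn's GaussianMixture is assumed as a stand-in for the paper's GMM implementation.

```python
# Sketch of two-class GMM scoring on spectral feature vectors, in the
# spirit of the apnoea/control detection described above. Feature
# dimensionality, mixture size and the random data are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
dim, n_components = 13, 8  # e.g. MFCC-like features, 8-component GMMs

# Stand-in training data; a real system would use features extracted
# from healthy and apnoea speech recordings.
healthy_feats = rng.normal(0.0, 1.0, size=(500, dim))
apnoea_feats = rng.normal(0.5, 1.2, size=(500, dim))

gmm_healthy = GaussianMixture(n_components=n_components, random_state=0).fit(healthy_feats)
gmm_apnoea = GaussianMixture(n_components=n_components, random_state=0).fit(apnoea_feats)

def classify_utterance(frames):
    """Decide by average frame log-likelihood under each class model."""
    ll_h = gmm_healthy.score_samples(frames).mean()
    ll_a = gmm_apnoea.score_samples(frames).mean()
    return "apnoea" if ll_a > ll_h else "healthy"

test = rng.normal(0.5, 1.2, size=(120, dim))  # frames from one utterance
print(classify_utterance(test))
```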
Abstract:
Expert supervision systems are software applications specially designed to automate process monitoring. The goal is to reduce the dependency on human operators in assuring the correct operation of a process, including faulty situations. Building this kind of application involves a substantial design and development effort in order to represent and manipulate process data and behaviour at different degrees of abstraction, and to interface with the data acquisition systems connected to the process. This is an open problem whose difficulty grows with the number of variables, parameters and relations needed to account for the complexity of the process. A solution is provided by multiple specialised modules, each tuned to solve a simpler task, operating under coordination. A modular architecture based on the concept of software agents, taking advantage of the integration of diverse knowledge-based techniques, is proposed for this purpose. The components (software agents, communication mechanisms and perception/action mechanisms) are based on ICa (Intelligent Control architecture), a software middleware supporting the construction of applications with software agent features.
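A minimal Python sketch of the perceive/decide/act pattern under a simple coordinator, illustrating the modular agent architecture described above. ICa itself is middleware not shown here; every class, variable name and threshold below is a hypothetical stand-in.

```python
# Sketch of the perceive/decide/act pattern for supervision agents under
# a simple coordinator. A hypothetical stand-in for the modular agent
# architecture; it does not reproduce the ICa middleware.

class SupervisionAgent:
    def __init__(self, name, variable, threshold):
        self.name, self.variable, self.threshold = name, variable, threshold

    def perceive(self, process_data):
        # Read this agent's variable from the data acquisition snapshot.
        return process_data.get(self.variable)

    def decide(self, value):
        # Simplest possible knowledge: a single threshold rule.
        if value is None:
            return None
        return f"ALARM {self.variable}={value}" if value > self.threshold else None

class Coordinator:
    def __init__(self, agents):
        self.agents = agents

    def step(self, process_data):
        """Poll every agent once and collect their conclusions."""
        messages = []
        for agent in self.agents:
            action = agent.decide(agent.perceive(process_data))
            if action:
                messages.append((agent.name, action))
        return messages

coord = Coordinator([
    SupervisionAgent("temp_monitor", "temperature", threshold=80.0),
    SupervisionAgent("pressure_monitor", "pressure", threshold=5.0),
])
print(coord.step({"temperature": 92.5, "pressure": 3.1}))
# [('temp_monitor', 'ALARM temperature=92.5')]
```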