908 results for Capability Maturity Model for Software
Abstract:
A medium-sized company embarks on a project to migrate its telephone system, which is based on a traditional model and suffers from multiple problems in terms of both functionality and cost, and decides to bet on a free-software solution, with Linux and Asterisk as the cornerstones of the project design.
Abstract:
Creation of a commercial and accounting management application starting from the one already in place at the company Ph Systems S.L., therefore following the technology used in that application: developed in Visual Basic 6 and using in-house objects and libraries that facilitate linking the forms to the database tables through both ODBC and OLE DB connections. The database management system selected for the project will be Oracle, which is the one currently used for the data of the stock management application. Tasks to be carried out: analysis of the system requirements; design of the different modules of the future application; obtaining a well-defined database starting from the existing one; implementation of the modules of the future application. The learning objectives of this project are the following: improving knowledge of the management of an Oracle database, and improving command of the Visual Basic 6 programming language.
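As an illustration of the ODBC access path described above, here is a minimal sketch of querying an Oracle database over ODBC. Since the project's own VB6/ADO code is not reproduced here, the sketch uses Python with pyodbc, and the DSN, credentials and table are hypothetical.

```python
# Minimal sketch of querying an Oracle database over ODBC (hypothetical DSN, credentials and table).
# The project itself uses Visual Basic 6 with ADO/OLE DB; this only illustrates the same ODBC path from Python.
import pyodbc

conn = pyodbc.connect("DSN=PHSYSTEMS_ORACLE;UID=app_user;PWD=secret")  # hypothetical connection string
cursor = conn.cursor()
cursor.execute("SELECT product_id, description, stock FROM stock_items WHERE stock < ?", 10)
for product_id, description, stock in cursor.fetchall():
    print(product_id, description, stock)
conn.close()
```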
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, to model the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both "labelled" observed data and "unlabelled" data, which have no measured value but describe prior knowledge and other relevant data in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and describe stochastic variability and non-uniqueness of spatial properties. On the other hand, it is able to capture and preserve key spatial dependencies such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
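As a point of reference for the regression building block, the sketch below fits a plain supervised SVR with scikit-learn on property values observed at a few well locations and predicts on a grid. It is only a baseline illustration under assumed data: the semi-supervised treatment of unlabelled manifold data and the MCMC-based history matching described in the abstract are not shown, and all variable names are hypothetical.

```python
# Minimal supervised SVR baseline (scikit-learn): fit a property observed at wells and
# predict it on a regular grid. The semi-supervised variant and the Bayesian
# history-matching loop described in the abstract are not part of this sketch.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
well_xy = rng.uniform(0.0, 1.0, size=(20, 2))                        # hypothetical well locations
well_perm = np.sin(4 * well_xy[:, 0]) + 0.1 * rng.normal(size=20)    # hypothetical "measured" property

model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=2.0)
model.fit(well_xy, well_perm)

gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
perm_map = model.predict(grid).reshape(gx.shape)                     # interpolated property field
```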
Abstract:
Education is today an important activity for museum institutions. The educational work of museums can be considered, above all, non-formal education. ICTs put a multitude of tools at museums' disposal. The use made of them varies from one institution to another; we will see this through case studies of three museum institutions. Museums themselves, working groups, authors, administrations... have identified the need to analyse how museum institutions use ICTs for educational purposes, and growing importance is given to having tools and methodologies that are as homogeneous as possible for analysing this use. The aim of this work goes in that direction: to evaluate the use and also, indispensably, the actual educational outcomes. I have chosen to structure the information in a computer program. The justification is twofold: to simplify the processes of acquiring, managing and evaluating the information and, on the other hand, to ensure, through the implementation, that the conceptual structure is robust. This has been reasonably achieved, while being aware that more important than the program itself (which is why its internal workings are not examined in depth) is the data structure used; to reach this result it was necessary to reflect on certain concepts, and part of these reflections has been included because they are the basis of the data model used.
Abstract:
The objectives of this Master's thesis (TFM) consist in changing the business model of free software for web services.
Abstract:
Swain corrects the chi-square overidentification test (i.e., the likelihood ratio test of fit) for structural equation models, whether with or without latent variables. The chi-square statistic is asymptotically correct; however, it does not behave as expected in small samples and/or when the model is complex (cf. Herzog, Boomsma, & Reinecke, 2007). Thus, particularly in situations where the ratio of sample size (n) to the number of parameters estimated (p) is relatively small (i.e., the p to n ratio is large), the chi-square test will tend to overreject correctly specified models. To obtain a closer approximation to the distribution of the chi-square statistic, Swain (1975) developed a correction; this scaling factor, which converges to 1 asymptotically, is multiplied by the chi-square statistic. The correction better approximates the chi-square distribution, resulting in more appropriate Type I rejection rates (see Herzog & Boomsma, 2009; Herzog et al., 2007).
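For concreteness, the sketch below applies such a multiplicative correction; the scaling factor is written as commonly reported in the follow-up literature (Herzog & Boomsma, 2009) and should be verified against the original sources, and the example inputs are hypothetical.

```python
# Sketch of the Swain small-sample correction for the chi-square fit statistic,
# following the formula as reported in Herzog & Boomsma (2009); verify against the
# original sources before relying on it.
from math import sqrt

def swain_corrected_chisq(chisq, N, p, t):
    """chisq: ML chi-square; N: sample size; p: observed variables; t: free parameters."""
    d = p * (p + 1) / 2 - t              # degrees of freedom of the model
    n = N - 1
    q = (sqrt(1 + 8 * t) - 1) / 2
    s = 1 - (p * (2 * p**2 + 3 * p - 1) - q * (2 * q**2 + 3 * q - 1)) / (12 * d * n)
    return s * chisq                     # s -> 1 as n grows, so the correction vanishes asymptotically

# Hypothetical example: chi-square of 85.3 with N = 120, 12 observed variables, 28 free parameters.
print(swain_corrected_chisq(85.3, 120, 12, 28))
```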
Abstract:
Using a suitable Hull and White type formula we develop a methodology to obtain a second order approximation to the implied volatility for very short maturities. Using this approximation we accurately calibrate the full set of parameters of the Heston model. One of the reasons that makes our calibration for short maturities so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and it requires a minimal computational cost.
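For reference, the full parameter set mentioned in the abstract is that of the standard Heston dynamics; the block below is the textbook formulation, not a reproduction of the paper's approximation formula.

```latex
% Standard Heston dynamics; the calibrated parameter set is (v_0, \kappa, \theta, \sigma, \rho).
\[
\begin{aligned}
  dS_t &= \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \\
  dv_t &= \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{v}, \qquad
  d\langle W^{S}, W^{v}\rangle_t = \rho\,dt .
\end{aligned}
\]
```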
Abstract:
This paper examines the relationship between the equity premium and the risk-free rate at three different maturities using post-1973 data for a panel of 7 OECD countries. We show the existence of subsample instabilities, of some cross-country differences and of inconsistencies with the expectations theory of the term structure. We perform simulations using a standard consumption-based CAPM model and demonstrate that the basic features of Mehra and Prescott's (1985) puzzle remain, regardless of the time period, the investment maturity and the country considered. Modifications of the basic setup are also considered.
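As a reminder of the framework behind such simulations, the standard consumption-based CAPM with CRRA utility prices each asset through the Euler equation below; this is the textbook form, not necessarily the exact specification used in the paper.

```latex
% Euler equation of the standard consumption-based CAPM with CRRA utility:
% beta is the discount factor, gamma the risk-aversion coefficient, and R_{t+1}^{j} the
% gross return on asset j (equity or the risk-free bond at a given maturity).
\[
  E_t\!\left[\beta \left(\frac{C_{t+1}}{C_t}\right)^{-\gamma} R_{t+1}^{\,j}\right] = 1 .
\]
```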
Abstract:
This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Several new measures of "generalized duration" are presented and applied in different situations in order to manage market risk and yield curve risk. By means of these measures, we are able to compute the hedging ratios that allow us to immunize a bond portfolio by means of options on bonds. Focusing on the hedging problem, it is shown that these new measures allow us to immunize a bond portfolio against changes (parallel and/or in the slope) in the yield curve. Finally, a proposal for overcoming the limitations of conventional duration by means of these new measures is presented and illustrated numerically.
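A generic two-factor duration-matching condition conveys the idea behind such hedging ratios; the paper's specific "generalized duration" measures may be defined differently, so the block below is only an illustrative sketch.

```latex
% Generic factor durations for a bond price P(l, s) driven by the long rate l and the spread s,
% and immunization conditions using two hedging instruments with prices H_1, H_2 and positions \phi_1, \phi_2.
\[
  D_f = -\frac{1}{P}\frac{\partial P}{\partial f}, \quad f \in \{l, s\};
  \qquad
  P\,D_f^{P} + \phi_1 H_1 D_f^{H_1} + \phi_2 H_2 D_f^{H_2} = 0
  \quad \text{for } f \in \{l, s\}.
\]
```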
Abstract:
This paper presents a two-factor model of the term structure of interest rates. We assume that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Assuming that both factors follow a joint Ornstein-Uhlenbeck process, a general bond pricing equation is derived. We obtain a closed-form expression for bond prices and examine its implications for the term structure of interest rates. We also derive a closed-form solution for interest rate derivatives prices. This expression is applied to price European options on discount bonds and more complex types of options. Finally, empirical evidence of the model's performance is presented.
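In generic form, a joint Ornstein-Uhlenbeck assumption for the two factors leads to exponential-affine bond prices; the exact parameterization and the functions A, B, C below are illustrative and not reproduced from the paper.

```latex
% Generic joint Ornstein-Uhlenbeck dynamics for the long rate L_t and the spread s_t,
% and the resulting exponential-affine discount bond price, with \tau = T - t.
\[
\begin{aligned}
  dL_t &= \kappa_L(\theta_L - L_t)\,dt + \sigma_L\,dW_t^{L}, \\
  ds_t &= \kappa_s(\theta_s - s_t)\,dt + \sigma_s\,dW_t^{s}, \qquad
  d\langle W^{L}, W^{s}\rangle_t = \rho\,dt, \\
  P(t, T) &= \exp\!\bigl(A(\tau) - B(\tau)\,L_t - C(\tau)\,s_t\bigr).
\end{aligned}
\]
```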
Abstract:
In adult mammals, neural progenitors located in the dentate gyrus retain their ability to generate neurons and glia throughout life. In rodents, increased production of new granule neurons is associated with improved memory capacities, while decreased hippocampal neurogenesis results in impaired memory performance in several memory tasks. In mouse models of Alzheimer's disease, neurogenesis is impaired and the granule neurons that are generated fail to integrate existing networks. Thus, enhancing neurogenesis should improve functional plasticity in the hippocampus and restore cognitive deficits in these mice. Here, we performed a screen of transcription factors that could potentially enhance adult hippocampal neurogenesis. We identified Neurod1 as a robust neuronal determinant with the capability to direct hippocampal progenitors towards an exclusive granule neuron fate. Importantly, Neurod1 also accelerated neuronal maturation and functional integration of new neurons during the period of their maturation when they contribute to memory processes. When tested in an APPxPS1 mouse model of Alzheimer's disease, directed expression of Neurod1 in cycling hippocampal progenitors conspicuously reduced dendritic spine density deficits on new hippocampal neurons, to the same level as that observed in healthy age-matched control animals. Remarkably, this population of highly connected new neurons was sufficient to restore spatial memory in these diseased mice. Collectively, our findings demonstrate that endogenous neural stem cells of the diseased brain can be manipulated to become new neurons that could allow cognitive improvement.
Abstract:
This final degree project (PFC) deals with the implementation of an IT services website based on open-source software; in order to undertake it and move it forward as a business, a study of the technologies and a business plan have been carried out, aiming to obtain the maximum profitability from the website. The services chosen as the business model are basically four: web design, the design of e-commerce websites or online shops, the implementation of enterprise resource planning systems known as ERP (Enterprise Resource Planning), and the implementation of customer relationship management systems known as CRM (Customer Relationship Management). The first chapter is an introduction; the second chapter deals with free software: what it is, which business models it offers and the existing types of licence; the third chapter is a business plan detailing the services we will offer and how we will manage them; the fourth chapter deals with the technologies used and why; and finally some conclusions are drawn. The website, netsolucion.com, which has been designed with Wordpress, implies not only having to study and learn this platform but also all the ones offered as services, namely: the MySQL database management system and PHP for web design, Prestashop for the design of online shops, and OpenERP for the implementation of the enterprise resource planning systems (ERPs) and the customer relationship management systems (CRMs).
Abstract:
Contamination of weather radar echoes by anomalous propagation (anaprop) mechanisms remains a serious issue in the quality control of radar precipitation estimates. Although significant progress has been made in identifying clutter due to anaprop, there is no unique method that solves the question of data reliability without removing genuine data. The work described here relates to the development of a software application that uses a numerical weather prediction (NWP) model to obtain the temperature, humidity and pressure fields needed to calculate the three-dimensional structure of the atmospheric refractive index, from which a physically based prediction of the incidence of clutter can be made. This technique can be used in conjunction with existing methods for clutter removal by modifying the parameters of detectors or filters according to the physical evidence for anomalous propagation conditions. The parabolic equation method (PEM) is a well-established technique for solving the equations for beam propagation in a non-uniformly stratified atmosphere, but, although intrinsically very efficient, it is not sufficiently fast to be practicable for near real-time modelling of clutter over the entire area observed by a typical weather radar. We demonstrate a fast hybrid PEM technique that is capable of providing acceptable results in conjunction with a high-resolution terrain elevation model, using a standard desktop personal computer. We discuss the performance of the method and approaches for the improvement of the model profiles in the lowest levels of the troposphere.
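The refractive-index step mentioned above can be sketched with the standard radio refractivity formula; the functions below are an illustration based on that textbook formula, not code from the application described in the abstract.

```python
# Standard radio refractivity N (N-units) from pressure p [hPa], temperature T [K] and
# water vapour partial pressure e [hPa], plus the modified refractivity M used for
# ducting diagnostics (h is height above the surface in metres).
def refractivity(p_hpa, t_kelvin, e_hpa):
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin**2

def modified_refractivity(n_units, height_m):
    return n_units + 0.157 * height_m  # term accounting for Earth's curvature

# Hypothetical near-surface values: 1013 hPa, 288 K, 15 hPa of water vapour.
n0 = refractivity(1013.0, 288.0, 15.0)
print(n0, modified_refractivity(n0, 100.0))
```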
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
Abstract:
Delta(9)-Tetrahydrocannabinol (THC) is frequently found in the blood of drivers suspected of driving under the influence of cannabis or involved in traffic crashes. The present study used a double-blind crossover design to compare the effects of medium (16.5 mg THC) and high doses (45.7 mg THC) of hemp milk decoctions or of a medium dose of dronabinol (20 mg synthetic THC, Marinol) on several skills required for safe driving. Forensic interpretation of cannabinoid blood concentrations was attempted using the models proposed by Daldrup (cannabis influence factor, or CIF) and by Huestis and coworkers. First, the time-concentration profiles of THC, 11-hydroxy-Delta(9)-tetrahydrocannabinol (11-OH-THC, the active metabolite of THC), and 11-nor-9-carboxy-Delta(9)-tetrahydrocannabinol (THCCOOH) in whole blood were determined by gas chromatography-mass spectrometry with negative ion chemical ionization. Compared to smoking studies, relatively low concentrations were measured in blood. The highest mean THC concentration (8.4 ng/mL) was achieved 1 h after ingestion of the strongest decoction. The mean maximum 11-OH-THC level (12.3 ng/mL) slightly exceeded that of THC. THCCOOH reached its highest mean concentration (66.2 ng/mL) 2.5-5.5 h after intake. Individual blood levels showed considerable intersubject variability. The willingness to drive was influenced by the importance of the requested task. Under significant cannabinoid influence, the participants refused to drive when asked whether they would agree to accomplish several unimportant tasks (e.g., driving a friend to a party). Most of the participants reported a significant feeling of intoxication and did not appreciate the effects, notably those felt after drinking the strongest decoction. Road sign and tracking testing revealed obvious and statistically significant differences between placebo and treatments. A marked impairment was detected after ingestion of the strongest decoction. A CIF value, which relies on the molar ratio of the main active to inactive cannabinoids, greater than 10 was found to correlate with a strong feeling of intoxication; it also matched a significant decrease in the willingness to drive and a significant impairment in tracking performance. The mathematical model II proposed by Huestis et al. (1992) provided at best a rough estimate of the time of oral administration, with 27% of actual values lying outside the 95% confidence interval. The sum of THC and 11-OH-THC blood concentrations provided a better estimate of impairment than THC alone. This controlled clinical study points out the negative influence on fitness to drive of medium or high oral doses of THC or dronabinol.