36 results for Exception Handling. Exceptional Behavior. Exception Policy. Software Testing. Design Rules
Abstract:
Usability is critical to the success of an interactive software system. Usability testing and evaluation during product development have gained wide acceptance as a strategy to improve product quality. Introducing usability perspectives early in a product is very important in order to provide clear visibility of the quality aspects, not only for the developers but also for the testing users. However, usability evaluation and testing are not commonly considered an essential element of the software development process. This paper therefore presents a proposal to introduce usability evaluation and testing into software development through reuse of software artifacts. Additionally, it suggests introducing an auditor within the classification of actors for usability tests, and it proposes an improvement of the checklists used for heuristic evaluation, adding quantitative and qualitative aspects to them.
Abstract:
The empirical evidence testing the validity of the rational partisan theory (RPT) has been mixed. In this article, we argue that the inclusion of other macroeconomic policies and the presence of an independent central bank can partly explain this inconclusiveness. This article expands Alesina's (1987) RPT model to include an extra policy and an independent central bank. With these extensions, the implications of RPT are altered significantly. In particular, when the central bank is more concerned about output than public spending (an assumption made by many papers in this literature), the direct relationship between inflation and output derived in Alesina (1987) never holds. Keywords: central bank, conservativeness, political uncertainty. JEL Classification: E58, E63.
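For orientation, a stylized sketch of the standard RPT building blocks this extension starts from (the notation is illustrative, not taken from the paper):

```latex
% Expectations-augmented supply: output responds to inflation surprises
y_t = \bar{y} + \gamma \left( \pi_t - \pi_t^{e} \right), \qquad \gamma > 0
% Pre-election expectations average the parties' inflation choices,
% weighted by the probability P that the left-wing party wins:
\pi_t^{e} = P\,\pi^{L} + (1 - P)\,\pi^{R}, \qquad \pi^{L} > \pi^{R}
```

Because \(\pi_t^{e}\) lies strictly between the two parties' inflation rates, either election outcome produces an inflation surprise and hence a temporary output movement; the abstract's claim is that this direct inflation-output link can break down once a second policy instrument and an output-oriented independent central bank are added.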
Abstract:
This paper proposes a method to conduct inference in panel VAR models with cross-unit interdependencies and time variation in the coefficients. The approach can be used to obtain multi-unit forecasts and leading indicators and to conduct policy analysis in multi-unit setups. The framework of analysis is Bayesian, and MCMC methods are used to estimate the posterior distribution of the features of interest. The model is reparametrized to resemble an observable index model, and specification searches are discussed. As an example, we construct leading indicators for inflation and GDP growth in the Euro area using G-7 information.
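A generic sketch of this class of models (illustrative notation; the paper's exact factorization may differ): each unit's VAR has time-varying coefficients that load on every unit's lags, and the coefficient vector is shrunk to a low-dimensional factor structure.

```latex
% Panel VAR with cross-unit interdependencies and drifting coefficients:
y_{it} = \sum_{j=1}^{N} A_{ij,t}(L)\, y_{j,t-1} + e_{it}
% Stack all coefficients in \delta_t and factor them into a small vector
% \theta_t (the "observable index"), which evolves as a random walk:
\delta_t = \Xi\, \theta_t, \qquad \theta_t = \theta_{t-1} + u_t
```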
Abstract:
Unemployment rates in developed countries have recently reached levels not seen in a generation, and workers of all ages are facing increasing probabilities of losing their jobs and considerable losses in accumulated assets. These events likely increase the reliance that most older workers will have on public social insurance programs, exactly at a time when public finances are suffering from a large drop in contributions. Our paper explicitly accounts for employment uncertainty and unexpected wealth shocks, something that has been relatively overlooked in the literature but that has grown in importance in recent years. Using administrative and household-level data, we empirically characterize a life-cycle model of retirement and claiming decisions in terms of the employment, wage, health, and mortality uncertainty faced by individuals. Our benchmark model explains with great accuracy the strikingly high proportion of individuals who claim benefits exactly at the Early Retirement Age, while still explaining the increased claiming hazard at the Normal Retirement Age. We also discuss some policy experiments and their interplay with employment uncertainty. Additionally, we analyze the effects of negative wealth shocks on the labor supply and claiming decisions of older Americans. Our results can explain why early claiming has remained very high in recent years even as the early retirement penalties have increased substantially compared with previous periods, and why labor force participation has remained quite high for older workers even in the midst of the worst employment crisis in decades.
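A generic life-cycle recursion of the kind described (illustrative notation; the paper's state space and choice set may differ): each period an individual chooses consumption and a work/claiming decision, given stochastic employment, wages, health, and survival.

```latex
V_t(a_t, s_t) = \max_{c_t,\, d_t} \; u(c_t, d_t)
  + \beta\, \pi_t(s_t)\, \mathbb{E}\left[ V_{t+1}(a_{t+1}, s_{t+1}) \mid s_t, d_t \right]
% a_t: assets; s_t: employment, wage, and health state; d_t: labor supply
% and benefit-claiming decision; \pi_t(s_t): survival probability.
% The budget constraint adds earnings or benefits to assets and is
% exposed to unexpected wealth shocks.
```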
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
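A minimal sketch of the multinomial-logit purchase probabilities at the core of such a model (the utilities and fare-class names below are invented for illustration):

```python
import math

def mnl_purchase_probs(offered, utilities, u_nobuy=0.0):
    """P(choose class j | offer set S) under a multinomial logit:
    exp(u_j) / (exp(u0) + sum_{k in S} exp(u_k)), where u0 is the
    no-purchase utility."""
    weights = {j: math.exp(utilities[j]) for j in offered}
    denom = math.exp(u_nobuy) + sum(weights.values())
    return {j: w / denom for j, w in weights.items()}

# Invented fare classes: lower fares get higher utility.
utilities = {"Y": 0.5, "M": 1.0, "Q": 1.8}
print(mnl_purchase_probs({"Y", "M", "Q"}, utilities))  # all classes open
print(mnl_purchase_probs({"Y", "M"}, utilities))       # Q closed: buy-up
```

Closing a fare class removes its term from the denominator, shifting probability mass to the remaining classes and to no-purchase; this is the buy-up/buy-down effect the control policy exploits.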
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent need to fix some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least enjoy no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
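As a toy illustration of the adjacent-week design (the flight numbers and arm names below are invented; the actual assignment rules are described in the paper):

```python
from datetime import date, timedelta

def assign_arms(flights, arms, start, weeks):
    """Rotate each flight through the competing algorithms ('arms')
    week by week, so every arm controls every flight in adjacent weeks."""
    schedule = []
    for w in range(weeks):
        week_start = start + timedelta(weeks=w)
        for i, f in enumerate(flights):
            arm = arms[(i + w) % len(arms)]   # rotate assignment weekly
            schedule.append((week_start, f, arm))
    return schedule

# Invented example: two test arms plus the incumbent control.
for row in assign_arms(["IB3166", "IB3167"],
                       ["algoA", "algoB", "incumbent"],
                       date(2010, 1, 4), 3):
    print(row)
```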
Abstract:
One plausible mechanism through which financial market shocks may propagate across countries is through the impact that past gains and losses may have on investors' risk aversion and behavior. This paper presents a stylized model illustrating how heterogeneous changes in investors' risk aversion affect portfolio allocation decisions and stock prices. Our empirical findings suggest that when funds' returns are below average, they adjust their holdings toward the average (or benchmark) portfolio. In so doing, funds tend to sell the assets of countries in which they were overweight, increasing their exposure to countries in which they were underweight. Based on this insight, the paper constructs an index of financial interdependence which reflects the extent to which countries share overexposed funds. The index helps explain the pattern of stock market comovement across countries. Moreover, a comparison of this interdependence measure to indices of trade or commercial bank linkages indicates that our index can improve predictions about which countries are more likely to be affected by contagion from crisis centers.
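A stylized sketch of an index of this kind (the paper's exact construction may differ; the fund weights below are randomly generated for illustration): count, for each pair of countries, the funds that are overexposed in both.

```python
import numpy as np

def interdependence_index(weights, benchmark):
    """Pairwise country index counting shared 'overexposed' funds.
    weights: (funds x countries) portfolio weights;
    benchmark: (countries,) benchmark weights.
    A fund is overexposed in a country if its weight exceeds the benchmark."""
    over = weights > benchmark            # boolean (funds x countries)
    # index[i, j] = number of funds overweight in both country i and j
    return over.T.astype(float) @ over.astype(float)

rng = np.random.default_rng(0)
W = rng.dirichlet(np.ones(4), size=6)     # 6 funds, 4 countries (invented)
b = np.full(4, 0.25)                      # equal-weight benchmark
print(interdependence_index(W, b))
```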
Abstract:
The Petersberg missions are the most ambitious military operations organized by the European Union in the development of the CSDP (Common Security and Defence Policy). In order to achieve an effective and functional organization of these missions, it is desirable that the strategic cultures of the different Member States be largely compatible, to the benefit of a European strategic culture with clear guidelines. This study compares the strategic cultures of Germany, the United Kingdom, and France with regard to their level of compatibility, contrasting them with two recent cases, paradigmatic examples of comprehensive strategic cultures. In this way, we aim to describe the circumstances in which the Petersberg missions take place.
Abstract:
The Great Depression spurred State ownership in Western capitalist countries. Germany was no exception; the last governments of the Weimar Republic took over firms in diverse sectors. Later, the Nazi regime transferred public ownership and public services to the private sector. In doing so, it went against the mainstream trends in the Western capitalist countries, none of which systematically reprivatized firms during the 1930s. Privatization in Nazi Germany was also unique in transferring to private hands the delivery of public services previously provided by government. The firms and the services transferred to private ownership belonged to diverse sectors. Privatization was part of an intentional policy with multiple objectives and was not ideologically driven. As in many recent privatizations, particularly within the European Union, strong financial restrictions were a central motivation. In addition, privatization was used as a political tool to enhance support for the government and for the Nazi Party.
Abstract:
The present work focuses attention on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at the group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
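A minimal sketch of such a Monte Carlo test (the Φ statistic and the null used here are generic stand-ins; the paper's exact definitions may differ): Φ is taken as the share of the sociomatrix's sum of squares carried by its skew-symmetric part, and the null of reciprocity is imposed by making the direction of flow within each dyad exchangeable.

```python
import numpy as np

def skew_symmetry_phi(X):
    """Generic skew-symmetry measure: share of the matrix's total sum of
    squares carried by its skew-symmetric part (0 = full reciprocity)."""
    K = (X - X.T) / 2.0
    return (K ** 2).sum() / (X ** 2).sum()

def monte_carlo_p(X, n_sims=999, seed=0):
    """Approximate Phi's sampling distribution under an exchangeability
    null by randomly swapping the dyadic entries x_ij and x_ji."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices(X.shape[0], k=1)
    obs = skew_symmetry_phi(X)
    count = 0
    for _ in range(n_sims):
        Y = X.copy()
        swap = rng.random(len(iu[0])) < 0.5
        i, j = iu[0][swap], iu[1][swap]
        Y[i, j], Y[j, i] = X[j, i], X[i, j]   # swap within each dyad
        count += skew_symmetry_phi(Y) >= obs
    return (count + 1) / (n_sims + 1)

X = np.array([[0, 5, 1], [2, 0, 4], [1, 3, 0]], float)  # toy sociomatrix
print(skew_symmetry_phi(X), monte_carlo_p(X))
```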
Abstract:
This paper examines statistical analysis of social reciprocity at the group, dyadic, and individual levels. Given that testing statistical hypotheses regarding social reciprocity can also be of interest, a statistical procedure based on Monte Carlo sampling has been developed and implemented in R in order to allow social researchers to describe groups and make statistical decisions.
Abstract:
Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs; a genome-wide epistasis search would therefore require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT, requiring an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data. The software is illustrated on real-life data for Crohn's disease.

Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) Processor 2352 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value of less than 0.05 on real-life Crohn's disease (CD) data.

Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
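The memory idea can be sketched as follows (a simplified single-step maxT with a made-up test statistic, not MBMDR-3.0.3's actual algorithm): rather than storing the full permutations-by-tests matrix, each permutation is streamed over all tests and only its maximum statistic is kept, so storage grows with the number of permutations, not the number of SNP pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_snps, n_perm = 200, 30, 99
geno = rng.integers(0, 3, size=(n, n_snps))   # toy 0/1/2 genotypes
trait = rng.integers(0, 2, size=n)            # toy binary trait
pairs = [(i, j) for i in range(n_snps) for j in range(i + 1, n_snps)]

def pair_stat(i, j, y):
    """Toy interaction statistic (a stand-in for the MB-MDR statistic):
    |correlation| between the genotype product and the trait."""
    return abs(np.corrcoef(geno[:, i] * geno[:, j], y)[0, 1])

observed = np.array([pair_stat(i, j, trait) for i, j in pairs])

# Memory trick: per permutation, stream over all pairs and keep only the
# running maximum, so storage is O(n_perm) rather than O(n_perm * tests).
max_null = np.empty(n_perm)
for p in range(n_perm):
    y_perm = rng.permutation(trait)
    max_null[p] = max(pair_stat(i, j, y_perm) for i, j in pairs)

# single-step maxT adjusted p-values
adj_p = (np.array([(max_null >= o).sum() for o in observed]) + 1) / (n_perm + 1)
print(adj_p.min())
```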
Abstract:
Owing to the need to differentiate themselves and face up to the competition, companies have committed to developing operations that deliver value to the customer, which is why many of them have seen in lean tools an opportunity to improve their operations. This improvement entails reducing money, people, large equipment, inventory, and space, with two objectives: eliminating waste and reducing variability. To achieve the company's strategic objectives, it is essential that these be aligned with the plans of middle management and, in turn, with the work carried out by employees, ensuring that each person is aligned in the same direction at the same time. This is the philosophy of strategic planning. One of the objectives of this project is therefore to develop a tool that facilitates laying out the company's objectives and communicating them to all levels of the organization. Building on those objectives, and taking as a reference the need to reduce inventories in the supply chain, a study of the production of a wind turbine control component will be carried out in order to level production and reduce finished goods inventory. The specific targets in this part are to reduce inventory by 28%, level production by cutting variability from 31% to 24%, maintain a maximum stock of 24 units while guaranteeing supply under variable demand, increase inventory turnover by 10%, and establish an action plan to reduce lead time by 40-50%. All of this will be made possible by mapping the present and future value stream to eliminate waste and create continuous flow, and by calculating a supermarket that keeps stock at an optimal level.
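For context, supermarket sizing of this kind is commonly based on a generic kanban formula (a textbook sketch, not necessarily the calculation used in this project):

```latex
% Number of containers circulating in the supermarket:
N = \frac{D \times L \times (1 + \alpha)}{C}
% D: average demand per period; L: replenishment lead time in periods;
% \alpha: safety factor covering demand variability; C: container size.
```

Under the targets above, the inputs would have to be chosen so that the implied stock stays within the 24-unit cap while still covering demand variability over the lead time.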
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
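As a minimal illustration of the null-space idea (a generic SVD-based rank statistic on simulated data, not the paper's estimator): under H0 that the rank is at most r, the singular values beyond the r-th estimate zero, so a scaled sum of their squares stays bounded, while it diverges when the true rank exceeds r.

```python
import numpy as np

def rank_stat(That, r, n):
    """Test H0: rank(T) <= r. The singular vectors beyond the first r
    estimate the null spaces; the statistic aggregates the scaled
    remaining singular values, which are near zero under H0."""
    s = np.linalg.svd(That, compute_uv=False)
    return n * np.sum(s[r:] ** 2)   # diverges with n if true rank > r

rng = np.random.default_rng(0)
n = 500
T = np.outer([1.0, 2.0, -1.0], [0.5, 1.0, 0.0])   # true rank 1
That = T + rng.normal(scale=1 / np.sqrt(n), size=(3, 3))
print(rank_stat(That, 1, n), rank_stat(That, 0, n))  # small vs large
```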