928 results for BENCHMARK
Abstract:
A new performance metric, the Peak-Error Ratio (PER), is presented to benchmark the performance of a class of neuron circuits that realize the neuron activation function (NAF) and its derivative (DNAF). Neuron circuits biased in the subthreshold region, based on the asymmetric cross-coupled differential pair configuration and on the conventional configuration of applying a small external offset voltage at the input, are compared on the basis of PER. It is shown that the technique of using transistor asymmetry in a cross-coupled differential pair performs on par with that of applying an external offset voltage. The neuron circuits have been experimentally prototyped and characterized as a proof of concept in the 1.5 µm AMI technology.
Abstract:
A search for a narrow diphoton mass resonance is presented based on data from 3.0 fb^{-1} of integrated luminosity from p-bar p collisions at sqrt{s} = 1.96 TeV collected by the CDF experiment. No evidence of a resonance in the diphoton mass spectrum is observed, and upper limits are set on the cross section times branching fraction of the resonant state as a function of Higgs boson mass. The resulting limits exclude Higgs bosons with masses below 106 GeV at a 95% Bayesian credibility level (C.L.) for one fermiophobic benchmark model.
Abstract:
Authors of scholarly papers base the decision on where to submit their manuscripts largely on the prestige of journals, taking little account of other possible factors; information concerning such factors is in fact often not available. This paper argues for the establishment of methods for benchmarking scientific journals that take into account a wider range of journal performance parameters than is currently available. A model of how prospective authors determine the value of submitting to a particular journal is presented. The model is qualitative and includes eight factors that influence an author's decision, along with 21 other underlying factors. The method benchmarks groups of journals by applying these factors. Initial testing of the method has been undertaken in one discipline.
Abstract:
First, in Essay 1, we test whether it is possible to forecast Finnish Options Index return volatility by examining the out-of-sample predictive ability of several common volatility models with alternative well-known methods; we find additional evidence for the predictability of volatility and for the superiority of the more complicated models over the simpler ones. Second, in Essay 2, the aggregate volatility of stocks listed on the Helsinki Stock Exchange is decomposed into market-, industry- and firm-level components, and it is found that firm-level (i.e., idiosyncratic) volatility has increased over time, is more substantial than the other two components, predicts GDP growth, moves countercyclically and, like the other components, is persistent. Third, in Essay 3, we are among the first in the literature to seek firm-specific determinants of idiosyncratic volatility in a multivariate setting. For the cross-section of stocks listed on the Helsinki Stock Exchange, we find that industrial focus, trading volume and block ownership are positively associated with idiosyncratic volatility estimates (obtained from both the CAPM and the Fama-French three-factor model with local and international benchmark portfolios), whereas firm age and size are negatively related to idiosyncratic volatility.
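As a rough illustration of the estimates used in Essay 3, the sketch below computes idiosyncratic volatility as the annualized standard deviation of residuals from a CAPM regression on a benchmark portfolio. The data layout, variable names and the use of pandas/statsmodels are assumptions for illustration, not details taken from the essays.

```python
# Minimal sketch: idiosyncratic volatility as the residual volatility of a
# CAPM regression against a benchmark portfolio. Inputs are hypothetical
# daily return series; the essays' exact data handling is not reproduced.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def idiosyncratic_volatility(stock: pd.Series, market: pd.Series,
                             rf: pd.Series) -> float:
    """Annualized standard deviation of CAPM residuals (daily data)."""
    excess_stock = stock - rf
    excess_market = market - rf
    fit = sm.OLS(excess_stock, sm.add_constant(excess_market)).fit()
    return float(fit.resid.std() * np.sqrt(252))  # annualize daily residuals
```

The same residual-based estimate extends to the Fama-French three-factor model by adding the size and value factors as regressors.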
Abstract:
Mutual funds have increased in popularity among Finnish investors in recent years. In this study, returns on domestic funds are decomposed into several elements that measure different aspects of fund performance. The results indicate that fund managers in the long run tend to allocate fund capital between different stock categories in a profitable way. However, when it comes to the short-term timing of their allocation decisions, they are unable to further improve overall performance. The evidence also suggests that managers possess the ability to pick above-average-performing stocks within the individual stock categories. During the investigated period, most funds returned more than a broad benchmark index even after fees and indirect costs were taken into account.
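The allocation/selection split described above is in the spirit of classic performance attribution; the sketch below is a generic Brinson-style decomposition, which is an assumption about the form of the scheme rather than the study's own formulas.

```python
# Generic allocation/selection attribution of a fund's excess return over a
# benchmark, per stock category. Brinson-style formulas are assumed here;
# the study's exact decomposition may differ.
def attribution(w_fund, w_bench, r_fund, r_bench):
    """Lists of per-category weights and returns; returns (allocation, selection)."""
    allocation = sum((wf - wb) * rb
                     for wf, wb, rb in zip(w_fund, w_bench, r_bench))
    selection = sum(wf * (rf - rb)
                    for wf, rf, rb in zip(w_fund, r_fund, r_bench))
    return allocation, selection

# Two-category usage example with hypothetical weights and returns:
print(attribution([0.6, 0.4], [0.5, 0.5], [0.08, 0.02], [0.06, 0.03]))
```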
Abstract:
The objective of this paper is to investigate pricing accuracy under stochastic volatility where the volatility follows a square-root process. The theoretical prices are compared with market price data (from the German DAX index options market) using two different techniques of parameter estimation: the method of moments and implicit estimation by inversion. Standard Black & Scholes pricing is used as a benchmark. The results indicate that the stochastic volatility model with parameters estimated by inversion, using the available prices on the preceding day, is the most accurate pricing method of the three in this study and can be considered satisfactory. However, since the same model with parameters estimated using a rolling window (the method of moments) proved inferior to the benchmark, the importance of stable and correct parameter estimation is evident.
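For reference, the benchmark model is the closed-form Black & Scholes price; the sketch below is a standard textbook implementation of the European call formula, not code from the paper.

```python
# Standard Black & Scholes price of a European call (the benchmark model).
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: vol."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Hypothetical at-the-money example at a DAX-like level: 3 months, 3% rate, 20% vol.
print(bs_call(S=5000.0, K=5000.0, T=0.25, r=0.03, sigma=0.20))
```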
Abstract:
Partition of unity methods, such as the extended finite element method, allow discontinuities to be simulated independently of the mesh (Int. J. Numer. Meth. Engng 1999; 45:601-620). This eliminates the need for the mesh to be aligned with the discontinuity, and for cumbersome re-meshing as the discontinuity evolves. However, to compute the stiffness matrix of the elements intersected by the discontinuity, a subdivision of the elements into quadrature subcells aligned with the discontinuity is commonly adopted. In this paper, we use a simple integration technique, proposed for polygonal domains (Int. J. Numer. Meth. Engng 2009; 80(1):103-134. DOI: 10.1002/nme.2589), to suppress the need for element subdivision. Numerical results presented for a few benchmark problems in the context of linear elastic fracture mechanics and a multi-material problem show that the proposed method yields accurate results. Owing to its simplicity, the proposed integration technique can be easily integrated into any existing code. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
In recent years, thanks to developments in information technology, large-dimensional datasets have become increasingly available. Researchers now have access to thousands of economic series, and the information contained in them can be used to create accurate forecasts and to test economic theories. To exploit this large amount of information, researchers and policymakers need an appropriate econometric model. Usual time series models, vector autoregressions for example, cannot incorporate more than a few variables. There are two ways to solve this problem: use variable selection procedures, or gather the information contained in the series into an index model. This thesis focuses on one of the most widespread index models, the dynamic factor model (the theory behind this model, based on previous literature, is the core of the first part of this study), and its use in forecasting Finnish macroeconomic indicators (the focus of the second part of the thesis). In particular, I forecast economic activity indicators (e.g. GDP) and price indicators (e.g. the consumer price index) from three large Finnish datasets. The first dataset contains a large set of aggregated series obtained from the Statistics Finland database. The second dataset is composed of economic indicators from the Bank of Finland. The last dataset is formed by disaggregated data from Statistics Finland, which I call the micro dataset. The forecasts are computed following a two-step procedure: in the first step, I estimate a set of common factors from the original dataset; the second step consists of formulating forecasting equations that include the factors extracted previously. The predictions are evaluated using the relative mean squared forecast error, where the benchmark model is a univariate autoregressive model. The results are dataset-dependent. The forecasts based on factor models are very accurate for the first dataset (the Statistics Finland one), while they are considerably worse for the Bank of Finland dataset. The forecasts derived from the micro dataset are still good, but less accurate than those obtained in the first case. This work opens multiple avenues for further research. The results obtained here can be replicated for longer datasets. The non-aggregated data can be represented in an even more disaggregated form (firm level). Finally, the use of micro data, one of the major contributions of this thesis, can be useful in the imputation of missing values and in the creation of flash estimates of macroeconomic indicators (nowcasting).
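The two-step procedure can be sketched in a few lines: principal components stand in for the estimated common factors, and a factor-augmented regression produces the h-step-ahead forecast, which would then be scored against the univariate autoregressive benchmark via relative mean squared forecast error. The use of scikit-learn, the inclusion of a lag of the target, and all variable names are illustrative assumptions.

```python
# Sketch of the two-step forecasting procedure:
#   step 1 - estimate common factors from the large panel (here: PCA);
#   step 2 - regress the h-step-ahead target on the factors and its own lag.
# Panel contents and names are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def factor_forecast(panel: np.ndarray, target: np.ndarray,
                    n_factors: int = 5, h: int = 1) -> float:
    """panel: (T, N) standardized indicators; target: (T,) series to forecast."""
    factors = PCA(n_components=n_factors).fit_transform(panel)   # step 1
    X = np.column_stack([factors[:-h], target[:-h]])             # regressors at t
    y = target[h:]                                               # target at t + h
    model = LinearRegression().fit(X, y)                         # step 2
    x_last = np.concatenate([factors[-1], [target[-1]]])
    return float(model.predict(x_last.reshape(1, -1))[0])
```

Evaluating such forecasts against the autoregressive benchmark then amounts to comparing the two models' mean squared forecast errors over a hold-out period.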
Abstract:
Embryonic stem cells potentially offer ground-breaking insight into health and disease, and are said to offer hope of discovering cures for many ailments unimaginable a few years ago. Human embryonic stem cells are undifferentiated, immature cells that possess an amazing ability to develop into almost any body cell, such as heart muscle, bone, nerve and blood cells, and possibly even organs in due course. This remarkable feature, together with their ability to proliferate indefinitely in vitro (in a test tube), has branded them a so-called "miracle cure". Their potential use in clinical applications provides hope to many sufferers of debilitating and fatal medical conditions. However, the emergence of stem cell research has resulted in intense debates about its promises and dangers. On the one hand, advocates hail its potential to alleviate and even cure fatal and debilitating diseases such as Parkinson's, diabetes, heart ailments and so forth. On the other hand, opponents decry its dangers, drawing attention to the inherent risks of human embryo destruction, cloning for research purposes and, eventually, reproductive cloning. Lately, however, the policy battles surrounding human embryonic stem cell innovation have shifted from controversy over the research itself to scuffles within intellectual property rights. In fact, the ability to obtain patents represents a pivotal factor in the economic success or failure of this new biotechnology. Although stem cell patents tend to more or less satisfy the standard patentability requirements, they also raise serious ethical and moral questions about the meaning of the exclusions on ethical or moral grounds found in European, and to an extent American and Australian, patent laws. At present there is a sort of calamity over human embryonic stem cell patents in Europe, and to an extent in Australia and the United States. This in turn has created a sense of urgency to engage all relevant parties in the discourse on how best to approach the patenting of this new form of scientific innovation. In essence, this should become a highly favoured patenting priority. Otherwise, stem cell innovation and its reliance on patent protection risk turmoil, uncertainty, confusion and even a halt not only to stem cell research but also to further emerging biotechnology research and development. The patent system is premised upon the fundamental principle of balance, which ought to ensure that the temporary monopoly awarded to the inventor equals the social benefit provided by the disclosure of the invention. Ensuring and maintaining this balance within the patent system when patenting human embryonic stem cells is of crucial contemporary relevance. Yet the patenting of human embryonic stem cells raises some fundamental moral, social and legal questions. Overall, the present approach to patenting human embryonic stem cell related inventions is unsatisfactory and ineffective. This draws attention to a specific question which provides the conceptual framework for this work: how can the investigated patent offices successfully deal with patentability of human embryonic stem cells? This in turn points at the thorny issue of the application of the morality clause in this field, in particular the interpretation of the exclusions on ethical or moral grounds as found in Australian, American and European legislative and judicial precedents.
The Thesis seeks to compare laws and legal practices surrounding patentability of human embryonic stem cells in Australia and the United States with those of Europe. Using Europe as the primary case study for lessons and guidance, the central goal of the Thesis becomes the determination of the type of solutions available to Europe, with prospects of applying them to Australia and the United States. The Dissertation purports to define the ethical implications that arise with patenting human embryonic stem cells and intends to offer resolutions to the key ethical dilemmas surrounding patentability of human embryonic stem cells and other morally controversial biotechnology inventions. In particular, the Thesis goal is to propose a functional framework that may be used as a benchmark for an informed discussion on resolving the ethical and legal tensions that come with patentability of human embryonic stem cells in the Australian, American and European patent worlds. Key research questions that arise from these objectives, and which thread continuously throughout the monograph, are:
1. How do common law countries such as Australia and the United States approach and deal with patentability of human embryonic stem cells in their jurisdictions? These practices are then compared to the situation in Europe as represented by the United Kingdom (first two chapters) and by the Court of Justice of the European Union and European Patent Office decisions (Chapter 3 onwards), in order to obtain a full picture of present patenting procedures on European soil.
2. How are ethical and moral considerations taken into account at the patent offices investigated when assessing patentability of human embryonic stem cell related inventions? To assess this part, the Thesis evaluates how ethical issues that arise with patent applications are dealt with by: a) the legislative history of the modern patent system, from its inception in 15th-century England to present-day patent laws; b) the Australian, American and European patent offices, presently and in the past, including other relevant legal precedents on the subject matter; c) normative ethical theories; and d) the notion of human dignity used as the lowest common denominator for the interpretation of the European morality clause.
3. Given the existence of the morality clause in the form of Article 6(1) of Directive 98/44/EC of the European Parliament and of the Council of 6 July 1998 on the legal protection of biotechnological inventions, which corresponds to Article 53(a) of the European Patent Convention, special emphasis is put on Europe as a guiding principle for Australia and the United States. Any room for improvement of the European morality clause and Europe's current manner of evaluating ethical tensions surrounding human embryonic stem cell inventions is examined.
4. A summary of the options (as represented by Australia, the United States and Europe) available as a basis for an optimal examination procedure for human embryonic stem cell inventions is depicted, and the best of these alternatives is deduced in order to create a benchmark framework. This framework is then utilised and promoted as a tool to assist Europe (as represented by the European Patent Office) in examining human embryonic stem cell patent applications. This method suggests the possibility of implementing an institutional solution.
5. Ultimately, the question of whether such a reformed European patent system can be used as a founding stone for potential patent reform in Australia and the United States when examining human embryonic stem cells or other morally controversial inventions is surveyed.
The author wishes to emphasise that the guiding thought while carrying out this work is to convey the significance of identifying, analysing and clarifying the ethical tensions surrounding patenting human embryonic stem cells, and ultimately to present a solution that adequately assesses patentability of human embryonic stem cell inventions and related biotechnologies. In answering the key questions above, the Thesis strives to contribute to the broader stem cell debate about how, and to what extent, ethical and social positions should be integrated into the patenting procedure in the pluralistic and morally divided democracies of Europe and subsequently Australia and the United States.
Abstract:
The Integrated Force Method (IFM) is a novel matrix formulation developed for analyzing civil, mechanical and aerospace engineering structures. In this method, all independent internal forces are treated as unknown variables, which are calculated by simultaneously imposing equations of equilibrium and compatibility conditions. This paper presents a new 12-node serendipity quadrilateral plate bending element, MQP12, for the analysis of thin and thick plate problems using IFM. The Mindlin-Reissner plate theory has been employed in the formulation, which accounts for the effect of shear deformation. The performance of this new element with respect to accuracy and convergence is studied by analyzing many standard benchmark plate bending problems. The results of the new element MQP12 are compared with those of displacement-based 12-node plate bending elements available in the literature, as well as with exact solutions. The new element MQP12 is free from shear locking and performs excellently for both thin and moderately thick plate bending situations.
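The core IFM idea, taking internal forces as primary unknowns and solving equilibrium and compatibility together, can be shown with a toy linear-algebra sketch; the matrices below are hypothetical placeholders, not MQP12 element data, and initial deformations are assumed zero so the compatibility rows have a homogeneous right-hand side.

```python
# Toy sketch of the Integrated Force Method: n internal forces F solve a
# square system stacking m equilibrium equations (B F = P) with n - m
# compatibility conditions (C F = 0, assuming no initial deformations).
# B, C and P are hypothetical placeholders, not MQP12 data.
import numpy as np

def ifm_solve(B: np.ndarray, C: np.ndarray, P: np.ndarray) -> np.ndarray:
    """B: (m, n) equilibrium matrix; C: (n-m, n) compatibility rows; P: (m,) loads."""
    S = np.vstack([B, C])                            # square n x n system
    rhs = np.concatenate([P, np.zeros(C.shape[0])])  # homogeneous compatibility
    return np.linalg.solve(S, rhs)                   # internal forces F
```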
Abstract:
An efficient algorithm within the finite deformation framework is developed for the finite element implementation of a recently proposed isotropic, Mohr-Coulomb type material model, which captures the elastic-viscoplastic, pressure-sensitive and plastically dilatant response of bulk metallic glasses. The constitutive equations are first reformulated and implemented using an implicit numerical integration procedure based on the backward Euler method. The resulting system of nonlinear algebraic equations is solved by the Newton-Raphson procedure. This is achieved by developing the principal-space return mapping technique for the present model, which involves simultaneous shearing and dilatation on multiple potential slip systems. The complete stress update algorithm is presented and the expressions for the viscoplastic consistent tangent moduli are derived. The stress update scheme and the viscoplastic consistent tangent are implemented in the commercial finite element code ABAQUS/Standard. The accuracy and performance of the numerical implementation are verified by considering several benchmark examples, which include a simulation of multiple shear bands in a 3D prismatic bar under uniaxial compression.
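The numerical skeleton described (elastic predictor, backward Euler plastic corrector, Newton-Raphson solve) can be illustrated on a one-dimensional toy model; the actual constitutive model is pressure-sensitive, dilatant and multi-slip, so the sketch below, which assumes a simple power-law overstress flow rule, mirrors only the integration structure.

```python
# Skeleton of an implicit (backward Euler) stress update for a 1D
# elastic-viscoplastic toy model with power-law overstress, solved by
# Newton-Raphson. Only the integration structure is mirrored here; the
# real Mohr-Coulomb type, multi-slip model is far richer.
def stress_update(eps_new, eps_p_old, E=70e3, sigma_y=1.0e3,
                  eta=100.0, m=2.0, dt=1e-3, tol=1e-10, max_iter=50):
    """Return (stress, updated plastic strain) after one time step."""
    sigma_trial = E * (eps_new - eps_p_old)        # elastic predictor
    if abs(sigma_trial) <= sigma_y:
        return sigma_trial, eps_p_old              # step is purely elastic
    sign = 1.0 if sigma_trial > 0 else -1.0
    dgamma = 0.0                                   # plastic multiplier increment
    for _ in range(max_iter):
        sigma = sigma_trial - E * sign * dgamma
        over = max((abs(sigma) - sigma_y) / sigma_y, 0.0)
        r = dgamma - (dt / eta) * over ** m        # backward Euler residual
        if abs(r) < tol:
            break
        dr = 1.0 + (dt / eta) * m * over ** (m - 1) * E / sigma_y
        dgamma -= r / dr                           # Newton-Raphson correction
    return sigma_trial - E * sign * dgamma, eps_p_old + sign * dgamma
```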
Abstract:
We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve at least as good accuracy as the best compared classifiers, using significantly less computational resources.
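To make the decomposability property concrete, the sketch below implements the traditional log-likelihood score that f̂CLL is designed to match in complexity: it decomposes into one count-based term per node given its parents. This is a generic illustration of the baseline criterion, not the authors' f̂CLL implementation.

```python
# Decomposable log-likelihood score of a Bayesian network structure:
#   LL(G) = sum_{i,j,k} N_ijk * log(N_ijk / N_ij)
# over nodes i, parent configurations j and node states k. Generic
# illustration of the baseline criterion that f-hat-CLL approximates.
import math
from collections import Counter

def loglik_score(data, parents):
    """data: list of discrete tuples; parents: node index -> tuple of parents."""
    score = 0.0
    for i, pa in parents.items():
        joint = Counter((tuple(row[p] for p in pa), row[i]) for row in data)
        marg = Counter(tuple(row[p] for p in pa) for row in data)
        for (pa_cfg, _), n_ijk in joint.items():
            score += n_ijk * math.log(n_ijk / marg[pa_cfg])  # per-node term
    return score

# Usage: naive-Bayes-like structure C -> X1, C -> X2 on four hypothetical rows.
rows = [(0, 0, 0), (0, 0, 1), (1, 1, 1), (1, 1, 0)]
print(loglik_score(rows, {0: (), 1: (0,), 2: (0,)}))
```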
Abstract:
In this work, we present a new monolithic strategy for solving fluid-structure interaction problems involving incompressible fluids, within the context of the finite element method. This strategy, like the continuum dynamics it discretizes, conserves certain properties, and thus provides a rational basis for the design of the time-stepping strategy; detailed proofs of the conservation of these properties are provided. The proposed algorithm works with displacement and velocity variables for the structure and fluid, respectively, and introduces no new variables to enforce velocity or traction continuity. Any existing structural dynamics algorithm can be used without change in the proposed method. Use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. An analytical solution is presented for one of the benchmark problems used in the literature, namely the piston problem. A number of benchmark problems, including problems involving free surfaces such as sloshing and the breaking-dam problem, are used to demonstrate the good performance of the proposed method. Copyright (C) 2010 John Wiley & Sons, Ltd.
Abstract:
This paper addresses the problem of minimizing the number of columns with superdiagonal nonzeroes (viz., spiked columns) in a square, nonsingular linear system of equations which is to be solved by Gaussian elimination. The focus is on a class of min-spike heuristics in which the rows and columns of the coefficient matrix are first permuted to block lower-triangular form. Subsequently, the number of spiked columns in each irreducible block and their heights above the diagonal are minimized heuristically. We show that if every column in an irreducible block has exactly two nonzeroes, i.e., is a doubleton, then there is exactly one spiked column. Further, if there is at least one non-doubleton column, there is always an optimal permutation of rows and columns under which none of the doubleton columns are spiked. An analysis of a few benchmark linear programs suggests that singleton and doubleton columns can abound in practice. Hence, the results of this paper appear to be practically useful. In the rest of the paper, we develop a polynomial-time min-spike heuristic based on the above results and on a graph-theoretic interpretation of doubleton columns.
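The empirical observation about singleton and doubleton columns is easy to check on any sparse coefficient matrix; the sketch below counts the column classes with SciPy, an assumed tooling choice rather than the paper's own code.

```python
# Count singleton and doubleton columns (exactly one or two nonzeroes) in a
# sparse matrix -- the column classes the paper's results concern.
import numpy as np
from scipy import sparse

def column_profile(A) -> dict:
    """A: any SciPy sparse matrix; returns counts of column classes."""
    nnz_per_col = np.diff(A.tocsc().indptr)     # nonzeroes in each column
    return {
        "singleton": int(np.sum(nnz_per_col == 1)),
        "doubleton": int(np.sum(nnz_per_col == 2)),
        "other":     int(np.sum(nnz_per_col > 2)),
    }

# Usage on a small random sparse matrix (hypothetical stand-in for an LP matrix):
A = sparse.random(50, 50, density=0.04, format="csc", random_state=0)
print(column_profile(A))
```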
Abstract:
We propose a family of 3D versions of a smooth finite element method (Sunilkumar and Roy 2010), wherein the globally smooth shape functions are derivable through the condition of polynomial reproduction, with the tetrahedral B-splines (DMS-splines) or tensor-product forms of triangular B-splines and 1D NURBS bases acting as the kernel functions. While the domain decomposition is accomplished through tetrahedral or triangular prism elements, an additional requirement here is an appropriate generation of knot clouds around the element vertices or corners. The possibility of sensitive dependence of numerical solutions on the placement of knot clouds is largely arrested by enforcing the condition of polynomial reproduction whilst deriving the shape functions. Nevertheless, given the higher complexity of forming the knot clouds for tetrahedral elements, especially when higher demands are placed on the order of continuity of the shape functions across inter-element boundaries, we presently emphasize an exploration of the triangular-prism-based formulation in the context of several benchmark problems of interest in linear solid mechanics. In the absence of a more rigorous convergence analysis, the numerical exercises reported herein help establish the method as one of remarkable accuracy and robust performance against numerical ill-conditioning (such as locking of different kinds) vis-a-vis the conventional FEM.
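The polynomial-reproduction condition enforced on the shape functions is simple to state pointwise: the functions must reproduce constants (partition of unity) and linear monomials exactly. A small numerical check of that property, with hypothetical shape-function values, might look like this:

```python
# Check partition of unity and linear reproduction for shape-function values
# N_i at a point x:  sum_i N_i = 1  and  sum_i N_i * x_i = x.
# The nodal data below are hypothetical, just to exercise the check.
import numpy as np

def reproduces_linear(N: np.ndarray, nodes: np.ndarray, x: np.ndarray,
                      tol: float = 1e-12) -> bool:
    """N: (n,) shape-function values at x; nodes: (n, d) nodal coordinates."""
    partition_of_unity = abs(N.sum() - 1.0) < tol
    linear_reproduction = np.allclose(N @ nodes, x, atol=tol)
    return partition_of_unity and linear_reproduction

# Linear-triangle example evaluated at the centroid:
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(reproduces_linear(np.array([1/3, 1/3, 1/3]), nodes, nodes.mean(axis=0)))
```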