933 results for "data model"


Relevance:

70.00%

Publisher:

Abstract:

The principal aim of this research is to elucidate the factors driving the total rate of return of non-listed funds using a panel data analytical framework. In line with previous results, we find that core funds exhibit lower yet more stable returns than value-added and, in particular, opportunistic funds, both cross-sectionally and over time. After taking into account overall market exposure, as measured by weighted market returns, the excess returns of value-added and opportunity funds are likely to stem from high leverage, high exposure to development, active asset management and investment in specialized property sectors. A random effects estimation of the panel data model largely confirms the findings obtained from the fixed effects model. Again, the country and sector property effect shows the strongest significance in explaining total returns. The stock market variable is negative, which hints at switching effects between competing asset classes. For opportunity funds, on average, the returns attributable to gearing are three times higher than those for value-added funds and over five times higher than for core funds. Overall, there is relatively strong evidence indicating that country and sector allocation, style, gearing and fund size combinations impact on the performance of unlisted real estate funds.
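
The abstract does not include estimation code; as a rough illustration of the kind of fixed-effects panel estimation it describes, the sketch below applies a within (entity-demeaning) transformation before ordinary least squares. The variable names (fund identifier, leverage, development exposure, total return) are hypothetical placeholders, not the study's dataset.

```python
# Minimal sketch of a fixed-effects (within) panel estimator, assuming a
# long-format DataFrame with one row per fund-year. Column names are
# hypothetical placeholders, not the variables used in the study.
import numpy as np
import pandas as pd

def within_ols(df, entity_col, y_col, x_cols):
    """Demean y and X within each entity, then run OLS on the demeaned data."""
    cols = [y_col] + x_cols
    demeaned = df[cols] - df.groupby(entity_col)[cols].transform("mean")
    y = demeaned[y_col].to_numpy()
    X = demeaned[x_cols].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(x_cols, beta))

# Example with synthetic data shaped like a fund-level panel (50 funds x 8 years).
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "fund_id": np.repeat(np.arange(50), 8),
    "leverage": rng.normal(0.4, 0.1, 400),
    "dev_exposure": rng.normal(0.2, 0.05, 400),
})
panel["total_return"] = 0.05 + 0.3 * panel["leverage"] + rng.normal(0, 0.02, 400)
print(within_ols(panel, "fund_id", "total_return", ["leverage", "dev_exposure"]))
```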

Relevance:

70.00%

Publisher:

Abstract:

We describe ncWMS, an implementation of the Open Geospatial Consortium’s Web Map Service (WMS) specification for multidimensional gridded environmental data. ncWMS can read data in a large number of common scientific data formats – notably the NetCDF format with the Climate and Forecast conventions – then efficiently generate map imagery in thousands of different coordinate reference systems. It is designed to require minimal configuration from the system administrator and, when used in conjunction with a suitable client tool, provides end users with an interactive means for visualizing data without the need to download large files or interpret complex metadata. It is also used as a “bridging” tool providing interoperability between the environmental science community and users of geographic information systems. ncWMS implements a number of extensions to the WMS standard in order to fulfil some common scientific requirements, including the ability to generate plots representing timeseries and vertical sections. We discuss these extensions and their impact upon present and future interoperability. We discuss the conceptual mapping between the WMS data model and the data models used by gridded data formats, highlighting areas in which the mapping is incomplete or ambiguous. We discuss the architecture of the system and particular technical innovations of note, including the algorithms used for fast data reading and image generation. ncWMS has been widely adopted within the environmental data community and we discuss some of the ways in which the software is integrated within data infrastructures and portals.
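
To make the WMS interaction concrete, here is a sketch of a GetMap request of the kind such a server answers, using the standard WMS 1.3.0 parameters plus the TIME and ELEVATION dimensions needed for multidimensional gridded data. The endpoint URL and layer name are hypothetical.

```python
# Sketch of a WMS 1.3.0 GetMap request for a time- and depth-dependent layer.
# The endpoint and layer name below are hypothetical placeholders.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "ocean/sea_water_temperature",  # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",                # lat/lon axis order for EPSG:4326 in WMS 1.3.0
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2014-06-01T00:00:00Z",           # extra dimension for gridded environmental data
    "ELEVATION": "-5.0",                      # extra dimension, e.g. depth in metres
}

response = requests.get("https://example.org/ncWMS/wms", params=params, timeout=30)
with open("map.png", "wb") as f:
    f.write(response.content)
```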

Relevance:

70.00%

Publisher:

Abstract:

The CHARMe project enables the annotation of climate data with key pieces of supporting information that we term “commentary”. Commentary reflects the experience that has built up in the user community, and can help new or less-expert users (such as consultants, SMEs, experts in other fields) to understand and interpret complex data. In the context of global climate services, the CHARMe system will record, retain and disseminate this commentary on climate datasets, and provide a means for feeding back this experience to the data providers. Based on novel linked data techniques and standards, the project has developed a core system, data model and suite of open-source tools to enable this information to be shared, discovered and exploited by the community.
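
As an illustration of the linked-data approach described, the sketch below expresses a single piece of commentary as RDF using rdflib and the W3C Open Annotation vocabulary; the URIs and comment text are invented for the example, and the project's actual data model may differ in detail.

```python
# Sketch of a commentary annotation as linked data, using rdflib and the
# W3C Open Annotation vocabulary. URIs and text are invented placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

OA = Namespace("http://www.w3.org/ns/oa#")

g = Graph()
g.bind("oa", OA)

annotation = URIRef("http://example.org/charme/anno/1")
target_dataset = URIRef("http://example.org/climate/dataset/sst-v2")
body = URIRef("http://example.org/charme/body/1")

g.add((annotation, RDF.type, OA.Annotation))
g.add((annotation, OA.hasTarget, target_dataset))  # the climate dataset being commented on
g.add((annotation, OA.hasBody, body))              # the commentary itself
g.add((body, RDF.value, Literal("Known cold bias in the Southern Ocean before 1982.")))

print(g.serialize(format="turtle"))
```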

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we present an algorithm for cluster analysis that integrates aspects from cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm, with a special crossover operator, which uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without the need for expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with Multi-Objective Clustering with automatic K-determination (MOCK), the algorithm most closely related to ours.
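
To illustrate the Pareto-based selection underlying such a multi-objective approach, the sketch below keeps only the non-dominated candidate partitions with respect to two validation measures. The measure names and scores are placeholders, not those used in the paper.

```python
# Sketch of Pareto-dominance filtering over candidate partitions, each scored
# by two cluster-validation objectives that are both to be minimized.
# The objective values below are invented placeholders.

def dominates(a, b):
    """True if objective vector a is at least as good as b on every
    objective (minimization) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [
        (name, objs) for name, objs in candidates
        if not any(dominates(other, objs) for _, other in candidates if other != objs)
    ]

# Each entry: (partition label, (compactness, connectivity)) -- lower is better.
partitions = [("k=2", (0.9, 0.1)), ("k=3", (0.5, 0.3)), ("k=4", (0.4, 0.6)), ("k=5", (0.6, 0.7))]
print(pareto_front(partitions))  # "k=5" is dominated by "k=3" and is dropped
```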

Relevance:

70.00%

Publisher:

Abstract:

Noetica is a tool for structuring knowledge about concepts and the relationships between them. It differs from typical information systems in that the knowledge it represents is abstract, highly connected, and includes meta-knowledge (knowledge about knowledge). Noetica represents knowledge using a strongly typed graph data model. By providing a rich type system, it is possible to represent conceptual information using formalized structures. A class hierarchy provides a basic classification for all objects. This allows for a consistency of representation that is not often found in 'free' semantic networks, and gives the ability to easily extend a knowledge model while retaining its semantics. Visualization and query tools are provided for this data model. Visualization can be used to explore complete sets of link-classes, show paths while navigating through the database, or visualize the results of queries. Noetica supports goal-directed queries (a series of user-supplied goals that the system attempts to satisfy in sequence) and pathfinding queries (where the system finds relationships between objects in the database by following links).
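
The sketch below illustrates what a strongly typed graph data model of this general kind might look like: link classes declare which node classes they may connect, and adding a link validates those constraints. The class and link names are invented for the example and are not Noetica's actual schema.

```python
# Sketch of a strongly typed graph: link classes declare which node classes
# they may connect, and insertion is validated against those declarations.
# Class and link names are invented; this is not Noetica's actual schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NodeClass:
    name: str

@dataclass(frozen=True)
class LinkClass:
    name: str
    source: NodeClass
    target: NodeClass

@dataclass
class Node:
    node_class: NodeClass
    label: str

@dataclass
class TypedGraph:
    links: list = field(default_factory=list)

    def add_link(self, link_class: LinkClass, src: Node, dst: Node):
        if src.node_class != link_class.source or dst.node_class != link_class.target:
            raise TypeError(f"{link_class.name} cannot connect "
                            f"{src.node_class.name} to {dst.node_class.name}")
        self.links.append((link_class, src, dst))

Concept = NodeClass("Concept")
Person = NodeClass("Person")
proposed_by = LinkClass("proposed_by", source=Concept, target=Person)

g = TypedGraph()
g.add_link(proposed_by, Node(Concept, "semantic network"), Node(Person, "Quillian"))
```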

Relevance:

70.00%

Publisher:

Abstract:

In this paper, we show that in the proposed models for economic growth, the financial system variables are generally nonparametric. We, thus, use a nonparametric panel data model to estimate the financial system-economic growth relationship. Our results suggest that as long as a country's domestic credit and private credit are above their cross-sectional mean they have a positive effect on GDP growth. We also discover that market capitalisation positively and significantly impacts GDP growth, while stocks traded (with the exception of OECD countries) has a statistically insignificant effect on GDP growth.
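
As a generic illustration of nonparametric estimation of such a relationship (not the authors' exact panel estimator), the sketch below computes a Nadaraya-Watson kernel regression of growth on a financial-development measure using synthetic data.

```python
# Sketch of a Nadaraya-Watson (local-constant) kernel regression on synthetic
# data, as a generic illustration of nonparametric estimation of a
# growth/finance relationship; not the authors' exact estimator.
import numpy as np

def nadaraya_watson(x_grid, x, y, bandwidth):
    """Local-constant kernel regression with a Gaussian kernel."""
    estimates = []
    for x0 in x_grid:
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        estimates.append(np.sum(w * y) / np.sum(w))
    return np.array(estimates)

rng = np.random.default_rng(1)
credit = rng.uniform(0, 2, 300)   # e.g. private credit relative to GDP (synthetic)
growth = 0.02 + 0.01 * np.tanh(credit - 1) + rng.normal(0, 0.005, 300)
grid = np.linspace(0.1, 1.9, 10)
print(nadaraya_watson(grid, credit, growth, bandwidth=0.2))
```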

Relevance:

70.00%

Publisher:

Abstract:

This paper looks at the Humanities Networked Infrastructure (HuNI), a service which aggregates data from thirty Australian data sources and makes them available for use by researchers across the humanities and creative arts. We discuss the methods used by HuNI to aggregate data, as well as the conceptual framework which has shaped the design of HuNI’s Data Model around six core entity types. Two of the key functions available to users of HuNI – building collections and creating links – are discussed, together with their design rationale.
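
As a sketch of an aggregation data model built around a small set of core entity types, the snippet below defines generic entity, link and collection records. The six type names are illustrative assumptions, since the abstract does not enumerate HuNI's actual types.

```python
# Sketch of an aggregation data model with a fixed set of core entity types,
# user-created links, and user-built collections. The six type names below
# are illustrative assumptions; the abstract does not list HuNI's actual types.
from dataclasses import dataclass, field

CORE_ENTITY_TYPES = {"Person", "Organisation", "Place", "Event", "Work", "Concept"}

@dataclass
class Entity:
    entity_type: str
    source: str          # which of the aggregated data sources contributed it
    identifier: str
    display_name: str

    def __post_init__(self):
        if self.entity_type not in CORE_ENTITY_TYPES:
            raise ValueError(f"unknown entity type: {self.entity_type}")

@dataclass
class UserLink:
    """A researcher-created assertion connecting two aggregated entities."""
    source_entity: Entity
    target_entity: Entity
    relationship: str
    created_by: str

@dataclass
class Collection:
    """A user-curated set of entities."""
    name: str
    members: list = field(default_factory=list)
```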

Relevance:

70.00%

Publisher:

Abstract:

The strategic management of information plays a fundamental role in the organizational management process, since decision-making depends on the need for survival in a highly competitive market. Companies are constantly concerned with information transparency and good practices of corporate governance (CG), which, in turn, shape the relations between the controlling power of the company and investors. In this context, this article examines the relationship between the disclosure of information by joint-stock companies through XBRL and the open data model adopted by the Brazilian government, a model that boosted the publication of the Information Access Law (Lei de Acesso à Informação), nº 12,527 of 18 November 2011. Information access should be permeated by a mediation policy in order to support the knowledge construction and decision-making of investors. XBRL is the main model for the publishing of financial information. The use of XBRL together with the new semantic standards created for Linked Data strengthens information dissemination, and also creates mechanisms for analysis and cross-referencing of data with different open databases available on the Internet, adding value to the data and information accessed by civil society.

Relevance:

70.00%

Publisher:

Abstract:

Background: Infant mortality is an important measure of human development, related to the level of welfare of a society. In order to inform public policy, various studies have tried to identify the factors that influence, at an aggregated level, infant mortality. The objective of this paper is to analyze the regional pattern of infant mortality in Brazil, evaluating the effect of infrastructure, socio-economic, and demographic variables to understand its distribution across the country.

Methods: Regressions including socio-economic and living conditions variables are conducted in a panel data structure. More specifically, a spatial panel data model with fixed effects and a spatial error autocorrelation structure is used to help to solve spatial dependence problems. The use of a spatial modeling approach takes into account the potential presence of spillovers between neighboring spatial units. The spatial units considered are Minimum Comparable Areas, defined to provide a consistent definition across Census years. Data are drawn from the 1980, 1991 and 2000 Censuses of Brazil, and from data collected by the Ministry of Health (DATASUS). In order to identify the influence of health care infrastructure, variables related to the number of public and private hospitals are included.

Results: The results indicate that the panel model with spatial effects provides the best fit to the data. The analysis confirms that the provision of health care infrastructure and social policy measures (e.g. improving educational attainment) are linked to reduced rates of infant mortality. An original finding concerns the role of spatial effects in the analysis of infant mortality rates (IMR). Spillover effects associated with health infrastructure and water and sanitation facilities imply that there are regional benefits beyond the unit of analysis.

Conclusions: A spatial modeling approach is important to produce reliable estimates in the analysis of panel IMR data. Substantively, this paper contributes to our understanding of the physical and social factors that influence IMR in the case of a developing country.
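
For readers less familiar with the specification, a fixed-effects panel model with a spatially autocorrelated error term is commonly written as follows, where W = (w_ij) is the spatial weights matrix over the Minimum Comparable Areas and the covariates are those described above (this is the generic textbook form, not necessarily the paper's exact equation):

\[ y_{it} = \mathbf{x}_{it}'\boldsymbol{\beta} + \mu_i + \phi_{it}, \qquad \phi_{it} = \lambda \sum_{j} w_{ij}\,\phi_{jt} + \varepsilon_{it}, \]

with \(\mu_i\) the area fixed effect, \(\lambda\) the spatial error autocorrelation parameter, and \(\varepsilon_{it}\) an i.i.d. disturbance.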

Relevance:

70.00%

Publisher:

Abstract:

The need for a convergence between semi-structured data management and Information Retrieval techniques is manifest to the scientific community. In order to fulfil this growing request, the W3C has recently proposed XQuery Full Text, an IR-oriented extension of XQuery. However, the issue of query optimization requires the study of important properties like query equivalence and containment; to this end, a formal representation of documents and queries is needed. The goal of this thesis is to establish such a formal background. We define a data model for XML documents and propose an algebra able to represent most XQuery Full Text expressions. We show how an XQuery Full Text expression can be translated into an algebraic expression and how an algebraic expression can be optimized.

Relevance:

70.00%

Publisher:

Abstract:

In many applications the observed data can be viewed as a censored high-dimensional full data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high-dimensional censored data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples and the key condition of the general theorem is verified for all examples.
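
The generic form of such a one-step update, given an initial estimator and an estimated influence curve (the precise influence curve is model-specific and developed in the paper), is

\[ \hat{\psi}^{1}_{n} = \hat{\psi}^{0}_{n} + \frac{1}{n}\sum_{i=1}^{n} \widehat{IC}(Y_i), \]

where \(\hat{\psi}^{0}_{n}\) is the initial estimator, \(\widehat{IC}\) is the estimated influence curve (which depends on the estimated censoring mechanism), and \(Y_i\) denotes the observed censored data for subject i.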

Relevance:

70.00%

Publisher:

Abstract:

The recent liberalization of the German energy market has forced the energy industry to develop and install new information systems to support agents on the energy trading floors in their analytical tasks. Besides classical approaches of building a data warehouse giving insight into the time series to understand market and pricing mechanisms, it is crucial to provide a variety of external data from the web. Weather information as well as political news or market rumors are relevant to give the appropriate interpretation to the variables of a volatile energy market. Starting from a multidimensional data model and a collection of buy and sell transactions, a data warehouse is built that gives analytical support to the agents. Following the idea of web farming, we harvest the web, match the external information sources after a filtering and evaluation process to the data warehouse objects, and present this qualified information on a user interface where market values are correlated with those external sources over the time axis.
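
The final step described, correlating market values with harvested external sources over the time axis, can be sketched as a time-indexed join followed by a correlation, for example with pandas; the column names and values below are placeholders, not the warehouse's actual schema.

```python
# Sketch of correlating market values with an external time series (e.g.
# temperature harvested from the web) over the time axis. Column names and
# values are placeholders.
import pandas as pd

prices = pd.DataFrame(
    {"spot_price": [31.2, 29.8, 35.1, 40.3]},
    index=pd.date_range("2000-01-03", periods=4, freq="D"),
)
weather = pd.DataFrame(
    {"temperature": [4.0, 6.5, 1.2, -2.3]},
    index=pd.date_range("2000-01-03", periods=4, freq="D"),
)

combined = prices.join(weather, how="inner")  # align both series on the time axis
print(combined["spot_price"].corr(combined["temperature"]))
```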

Relevance:

70.00%

Publisher:

Abstract:

This paper is concerned with the analysis of zero-inflated count data when time of exposure varies. It proposes a modified zero-inflated count data model where the probability of an extra zero is derived from an underlying duration model with Weibull hazard rate. The new model is compared to the standard Poisson model with logit zero inflation in an application to the effect of treatment with thiotepa on the number of new bladder tumors.
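
One plausible way to write such a specification (the paper's exact parametrization may differ) replaces the usual logit zero-inflation probability with a Weibull survival term in the exposure time \(t_i\):

\[ P(Y_i = 0) = \pi_i + (1-\pi_i)\,e^{-\mu_i}, \qquad P(Y_i = k) = (1-\pi_i)\,\frac{e^{-\mu_i}\mu_i^{k}}{k!}, \quad k \ge 1, \]

with \(\pi_i = \exp\{-(\lambda t_i)^{\alpha}\}\), the Weibull survival function evaluated at the exposure time, in place of the logit specification \(\pi_i = 1/(1+e^{-\mathbf{z}_i'\boldsymbol{\gamma}})\) of the standard zero-inflated Poisson model.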

Relevance:

70.00%

Publisher:

Abstract:

This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT is associated with more inefficiency when we measure output with assets. When we use revenue to measure output, self-management is associated with less inefficiency. Also contrary to previous research, higher leverage is associated with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
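
For reference, a translog cost frontier has, in its single-output form, the general shape below (the paper's exact specification, outputs and input prices may differ); \(u_{it} \ge 0\) is the one-sided inefficiency term and \(v_{it}\) the symmetric noise term:

\[ \ln C_{it} = \alpha_0 + \alpha_y \ln y_{it} + \tfrac{1}{2}\alpha_{yy}(\ln y_{it})^2 + \sum_j \beta_j \ln w_{jit} + \tfrac{1}{2}\sum_j \sum_k \beta_{jk}\ln w_{jit}\ln w_{kit} + \sum_j \gamma_j \ln y_{it}\ln w_{jit} + v_{it} + u_{it}. \]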

Relevance:

70.00%

Publisher:

Abstract:

Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
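
As a purely illustrative example of the kind of output such an analysis produces (not a result from the paper), the number of partner invocations performed by an orchestration that iterates over a list of n items in the initiating message might be bounded as

\[ n \;\le\; \mathit{invocations}(n) \;\le\; 2n + 1, \]

i.e. safe lower and upper bounds expressed as functions of the length n of a data component of the request, valid for every execution rather than in expectation.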