960 results for Discrete Data Models


Relevância:

80.00%

Publicador:

Resumo:

The thesis studies the economic and financial conditions of Italian households, using microdata from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation. After a review of the literature, it estimates two non-linear multivariate models of the interactions between assets and liabilities on repeated cross-sections. Second, it analyses households' financial difficulties. It defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time. Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions, and on household net wealth. The review stresses that a large part of the literature explains household debt holdings as a function of, among other variables, net wealth, an assumption that runs into possible endogeneity problems. Chapter 2 defines two non-linear multivariate models to study the interactions between assets and liabilities held by Italian households. Estimation is based on pooled SHIW cross-sections. The first model is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle the presence of non-normality and heteroskedasticity in the error term, which render the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second model is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations.
Chapter 3 reviews methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be dealt with to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by the inclusion of the lagged dependent variable in the set of explanatory variables. The advantage of dynamic panel data models is that they allow one to simultaneously account for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects. Chapter 4 applies the models reviewed in Chapter 3 to analyse the financial difficulties of Italian households, using information on net wealth from the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. A thorough discussion is provided of the alternative approaches proposed in the literature (subjective/qualitative indicators versus quantitative indexes) to identify households in financial distress. Households in financial difficulty are identified as those holding net wealth below the first quartile of the net wealth distribution. Estimation is conducted via four different methods: the pooled probit model, the random effects probit model with exogenous initial conditions, the Heckman model and the more recently developed Wooldridge model. Results obtained from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated models, namely the pooled and exogenous-initial-conditions models, over-estimate such persistence.
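Why the simpler models over-estimate persistence has an intuitive mechanism: unobserved heterogeneity alone makes distress look persistent in pooled data. A minimal pure-Python simulation can illustrate it (invented parameters, not the SHIW data or the thesis's estimators):

```python
import random

random.seed(42)

N, T = 5000, 8  # households and panel waves (invented sizes, not the SHIW panel)

# Each household gets a fixed latent propensity to be in financial distress
# (unobserved heterogeneity); outcomes are drawn independently over time,
# so there is NO true state dependence in this simulation.
propensities = [random.uniform(0.05, 0.6) for _ in range(N)]
panel = [[1 if random.random() < p else 0 for _ in range(T)] for p in propensities]

# Pooled transition frequencies across households and waves
n11 = n01 = d1 = d0 = 0
for path in panel:
    for t in range(1, T):
        if path[t - 1] == 1:
            d1 += 1
            n11 += path[t]
        else:
            d0 += 1
            n01 += path[t]

p_after_1 = n11 / d1  # P(distress_t | distress_{t-1}) in the pooled data
p_after_0 = n01 / d0  # P(distress_t | no distress_{t-1})

# Heterogeneity alone makes distress look persistent to a pooled estimator
print(round(p_after_1, 3), round(p_after_0, 3))
```

The pooled conditional frequency after a distress spell comes out well above the one after a non-distress spell even though, by construction, there are no true dynamics; this is the spurious persistence that the random-effects specifications are designed to absorb.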

Relevância:

80.00%

Publicador:

Resumo:

This doctoral thesis aims at contributing to the literature on transition economies, focusing on the Russian Federation and in particular on regional income convergence and fertility patterns. The first two chapters deal with the issue of income convergence across regions. Chapter 1 provides a historical-institutional analysis of the period between the late years of the Soviet Union and the last decade of economic growth, and a presentation of the sample with a description of gross regional product composition, agrarian or industrial vocation, and labor. Chapter 2 contributes to the literature on exploratory spatial data analysis with an application to a panel of 77 regions over the period 1994-2008. It provides an analysis of spatial patterns and extends the theoretical framework of growth regressions by controlling for spatial correlation and heterogeneity. Chapter 3 analyses national demographic patterns since 1960 and provides a review of policies on maternity leave and family benefits. Data sources are the Statistical Yearbooks of the USSR, the Statistical Yearbooks of the Russian Soviet Federative Socialist Republic and the Demographic Yearbooks of Russia. Chapter 4 analyses the demographic patterns in light of the theoretical frameworks of the Becker model, the Second Demographic Transition and an economic-crisis argument. With national data from 1960 onwards, the theoretical issue of the pro- or countercyclical relation between income and fertility is graphically analyzed and discussed, together with female employment and education. With regional data after 1994, different panel data models are tested. Individual-level data from the Russian Longitudinal Monitoring Survey are analysed using the logit model. Chapter 5 employs data from the Generations and Gender Survey by UNECE to focus on postponement and second-birth intentions.
Postponement is studied through cohort analysis of mean maternal age at first birth, while the methodology used for second birth intentions is the ordered logit model.
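The cohort analysis of mean maternal age at first birth amounts to grouping first births by the mother's birth cohort and averaging within each group. A toy sketch with invented records (not the GGS data):

```python
from statistics import mean

# Hypothetical records (invented): (mother's birth cohort, age at first birth)
records = [
    (1960, 22.1), (1960, 24.0), (1960, 21.5),
    (1970, 23.8), (1970, 25.2), (1970, 24.5),
    (1980, 26.0), (1980, 27.3), (1980, 25.9),
]

# Group ages at first birth by maternal birth cohort, then average per cohort
by_cohort = {}
for cohort, age in records:
    by_cohort.setdefault(cohort, []).append(age)

mean_age = {c: round(mean(ages), 2) for c, ages in sorted(by_cohort.items())}
print(mean_age)  # later cohorts postpone first births in this toy data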

Relevância:

80.00%

Publicador:

Resumo:

During this work an innovative methodology was developed for continuous, in situ gas monitoring (24 h a day) of fumarolic and soil diffuse emissions, applied to the geothermal and volcanic area of Pisciarelli, near Agnano, inside the Campi Flegrei caldera (CFc). In the literature there are only scattered, discrete data on the geochemical gas composition of the fumaroles at Campi Flegrei; only since the early 1980s has there been a systematic record of discrete fumarole sampling at Solfatara (Bocca Grande and Bocca Nuova fumaroles), and since 1999 also at the degassing areas of Pisciarelli. This type of sampling has produced a time series of geochemical analyses with discontinuous coverage (on average 2-3 measurements per month), completely inadequate for the purposes of Civil Defence in such a high-volcanic-risk and densely populated area. For this purpose, and to remedy this lack of data, this study introduced a new methodology of continuous, in situ sampling able to continuously record fumarolic gas compositions and soil diffuse degassing. Its high sampling density (about one measurement per minute, hence 1440 data points daily) and the numerous species detected (CO2, Ar, 36Ar, CH4, He, H2S, N2, O2) allow a good statistical record and the reconstruction of the evolution of the gas composition of the investigated area. The methodology is based on continuous sampling of fumarolic gases and soil degassing using an extraction line which, after a series of condensation steps to remove the water vapour content (better described hereinafter), feeds a quadrupole mass spectrometer for analysis.

Relevância:

80.00%

Publicador:

Resumo:

In Sub-Saharan Africa, non-democratic events, like civil wars and coups d'état, destroy economic development. This study investigates both domestic and spatial effects on the likelihood of civil wars and coups d'état. For civil wars, raising income growth is one of the most common policy conclusions in the research on ending wars. This study adds a focus on ethnic fractionalization. IV-2SLS is applied to overcome the causality problem. The findings document that income growth significantly reduces both the number of wars and their degree of violence in highly ethnically fractionalized countries; otherwise there is a trade-off: income growth reduces the number of wars but increases their level of violence in countries with a few large ethnic groups. Policies promoting growth should therefore consider ethnic composition. This study also investigates the clustering and contagion of civil wars using spatial panel data models. Onset, incidence and end of civil conflicts spread across the network of neighboring countries, while peace, the end of conflicts, diffuses only to the nearest neighbor. There is evidence of an indirect link from neighboring income growth, absent excessive inequality, to a reduced likelihood of civil wars. For coups d'état, this study revisits their diffusion, both for all types of coups and for successful ones only. The results find both domestic and spatial determinants, operating in different periods. Domestic income growth plays the major role in reducing the likelihood of a coup before the end of the Cold War, while spatial effects turn negative afterward. Results on the probability of a coup succeeding are similar. After the end of the Cold War, international organisations seriously promoted democracy and applied pressure against coups d'état, and this seems to have been effective. In sum, this study highlights the role of domestic ethnic fractionalization and of spillovers from neighboring countries in the likelihood of non-democratic events in a country. Policy implementation should take these factors into account.
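The IV-2SLS logic can be sketched in the just-identified, single-regressor case, where the IV slope reduces to cov(z, y)/cov(z, x). The simulation below uses invented parameters (it is not the study's data or specification), but shows how instrumenting removes the confounding bias that OLS suffers:

```python
import random

random.seed(1)

def cov(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

n = 20000
beta = -0.5  # invented "true" effect of income growth on conflict propensity

z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                   # instrument (an exogenous shock)
    ui = random.gauss(0, 1)                   # confounder driving both x and y
    xi = 0.8 * zi + ui + random.gauss(0, 1)   # endogenous regressor
    yi = beta * xi + ui + random.gauss(0, 1)  # outcome
    z.append(zi)
    x.append(xi)
    y.append(yi)

b_ols = cov(x, y) / cov(x, x)  # biased: picks up the confounder
b_iv = cov(z, y) / cov(z, x)   # just-identified IV, identical to 2SLS here

print(round(b_ols, 3), round(b_iv, 3))
```

With these parameters OLS is biased toward zero relative to the true negative effect, while the IV slope recovers it; the same logic extends to the multi-regressor 2SLS used in the study.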

Relevância:

80.00%

Publicador:

Resumo:

A method is given for proving efficiency of the NPMLE, directly linked to empirical process theory. The conditions, in general, are appropriate consistency of the NPMLE, differentiability of the model, differentiability of the parameter of interest, local convexity of the parameter space, and a Donsker class condition for the class of efficient influence functions obtained by varying the parameters. For the case where the model is linear in the parameter and the parameter space is convex, as with most nonparametric missing data models, we show that the method leads to an identity for the NPMLE which almost says that the NPMLE is efficient, and provides us straightforwardly with a consistency and efficiency proof. This identity is extended to an almost-linear class of models which contains biased sampling models. To illustrate, the method is applied to the univariate censoring model, random truncation models, the interval censoring case I model, the class of parametric models and a class of semiparametric models.

Relevância:

80.00%

Publicador:

Resumo:

In a retrospective multicentre study, the success rate and efficiency of activator treatment were analysed. All patients from two University clinics (Giessen, Germany and Berne, Switzerland) that fulfilled the selection criteria (Class II division 1 malocclusion, activator treatment, no aplasia, no extraction of permanent teeth, no syndromes, no previous orthodontic treatment except transverse maxillary expansion, full available records) were included in the study. The subject material amounted to 222 patients with a mean age of 10.6 years. Patient records, lateral head films, and dental casts were evaluated. Treatment was classified as successful if the molar relationship improved by at least half to three-fourths cusp width depending on whether or not the leeway space was used during treatment. Group comparisons were carried out using Wilcoxon two-sample and Kruskal-Wallis tests. For discrete data, chi-square analysis was used and Fisher's exact test when the sample size was small. Stepwise logistic regression was also employed. The success rate was 64 per cent in Giessen and 66 per cent in Berne. The only factor that significantly (P < 0.001) influenced treatment success was the level of co-operation. In approximately 27 per cent of the patients at both centres, the post-treatment occlusion was an 'ideal' Class I. In an additional 38 per cent of the patients, marked improvements in occlusal relationships were found. In subjects with Class II division 1 malocclusions, in which orthodontic treatment is performed by means of activators, a marked improvement of the Class II dental arch relationships can be expected in approximately 65 per cent of subjects. Activator treatment is more efficient in the late than in the early mixed dentition.
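Fisher's exact test, used above for small samples, computes hypergeometric probabilities of tables at least as extreme given fixed margins. A one-sided, pure-Python sketch on an invented 2x2 cooperation-by-success table (not the study's counts):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """Right-tail Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability of a count of a or larger in the top-left cell, given the
    row and column margins (hypergeometric distribution)."""
    row1 = a + b
    col1 = a + c
    n = a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        # math.comb returns 0 when the lower argument exceeds the upper,
        # so impossible tables contribute nothing
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Invented table: good vs. poor co-operation (rows) by treatment success (cols)
p = fisher_exact_one_sided(18, 2, 9, 11)
print(round(p, 4))
```

For this strongly unbalanced toy table the one-sided p-value falls well below conventional significance levels, consistent with co-operation being the decisive factor.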

Relevância:

80.00%

Publicador:

Resumo:

The report explores the problem of detecting complex point target models in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles, and the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere). They differ in the assumed distribution of the locations of the point scatterers within the target. We develop data models for the received signals from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and the transmit signal influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves is generated to show the impact of varying SNR on the detector. The Kullback-Leibler (KL) divergence is applied to obtain the approximate mean difference between the density functions the scatterers assume inside the target models, showing how detector performance changes with the spatial extent of the point scatterers.
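For univariate Gaussians the KL divergence has a closed form, KL = log(s2/s1) + (s1² + (mu1 − mu2)²)/(2·s2²) − 1/2, which is enough to illustrate how the separability of two density functions grows with their spread ratio; this is a simplified univariate stand-in for the report's multivariate setting, not its actual computation:

```python
from math import log

def kl_gauss(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)) in closed form for univariate Gaussians."""
    return log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

# Doubling the spread ratio of the two densities increases their divergence,
# mirroring how a larger target extent changes detector performance.
kl_small = kl_gauss(0.0, 1.0, 0.0, 2.0)
kl_large = kl_gauss(0.0, 1.0, 0.0, 4.0)
print(round(kl_small, 4), round(kl_large, 4))
```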

Relevância:

80.00%

Publicador:

Resumo:

I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample χ2-approximation tests based on Pearson's X2, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440-464). The complex survey correction is based on the approach by Rao and Scott (1981, Journal of the American Statistical Association 76: 221-230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov-Smirnov test for discrete data.
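The power-divergence family mentioned above is a one-parameter family of goodness-of-fit statistics: λ = 1 recovers Pearson's X2 and the λ → 0 limit gives the likelihood-ratio G2. A minimal sketch of the statistic itself (mgof is a Stata command; this is an illustrative Python reimplementation, with invented counts):

```python
from math import log

def power_divergence(observed, expected, lam):
    """Cressie-Read statistic 2/(lam*(lam+1)) * sum O*((O/E)^lam - 1);
    lam = 1 is Pearson's X2, the lam -> 0 limit is the likelihood-ratio G2."""
    if abs(lam) < 1e-12:  # G2 limit
        return 2 * sum(o * log(o / e) for o, e in zip(observed, expected) if o > 0)
    return 2 / (lam * (lam + 1)) * sum(
        o * ((o / e) ** lam - 1) for o, e in zip(observed, expected)
    )

obs = [30, 50, 20]        # invented counts on 3 categories, n = 100
exp = [33.3, 33.3, 33.4]  # an (almost) uniform null hypothesis

x2 = power_divergence(obs, exp, 1.0)  # Pearson X2
g2 = power_divergence(obs, exp, 0.0)  # likelihood-ratio G2
pearson = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
print(round(x2, 3), round(g2, 3))
```

When observed and expected totals match, the λ = 1 member coincides exactly with the familiar Pearson formula, which the last line verifies numerically.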

Relevância:

80.00%

Publicador:

Resumo:

At a time when at least two-thirds of US states have already mandated some form of seller's property condition disclosure statement, and there is a movement in this direction nationally, this paper examines the impact of seller's property condition disclosure law on residential real estate values, on the information asymmetry in housing transactions, and on the shift of risk from buyers and brokers to sellers, and attempts to ascertain the factors that lead to adoption of the disclosure law. The analytical structure employs parametric panel data models, semi-parametric propensity score matching models, and an event study framework, using a unique set of economic and institutional attributes for a quarterly panel of 291 US Metropolitan Statistical Areas (MSAs) and 50 US states spanning 21 years from 1984 to 2004. Exploiting the MSA-level variation in house prices, the study finds that the average seller may be able to fetch a higher price (about three to four percent) for the house if she furnishes a state-mandated seller's property condition disclosure statement to the buyer.
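Propensity score matching, one of the estimators mentioned above, can be illustrated in its simplest nearest-neighbour form: pair each treated unit with the control whose score is closest, then average the outcome gaps. All numbers below are invented (this is not the paper's semi-parametric estimator):

```python
# (propensity score, price index); treated = MSAs with a disclosure law
treated = [(0.72, 210.0), (0.56, 198.0), (0.63, 205.0)]
controls = [(0.70, 202.0), (0.50, 193.0), (0.60, 199.0), (0.30, 180.0)]

gaps = []
for score_t, y_t in treated:
    # nearest-neighbour match on the propensity score
    _, y_c = min(controls, key=lambda c: abs(c[0] - score_t))
    gaps.append(y_t - y_c)

# Average treatment effect on the treated (ATT-style estimate)
att = sum(gaps) / len(gaps)
print(round(att, 2))
```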

Relevância:

80.00%

Publicador:

Resumo:

This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate stochastic-frontier panel-data models specifying a translog cost function. The specified model updates the cost frontier with new information as it becomes available over time, and can identify frontier cost improvements, returns to scale, and cost inefficiencies over time. The results disagree with most previous research in that we find no evidence of scale economies and some evidence of scale diseconomies. Moreover, we generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, higher leverage is associated with greater efficiency.

Relevância:

80.00%

Publicador:

Resumo:

Heavy (magnetic and non-magnetic) minerals are concentrated by natural processes in many fluvial, estuarine, coastal and shelf environments, with the potential to form economic placer deposits. Understanding the processes of heavy mineral transport and enrichment is a prerequisite for interpreting sediment magnetic properties in terms of hydro- and sediment dynamics. In this study, we combine rock magnetic and sedimentological laboratory measurements with numerical 3D discrete element models to investigate differential grain entrainment and transport rates of magnetic minerals in a range of coastal environments (riverbed, mouth, estuary, beach and near-shore). We analyzed grain-size distributions of representative bulk samples and of their magnetic mineral fractions to relate grain-size modes to the respective transport modes (traction, saltation, suspension). Rock magnetic measurements showed that distribution shapes, population sizes and grain-size offsets of bulk and magnetic mineral fractions hold information on the transport conditions and the enrichment process in each depositional environment. A downstream decrease in magnetite grain size and an increase in magnetite concentration were observed from riverine source to marine sink environments. Lower flow velocities permit differential settling of light and heavy mineral grains, creating heavy-mineral-enriched zones in estuary settings, while lighter minerals are washed out further into the sea. Numerical model results showed that higher heavy mineral concentrations in the bed increased the erosion rate and enhanced heavy mineral enrichment. In beach environments where sediments contained light and heavy mineral grains of equivalent grain sizes, the bed was found to be more stable, with a negligible amount of erosion compared to other bed compositions. Heavy mineral transport rates calculated for four different bed compositions showed that increasing the heavy mineral content in the bed decreased the transport rate.
There is always a lag in transport between light and heavy minerals, and it increases with higher heavy mineral concentration in all tested bed compositions. The results of the laboratory experiments were validated against the numerical models and showed good agreement. We demonstrate that the presented approach has the potential to investigate heavy mineral enrichment processes in a wide range of sedimentary settings.

Relevância:

80.00%

Publicador:

Resumo:

In the beginning of the 90s, ontology development was similar to an art: ontology developers did not have clear guidelines on how to build ontologies, but only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, and education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we comment on how all these elements can be used in the Linked Data initiative.

Relevância:

80.00%

Publicador:

Resumo:

The Internet of Things (IoT), as part of the Future Internet, has become one of the main research topics nowadays, partly thanks to the attention society is putting on the development of a particular kind of services (smart metering, smart grids, eHealth, etc.), and to the recent business forecasts that place some players, like telecom operators (which are desperately seeking new opportunities), at the forefront, pushing some interrelated technologies such as Machine-to-Machine (M2M) communications. In this context, an important number of research activities are currently taking place worldwide at different levels: sensor network communications, information processing, big-data storage, semantics, service-level architectures, etc. All of them, in isolation, are arriving at a level of maturity that makes the realization of the Internet of Things look less like a dream and more like a tangible goal. However, the aforementioned services cannot wait to be developed until holistic research actions deliver complete solutions. It is important to come out with intermediate results that avoid vertical solutions tailored to particular deployments.
In the present work, we focus on the creation of a service-level platform intended to facilitate, on the one hand, the integration of heterogeneous and geographically dispersed sensor and actuator networks (SANs), and on the other, the development of horizontal services using those networks and the information they provide. This enabler will be used for horizontal service development and for IoT experimentation. Prior to the definition of the platform, we carried out an extensive study targeting not just research works and projects but also standardization activities. The results can be summarized in the following assertions: a) the data models defined by the Sensor Web Enablement (SWE™) group of the Open Geospatial Consortium (OGC®) today represent the most complete solution for describing SANs and their observations; b) the OGC interfaces, despite limitations that require changes and extensions, could be used as the basis for accessing sensors and data; c) Next Generation Networks (NGN) offer a good substrate that facilitates the integration of SANs and the development of services. Consequently, a new service-layer platform, called Ubiquitous Sensor Networks (USN), has been defined in this Thesis, trying to contribute to filling the gaps identified above. The main highlights of the proposed USN Platform are: a) from an architectural point of view, it follows a two-layer approach (Enabler and Gateway), similar to other enablers that run on top of NGN (like the OMA Presence enabler); b) its data models and interfaces are based on the OGC SWE standards; c) it is integrated in NGN but can also be used without it, over open IP infrastructures; d) its main functions are: sensor discovery, observation storage, publish-subscribe-notify, homogeneous remote execution, security, data dictionary handling, monitoring facilities, authorization support, protocol conversion utilities, synchronous and asynchronous interactions, streaming support and basic resource arbitration.
In order to demonstrate the functionalities that the proposed USN Platform can offer to future IoT scenarios, experimental results are presented from three real-life, small-scale proofs of concept (smart metering, smart places and environmental monitoring) and from a study on semantics (an in-vehicle information system). Furthermore, the proposed USN Platform is currently being used as an enabler to develop both experimentation and real services in the SmartSantander EU project (which aims at integrating around 20,000 IoT devices).
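The publish-subscribe-notify function at the heart of such an enabler can be sketched as a minimal in-memory broker. The classes below are invented for illustration; the actual platform exposes OGC SWE-based interfaces over NGN rather than Python callbacks:

```python
class SensorBroker:
    """Invented minimal broker: services subscribe to a phenomenon and are
    notified whenever a sensor publishes an observation of it."""

    def __init__(self):
        self.subscribers = {}  # phenomenon -> list of notification callbacks

    def subscribe(self, phenomenon, callback):
        self.subscribers.setdefault(phenomenon, []).append(callback)

    def publish(self, phenomenon, observation):
        # Notify every service subscribed to this phenomenon
        for notify in self.subscribers.get(phenomenon, []):
            notify(observation)

received = []
broker = SensorBroker()
broker.subscribe("temperature", received.append)
broker.publish("temperature", {"sensor": "node-17", "value": 21.5})
broker.publish("humidity", {"sensor": "node-17", "value": 0.4})  # no subscriber

print(received)  # only the temperature observation was delivered
```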

Relevância:

80.00%

Publicador:

Resumo:

The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated using models with different degrees of detail, ranging from a two-dimensional (2D) finite element model to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated from power spectral density (PSD) distributions and taken into account in the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem arising from the contact nonlinearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as to assess the validity of evaluating the dynamic loading on the track with simplified 2D models.
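Generating elevation irregularities from a PSD is commonly done with the spectral representation method: superpose cosines whose amplitudes carry the PSD's energy in each wavenumber band and whose phases are random. A sketch with an invented PSD, not necessarily the profile model used in the paper:

```python
import math
import random

random.seed(0)

def irregularity_profile(x_points, psd, k_min, k_max, n_waves=200):
    """Superpose cosines with random phases; each amplitude carries the
    PSD energy of its wavenumber band: a_i = sqrt(2 * S(k_i) * dk)."""
    dk = (k_max - k_min) / n_waves
    waves = []
    for i in range(n_waves):
        k = k_min + (i + 0.5) * dk  # band-centre wavenumber (rad/m)
        amp = math.sqrt(2 * psd(k) * dk)
        waves.append((k, amp, random.uniform(0.0, 2.0 * math.pi)))
    return [sum(a * math.cos(k * x + ph) for k, a, ph in waves) for x in x_points]

def psd(k):
    return 1e-6 / k ** 3  # illustrative decaying PSD (invented parameters)

xs = [i * 0.5 for i in range(400)]  # 200 m of track at 0.5 m spacing
z = irregularity_profile(xs, psd, 0.05, 2.0)
print(round(max(abs(v) for v in z), 4))
```

The variance of the synthesized profile approximates the integral of the PSD over the chosen wavenumber band, so different irregularity classes can be produced simply by swapping the PSD function.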

Relevância:

80.00%

Publicador:

Resumo:

We propose a linear regression method for estimating Weibull parameters from life tests. The method uses stochastic models of the unreliability at each failure instant. As a result, a heteroscedastic regression problem arises, which is solved by weighted least squares minimization. The main feature of our method is an innovative s-normalization of the failure data models, which yields analytic expressions for the centers and weights of the regression. The method has been contrasted, via Monte Carlo simulation, with Benard's approximation and with Maximum Likelihood Estimation, and it obtains the highest global scores for robustness and performance.
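As a point of reference, Benard's approximation fits the Weibull line by ordinary least squares on median-rank plotting positions. The sketch below implements that classic unweighted baseline, not the proposed s-normalized weighted method, on data simulated with invented parameters:

```python
import math
import random

random.seed(7)

def weibull_fit_benard(times):
    """OLS fit of ln(-ln(1 - F_i)) on ln(t_i), with Benard's median-rank
    plotting positions F_i = (i - 0.3) / (n + 0.4); returns (shape, scale)."""
    t = sorted(times)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]
    mx = sum(xs) / n
    my = sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)  # intercept of the line is -beta * ln(eta)
    return beta, eta

# Simulated failure times from Weibull(shape=2, scale=100), inverse transform
true_beta, true_eta = 2.0, 100.0
times = [true_eta * (-math.log(random.random())) ** (1 / true_beta)
         for _ in range(500)]

beta_hat, eta_hat = weibull_fit_benard(times)
print(round(beta_hat, 2), round(eta_hat, 1))
```

With a complete (uncensored) sample of this size, the baseline already lands near the true parameters; the paper's contribution lies in weighting the regression to account for the heteroscedastic scatter of the plotting positions.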