963 results for COUNT DATA MODELS


Relevance: 80.00%

Abstract:

Background: Emergency department frequent users (EDFUs) account for a disproportionately high number of emergency department (ED) visits, contributing to overcrowding and high health-care costs. At the Lausanne University Hospital, EDFUs account for only 4.4% of ED patients but 12.1% of all ED visits. Our study tested the hypothesis that an interdisciplinary case management intervention reduces the number of ED visits. Methods: In this randomized controlled trial, we allocated adult EDFUs (5 or more visits in the previous 12 months) who visited the ED of the University Hospital of Lausanne, Switzerland, between May 2012 and July 2013 either to an intervention (N=125) or a standard emergency care (N=125) group and monitored them for 12 months. Randomization was computer-generated and concealed, and patients and research staff were blinded to the allocation. Participants in the intervention group, in addition to standard emergency care, received case management from an interdisciplinary team at baseline and at 1, 3, and 5 months, in the hospital, in the ambulatory care setting, or at their homes. A generalized linear mixed-effects model for count data (Poisson distribution) was applied to compare participants' numbers of ED visits during the 12 months preceding recruitment (Period 1, P1) with the numbers of visits during the 12 months monitored (Period 2, P2).

Relevance: 80.00%

Abstract:

This paper reviews and extends our previous work to enable fast axonal diameter mapping from diffusion MRI data in the presence of multiple fibre populations within a voxel. Most existing microstructure imaging techniques use non-linear algorithms to fit their data models and are consequently computationally expensive and usually slow. Moreover, most of them assume a single axon orientation, while numerous regions of the brain actually present more complex configurations, e.g. fibre crossings. We present a flexible framework, based on convex optimisation, that enables fast and accurate reconstruction of the microstructure organisation, not limited to areas where the white matter is coherently oriented. We show through numerical simulations the ability of our method to correctly estimate microstructure features (mean axon diameter and intra-cellular volume fraction) in crossing regions.
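The convex reformulation can be illustrated with a toy dictionary fit. The sketch below uses made-up exponential "atoms" rather than a real diffusion signal model; the point is only that, once the voxel signal is expressed as a nonnegative combination of precomputed responses (one per candidate axon diameter), the fit reduces to a fast convex problem, here nonnegative least squares.

```python
# Hedged sketch of the convex idea: fit a voxel signal as a nonnegative
# combination of precomputed response atoms, one per candidate diameter.
# The exponential atoms below are toy stand-ins, not a real diffusion model.
import numpy as np
from scipy.optimize import nnls

b = np.linspace(0, 3, 30)                    # acquisition parameters (a.u.)
diameters = np.array([1.0, 2.0, 4.0, 8.0])   # candidate axon diameters (um)
A = np.exp(-np.outer(b, 1.0 / diameters))    # dictionary: one column per atom

w_true = np.array([0.0, 0.7, 0.3, 0.0])      # true volume fractions
y = A @ w_true + 1e-4 * np.random.default_rng(1).standard_normal(len(b))

w, _ = nnls(A, y)                            # convex: fast, no local minima
mean_diam = diameters @ (w / w.sum())        # weighted mean axon diameter
print(w, mean_diam)
```

Because the problem is convex, there is no dependence on a starting point and no risk of local minima, which is what makes this class of reconstruction fast compared with non-linear fitting.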

Relevance: 80.00%

Abstract:

The aim of this work was to develop a Wedge-based analysis tool for Stora Enso Oyj's Imatra mills that links process and emission data better than is currently possible. Emission measurements, the relevant process measurements, and the calculations needed to predict effluent load from the state of the production processes were defined in Wedge. Computational models were built for the wastewaters routed to the chemical and biological wastewater treatment plants and compared with measured values. For the wastewaters going to the chemical treatment plant, a model was built for wastewater flow. For the wastewaters going to the biological treatment plant, models were built for wastewater flow and for the COD, AOX, and elemental loads; the elements included were sodium, sulphur, and chlorine. The theoretical part discusses water use at pulp and paper mills, the wastewater loads of the mill's different departments, wastewater treatment methods, and methods for processing process data. The experimental part presents the relationship between the measured values and the computational models; most of the models track the measured values reasonably well. The experimental part also illustrates, with examples, how the models can be exploited. The benefit of the work is a more accurate and faster separation of normal load variation from incidental emission peaks. In the long term, the Wedge tool makes it possible to target wastewater load reduction measures at the most relevant sources.

Relevance: 80.00%

Abstract:

The main objective of this thesis is to examine and model a configuration system and its related processes: when and where configuration information is created in the product development process, and how it is utilized in the order-delivery process. From the information point of view, these two processes are the essential parts of the whole configuration system. The empirical part of the work was carried out as constructive research inside a company that follows a mass customization approach. Data models and documentation were created for the different development stages of the configuration system. A base data model already existed for the new structures and the relations between them, and it was used as the basis for the later data modeling work. The data models cover the different data structures, their key objects and attributes, and the relations between them. Representation of the configuration rules for the to-be configuration system was defined as one of the key focus points. Further, it is examined how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
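The requirement-to-structure linkage described above can be sketched with a minimal data model. All class and field names below are illustrative assumptions, not the company's actual model; the point is only that requirements attach to features, and features attach to nodes of the physical design structure.

```python
# Hedged sketch of the requirement -> feature -> design-structure linkage.
# Every name here is a hypothetical example, not the thesis's data model.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str                            # requirement identifier
    text: str

@dataclass
class Feature:                          # configuration-relevant characteristic
    name: str
    requirements: list = field(default_factory=list)

@dataclass
class DesignStructure:                  # node in the physical product structure
    name: str
    features: list = field(default_factory=list)

r = Requirement("REQ-12", "Cabinet must withstand 40 C ambient temperature")
f = Feature("cooling-capacity", requirements=[r])
node = DesignStructure("cooling-module", features=[f])

# Traceability query: which requirements does this design node satisfy?
traced = [req.rid for feat in node.features for req in feat.requirements]
print(traced)   # ['REQ-12']
```

A model like this is what makes the traceability claim concrete: given any design node, one can walk through its features back to the individual requirement specifications.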

Relevance: 80.00%

Abstract:

The politics of intergovernmental transfers in Brazil. This article examines the political economy of public resources distribution in Brazil's federal system in 1985-2004. We propose an empirical exercise to analyze how the country's federal governments deal with the tradeoff between the provision of material wellbeing to sub-national governments (the states in our study) and the pursuit of political support from the latter. To identify the determinants of the transfer of resources from the federal government to the states, a set of economic, political, and institutional variables is econometrically tested. Based upon instrumental variables estimation for panel-data models, our estimates indicate that in Brazil the pursuit of political goals prevails over social equity and economic efficiency criteria: higher levels of per capita transfers are associated with the political makeup of governing coalitions, while larger investments in infrastructure and development by the states are associated with a lower amount of per capita resources transferred to sub-national governments. Our findings also suggest a trend toward the freezing of interregional inequalities in Brazil, and show the relevance of fiscal discipline laws in discouraging the use of the administrative apparatus for electioneering.
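The instrumental-variables logic behind these estimates can be sketched as textbook two-stage least squares on synthetic data. The article's actual panel specification, instruments, and variables are richer than this toy example; the variable names below are invented for illustration.

```python
# Hedged sketch: two-stage least squares in NumPy on synthetic state-level
# data. The paper's panel-data IV specification is richer than this.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.standard_normal(n)              # instrument (hypothetical)
u = rng.standard_normal(n)              # unobserved confounder
x = 0.8 * z + 0.5 * u + 0.3 * rng.standard_normal(n)   # endogenous regressor
y = 1.5 * x + u + 0.3 * rng.standard_normal(n)         # outcome (e.g. transfers)

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])

# Stage 1: project the endogenous regressor on the instruments.
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
# Stage 2: regress the outcome on the fitted values.
beta_iv = np.linalg.lstsq(Xhat, y, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_iv[1], beta_ols[1])          # IV near 1.5; OLS biased upward here
```

The comparison of the two slope estimates shows why instrumenting matters: the naive OLS coefficient absorbs the confounder, while the two-stage estimate recovers the structural effect.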

Relevance: 80.00%

Abstract:

My thesis consists of three essays on bootstrap inference, both in panel data models and in models with many instrumental variables (IV), a large number of which may be weak. Since asymptotic theory is not always a good approximation to the sampling distribution of estimators and test statistics, I consider the bootstrap as an alternative. These essays study the asymptotic validity of existing bootstrap procedures and, where they are invalid, propose new valid bootstrap methods.

The first chapter (co-authored with Sílvia Gonçalves) studies the validity of the bootstrap for inference in a linear, dynamic, stationary panel data model with fixed effects. We consider three bootstrap methods: the recursive-design bootstrap, the fixed-design bootstrap, and the pairs bootstrap. These methods are natural generalizations to the panel context of the bootstrap methods considered by Gonçalves and Kilian (2004) for autoregressive time-series models. We show that the OLS estimator obtained under the recursive-design bootstrap contains a built-in term that mimics the bias of the original estimator. This contrasts with the fixed-design bootstrap and the pairs bootstrap, whose distributions are incorrectly centered at zero. However, the recursive-design bootstrap and the pairs bootstrap are asymptotically valid when applied to the bias-corrected estimator, unlike the fixed-design bootstrap. In simulations, the recursive-design bootstrap is the method that produces the best results.

The second chapter extends the pairs bootstrap results to dynamic nonlinear panel models with fixed effects. These models are often estimated by maximum likelihood (ML), which also suffers from a bias. Recently, Dhaene and Jochmans (2014) proposed the split-jackknife estimation method. Although these estimators have normal asymptotic approximations centered on the true parameter, serious finite-sample distortions remain. Dhaene and Jochmans (2014) proposed the pairs bootstrap as an alternative in this context, without any theoretical justification. To fill this gap, I show that this method is asymptotically valid when used to estimate the distribution of the split-jackknife estimator, although it cannot estimate the distribution of the ML estimator. Monte Carlo simulations show that bootstrap confidence intervals based on the split-jackknife estimator greatly reduce the distortions of the normal approximation in finite samples. In addition, I apply this bootstrap method to a model of female labour-force participation to construct valid confidence intervals.

In the last chapter (co-authored with Wenjie Wang), we study the asymptotic validity of bootstrap procedures for models with many instrumental variables (IV), a large number of which may be weak. We show analytically that a standard residual-based bootstrap and the restricted efficient (RE) bootstrap of Davidson and MacKinnon (2008, 2010, 2014) cannot estimate the limiting distribution of the limited-information maximum likelihood (LIML) estimator. The main reason is that they fail to mimic the parameter that characterizes the strength of identification in the sample. We therefore propose a modified bootstrap method that consistently estimates this limiting distribution. Our simulations show that the modified bootstrap considerably reduces the finite-sample distortions of asymptotic Wald-type (t) tests, especially when the degree of endogeneity is high.
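The pairs bootstrap studied in the first chapter can be sketched for a static linear panel: resample whole cross-sectional units with replacement and recompute the fixed-effects (within) estimator on each draw. This is a minimal illustration on synthetic data, not the thesis's dynamic setting, where the bias issues discussed above arise.

```python
# Hedged sketch of the pairs (cross-section) bootstrap for a static linear
# panel with fixed effects; synthetic data, within (fixed-effects) estimator.
import numpy as np

rng = np.random.default_rng(3)
N, T = 100, 8
alpha = rng.standard_normal(N)               # unit fixed effects
x = rng.standard_normal((N, T)) + alpha[:, None]
y = 2.0 * x + alpha[:, None] + rng.standard_normal((N, T))

def within_estimator(x, y):
    xd = x - x.mean(axis=1, keepdims=True)   # demean within each unit
    yd = y - y.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

beta_hat = within_estimator(x, y)
boot = np.empty(499)
for b in range(499):
    idx = rng.integers(0, N, N)              # draw units, keep their full T path
    boot[b] = within_estimator(x[idx], y[idx])

se = boot.std(ddof=1)                        # pairs-bootstrap standard error
print(beta_hat, se)
```

Resampling entire unit paths (rather than individual observations) preserves the within-unit dependence structure, which is the defining feature of the pairs bootstrap in panels.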

Relevance: 80.00%

Abstract:

Nowadays, the oceanographic and geospatial communities are closely related worlds, yet they follow parallel paths in data storage, distribution, modelling, and data analysis. This situation produces different data-model implementations for the same features. While geospatial information systems work with 2 or 3 dimensions, oceanographic models use multidimensional parameters such as temperature, salinity, currents, or ocean colour. This implies significant differences between the data models of the two communities and leads to difficulties in dataset analysis for both sciences. These troubles directly affect the Mediterranean Institute for Advanced Studies (IMEDEA, CSIC-UIB): researchers at this institute perform intensive processing of data from oceanographic instruments such as CTDs, moorings, and gliders, together with geospatial data collected for the integrated management of coastal zones. In this paper, we present a solution based on THREDDS (Thematic Real-time Environmental Distributed Data Services). THREDDS allows data access through the standard geospatial data protocol Web Coverage Service, within the European project ECOOP (European Coastal Sea Operational Observing and Forecasting system). The goal of ECOOP is to consolidate, integrate, and further develop existing European coastal and regional-sea operational observing and forecasting systems into an integrated pan-European system targeted at detecting environmental and climate changes.

Relevance: 80.00%

Abstract:

This paper examines the fiscal sustainability hypothesis for 8 Latin American countries. Using a panel data model, we determine whether government revenues and primary spending between 1960 and 2009 are cointegrated, that is, whether they are sustainable in the long run. To this end, second-generation panel unit-root and cointegration tests for macroeconomic panel data are used, which allow for cross-sectional dependence among the countries as well as possible structural breaks in the relationship, determined endogenously; in particular, the stationarity test of Hadri and Rao (2008) and the cointegration test of Westerlund (2006) are employed. The analysis finds empirical evidence that, over the period under study, the primary deficit in the 8 Latin American countries is sustainable, but only in a weak sense.

Relevance: 80.00%

Abstract:

The crisis that broke out in the US mortgage market in 2008 and spread throughout the entire financial system exposed the degree of interconnection that currently exists among the sector's institutions and their links with the productive sector. It made clear the need to identify and characterize the systemic risk inherent in the system, so that regulators can pursue the stability of individual institutions as well as of the system as a whole. Through a model that combines the informative power of networks with a spatial autoregressive (panel-type) specification, this paper shows the importance of adding to the micro-prudential approach (proposed in Basel II) a variable that captures the effect of being connected to other institutions, thereby carrying out a macro-prudential analysis (proposed in Basel III).

Relevance: 80.00%

Abstract:

We propose and estimate a financial distress model that explicitly accounts for the interactions or spill-over effects between financial institutions, through the use of a spatial contiguity matrix built from financial network data on interbank transactions. This setup of the financial distress model allows for the empirical validation of the importance of network externalities in determining financial distress, in addition to institution-specific and macroeconomic covariates. The relevance of this specification is that it simultaneously incorporates micro-prudential factors (Basel II) as well as macro-prudential and systemic factors (Basel III) as determinants of financial distress. Results indicate that network externalities are an important determinant of the financial health of financial institutions. The parameter that measures the effect of network externalities is both economically and statistically significant, and its inclusion as a risk factor reduces the importance of firm-specific variables such as the size or degree of leverage of the financial institution. In addition, we analyze the policy implications of the network factor model for capital requirements and deposit insurance pricing.

Relevance: 80.00%

Abstract:

The growing importance of GIS applications in public administrations, both Spanish and European, has given rise to several free-software development projects, each aimed at a particular sector of users. Each of these projects defines a conceptual data model to store information, services or modules to access that information, and the functionality offered to the user. Most of the time these projects are developed independently, even though there are clear interrelations among them, such as sharing parts of the data model, a common interest in supporting municipal management applications, or building on the same components. These reasons recommend seeking convergence among the projects in order to avoid duplicated development and to favour their integration and interoperability. For this purpose, the signergias network was set up in January 2009 to keep the architecture and development leads of these projects in contact, in order to analyse the possibilities for convergence and to reach agreements that make it possible to share data models, jointly define services and functionality, or exchange software components. This article describes the motivation for creating the network, its objectives, the way it operates, and the results achieved.

Relevance: 80.00%

Abstract:

1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. 
Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
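The linear-interpolation index that the study uses as a baseline can be sketched as follows, assuming one site, weekly visits, and a single flight period; all numbers are synthetic, and the regional GAM alternative is not reproduced here.

```python
# Hedged sketch of the linear-interpolation abundance index: fill missing
# weekly counts by interpolation, then sum the seasonal curve. Synthetic data.
import numpy as np

weeks = np.arange(26)                          # monitoring season, weekly visits
true_counts = 40 * np.exp(-0.5 * ((weeks - 13) / 4.0) ** 2)   # one flight peak

rng = np.random.default_rng(5)
observed = true_counts.copy()
missing = rng.random(26) < 0.3                 # ~30% of visits missed
observed[missing] = np.nan

filled = observed.copy()
ok = ~np.isnan(observed)
filled[~ok] = np.interp(weeks[~ok], weeks[ok], observed[ok])  # interpolation

index_full = true_counts.sum()                 # index with no missing data
index_interp = filled.sum()                    # index after interpolation
print(index_full, index_interp)
```

With moderate missingness the interpolated index stays close to the complete-data index, but as the study shows, its error and bias grow substantially once more than about half of the counts are missing, which is where the regional GAM approach pays off.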

Relevance: 80.00%

Abstract:

We review several asymmetrical links for binary regression models and present a unified approach for two skew-probit links proposed in the literature. Moreover, under the skew-probit link, conditions for the existence of the ML estimators and of the posterior distribution under improper priors are established. The framework proposed here considers two sets of latent variables that help to implement the Bayesian MCMC approach. A simulation study of model-comparison criteria is conducted, and two applications are presented. Using different Bayesian criteria, we show that, for these data sets, the skew-probit links are better than alternative links proposed in the literature.
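The skew-probit link itself can be sketched by using the skew-normal CDF as the inverse link and fitting by direct maximum likelihood. This is only an illustration of the asymmetric link on synthetic data; the authors' actual approach is Bayesian MCMC with two sets of latent variables, which this sketch does not reproduce.

```python
# Hedged sketch: skew-probit binary regression fitted by direct ML, with the
# skew-normal CDF as the asymmetric link. Not the authors' MCMC scheme.
import numpy as np
from scipy.stats import skewnorm
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 1500
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
shape_true, beta_true = 3.0, np.array([0.2, 1.0])
p = skewnorm.cdf(X @ beta_true, a=shape_true)     # asymmetric success probability
y = (rng.random(n) < p).astype(float)

def nll(theta):
    """Negative Bernoulli log-likelihood under the skew-probit link."""
    b, a = theta[:2], theta[2]
    q = np.clip(skewnorm.cdf(X @ b, a=a), 1e-10, 1 - 1e-10)
    return -(y * np.log(q) + (1 - y) * np.log(1 - q)).sum()

res = minimize(nll, x0=np.array([0.0, 0.5, 0.0]), method="Nelder-Mead",
               options={"maxiter": 2000})
print(res.x)   # (intercept, slope, skewness shape)
```

Note that the skewness shape is only weakly identified in finite samples, one reason the literature pays so much attention to existence conditions and prior choice for this link.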

Relevance: 80.00%

Abstract:

In this article, we introduce a semi-parametric Bayesian approach based on Dirichlet process priors for the discrete calibration problem in binomial regression models. A motivating application is the dosimetry problem related to the dose-response model. A hierarchical formulation is provided, and a Markov chain Monte Carlo sampling scheme is developed. The methodology is applied to simulated and real data.
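The Dirichlet process prior at the heart of the approach can be sketched via its stick-breaking construction; this is the standard representation of a DP draw, not the authors' specific hierarchical implementation, and the base measure below is an arbitrary choice for illustration.

```python
# Hedged sketch of a Dirichlet process draw via stick-breaking:
# w_k = v_k * prod_{j<k}(1 - v_j), with v_k ~ Beta(1, alpha), and atoms
# drawn from a base measure G0 (standard normal here, purely illustrative).
import numpy as np

rng = np.random.default_rng(7)
alpha, K = 2.0, 200                        # concentration; truncation level

v = rng.beta(1.0, alpha, size=K)
w = v * np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])   # stick weights
atoms = rng.standard_normal(K)             # draws from the base measure G0

# A draw G ~ DP(alpha, G0) is the discrete measure sum_k w_k * delta(atoms_k).
print(w.sum())                             # ~1 for a deep enough truncation
```

The discreteness of such draws is what makes the DP attractive for the discrete calibration problem: the unknown quantity is modelled flexibly without committing to a parametric family.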

Relevance: 80.00%

Abstract:

In this paper, we propose a two-step estimator for panel data models in which a binary covariate is endogenous. In the first stage, a random-effects probit model is estimated, having the endogenous variable as the left-hand side variable. Correction terms are then constructed and included in the main regression.