963 results for COUNT DATA MODELS
Abstract:
Master's dissertation, UnB/UFPB/UFRN, Multi-Institutional and Inter-Regional Graduate Program in Accounting Sciences, 2016.
Abstract:
Obesity has been classified by the World Health Organization as a worldwide epidemic. The issue is a growing field in economics due to the pathologies associated with overweight, its significant impact on healthcare costs, and the consequent deterioration of welfare. This paper presents an analysis of results from the National Survey of Risk Factors aimed at identifying the role of socioeconomic conditions in obesity and overweight, based on panel data models. The results indicate that income level and a sedentary lifestyle positively influence obesity and overweight, whereas education and medical coverage are not relevant in explaining differences in overweight prevalence between provinces, but become significant for variations in obesity rates.
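A minimal sketch of the kind of province-level fixed-effects panel regression described above, using statsmodels; the data and variable names (income, sedentary, province) are assumptions for illustration, not the survey's actual fields.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical province-year panel: obesity prevalence explained by income and
# a sedentary-lifestyle share, with province fixed effects via C(province).
rng = np.random.default_rng(0)
provinces, years = 24, 4
df = pd.DataFrame({
    "province": np.repeat(np.arange(provinces), years),
    "income": rng.normal(10, 2, provinces * years),
    "sedentary": rng.uniform(0.2, 0.8, provinces * years),
})
df["obesity"] = 5 + 0.4 * df["income"] + 8 * df["sedentary"] + rng.normal(0, 1, len(df))

fe_fit = smf.ols("obesity ~ income + sedentary + C(province)", data=df).fit()
print(fe_fit.params[["income", "sedentary"]])
```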
Abstract:
As part of the Portuguese State's obligation to guarantee the safety of its citizens, an assessment is carried out, in countries or regions where there are national communities, of the risk to the lives of nationals residing or present there; under customary international law, it is understood that a military intervention to extract non-combatant nationals from such risk zones may legitimately be carried out. This work aims to contribute to a reflection on geospatial support for an operation to extract non-combatant national citizens, known as a NEO (non-combatant evacuation operation). Given the importance of a holistic understanding of the operational environment for military commanders, Geographic Information Systems play a fundamental role in the analysis, contextualization, and visualization of geospatial information, constituting a valuable decision-support system. Decision-making draws on contributions from several areas of knowledge, and it is essential that planning be based on the same geospatial information, avoiding a multitude of geospatial datasets that are not always coherent, up to date, and accessible to all who need them; this work intends to contribute to solving this problem. It also addresses the scarcity of geographic data in the areas where this type of operation may take place, the relevance and suitability of using open spatial data, data models, and the way in which the information can be made available.
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey. The first essay used data from 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which served as a proxy for innovation. The essay demonstrated that employee well-being positively affects a firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being. The second essay used data from 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a link between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator on repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily desirable, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals received from the market. There was also no reason to expect the speed of learning to be related to the growth of the firm in one direction over another. The third essay used data from 2004-2010 and examined the timing of diversification by business start-ups. It captured when a start-up diversified for the first time and explored the association between an early diversification strategy and the firm's survival rate. A semi-parametric Cox proportional hazards model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
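As an illustration of the count-data approach in the first essay, the sketch below fits a negative binomial regression with firm dummies (one common way to approximate fixed effects) to synthetic patent counts; all data and variable names are hypothetical, not the Kauffman Firm Survey variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical firm-year panel of patent/copyright counts (proxy for innovation).
rng = np.random.default_rng(0)
n_firms, n_years = 40, 5
firm = np.repeat(np.arange(n_firms), n_years)
wellbeing = rng.uniform(0, 1, n_firms * n_years)   # employee well-being index
leverage = rng.uniform(0, 1, n_firms * n_years)    # debt / total financial resources
mu = np.exp(0.3 + 0.8 * wellbeing - 0.6 * leverage + rng.normal(0, 0.3, n_firms)[firm])
patents = rng.poisson(mu)

# Firm dummies approximate fixed effects; NegativeBinomial handles overdispersed counts.
X = pd.DataFrame({"wellbeing": wellbeing, "leverage": leverage})
X = pd.concat([X, pd.get_dummies(firm, prefix="firm", drop_first=True, dtype=float)], axis=1)
X = sm.add_constant(X)

nb_fit = sm.NegativeBinomial(patents, X).fit(disp=False, maxiter=500)
print(nb_fit.params[["wellbeing", "leverage"]])
```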
Abstract:
Predicting user behaviour enables user assistant services to provide personalized services to users. This requires a comprehensive user model that can be created by monitoring user interactions and activities. BaranC is a framework that performs user interface (UI) monitoring (and collects all associated context data), builds a user model, and supports services that make use of the user model. A prediction service, Next-App, is built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model for the user, and finally predicts, based on the user model and current context, what application(s) the user is likely to want to use. The prediction is proactive and dynamic: it reflects the current context and also responds to changes in the user model, as might occur over time as a user's habits change. Initial evaluation of Next-App indicates a high level of satisfaction with the service.
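A toy sketch of context-conditioned next-app prediction in the spirit described above: count which app follows each (hour-of-day, last-app) context and return the most frequent candidates. This is an assumed illustration, not Next-App's actual model, and all names are hypothetical.

```python
from collections import Counter, defaultdict

class NextAppPredictor:
    """Frequency-based next-app prediction keyed on a simple context tuple."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, hour: int, last_app: str, next_app: str) -> None:
        # Record one observed transition for the (hour, last_app) context.
        self.counts[(hour, last_app)][next_app] += 1

    def predict(self, hour: int, last_app: str, k: int = 3):
        """Return up to k most likely next apps for the given context."""
        return [app for app, _ in self.counts[(hour, last_app)].most_common(k)]

p = NextAppPredictor()
p.observe(8, "Mail", "Calendar")
p.observe(8, "Mail", "Calendar")
p.observe(8, "Mail", "Browser")
print(p.predict(8, "Mail"))   # -> ['Calendar', 'Browser']
```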
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. smartphone), and also collects all available context data (such as from sensors in the digital device itself, in a smart watch, or in smart appliances) in order to build a full model of user application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example, a service to produce activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. As BaranC supports continuous user monitoring, an application can be dynamically adaptive in real-time to the current context (e.g. time, location or activity). Furthermore, since BaranC is continuously augmenting the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework where the collection of data for the UDI and all sharing of the UDI data are kept strictly under the user's control. In addition, being service-oriented allows (with the user's permission) its monitoring and analysis services to be easily used by 3rd parties in order to provide 3rd party adaptive assistant services. An example 3rd party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, what apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model of monitoring and use of personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
Abstract:
Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they often remain fuzzy or under-specified; this is a pervasive problem in software engineering and artificial intelligence. Thus, we find semantic links that admit multiple interpretations in wide-coverage ontologies, semantic data models with abstractions insufficient to capture the relational richness of problem domains, and improperly structured taxonomies. However, if relations are given precise semantics, some of these problems can be avoided and meaningful operations can be performed on them. In this paper we present key issues in the modeling, representation, and usage of relations, including the available taxonomy-structuring methodologies and the initiatives that aim to provide relations with precise semantics. Moreover, we explain and propose the control of relations as a key issue for the coherent construction of ontologies.
Abstract:
Effective and efficient implementation of intelligent and recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive environment by shortening the manufacturing cycle time and maintaining production flexibility through the availability of several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a proper plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach to integrate manufacturing functions in a distributed manner; its fundamental framework and functions are presented. Moreover, an ontology has been constructed using the Protégé software, which can convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with an evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
Abstract:
Solar radiation data is crucial for the design of energy systems based on the solar resource. Since diffuse radiation measurements are not always available in archived data series, whether due to the absence of measuring equipment, shading device misplacement, or missing data, models to generate these data are needed. In this work, one year of hourly and daily horizontal solar global and diffuse irradiation measurements in Évora is used to establish a new relation between the diffuse radiation and the clearness index. The proposed model includes a fitting parameter, which was adjusted through a simple optimization procedure to minimize the least-square error with respect to the measurements. A comparison against several other fitting models from the literature was also carried out using the root mean square error as the statistical indicator, and it was found that the present model is more accurate than the previous fitting models for the diffuse radiation data in Évora.
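A minimal sketch of the fitting procedure described above, assuming a hypothetical one-parameter sigmoid relation between the diffuse fraction and the clearness index; the paper's actual functional form and measurements are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def diffuse_fraction(kt, a):
    """Diffuse fraction as a smooth decreasing function of the clearness index (assumed form)."""
    return 1.0 / (1.0 + np.exp(a * (kt - 0.5)))

# Hypothetical hourly measurements: clearness index and observed diffuse fraction.
kt_obs = np.linspace(0.05, 0.85, 50)
kd_obs = diffuse_fraction(kt_obs, 8.0) + np.random.default_rng(1).normal(0, 0.02, kt_obs.size)

# Least-squares adjustment of the single fitting parameter.
(a_hat,), _ = curve_fit(diffuse_fraction, kt_obs, kd_obs, p0=[5.0])

rmse = np.sqrt(np.mean((diffuse_fraction(kt_obs, a_hat) - kd_obs) ** 2))
print(f"fitted a = {a_hat:.2f}, RMSE = {rmse:.4f}")
```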
Abstract:
The first paper sheds light on the informational content of high frequency data and daily data. I assess the economic value of the two families of models by comparing their performance in forecasting asset volatility through the Value at Risk metric. In running the comparison, this paper introduces two key assumptions: jumps in prices and a leverage effect in volatility dynamics. Findings suggest that high frequency data models do not exhibit superior performance over daily data models. In the second paper, building on Majewski et al. (2015), I propose an affine discrete-time model, labeled VARG-J, which is characterized by a multifactor volatility specification. In the VARG-J model, volatility experiences periods of extreme movements through a jump factor modeled as an Autoregressive Gamma Zero process. Estimation under the historical measure is carried out by quasi-maximum likelihood and the Extended Kalman Filter. This strategy allows both volatility factors to be filtered out by introducing a measurement equation that relates realized volatility to latent volatility. The risk premia parameters are calibrated using call options written on the S&P500 Index. The results clearly illustrate the important contribution of the jump factor to the pricing performance of options and the economic significance of the volatility jump risk premia. In the third paper, I analyze whether there is empirical evidence of contagion at the bank level, measuring the direction and the size of contagion transmission between European markets. In order to understand and quantify contagion transmission in the banking market, I estimate the econometric model of Aït-Sahalia et al. (2015), in which contagion is defined as the within- and between-country transmission of shocks and asset returns are modeled directly as a Hawkes jump diffusion process. The empirical analysis indicates clear evidence of contagion from Greece to European countries as well as self-contagion in all countries.
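For the comparison criterion in the first paper, the sketch below shows how a one-day-ahead volatility forecast can be turned into a Value at Risk figure and backtested by counting violations; the returns, forecasts, and normal assumption are placeholders, not the models actually compared in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.01, 500)                 # daily returns (hypothetical)
sigma_forecast = np.full(500, 0.01)                  # stand-in for a model's volatility forecasts
alpha = 0.01                                         # 1% VaR level

# VaR as a positive loss threshold under a normal assumption, then count violations.
var_1d = -(0.0 + sigma_forecast * norm.ppf(alpha))
violation_rate = np.mean(returns < -var_1d)
print(f"observed violation rate: {violation_rate:.3f} (nominal {alpha:.3f})")
```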
Abstract:
Urbanization has often been linked to negative consequences. Traffic light systems in urban arterial networks play an essential role in the operation of transport systems. The availability of new Intelligent Transportation System innovations has paved the way for connecting vehicles and road infrastructure. GLOSA, or Green Light Optimal Speed Advisory, is a recent application of vehicle-to-everything (V2X) technology. This thesis examined the GLOSA system's potential as a tool for traffic signal optimization. GLOSA serves as an advisory to drivers, informing them of the speed they must maintain to reduce waiting time. The study area considered in this thesis is the Via Aurelio Saffi – Via Emilia Ponente corridor in the Metropolitan City of Bologna, which has several signalized intersections. Several simulation runs were performed in the SUMOPy software for each peak-hour period (morning and afternoon) using recent measured traffic count data. GLOSA devices were placed at a 300 m GLOSA distance. In the morning peak hour, GLOSA outperformed the actuated traffic signal control, which is the baseline scenario, in terms of average waiting time, average speed, average fuel consumption per vehicle, and average CO2 emissions. A remarkable 97% reduction in both fuel consumption and CO2 emissions was obtained. The average speed of vehicles in the simulation also increased by 7%, with a time saving of 25%. Similar results were obtained for the afternoon peak hour, with a 98% decrease in both fuel consumption and CO2 emissions, a 20% decrease in average waiting time, and a 2% increase in average speed. In addition to the previously mentioned benefits of GLOSA, decreases in time loss of 15% and 13% were obtained during the morning and afternoon peak hours, respectively. Towards the goal of sustainability, GLOSA shows promising results, significantly lowering fuel consumption and CO2 emissions per vehicle.
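A minimal sketch of the GLOSA idea, advising a speed so that a vehicle reaches the stop line when the signal is green; the speed bounds and interface are assumptions, not the thesis's SUMOPy implementation.

```python
def glosa_advisory_speed(distance_m: float, time_to_green_s: float,
                         v_max_ms: float = 13.9, v_min_ms: float = 2.0) -> float:
    """Advisory speed (m/s) so the vehicle arrives at the stop line on green.

    distance_m: distance to the stop line; time_to_green_s: seconds until the
    next green phase starts (0 if the light is already green).
    """
    if time_to_green_s <= 0:
        return v_max_ms                      # already green: proceed at the limit
    v = distance_m / time_to_green_s         # speed that meets the start of green
    return min(max(v, v_min_ms), v_max_ms)   # clamp to plausible driving speeds

# Example: 300 m GLOSA distance, next green in 30 s -> advise 10 m/s (36 km/h).
print(glosa_advisory_speed(300.0, 30.0))
```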
Abstract:
Environmental data are spatial, temporal, and often come with many zeros. In this paper, we included space–time random effects in zero-inflated Poisson (ZIP) and 'hurdle' models to investigate haulout patterns of harbor seals on glacial ice. The data consisted of counts, for 18 dates on a lattice grid of samples, of harbor seals hauled out on glacial ice in Disenchantment Bay, near Yakutat, Alaska. A hurdle model is similar to a ZIP model except that it does not mix zeros from the binary and count processes. Both models can be used for zero-inflated data, and we compared space–time ZIP and hurdle models in a Bayesian hierarchical model. Space–time ZIP and hurdle models were constructed by using spatial conditional autoregressive (CAR) models and temporal first-order autoregressive (AR(1)) models as random effects in ZIP and hurdle regression models. We created maps of smoothed predictions for harbor seal counts based on ice density, other covariates, and spatio-temporal random effects. For both models, predictions around the edges appeared to be positively biased. The linex loss function is an asymmetric loss function that penalizes overprediction more than underprediction, and we used it to correct for prediction bias and obtain the best map for the space–time ZIP and hurdle models.
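The sketch below fits a plain zero-inflated Poisson to hypothetical haul-out counts with an ice-density covariate using statsmodels; it omits the CAR and AR(1) space–time random effects and the Bayesian hierarchy described above, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
ice_density = rng.uniform(0, 1, n)
p_zero = 1.0 / (1.0 + np.exp(-(1.0 - 3.0 * ice_density)))   # structural-zero probability
lam = np.exp(0.2 + 2.0 * ice_density)                        # Poisson mean for occupied cells
counts = np.where(rng.uniform(size=n) < p_zero, 0, rng.poisson(lam))

# ZIP regression: the same covariate drives both the count and the inflation part here.
X = sm.add_constant(ice_density)
zip_fit = sm.ZeroInflatedPoisson(counts, X, exog_infl=X, inflation='logit').fit(disp=False, maxiter=500)
print(zip_fit.params)
```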
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assays. A complication arises when these continuous repeated measures exhibit heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus/AIDS (HIV/AIDS) study and several simulation studies.
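A small sketch of the damped exponential correlation structure mentioned above, which interpolates between compound symmetry and a continuous-time AR(1); the measurement times and parameter values are illustrative assumptions.

```python
import numpy as np

# Damped exponential correlation (DEC):
#   corr(y_i, y_j) = phi1 ** (|t_i - t_j| ** phi2),  0 <= phi1 < 1, phi2 >= 0.
# phi2 = 1 gives a continuous-time AR(1); phi2 = 0 gives compound symmetry.
def dec_correlation(times, phi1, phi2):
    """Correlation matrix for measurement times `times` (any spacing)."""
    t = np.asarray(times, dtype=float)
    lags = np.abs(t[:, None] - t[None, :])
    return phi1 ** (lags ** phi2)

# Example: viral-load visits at irregular weeks (hypothetical schedule).
print(np.round(dec_correlation([0, 2, 5, 11, 24], phi1=0.8, phi2=0.7), 3))
```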
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might also consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the interval [0, 1]. Regression on a variety of parametric densities with support lying in (0, 1), such as beta regression, can assess important covariate effects; however, such models are inappropriate in the presence of zeros and/or ones. To address this, we introduce a class of general proportion densities and further augment the probabilities of zero and one to this general proportion density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are obtained automatically from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and an application to a real dataset from a clinical periodontology study.
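To make the augmentation concrete, the sketch below evaluates the log-density of a zero-and-one augmented beta mixture; the beta component stands in for the paper's general proportion density, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import beta

def zoab_logpdf(y, p0, p1, a, b):
    """Log-density of a zero-one augmented beta at y in [0, 1].

    p0, p1: point masses at 0 and 1; a, b: beta parameters for the (0, 1) part.
    """
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    out[y == 0] = np.log(p0)
    out[y == 1] = np.log(p1)
    inside = (y > 0) & (y < 1)
    out[inside] = np.log(1 - p0 - p1) + beta.logpdf(y[inside], a, b)
    return out

print(zoab_logpdf([0.0, 0.15, 0.6, 1.0], p0=0.10, p1=0.05, a=2.0, b=5.0))
```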
Abstract:
We consider a nontrivial one-species population dynamics model with finite and infinite carrying capacities. Time-dependent intrinsic and extrinsic growth rates are considered in these models. Through the model's per capita growth rate, we obtain a heuristic general procedure to generate scaling functions that collapse data onto a simple linear behavior, even if an extrinsic growth rate is included. With this data collapse, all the models studied become independent of the parameters and the initial condition. Analytical solutions are found when time-dependent coefficients are considered. These solutions allow us to identify nontrivial transitions between species extinction and survival and to calculate the transitions' critical exponents. Considering an extrinsic growth rate as a cancer treatment, we show that the relevant quantity depends not only on the intensity of the treatment, but also on when the cancerous cell growth is at its maximum.
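An illustrative sketch, under assumed functional forms, of a one-species model with a time-dependent intrinsic growth rate and an extrinsic (treatment-like) rate, integrated numerically; the paper's exact model and scaling procedure are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed model:  dp/dt = kappa(t) * p * (1 - p / K) - eps(t) * p
K = 1.0                                   # finite carrying capacity
kappa = lambda t: 1.0 / (1.0 + 0.1 * t)   # decaying intrinsic growth rate (assumed)
eps = lambda t: 0.3                       # constant extrinsic rate, e.g. a treatment (assumed)

def rhs(t, p):
    return kappa(t) * p * (1.0 - p / K) - eps(t) * p

sol = solve_ivp(rhs, (0.0, 50.0), [0.01])
print("population at t=50:", sol.y[0, -1])   # extinction vs. survival depends on the rates
```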