380 results for Dividend Imputation


Relevance:

10.00%

Publisher:

Abstract:

We investigate the role of CEO power and government monitoring in bank dividend policy for a sample of 109 European listed banks over the period 2005-2013. We employ three main proxies for CEO power: CEO ownership, CEO tenure, and unforced CEO turnover. We show that CEO power has a negative impact on dividend payout ratios and on performance, suggesting that entrenched CEOs have no incentive to increase payout ratios to discourage monitoring by minority shareholders. Stronger internal monitoring by the board of directors, as proxied by larger ownership stakes of the board members, increases performance but decreases payout ratios. These findings are contrary to those from the entrenchment literature on non-financial firms. Government ownership and the presence of a government official on the bank's board of directors also reduce payout ratios, in line with the view that the government is incentivized to favor the interests of bank creditors over those of minority shareholders. These results suggest that government regulators are mainly concerned with bank safety, which allows powerful CEOs to distribute low payouts at the expense of minority shareholders.

Relevance:

10.00%

Publisher:

Abstract:

Big data comes in various types, shapes, forms, and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics, and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set depends on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
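As a concrete illustration of the large p, small n end of this taxonomy, the following minimal sketch (assuming scikit-learn; all names and numbers are illustrative) chains several of the tools listed above: imputation, standardization, and an l1 penalty that performs regularization and variable selection at once.

```python
# A minimal sketch of a "large p, small n" pipeline: impute missing entries,
# standardize, then fit an l1-penalized (lasso) regression for selection.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 500                            # far more predictors than observations
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)
X[rng.random(X.shape) < 0.05] = np.nan    # 5% missing values

model = make_pipeline(
    SimpleImputer(strategy="mean"),       # Imputation
    StandardScaler(),                     # Standardization
    Lasso(alpha=0.1),                     # Regularization/Penalization -> Selection
)
model.fit(X, y)
print(f"{np.sum(model[-1].coef_ != 0)} of {p} predictors selected")
```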

Relevance:

10.00%

Publisher:

Abstract:

We extend and complement prior work by investigating the earnings quality of firms with different financial health characteristics and growth prospects. By using three alternative measures of default likelihood and two alternative measures of growth options, without being limited to a specific event, we provide a more comprehensive setup for analysing the earnings characteristics of the universe of firms than examining distressed firms with persistent losses, dividend reductions, or bankruptcy filings. Our dataset consists of 15,049 U.S. firms over the period 1990-2004. Results show that the relation between earnings quality and financial health is not monotonic. Distressed firms have a low level of earnings timeliness for bad news and a high level for good news, and manage earnings toward a positive target more frequently than healthy firms. Healthy firms, on the other hand, have a high level of earnings timeliness for bad news. Growth prospects play an important role in a firm's ability to manage earnings. In contrast to the findings of prior studies, growth firms have greater earnings timeliness for bad news, whereas value firms manage earnings toward a positive target more frequently than growth firms.

Relevance:

10.00%

Publisher:

Abstract:

The Finance research team carried out a wide range of analyses within project TÁMOP-4.2.1.B-09/1/KMR-2010-0005. We showed that increased leverage among economic actors at different levels clearly raises systemic risk, since the probability of bankruptcy of the individual actors grows. If leverage is restricted to different degrees and on different schedules across sectors and countries, the actors that introduce the restrictions later gain a clear competitive advantage. Examining capital allocation at financial institutions, we showed that the capital (risk) covering the operation can always be divided among divisions in such a way that no participant has an incentive to cancel the cooperation. This, however, cannot be done fairly from every point of view, so some business lines may suffer a competitive disadvantage if competitors burden the same activity less unfairly. We also showed that the regulation of private pension funds has a major effect on the profitability of the funds' investment activity; these rules affect the long-term competitiveness of society as a whole. We further found that, before the economic crisis, Hungarian banks were unable to assess their clients' risk-bearing capacity correctly, and their commission-based incentive schemes did not even make them interested in doing so. Several of our studies dealt with the competitiveness of Hungarian firms: we examined how different taxes, exchange-rate risks, and financing policies affect competitiveness. A separate project investigated the effects of interest-rate volatility and of asset collateral attached to loans on firm value. We highlighted the growing risk of non-payment and reviewed the management strategies available and actually used in practice. We also investigated how the shareholders of listed companies exploit the tax-optimization opportunities linked to dividend payments; market evidence shows that investors carry out such tax-avoiding trades for a significant share of the stocks. A separate study addressed the role of intellectual capital at Hungarian companies, finding that firms handled the issue with considerably greater expertise in 2009 than five years earlier. Finally, we showed that ownership structure can substantially influence how firms build their system of goals and how they view their intangible assets.
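The capital-allocation result paraphrases the core property from cooperative game theory: the total requirement can be split so that no coalition of divisions pays more than its stand-alone requirement, so no one gains by leaving. A minimal sketch with hypothetical numbers:

```python
# Illustrative sketch (hypothetical numbers): an allocation of a bank's total
# capital requirement among three divisions lies in the "core" of the cost game
# if no coalition of divisions is charged more than its stand-alone requirement,
# i.e. no coalition would be better off cancelling the cooperation.
from itertools import chain, combinations

# Stand-alone capital requirement of each coalition (diversification lowers cost).
cost = {
    ("A",): 60, ("B",): 50, ("C",): 40,
    ("A", "B"): 95, ("A", "C"): 85, ("B", "C"): 75,
    ("A", "B", "C"): 120,
}
allocation = {"A": 50, "B": 40, "C": 30}   # candidate split of the 120 total

def in_core(cost, allocation):
    players = max(cost, key=len)
    assert sum(allocation.values()) == cost[players]   # the full cost is divided
    coalitions = chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1))
    return all(sum(allocation[p] for p in s) <= cost[s] for s in coalitions)

print(in_core(cost, allocation))   # True: no division wants to leave
```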

Relevance:

10.00%

Publisher:

Abstract:

Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic data collected by radar detectors installed along a freeway corridor. DNN comprises a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for the missing data usually encountered when collecting data using traffic detectors. It was also necessary to identify a method to estimate the travel time on the freeway corridor based on data collected using point traffic detectors. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both of them outperform other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural network) outperforms the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
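A time-delay neural network in its simplest (focused) form feeds a tapped delay line of recent measurements into a feedforward network. A minimal sketch on synthetic data (scikit-learn's MLPRegressor stands in for the dynamic network; all numbers are illustrative, not the dissertation's setup):

```python
# Predict travel time a few steps ahead from a tapped delay line of recent
# detector-based estimates -- the basic time-delay network idea.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(2000)
travel_time = 10 + 3 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 0.3, t.size)

delays, horizon = 6, 3                  # last 6 samples predict 3 steps ahead
N = travel_time.size
X = np.array([travel_time[i:i + delays]
              for i in range(N - delays - horizon + 1)])   # tapped delay line
y = travel_time[delays + horizon - 1:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
print("held-out R^2:", model.score(X[1500:], y[1500:]))
```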

Relevance:

10.00%

Publisher:

Abstract:

Over the last two decades social vulnerability has emerged as a major area of study, with increasing attention to the study of vulnerable populations. Generally, the elderly are among the most vulnerable members of any society, and widespread population aging has led to greater focus on elderly vulnerability. However, the absence of a valid and practical measure constrains the ability of policy-makers to address this issue in a comprehensive way. This study developed a composite indicator, the Elderly Social Vulnerability Index (ESVI), and used it to undertake a comparative analysis of the availability of support for elderly Jamaicans based on their access to human, material, and social resources. The results of the ESVI indicated that while the elderly are more vulnerable overall, certain segments of the population appear to be at greater risk. Females had consistently lower scores than males, and the oldest-old had the highest scores of all groups of older persons. Vulnerability scores also varied according to place of residence, with more rural parishes having higher scores than their urban counterparts. These findings support the political economy framework, which locates disadvantage in old age within political and ideological structures. The findings also point to the pervasiveness and persistence of gender inequality, as argued by feminist theories of aging. Based on the results of the study, it is clear that there is a need for policies that target specific population segments, in addition to universal policies that could make the experience of old age less challenging for the majority of older persons. Overall, the ESVI has displayed usefulness as a tool for theoretical analysis and demonstrated its potential as a policy instrument to assist decision-makers in determining where to target their efforts as they seek to address the issue of social vulnerability in old age. Data for this study came from the 2001 population and housing census of Jamaica, with multiple imputation for missing data. The index was derived from the linear aggregation of three equally weighted domains, comprising eleven unweighted indicators that were normalized using z-scores. Indicators were selected based on theoretical relevance and data availability.
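The index construction described in the last sentences (z-score normalization, unweighted indicators within domains, three equally weighted domains aggregated linearly) can be sketched as follows; the data and indicator names here are synthetic placeholders, not the census variables:

```python
# Composite-indicator sketch: z-score each indicator, average indicators within
# each domain, then average the three domains with equal weights.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["income", "housing", "household_size", "social_ties"])

domains = {                      # hypothetical domain -> indicator mapping
    "material": ["income", "housing"],
    "human":    ["household_size"],
    "social":   ["social_ties"],
}

z = (df - df.mean()) / df.std()                 # z-score normalization
domain_scores = pd.DataFrame(
    {name: z[cols].mean(axis=1) for name, cols in domains.items()})
esvi = domain_scores.mean(axis=1)               # equal weights, linear aggregation
print(esvi.describe())
```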

Relevance:

10.00%

Publisher:

Abstract:

The number of dividend-paying firms has been on the decline since the rise in popularity of stock repurchases in the 1980s, and the recent financial crisis brought about a wave of dividend reductions and omissions. This dissertation examined U.S. firms and American Depository Receipts listed on U.S. equity exchanges according to their dividend-paying history in the previous twelve quarters. While accounting for the state of the economy and the firm's size, profitability, earned equity, and growth opportunities, it determined whether or not the firm would pay a dividend in the next quarter. It also examined the likelihood of a dividend change. Further, firms' returns were examined according to their dividend-paying history and the state of the economy using the Fama-French three-factor model. Using forward, backward, and stepwise selection logistic regressions, the results show that firms with a history of regular and uninterrupted dividend payments are likely to continue paying dividends, while firms without a history of regular dividend payments are not likely to begin paying dividends or to continue doing so. The results of a set of generalized polytomous logistic regressions imply that dividend-paying firms are more likely to reduce dividend payments during economic expansions than during recessions. Also, the analysis of returns using the Fama-French three-factor model reveals that dividend-paying firms earn significant abnormal positive returns. As a special case, a similar analysis of dividend payment and dividend change was applied to American Depository Receipts that trade on the NYSE, NASDAQ, and AMEX exchanges and are issued by the Bank of New York Mellon. Returns of American Depository Receipts were examined using the Fama-French two-factor model for international firms. The results of the generalized polytomous logistic regression analyses indicate that dividend-paying status and economic conditions are also important for dividend level changes of American Depository Receipts, and Fama-French two-factor regressions alone do not adequately explain the returns of these securities.
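For reference, the Fama-French three-factor regression used here has the form R_it - R_ft = alpha_i + b_i(R_mt - R_ft) + s_i SMB_t + h_i HML_t + e_it, where a significantly positive intercept (alpha) indicates abnormal returns. A minimal sketch on synthetic data (assuming statsmodels; the factor values are stand-ins, not the real Fama-French series):

```python
# Regress excess returns on market, size (SMB), and value (HML) factors and
# read off the intercept (alpha) and factor loadings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 120                                   # months
factors = rng.normal(size=(n, 3))         # stand-ins for Mkt-RF, SMB, HML
alpha, betas = 0.2, np.array([1.0, 0.4, 0.3])
excess_ret = alpha + factors @ betas + rng.normal(0, 0.5, n)

X = sm.add_constant(factors)
fit = sm.OLS(excess_ret, X).fit()
print(fit.params)        # const ~ alpha; remaining entries ~ factor loadings
```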

Relevance:

10.00%

Publisher:

Abstract:

In response to a crime epidemic afflicting Latin America since the early 1990s, several countries in the region have resorted to using heavy-force police or military units to physically retake territories de facto controlled by non-state criminal or insurgent groups. After a period of territorial control, the heavy forces hand law enforcement functions in the retaken territories over to regular police officers, with the hope that the territories and their populations will remain under the control of the state. To varying degrees, intensity, and consistency, Brazil, Colombia, Mexico, and Jamaica have adopted such policies since the mid-1990s. During such operations, governments need to pursue two interrelated objectives: to better establish the state's physical presence and to realign the allegiance of the population in those areas toward the state and away from the non-state criminal entities. From the perspective of law enforcement, such operations entail several critical decisions and junctures. One is whether or not to announce the force insertion in advance: the decision trades off the element of surprise and the ability to capture key leaders of the criminal organizations against the ability to minimize civilian casualties and force levels, since announcing the insertion may allow criminals to go to ground and escape capture. Governments thus must decide whether they merely seek to displace criminal groups to other areas or to maximize their decapitation capacity. Intelligence flows rarely come from the population; often, rival criminal groups are the best source of intelligence. However, cooperation between the state and such groups that goes beyond using vetted intelligence provided by them, such as state tolerance for militias, compromises the rule-of-law integrity of the state and ultimately can eviscerate even public safety gains. Sustaining security after the initial clearing operations is at times even more challenging than conducting the initial operations. Unlike the heavy forces, traditional police forces, especially if designed as community police, have the capacity to develop the trust of the community and ultimately to focus on crime prevention, but developing such trust often takes a long time. To develop the community's trust, regular police forces need to conduct frequent on-foot patrols with intensive nonthreatening interactions with the population and minimize the use of force. Moreover, sufficiently robust patrol units need to be placed in designated beats for a substantial amount of time, often at least a year. Establishing oversight mechanisms, including joint police-citizens' boards, further facilitates building trust in the police among the community. After the disruption of the established criminal order, street crime often rises significantly, and both the heavy-force and community-police units often struggle to contain it. The increase in street crime alienates the population of the retaken territory from the state, so developing a capacity to address street crime is critical. Moreover, the community police units tend to be vulnerable (especially initially) to efforts by displaced criminals to reoccupy the cleared territories. Losing a cleared territory back to criminal groups is extremely costly in terms of losing any established trust and being able to recover it.
Rather than operating on an a priori determined handover schedule, a careful assessment of the relative strength of regular police and criminal groups after the clearing operations is likely to be a better guide for timing the handover from heavy forces to regular police units. Cleared territories often experience not only a peace dividend but also a peace deficit: a rise in new serious crime (in addition to street crime). Newly valuable land and other previously inaccessible resources can lead to land speculation and forced displacement, and various other forms of new crime can also rise significantly. Community police forces often struggle to cope with such crime, especially as it is frequently linked to legal business. Such new crime often receives little to no attention in the design of the operations to retake territories from criminal groups, but without developing an effective response to it, the public safety gains of the clearing operations can be lost altogether.

Relevance:

10.00%

Publisher:

Abstract:

Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated from current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic data collected by radar detectors installed along a freeway corridor. DNN comprises a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for the missing data usually encountered when collecting data using traffic detectors. It was also necessary to identify a method to estimate the travel time on the freeway corridor based on data collected using point traffic detectors. A new travel time estimation method, referred to as the Piecewise Constant Acceleration Based (PCAB) method, was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both of them outperform other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups. The results show that one DNN topology (the time-delay neural network) outperforms the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
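The average speed method that matched the PCAB method can be sketched in a few lines: each segment's travel time is its length divided by the mean of the point speeds at its two bounding detectors, summed along the corridor. All numbers below are hypothetical:

```python
# Minimal sketch of the "average speed" travel time estimation from point
# detectors: per-segment time = length / mean(speed at bounding detectors).
def corridor_travel_time(detector_speeds_mph, segment_lengths_mi):
    """Sum per-segment travel times (in hours) along the corridor."""
    total = 0.0
    for i, length in enumerate(segment_lengths_mi):
        avg_speed = 0.5 * (detector_speeds_mph[i] + detector_speeds_mph[i + 1])
        total += length / avg_speed
    return total

# Four detectors bounding three segments (hypothetical values).
speeds = [62.0, 55.0, 38.0, 60.0]          # mph at each detector
lengths = [1.2, 0.8, 1.5]                  # miles between consecutive detectors
print(f"{corridor_travel_time(speeds, lengths) * 60:.1f} minutes")
```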

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates strategies for materializing the non-assumption of enunciative responsibility and the inscription of an authorial voice in scientific articles produced by novice researchers in Linguistics. The specific focus is on identifying, describing, and interpreting: i) the linguistic marks that assign enunciative responsibility; ii) the positions taken by the first speaker-enunciator (L1/E1) in relation to points of view (PoV) imputed to second enunciators (e2); and iii) the linguistic marks that signal the formulation of the writers' own PoV. As a practical outcome, we propose a discussion of how to teach text-discursive strategies concerning enunciative responsibility and authorship in academic and scientific texts. Our research corpus consists of eight scientific articles selected from a renowned Linguistics journal that is highly rated by Qualis/CAPES (the Brazilian science agency). The methodology follows the assumptions of qualitative, interpretative research, although it is also supported by a quantitative approach. Theoretically, the research is based on Textual Analysis of Discourse and on linguistic theories of enunciation. The results show two kinds of movement in PoV management: imputation and assumption of responsibility. In imputation contexts, the most recurrent linguistic marks were reported speech, indirect speech, reported speech with "that", and modalization in reported speech (in utterances with "according to", "in agreement with", "for"), along with certain points of non-coincidence of discourse, specifically the non-coincidence of discourse with itself. The way these marks occur in the texts points to three kinds of enunciative position assumed by L1/E1 in relation to the PoV of e2: agreement, disagreement, and pseudo-neutrality. Imputation followed by agreement (explicit or not) was clearly recurrent; this strategy mobilizes others' voices to defend a discourse assumed as the author's own. In contexts of assumed responsibility, we observed the formulation of the writers' own PoV, resulting from theoretical findings made by the novice researchers (revealing how they interpreted the concepts of the theory) or arising from their research data, which allowed them to express themselves with more autonomy, without reporting the discourse of e2. Based on these data, we can say that, in texts by novice researchers, authorship is strongly built upon PoV that remain dependent on others' words (the theory and the scholars cited), given the many contexts in which we observe positions of agreement, PoV formulated with words taken from e2 and assumed as the writer's own through syntactic integration, comments on what the other says, the absence of explanations and additions, and data analyses that confirm the supporting theory. These results show how novice researchers dialogue with the theoretical sources they rely on and how they claim the status of subjects who conduct research and position themselves as researchers/authors in the scientific field.
By treating quotation as a resource for managing enunciative responsibility and for making visible the speaker-enunciator's positions toward reported PoV, the thesis points to a text-discursive treatment of quotation in academic and scientific texts, within a teaching context attentive to developing the communication skills of novice researchers and to helping students enter and interact in the scientific field.

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates strategies for materializing the non-assumption of enunciative responsibility and the inscription of an authorial voice in scientific articles produced by novice researchers in Linguistics. The specific focus is on identifying, describing, and interpreting: i) the linguistic marks that assign enunciative responsibility; ii) the positions taken by the first speaker-enunciator (L1/E1) in relation to points of view (PoV) imputed to second enunciators (e2); and iii) the linguistic marks that signal the formulation of the writers' own PoV. As a practical outcome, we propose a discussion of how to teach text-discursive strategies concerning enunciative responsibility and authorship in academic and scientific texts. Our research corpus consists of eight scientific articles selected from a renowned Linguistics journal that is highly rated by Qualis/CAPES (the Brazilian science agency). The methodology follows the assumptions of qualitative, interpretative research, although it is also supported by a quantitative approach. Theoretically, the research is based on Textual Analysis of Discourse and on linguistic theories of enunciation. The results show two kinds of movement in PoV management: imputation and assumption of responsibility. In imputation contexts, the most recurrent linguistic marks were reported speech, indirect speech, reported speech with "that", and modalization in reported speech (in utterances with "according to", "in agreement with", "for"), along with certain points of non-coincidence of discourse, specifically the non-coincidence of discourse with itself. The way these marks occur in the texts points to three kinds of enunciative position assumed by L1/E1 in relation to the PoV of e2: agreement, disagreement, and pseudo-neutrality. Imputation followed by agreement (explicit or not) was clearly recurrent; this strategy mobilizes others' voices to defend a discourse assumed as the author's own. In contexts of assumed responsibility, we observed the formulation of the writers' own PoV, resulting from theoretical findings made by the novice researchers (revealing how they interpreted the concepts of the theory) or arising from their research data, which allowed them to express themselves with more autonomy, without reporting the discourse of e2. Based on these data, we can say that, in texts by novice researchers, authorship is strongly built upon PoV that remain dependent on others' words (the theory and the scholars cited), given the many contexts in which we observe positions of agreement, PoV formulated with words taken from e2 and assumed as the writer's own through syntactic integration, comments on what the other says, the absence of explanations and additions, and data analyses that confirm the supporting theory. These results show how novice researchers dialogue with the theoretical sources they rely on and how they claim the status of subjects who conduct research and position themselves as researchers/authors in the scientific field.
By treating quotation as a resource for managing enunciative responsibility and for making visible the speaker-enunciator's positions toward reported PoV, the thesis points to a text-discursive treatment of quotation in academic and scientific texts, within a teaching context attentive to developing the communication skills of novice researchers and to helping students enter and interact in the scientific field.

Relevance:

10.00%

Publisher:

Abstract:

This paper provides a method for constructing a new historical global nitrogen fertilizer application map (0.5° × 0.5° resolution) for the period 1961-2010 based on country-specific information from Food and Agriculture Organization statistics (FAOSTAT) and various global datasets. This new map incorporates the fraction of NH₄⁺ (and NO₃⁻) in N fertilizer inputs by utilizing fertilizer species information in FAOSTAT, in which species can be categorized as NH₄⁺- and/or NO₃⁻-forming N fertilizers. During data processing, we applied a statistical data imputation method for the missing data (19% of national N fertilizer consumption) in FAOSTAT. The multiple imputation method enabled us to fill gaps in the time-series data with plausible values using covariate information (year, population, GDP, and crop area). After the imputation, we downscaled the national consumption data to a gridded cropland map. We also applied the multiple imputation method to the available chemical fertilizer species consumption, allowing for the estimation of the NH₄⁺/NO₃⁻ ratio in national fertilizer consumption. The synthetic N fertilizer inputs in 2000 show general consistency with the existing N fertilizer map (Potter et al., 2010, doi:10.1175/2009EI288.1) in terms of the ranges of N fertilizer inputs. Globally, the estimated N fertilizer inputs based on the sum of the filled data increased from 15 Tg-N to 110 Tg-N during 1961-2010. On the other hand, the global NO₃⁻ input started to decline after the late 1980s, and the fraction of NO₃⁻ in global N fertilizer decreased consistently from 35% to 13% over the 50-year period. NH₄⁺-based fertilizers are dominant in most countries; however, the NH₄⁺/NO₃⁻ ratio in N fertilizer inputs shows clear differences temporally and geographically. This new map can be utilized as input data for global modeling studies and can bring new insights to the assessment of historical terrestrial N cycling changes.
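The gap-filling step can be sketched as follows: a chained-equations imputer draws plausible consumption values from the covariates, and repeating the stochastic draw yields multiple imputed datasets. A minimal sketch with synthetic data (assuming scikit-learn's IterativeImputer; all numbers are illustrative, not the FAOSTAT data):

```python
# Multiple-imputation sketch: fill missing national consumption values from
# covariates (year, population, GDP, crop area) with several posterior draws.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 200
year = rng.integers(1961, 2011, n)
pop, gdp, crop = rng.lognormal(3, 1, n), rng.lognormal(8, 1, n), rng.lognormal(1, 1, n)
fert = 0.02 * pop + 0.001 * gdp + 0.5 * crop + rng.normal(0, 1, n)
data = np.column_stack([year, pop, gdp, crop, fert])
data[rng.random(n) < 0.19, 4] = np.nan      # ~19% of consumption values missing

imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(data)[:, 4]
    for m in range(5)                        # five imputed datasets
]
print(np.mean(imputations, axis=0)[:5])      # average imputed consumption values
```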

Relevance:

10.00%

Publisher:

Abstract:

Continuous variables are among the major data types collected by survey organizations. The data can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate the values into cells defined by different combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
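One standard ingredient for synthesis with fixed marginal totals (a generic device, not necessarily the thesis's exact construction) is the fact that independent Poisson counts conditioned on their sum follow a multinomial distribution, so synthetic cells drawn multinomially with probabilities proportional to the fitted Poisson rates match the published total exactly:

```python
# Draw synthetic non-negative integer cells whose sum equals a fixed total:
# Poisson counts conditioned on their sum are multinomial.
import numpy as np

rng = np.random.default_rng(5)
rates = np.array([12.0, 7.5, 3.0, 0.5])      # fitted Poisson rates per cell
total = 40                                    # fixed (published) marginal sum

synthetic = rng.multinomial(total, rates / rates.sum())
print(synthetic, synthetic.sum())             # cells always sum to 40
```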

The second method releases synthetic continuous microdata through a nonstandard MI approach. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the side of the focused ones can help to improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.

Relevance:

10.00%

Publisher:

Abstract:

Previously developed models for predicting absolute risk of invasive epithelial ovarian cancer have included a limited number of risk factors and have had low discriminatory power (area under the receiver operating characteristic curve (AUC) < 0.60). Because of this, we developed and internally validated a relative risk prediction model that incorporates 17 established epidemiologic risk factors and 17 genome-wide significant single nucleotide polymorphisms (SNPs) using data from 11 case-control studies in the United States (5,793 cases; 9,512 controls) from the Ovarian Cancer Association Consortium (data accrued from 1992 to 2010). We developed a hierarchical logistic regression model for predicting case-control status that included imputation of missing data. We randomly divided the data into an 80% training sample and used the remaining 20% for model evaluation. The AUC for the full model was 0.664. A reduced model without SNPs performed similarly (AUC = 0.649). Both models performed better than a baseline model that included age and study site only (AUC = 0.563). The best predictive power was obtained in the full model among women younger than 50 years of age (AUC = 0.714); however, the addition of SNPs increased the AUC the most for women older than 50 years of age (AUC = 0.638 vs. 0.616). Adapting this improved model to estimate absolute risk and evaluating it in prospective data sets is warranted.
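The validation design (an 80/20 split with AUC computed on the held-out data) can be sketched as follows with synthetic data; the covariates are generic stand-ins for the epidemiologic risk factors and SNPs, and scikit-learn's logistic regression stands in for the hierarchical model:

```python
# Fit a logistic risk model on an 80% training split and report the AUC on the
# held-out 20%.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n, p = 5000, 20                       # subjects, risk factors (stand-ins)
X = rng.normal(size=(n, p))
logit = 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.3 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # case-control status

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```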

Relevance:

10.00%

Publisher:

Abstract:

This master's thesis deals with ruin theory, and more specifically with actuarial surplus models in which dividends are paid. We study in detail a model called the gamma-omega model, which allows flexibility both in the timing of dividend payments and in a non-standard notion of ruin for the company. Several extensions of the literature are made, motivated by solvency considerations. The first consists in adapting results from a 2011 article to a new model modified by the addition of a solvency constraint. The second, more substantial, consists in proving the optimality of a barrier strategy for dividend payments in the gamma-omega model. The third concerns the adaptation of a 2003 theorem on the optimality of barriers under a solvency constraint, which had not been proved in the case of periodic dividends. We also give the analogues of the 2011 article's results for a barrier under the solvency constraint. Finally, the last extension concerns two different approaches to adopt when the surplus falls below the ruin threshold. In the first case, a forced liquidation of the surplus is put in place, alongside liquidation at the first opportunity when dividend forecasts are poor. In the second case, a capital injection process is tested. We study the impact of these solutions on the expected amount of dividends. Numerical illustrations are provided for each section where relevant.
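As a rough illustration of a dividend barrier strategy in a generic compound Poisson (Cramér-Lundberg) surplus model, not the gamma-omega model itself: surplus grows with premiums, jumps down at claim times, and any excess above the barrier b is paid out as dividends; Monte Carlo then estimates the expected discounted dividends until ruin. All parameters are illustrative.

```python
# Monte Carlo sketch of expected discounted dividends under a barrier strategy
# in a compound Poisson surplus model with exponential claims.
import numpy as np

rng = np.random.default_rng(7)

def expected_discounted_dividends(u=5.0, b=10.0, premium=1.5, lam=1.0,
                                  claim_mean=1.0, delta=0.03,
                                  horizon=200.0, n_paths=5_000):
    total = 0.0
    for _ in range(n_paths):
        surplus, t, paid = u, 0.0, 0.0
        while t < horizon:
            arrive = min(t + rng.exponential(1 / lam), horizon)
            gain = premium * (arrive - t)          # premiums accrue continuously
            overflow = max(surplus + gain - b, 0.0)
            paid += np.exp(-delta * arrive) * overflow  # coarse: discount at 'arrive'
            surplus = min(surplus + gain, b)
            t = arrive
            if t >= horizon:
                break
            surplus -= rng.exponential(claim_mean)      # a claim hits
            if surplus < 0:
                break                                   # ruin: dividend stream stops
        total += paid
    return total / n_paths

print(f"E[discounted dividends] ~ {expected_discounted_dividends():.2f}")
```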