839 results for rainfall-runoff empirical statistical model
Abstract:
Information systems have developed to the stage where plenty of data is available in most organisations, but major problems remain in turning that data into information for management decision making. This thesis argues that the link between decision support information and transaction processing data should be through a common object model which reflects the real world of the organisation and encompasses the artefacts of the information system. The CORD (Collections, Objects, Roles and Domains) model is developed, which is richer in appropriate modelling abstractions than current object models. A flexible Object Prototyping tool based on a Semantic Data Storage Manager has been developed which enables a variety of models to be stored and experimented with. A statistical summary table model, COST (Collections of Objects Statistical Table), has been developed within CORD and is shown to be adequate to meet the modelling needs of Decision Support and Executive Information Systems. The COST model is supported by a statistical table creator and editor, COSTed, which is also built on top of the Object Prototyper and uses the CORD model to manage its metadata.
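A minimal sketch of how the four CORD abstractions might fit together; the class and attribute names below are hypothetical illustrations, not the interface actually built on the Object Prototyping tool:

```python
# Hypothetical sketch of the four CORD abstractions (Collections, Objects,
# Roles, Domains); names are illustrative, not the thesis's actual API.
from dataclasses import dataclass, field

@dataclass
class Domain:                    # the set of values an attribute may take
    name: str
    allowed: set

@dataclass
class Role:                      # a named part an object plays, typed by a domain
    name: str
    domain: Domain

@dataclass
class Obj:                       # a real-world entity: role name -> value
    values: dict

@dataclass
class Collection:                # a set of objects sharing the same roles
    roles: list
    members: list = field(default_factory=list)

    def add(self, obj):
        for role in self.roles:  # enforce each role's domain on insertion
            if obj.values.get(role.name) not in role.domain.allowed:
                raise ValueError(f"{role.name} outside domain {role.domain.name}")
        self.members.append(obj)

region = Domain("region", {"North", "South"})
sales = Collection(roles=[Role("region", region)])
sales.add(Obj(values={"region": "North"}))
```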
Abstract:
This thesis examines the effect of rights issue announcements on the stock prices of companies listed on the Kuala Lumpur Stock Exchange (KLSE) between 1987 and 1996. The emphasis is on determining whether the KLSE is semi-strongly efficient with respect to the announcement of rights issues and on checking whether the implications of corporate finance theories for the effect of an event can be supported in the context of an emerging market. Once the effect is established, potential determinants of abnormal returns identified by previous empirical work and corporate finance theory are analysed. By examining 70 companies making clean rights issue announcements, this thesis will hopefully shed light on some important issues in long-term corporate financing. Event study analysis is used to check the efficiency of the Malaysian stock market, while cross-sectional regression analysis is used to identify possible explanatory variables for the effect of the rights issue announcements. To ensure the results presented are not contaminated, econometric and statistical issues raised in both analyses have been taken into account. Given the small amount of empirical research conducted in this part of the world, the results of this study will hopefully be of use to investors, security analysts, corporate financial managers, regulators and policy makers, as well as those who are interested in capital-market-based research on an emerging market. It is found that the Malaysian stock market is not semi-strongly efficient, since there exists a persistent non-zero abnormal return. This finding is not consistent with the hypothesis that security returns adjust rapidly to reflect new information. It is possible that the result is influenced by the sample, which consists mainly of below-average-size companies that tend to be thinly traded; nevertheless, these issues have been addressed. Another important issue to emerge from the study is that there is some evidence to suggest that insider trading activity existed in this market. In addition to these findings, when the effect of the rights issue announcements is compared to the implications of corporate finance theories in predicting the sign of abnormal returns, the signalling model, the asymmetric information model, the perfect substitution hypothesis and Scholes' information hypothesis cannot be supported.
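A minimal sketch of the event-study machinery on synthetic data, computing market-model abnormal returns and a cumulative abnormal return (CAR); the market-model specification and window lengths are common defaults assumed for illustration, not the thesis's exact setup:

```python
# Event-study sketch: market-model abnormal returns around an announcement.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, 250)                # daily market returns
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.01, 250)

est, event = slice(0, 200), slice(200, 221)           # estimation / event windows
beta, alpha = np.polyfit(market[est], stock[est], 1)  # market-model parameters

ar = stock[event] - (alpha + beta * market[event])    # abnormal returns
car = ar.cumsum()                                     # cumulative abnormal return
t_stat = ar.mean() / (ar.std(ddof=1) / np.sqrt(ar.size))
print(f"CAR = {car[-1]:.4f}, mean-AR t = {t_stat:.2f}")
```

A persistent non-zero CAR after the event window is the kind of evidence the thesis reports against semi-strong efficiency.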
Abstract:
The research concerned the assessment of the pathways utilized by heavy metal pollutants in urban stormwater runoff. A separately sewered urban residential catchment of approximately 107 hectares in Chelmsley Wood, north-east Birmingham, was the subject of the field investigation. The catchment area, almost entirely residential, had no immediate industrial heavy metal input; however, industry was situated to the north of the catchment, and the perimeter was bounded by the M6 motorway on the northern and eastern sides, which was believed to contribute to aerial deposition. Metal inputs to the ground surface were assumed to be confined to normal suburban activities, namely aerial deposition, vehicular activity and other anthropogenic activities. A programme of field work was undertaken over a 12-month period, from July 1983 to July 1984. Monthly deposition rates were monitored using a network of deposit canisters, and roadside sediment and soil samples were taken. Stormwater samples were obtained for 19 separate events. All samples were analysed for iron, lead, zinc, copper, chromium, nickel and cadmium content. Rainfall was recorded on site, with additional meteorological data obtained from local Meteorological Offices. A simple conceptual model designed for the catchment was used to substantiate hypotheses derived from the site investigations and the literature, and to investigate the pathways by which heavy metals are transported through the catchment.
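The abstract does not give the conceptual model's equations, so as an illustration the sketch below uses the exponential buildup/washoff formulation that is common in urban runoff quality modelling; all parameter values are invented:

```python
# Common exponential buildup/washoff formulation (an assumption; the thesis's
# own conceptual model is not specified in the abstract).
import math

def surface_load(days_dry, accum_rate=0.5, decay=0.08):
    """Asymptotic buildup of surface metal load (e.g. g/ha) between storms."""
    return (accum_rate / decay) * (1.0 - math.exp(-decay * days_dry))

def washoff(load, intensity_mm_h, duration_h, k=0.18):
    """Load removed by a storm, increasing with intensity and duration."""
    return load * (1.0 - math.exp(-k * intensity_mm_h * duration_h))

load = surface_load(days_dry=7)
print(f"washed off: {washoff(load, intensity_mm_h=5, duration_h=2):.2f} g/ha")
```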
Abstract:
This paper develops and tests a learning organization model derived from the HRM and dynamic capability literatures in order to ascertain the model's applicability across divergent global contexts. We define a learning organization as one capable of achieving ongoing strategic renewal, arguing on the basis of dynamic capability theory that the model has three necessary antecedents: HRM focus, developmental orientation and customer-facing remit. Drawing on a sample comprising nearly 6000 organizations across 15 countries, we show that learning organizations exhibit higher performance than their less learning-inclined counterparts. We also demonstrate that innovation fully mediates the relationship between our conceptualization of the learning organization and organizational performance in 11 of the 15 countries we examined. To our knowledge, this is the first time these questions have been tested in a major cross-global study, and our work contributes to both the HRM and dynamic capability literatures, especially where the focus is the applicability of best-practice parameters across national boundaries.
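A sketch of the regression-based mediation logic behind a "full mediation" finding, on synthetic data with illustrative variable names; full mediation appears as the direct effect of the learning-organization score shrinking toward zero once innovation is controlled:

```python
# Mediation sketch: total vs direct effect of learning orientation on
# performance, with innovation as the mediator. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
learning = rng.normal(size=500)
innovation = 0.6 * learning + rng.normal(size=500)     # path a
performance = 0.5 * innovation + rng.normal(size=500)  # path b (no direct path)

total = sm.OLS(performance, sm.add_constant(learning)).fit()
direct = sm.OLS(performance,
                sm.add_constant(np.column_stack([learning, innovation]))).fit()

print(f"total effect c   = {total.params[1]:.3f}")
print(f"direct effect c' = {direct.params[1]:.3f}  (near zero => full mediation)")
```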
Abstract:
The financial community is well aware that continued underfunding of state and local government pension plans poses many public policy and fiduciary management concerns. However, a well-defined theoretical rationale has not been developed to explain why and how public sector pension plans underfund. This study uses three methods: a survey of national pension experts, an incomplete covariance panel method, and field interviews. A survey of national public sector pension experts was conducted to provide a conceptual framework by which underfunding could be evaluated. Experts suggest that plan design, fiscal stress, and political culture factors affect underfunding. However, experts do not agree with previous research findings that unions actively pursue underfunding to secure current wage increases. Within the conceptual framework and determinants identified by the experts, several empirical regularities are documented for the first time. An analysis of 173 local government pension plans, observed from 1987 to 1992, was conducted. Findings indicate that underfunding occurs in plans that have lower retirement ages or increased costs due to benefit enhancements, when the sponsor faces current-year operating deficits, or when a local government relies heavily on inelastic revenue sources. Results also suggest that elected officials artificially inflate interest rate assumptions to reduce current pension costs, consequently shifting these costs to future generations. In concurrence with some experts, there are no data to support the assumption that highly unionized employees secure more funding than less unionized employees. The empirical results provide satisfactory but not overwhelming statistical power, and only minor predictive capacity. To further explore why underfunding occurs, field interviews were carried out with 62 local government officials. Practitioners indicated that perceived fiscal stress, the willingness of policymakers to advance funding, the bargaining strategies used by union officials, apathy among employees and retirees, pension board composition, and the level of influence of internal pension experts all have an impact on funding outcomes. A pension funding process model was posited by triangulating the expert survey, the empirical findings, and the field survey results. This funding process model should help shape and refine our theoretical knowledge of state and local government pension underfunding in the future.
Abstract:
Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach, which suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for the different functional classes of the Florida State Highway System. Crash data from 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs were developed for both rural and urban roadway categories. The modeling data were based on one-mile segments containing homogeneous traffic and geometric conditions within each segment; segments involving intersections were excluded. Scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM was found to be appropriate for only one category, while the ZINB was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit to the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, as was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. With improved traffic and crash data quality, however, the crash prediction power of SPF models may be further improved.
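A sketch of the count-model comparison described above, on synthetic one-mile-segment data; in the study itself, AADT-based exposure and geometric variables would replace the simulated predictor, and model choice would also use the Likelihood Ratio and Vuong tests alongside AIC:

```python
# Fit NB and ZINB to synthetic segment crash counts and compare by AIC.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(2)
log_exposure = rng.normal(10, 1, 1000)            # log(AADT x segment length)
mu = np.exp(-8 + 0.8 * log_exposure)              # nonlinear in exposure
crashes = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))
crashes[rng.random(1000) < 0.3] = 0               # inject excess zeros

X = sm.add_constant(log_exposure)
nb = sm.NegativeBinomial(crashes, X).fit(disp=False)
zinb = ZeroInflatedNegativeBinomialP(
    crashes, X, exog_infl=np.ones((1000, 1))).fit(disp=False, maxiter=500)
print(f"NB AIC = {nb.aic:.1f}, ZINB AIC = {zinb.aic:.1f}")  # lower is better
```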
Abstract:
This study examined Kirkpatrick's training evaluation model (Kirkpatrick & Kirkpatrick, 2006) by assessing a sales training program conducted at an organization in the hospitality industry. The study assessed the employees' training outcomes of knowledge and skills, job performance, and the impact of the training upon the organization. By assessing these training outcomes and their relationships, the study demonstrated whether Kirkpatrick's theories are supported and whether the lower evaluation levels can be used to predict organizational impact. The population for this study was a group of reservations sales agents from a leading luxury hotel chain's reservations center. During the study period from January 2005 to May 2007, there were 335 reservations sales agents employed in this Global Reservations Center (GRC). The sample comprised the agents who had completed a sales training program/intervention during this period and had data available for at least two months pre- and post-training (N = 69). Four hypotheses were tested through paired-samples t tests, correlation, and hierarchical regression analytic procedures. Results from the analyses supported the hypotheses in this study. The significant improvement in the call score supported hypothesis one, that the reservations sales agents who completed the training improved their knowledge of content and the skills required in handling calls (Level 2). Hypothesis two was accepted in part: there was significant improvement in call conversion, but no significant improvement in time usage. The significant improvement in sales per call supported hypothesis three, that the reservations agents who completed the training contributed to increased organizational impact (Level 4), i.e., made significantly more sales. Last, the findings supported hypothesis four, that Level 2 and Level 3 variables can be used to predict Level 4 organizational impact. The findings supported the theory behind Kirkpatrick's evaluation model: in order to expect organizational results, a positive change in behavior (job performance) and learning must occur. Examination of Levels 2 and 3 helped to partially explain and predict the Level 4 results.
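A sketch of the Level 2 paired comparison described above (pre- versus post-training call scores); the numbers are synthetic, not the study's data:

```python
# Paired-samples t test on synthetic pre/post call scores for N = 69 agents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = rng.normal(70, 8, 69)            # pre-training call scores
post = pre + rng.normal(4, 5, 69)      # post-training scores, shifted upward

t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # a significant gain supports Level 2
```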
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Such damage is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two types of damage in most cases cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research and consequent knowledge gaps are in large part due to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies for quantifying rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and for measurement of rainwater intrusion, in order to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate the WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated based on a tropical cyclone WDR study. The simulated WDR was then used to experimentally investigate the mechanisms of rainwater deposition and intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of direct impinging raindrops and surface runoff rainwater over the building surface, the rain admittance factor (RAF) and the surface runoff coefficient (SRC) respectively, was developed using common shapes of low-rise buildings. The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model against experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and the experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
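The abstract does not give the estimation model's equations, so the combination below of a direct-impingement (RAF) term and a surface-runoff (SRC) term is an assumed illustration of how the two measured parameters could feed an ingress calculation:

```python
# Illustrative ingress estimate from the two measured WDR parameters.
# The model formulation here is an assumption, not the dissertation's.
def rainwater_ingress(wdr_mm_h, raf, src, breach_area_m2, runoff_catch_m2, hours):
    """Approximate ingress volume in litres (1 mm over 1 m^2 = 1 L)."""
    direct = raf * wdr_mm_h * breach_area_m2       # impinging raindrops
    runoff = src * wdr_mm_h * runoff_catch_m2      # wall runoff reaching the breach
    return (direct + runoff) * hours

vol = rainwater_ingress(wdr_mm_h=20.0, raf=0.4, src=0.25,
                        breach_area_m2=0.5, runoff_catch_m2=3.0, hours=2.0)
print(f"estimated ingress: {vol:.0f} L")
```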
Abstract:
The transducer function mu for contrast perception describes the nonlinear mapping of stimulus contrast onto an internal response. Under a signal detection theory approach, the transducer model of contrast perception states that the internal response elicited by a stimulus of contrast c is a random variable with mean mu(c). Using this approach, we derive the formal relations between the transducer function, the threshold-versus-contrast (TvC) function, and the psychometric functions for contrast detection and discrimination in 2AFC tasks. We show that the mathematical form of the TvC function is determined only by mu, and that the psychometric functions for detection and discrimination have a common mathematical form with common parameters emanating from, and only from, the transducer function mu and the form of the distribution of the internal responses. We discuss the theoretical and practical implications of these relations, which have a bearing on the tenability of certain mathematical forms for the psychometric function and on the suitability of empirical approaches to model validation. We also present the results of a comprehensive test of these relations using two alternative forms of the transducer model: a three-parameter version that renders logistic psychometric functions and a five-parameter version using Foley's variant of the Naka-Rushton equation as the transducer function. Our results support the validity of the formal relations implied by the general transducer model, and the two versions that were contrasted account for our data equally well.
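A sketch of the kind of relation the paper derives: with unit-variance Gaussian internal responses, 2AFC proportion correct for an increment dc on pedestal c is Phi((mu(c + dc) - mu(c)) / sqrt(2)), so detection (zero pedestal) and discrimination share the transducer's parameters. The transducer below is a Foley-type variant of the Naka-Rushton equation with invented parameter values, not the fitted values from the paper:

```python
# 2AFC psychometric functions induced by a Foley-type transducer, assuming
# unit-variance Gaussian internal responses; parameter values are illustrative.
import numpy as np
from scipy.stats import norm

def mu(c, g=100.0, p=2.4, q=2.0, z=0.01):
    """Foley-type transducer: excitation c**p divided by (z + c**q)."""
    return g * c**p / (z + c**q)

def pc_2afc(pedestal, dc):
    return norm.cdf((mu(pedestal + dc) - mu(pedestal)) / np.sqrt(2))

print(f"detection (pedestal 0):        {pc_2afc(0.0, 0.02):.3f}")
print(f"discrimination (pedestal .05): {pc_2afc(0.05, 0.02):.3f}")
```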
Abstract:
The recently proposed global monsoon hypothesis interprets the monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and to variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics, such as different seasonality (i.e. onset, peak, and withdrawal). To investigate the interconnection of the different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing a similar rainfall trend and moderate correlation in rainfall variability in all models, while the East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon, and affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domains, precipitation changes solely during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, which partly balance each other. A synthesis of reconstructions and the model results does not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. Rather, they indicate locally inhomogeneous rainfall changes and show that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution of entire monsoon sub-systems.
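A sketch of the kind of trend and detrended-correlation diagnostics used to make such statements, on synthetic millennial rainfall series for three sub-domains (a shared declining trend plus shared variability for India and North Africa, neither for East Asia):

```python
# Trend and detrended correlation between synthetic monsoon rainfall series.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(700)                                    # Holocene time steps
shared = rng.normal(0, 0.2, 700)                      # common variability
india = 5.0 - 0.003 * t + shared + rng.normal(0, 0.3, 700)
n_africa = 4.0 - 0.003 * t + shared + rng.normal(0, 0.3, 700)
e_asia = 3.0 + rng.normal(0, 0.3, 700)                # no shared trend/variability

detrend = lambda x: x - np.polyval(np.polyfit(t, x, 1), t)
print("India rainfall trend:", np.polyfit(t, india, 1)[0])
print("India/N-Africa r:", np.corrcoef(detrend(india), detrend(n_africa))[0, 1])
print("India/E-Asia  r:", np.corrcoef(detrend(india), detrend(e_asia))[0, 1])
```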
Abstract:
Shape-based registration methods are frequently encountered in the domains of computer vision, image processing and medical imaging. The registration problem is to find an optimal transformation/mapping between sets of rigid or non-rigid objects and to solve automatically for correspondences. In this paper we present a comparison of two different probabilistic methods, entropy and the growing neural gas (GNG) network, as general feature-based registration algorithms. Using entropy, shape modelling is performed by connecting the points with the highest probability of curvature information, while with GNG the point sets are connected using nearest-neighbour relationships derived from competitive Hebbian learning. In order to compare performance we use different levels of shape deformation, starting with a simple shape (2D MRI brain ventricles) and moving to more complicated shapes such as hands. Both quantitative and qualitative results are given for both sets.
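A minimal sketch of the correspondence step that both methods ultimately serve: for each model point, find a matching point in the target set. The plain nearest-neighbour query below is a simplification; the GNG variant derives such links from competitive Hebbian learning rather than a direct query:

```python
# Nearest-neighbour correspondences between two 2-D point sets.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
target = rng.random((200, 2))                     # fixed point set
angle = 0.1
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
model = target @ rot.T + 0.05                     # rotated and shifted copy

dist, idx = cKDTree(target).query(model)          # correspondence candidates
print(f"mean correspondence distance: {dist.mean():.3f}")
```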