819 results for ESTIMATOR
Abstract:
2010 Mathematics Subject Classification: 62F10, 62F12.
Abstract:
2000 Mathematics Subject Classification: 62G07, 62L20.
Abstract:
The focus of this study is on governance decisions in a concurrent-channels context under uncertainty. The study examines how a firm chooses to deploy its sales force in times of uncertainty, and the subsequent performance outcomes of those deployment choices. The theoretical framework draws on multiple theories of governance, including transaction cost analysis (TCA), agency theory, and institutional economics. Three uncertainty variables are investigated. The first two, demand and competitive uncertainty, are industry-level forms of market uncertainty. The third, political uncertainty, is chosen because it is an important dimension of institutional environments, capturing non-economic circumstances such as regulations and political systemic issues. The study employs longitudinal secondary data from a Thai hotel chain, comprising monthly observations from January 2007 to December 2012. The chain operates in four countries, Thailand, the Philippines, the United Arab Emirates (Dubai), and Egypt, all of which experienced substantial demand, competitive, and political uncertainty during the study period, making them ideal contexts for this study. Two econometric models, both deploying Newey-West estimation, are employed to test 13 hypotheses. The first model considers the relationship between uncertainty and governance. The second model is a Newey-West variant using an Instrumental Variables (IV) estimator and a Two-Stage Least Squares (2SLS) model to test the direct effect of uncertainty on performance and the moderating effect of governance on the relationship between uncertainty and performance. The observed relationship between uncertainty and governance follows a core prediction of TCA: that vertical integration is the preferred choice of governance when uncertainty rises.
As for the subsequent performance outcomes, the results corroborate that uncertainty has a negative effect on performance. Importantly, the findings show that becoming more vertically integrated cannot moderate the effect of demand and competitive uncertainty, but can significantly moderate the effect of political uncertainty. These findings have significant theoretical and practical implications: they substantially extend our knowledge of the impact of uncertainty and bring an institutional perspective to TCA. Further, they offer managers novel insight into the nature of different types of uncertainty, their impact on performance, and how channel decisions can mitigate these impacts.
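Newey-West estimation corrects OLS standard errors for the heteroscedasticity and serial correlation typical of monthly time-series data like the above. The sketch below is a minimal numpy implementation using the standard Bartlett kernel on synthetic data; the variable names, lag length, and data-generating process are illustrative assumptions, not taken from the study.

```python
import numpy as np

def newey_west_se(X, y, max_lags):
    """OLS coefficients with Newey-West (HAC) standard errors, Bartlett kernel."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta                            # OLS residuals
    XtX_inv = np.linalg.inv(X.T @ X)
    # Long-run covariance "meat": lag-0 term plus Bartlett-weighted autocovariances
    S = (X * u[:, None]).T @ (X * u[:, None])
    for l in range(1, max_lags + 1):
        w = 1.0 - l / (max_lags + 1.0)          # Bartlett weight
        G = (X[l:] * u[l:, None]).T @ (X[:-l] * u[:-l, None])
        S += w * (G + G.T)
    V = XtX_inv @ S @ XtX_inv                   # sandwich variance estimator
    return beta, np.sqrt(np.diag(V))

# Synthetic monthly-style data with AR(1) errors (serial correlation)
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e
X = np.column_stack([np.ones(n), x])
beta, se = newey_west_se(X, y, max_lags=4)
```

The HAC correction changes only the standard errors, not the coefficient estimates; with autocorrelated errors the naive OLS standard errors would be misleadingly small.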
Abstract:
This paper presents a novel real-time power-device temperature estimation method that monitors the power MOSFET's junction-temperature shift arising from thermal aging and incorporates the updated electrothermal models of power modules into digital controllers. The real-time estimator is emerging as an important tool for active control of device junction temperature as well as online health monitoring of power electronic systems, but its thermal model fails to address the device's ongoing degradation. Because of the mismatch in coefficients of thermal expansion between the layers of a power device, repetitive thermal cycling causes cracks, voids, and even delamination within the device, particularly in the solder and thermal-grease layers. Consequently, the thermal resistance of the device increases, making it possible to use thermal resistance (and junction temperature) as a key indicator for condition monitoring and control. In this paper, the device temperature predicted via threshold-voltage measurements is compared with the real-time estimate, and the difference is attributed to aging of the device. The thermal models in the digital controllers are updated frequently to correct the shift caused by thermal aging. Experimental results on three power MOSFETs confirm that the proposed methodology incorporates thermal aging effects into the power-device temperature estimator with good accuracy. The developed adaptive techniques can be applied to other power devices such as IGBTs and SiC MOSFETs, and have significant economic implications.
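The correction loop described above can be reduced to a simple idea: the controller's thermal model predicts a junction temperature, a threshold-voltage-based measurement gives a reference value, and the gap is attributed to aging, so the model's thermal resistance is rescaled. The sketch below uses a one-lump steady-state thermal model with purely illustrative numbers; it is a conceptual outline, not the paper's full electrothermal model.

```python
# All values are hypothetical; a real controller would use a multi-layer
# Foster/Cauer network rather than a single lumped thermal resistance.

def estimate_tj(t_ambient, p_loss, r_th):
    """Steady-state junction temperature from a one-lump thermal model."""
    return t_ambient + p_loss * r_th

def update_r_th(t_ambient, p_loss, tj_measured):
    """Back out the aged thermal resistance from a measured junction temperature."""
    return (tj_measured - t_ambient) / p_loss

t_amb, p_loss = 25.0, 40.0     # degC, W (illustrative operating point)
r_th_model = 0.50              # K/W, as-new model value
tj_measured = 48.0             # degC, e.g. via threshold-voltage sensing

tj_estimated = estimate_tj(t_amb, p_loss, r_th_model)        # model predicts 45.0 degC
r_th_aged = update_r_th(t_amb, p_loss, tj_measured)          # aged value: 0.575 K/W
```

The increase from 0.50 to 0.575 K/W is exactly the kind of thermal-resistance drift the paper proposes to track as a degradation indicator.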
Abstract:
Using representative cross-sectional samples for 25 European countries reflecting conditions in the mid-2000s, and building on the Duncan-Hoffman model, this paper estimates the returns to educational mismatch with comparable micro data. The aim is to gauge the extent to which the main empirical regularities reported in the literature on the wage returns to mismatch are confirmed by this database. Based on statistical tests proposed by Hartog and Oosterbeek, the author also considers whether the observed empirical patterns accord with the basic Mincerian human-capital model and with Thurow's job-competition model. Results based on Heckman's sample-selection estimator largely confirm the main empirical regularities described in the literature; at the same time, the statistical tests reject the empirical validity of both the human-capital model and the job-competition model for most countries.
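The Duncan-Hoffman (over-/required-/under-education, ORU) wage equation referred to above regresses log wages on years of required schooling plus years of over- and undereducation. The sketch below runs that regression by OLS on synthetic data; all coefficients and the data-generating process are illustrative assumptions, and the study's actual Heckman selection step is omitted.

```python
import numpy as np

# Duncan-Hoffman ORU equation on synthetic data:
#   ln(w) = a + b_r*required + b_o*over + b_u*under + e
rng = np.random.default_rng(1)
n = 2000
required = rng.integers(8, 18, size=n).astype(float)   # years the job requires
attained = required + rng.integers(-3, 4, size=n)      # years the worker has
over = np.maximum(attained - required, 0.0)            # years of overeducation
under = np.maximum(required - attained, 0.0)           # years of undereducation

log_wage = 0.5 + 0.08 * required + 0.04 * over - 0.03 * under \
           + rng.normal(scale=0.2, size=n)

X = np.column_stack([np.ones(n), required, over, under])
coef = np.linalg.lstsq(X, log_wage, rcond=None)[0]
# Typical ORU regularity: 0 < coef[2] (over) < coef[1] (required), coef[3] < 0
```

The standard empirical pattern the literature reports, and which this synthetic setup reproduces by construction, is that surplus schooling earns a positive but smaller return than required schooling, while deficit schooling carries a wage penalty.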
Abstract:
The study focuses on the connection between Foreign Direct Investment (FDI) and corruption. The authors assume that foreign direct investors prefer less corrupt countries, since corruption is an additional risk factor that can increase the cost of investment. In their view this is best examined with quantitative methods, so they test 79 countries using ten-year averages, with the help of the Gretl program and an OLS estimator. After running several models, they find that corruption is a significant factor in the decisions of foreign direct investors, with a negative correlation between the two variables.
Abstract:
Our study focuses on the connection between Foreign Direct Investment (FDI) and corruption. We assume that foreign direct investors prefer less corrupt countries, since corruption is an additional risk factor that can increase the cost of investment. In our view this is best examined with quantitative methods, so we test 79 countries using 10-year averages, with the help of the GRETL program and an OLS estimator. After running several models, we find that corruption is a significant factor in the decisions of foreign direct investors, with a negative correlation between the two variables.
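The cross-country OLS design described in the two abstracts above can be sketched in a few lines. The example below uses synthetic data for 79 "countries" with a hypothetical GDP control; the coefficient values and variable names are illustrative assumptions (the studies themselves used Gretl), chosen only to show how a negative corruption coefficient would surface.

```python
import numpy as np

# Synthetic cross-section: FDI inflows regressed on a corruption index
# and a log-GDP control. All numbers are made up for illustration.
rng = np.random.default_rng(7)
n = 79                                     # countries, as in the studies
corruption = rng.uniform(0, 10, size=n)    # higher = more corrupt
log_gdp = rng.normal(9.0, 1.0, size=n)
fdi = 2.0 + 0.5 * log_gdp - 0.3 * corruption + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), log_gdp, corruption])
coef, *_ = np.linalg.lstsq(X, fdi, rcond=None)
# coef[2] is the corruption coefficient: negative, consistent with the
# finding that investors avoid more corrupt countries
```

With real data one would of course also report standard errors and robustness checks across the "several models" the abstracts mention; the point here is only the sign of the corruption coefficient.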
Abstract:
The market model is the most frequently estimated model in financial economics and has proven extremely useful in the estimation of systematic risk. In this era of rapid globalization of financial markets, there has been a substantial increase in cross-listings of stocks in foreign and regional capital markets: as many as a third to a half of the stocks on some major exchanges are foreign-listed. Multiple listings have major implications for the estimation of systematic risk, because the traditional method of estimating the market model with data from only one market leads to misleading estimates of beta. This study demonstrates that both the estimator of systematic risk and the estimation methodology change when stocks are listed in multiple markets. General expressions are developed for the estimator of global beta under a variety of assumptions about the error terms of the market models for the different capital markets. The assumptions pertain both to the volatilities of the abnormal returns in each market and to the relationship between the markets. Explicit expressions are derived for the estimation of global systematic risk (beta) when returns are homoscedastic, and also under different heteroscedastic conditions within and/or between markets. These results are further extended to the case where the return-generating process follows an autoregressive scheme.
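One standard way to pool return data from several listings into a single "global" beta, when abnormal-return volatilities differ across markets, is inverse-variance (weighted least squares) estimation. The sketch below does this for a stock listed in two markets with different noise levels; it is a generic illustration of the idea under stated assumptions, not the dissertation's exact derivation, and all numbers are synthetic.

```python
import numpy as np

# A stock cross-listed in two markets shares one true beta, but the
# abnormal returns are noisier in market 2 (heteroscedastic across markets).
rng = np.random.default_rng(3)
n = 250
beta_true = 1.2
rm1 = rng.normal(0, 0.010, n)                    # market-1 index returns
rm2 = rng.normal(0, 0.012, n)                    # market-2 index returns
r1 = beta_true * rm1 + rng.normal(0, 0.005, n)   # low-noise listing
r2 = beta_true * rm2 + rng.normal(0, 0.015, n)   # high-noise listing

rm = np.concatenate([rm1, rm2])
r = np.concatenate([r1, r2])
w = np.concatenate([np.full(n, 1 / 0.005**2),    # inverse-variance weights
                    np.full(n, 1 / 0.015**2)])

# Weighted least squares via rescaled OLS
X = np.column_stack([np.ones(2 * n), rm])
sw = np.sqrt(w)
beta_gls = np.linalg.lstsq(X * sw[:, None], r * sw, rcond=None)[0][1]
```

Estimating beta from either market alone would use only half the information; the weighted pooled estimator down-weights the noisier listing rather than discarding it.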
Abstract:
Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and is not an easy task for transportation planners contemplating an upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities, from suburban neighborhoods to the Central Business District (CBD), and utilize similar right-of-way (ROW) categories. This research aims to develop a method to assist transportation planners and decision makers in determining the more feasible of the two systems. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower operating and maintenance (O&M) costs. This dissertation examines the factors impacting capacity and costs, and develops capacity-based cost-estimation models for the LRT and BRT systems. Various ROW categories and alignment configurations are also considered in the models, which build on Kikuchi's fleet-size model (1985) and a cost-allocation method to estimate capacity and costs. Because many transportation planning and operation scenarios are possible, comparisons between LRT and BRT are complicated. A user-friendly computer interface integrating the capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was therefore developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison. The cost models and LBCostor can be used to analyze transit volumes, alignments, ROW configurations, numbers of stops and stations, headways, vehicle sizes, and traffic-signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
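The capacity-to-fleet-size logic underlying such cost models can be sketched with the generic transit relations: the minimum headway is set by peak demand versus vehicle capacity, and the fleet size is the round-trip (cycle) time divided by the headway. This is a back-of-envelope illustration with made-up numbers, not Kikuchi's (1985) model verbatim.

```python
import math

# Illustrative inputs (all hypothetical)
route_length_km = 15.0
avg_speed_kmh = 30.0
terminal_time_min = 6.0          # layover at each end
peak_demand_pph = 3600.0         # passengers per hour, peak direction
veh_capacity = 180.0             # passengers per vehicle (e.g. one LRT car)

# Round-trip cycle time: out-and-back running time plus both layovers
cycle_min = 2 * route_length_km / avg_speed_kmh * 60 + 2 * terminal_time_min

# Capacity-driven headway: vehicles must arrive often enough to carry demand
headway_min = 60.0 * veh_capacity / peak_demand_pph

# Vehicles needed to sustain that headway over the whole cycle
fleet = math.ceil(cycle_min / headway_min)
```

Because BRT vehicles carry fewer passengers than LRT cars, the same demand forces shorter headways and a larger fleet, which is exactly why capacity feeds into the O&M cost comparison.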
Abstract:
This dissertation addresses three issues in the political economy of growth literature. The first study empirically tests the hypothesis that income inequality influences the size of a country's sovereign debt, for a sample of developing countries over the period 1970–1990. The argument examined is that governments tend to yield to popular pressure to engage in redistributive policies, partially financed by foreign borrowing. Facing increased risk of default, international creditors limit the credit they extend, with the result that borrowing countries invest less and grow more slowly. The findings do not support the negative relationship between inequality and sovereign debt, as there is evidence of increases in multilateral, countercyclical flows until the mid-1980s in Latin America; the hypothesis would hold for the period 1983–1990. Debt flows and levels appear to be positively correlated with growth, as expected. The second study empirically investigates the hypothesis that pronounced inequality leads to unconsolidated democracies. We test for a nonmonotonic relationship between inequality and democracy in a sample of Latin American countries over the period 1970–2000, where democracy appears to consolidate at some intermediate level of inequality. We find that the nonmonotonic relationship holds under instrumental variables methods; Bolivia appears to be a case of unconsolidated democracy. The positive relationship between per capita income and democracy disappears once fixed effects are introduced. The third study explores the nonlinear relationship between per capita income and private saving levels in Latin America. Several estimation methods are presented; however, only the estimation of a dynamic specification through a state-of-the-art generalized method of moments (GMM) estimator yields consistent estimates with increased efficiency. The results support the hypothesis that income positively affects private saving, while system GMM reveals nonlinear effects at income levels exceeding those included in this sample for the period 1960–1994. We also find that growth, government dissaving, and tightening of credit constraints have a highly significant and positive effect on private saving.
Abstract:
This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges, and buildings. ETO views each product as a delivery item in an industrial project and needs an accurate estimate of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out case by case and in a non-procedural fashion, limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time consuming, this approach usually does not lead to an accurate cost estimate; errors range from 30% to 50%. This research proposes a generic cost-modeling methodology for application in ETO operations across industry domains. Using the proposed methodology, a cost estimator can develop a cost-estimation model for a chosen ETO industry in a more expeditious, systematic, and accurate manner. The methodology was developed by following the meta-methodology outlined by Thomann. Deploying it, cost-estimation models were created in two industry domains (building construction and steel-milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates are significantly more accurate than the practitioners' actual estimates, with a mean absolute error rate of 17.3%. This research fills an important need for quick and accurate cost estimation across ETO industries. It differs from existing approaches in that the methodology can be used to quickly customize a cost-estimation model for a chosen application domain. Beyond more accurate estimation, its major contributions are its transferability to other users and its applicability to different ETO operations.
Abstract:
An iterative travel-time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. The scheme extends the conventional kernel-estimator-based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs' traffic patterns with historical traffic behavior. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville. Experimental results illustrate that the proposed scheme significantly reduces both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain the high-quality real-time traffic information essential to the performance of the AMPRFP algorithm, a data-clean-scheme-enhanced empirical learning (DCSEEL) algorithm is also introduced. This method investigates the correlation between distance and direction in the geometric map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error in the estimated distance, and a direction filter is developed to remove joints that negatively influence localization accuracy. Synthetic experiments in urban, suburban, and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining a cellular probe's position; the results show that the probe's localization accuracy is notably improved. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a one-dimensional (1-D) curve-matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant reduction in computational cost without affecting the accuracy of the matching process.
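The 1-D curve matching mentioned above amounts to sliding a short template along a longer signal and scoring each offset with a normalized (Pearson) correlation. The direct implementation below shows what is being computed; a real FNCC implementation obtains the same scores much faster via FFTs and running sums. The "travel-time" curve and template are synthetic and illustrative.

```python
import numpy as np

def ncc_match(signal, template):
    """Pearson correlation of `template` against every window of `signal`."""
    m = len(template)
    t = (template - template.mean()) / template.std()
    scores = np.empty(len(signal) - m + 1)
    for i in range(len(scores)):
        w = signal[i:i + m]
        scores[i] = np.mean((w - w.mean()) / w.std() * t)
    return scores

rng = np.random.default_rng(5)
base = np.cumsum(rng.normal(size=400))        # a wandering "travel-time" profile
template = base[120:180].copy()               # the segment to locate
noisy = base + rng.normal(scale=0.05, size=400)
scores = ncc_match(noisy, template)
best = int(np.argmax(scores))                 # recovered offset, near 120
```

Because the scores are normalized, the match is invariant to the mean level and scale of each window, which is exactly the property that makes Pearson-style matching robust for curve alignment.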
Abstract:
This dissertation focused on longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey. The first essay used data from 2004-2008 and examined the simultaneous relationship among a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions in the survey. A negative binomial fixed-effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which was used as a proxy for innovation. The essay demonstrated that employee well-being positively affects a firm's innovation, while a higher leverage ratio has a negative impact on innovation; no significant relation was found between leverage and employee well-being. The second essay used data from 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and on the precision of the signals he receives from the market. Also, there was no reason to expect the speed of learning to be related to firm growth in one direction over another. The third essay used data from 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time and explored the association between an early diversification strategy and the firm's survival rate. A semi-parametric Cox proportional hazards model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and jurors' strong reliance on eyewitness testimony, researchers have sought safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known about the psychological processes that govern identification responses. The purpose of the current research was to expand theoretical knowledge of eyewitness identification decisions by exploring two memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether confidence adds significantly to the understanding of eyewitness memory. In two experiments, an encoding-based and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through remember-know judgments (Gardiner & Richardson-Klavehn, 2000). Experiment 1 examined the effects of divided attention and lineup presentation format (simultaneous vs. sequential); Experiment 2 examined perceptual distance and lineup response deadline. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality and response criterion, whereas know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and greater reliance on familiarity. The application of signal detection theory and dual-process theory produced predictable results for both system and estimator variables. These theories were also compared with measures of general confidence, calibration, and diagnosticity; applying the additional confidence measures in conjunction with the two theories gave a more in-depth explanation than either theory alone. The general conclusion is therefore that eyewitness identifications can be understood more completely by applying theory and examining confidence. Future directions and policy implications are discussed.
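The signal detection quantities discussed above are computed directly from hit and false-alarm rates: d' indexes discrimination ability and c indexes the response criterion. The sketch below uses Python's standard library with illustrative rates, not values from the experiments.

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse standard-normal CDF ("z-score of p")

hit_rate = 0.80   # correct identifications of the culprit (illustrative)
fa_rate = 0.20    # false alarms, e.g. choosing an innocent filler (illustrative)

d_prime = z(hit_rate) - z(fa_rate)              # discrimination ability
criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # c > 0 = conservative choosing
```

With these symmetric rates, d' is about 1.68 and the criterion is zero (neither lenient nor conservative); a shift toward lenient choosing, as in the difficult-retrieval conditions above, would show up as c moving negative while d' stays put.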
Abstract:
The three-parameter lognormal distribution extends the two-parameter lognormal distribution to meet the needs of biological, sociological, and other fields. Numerous research papers have been published on parameter estimation for lognormal distributions. The inclusion of the location parameter introduces technical difficulties for parameter estimation, especially for interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well, and the performance of the proposed point estimator is compared with that of the maximum likelihood estimator, which is widely used in practice. Simulation results show that the proposed method is less biased in estimating the location parameter. The large-sample case is also discussed.
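To make the role of the location (threshold) parameter concrete: if X = gamma + exp(mu + sigma*Z), then for symmetric probabilities p and 1-p the quantiles satisfy (q_p - gamma)(q_{1-p} - gamma) = (median - gamma)^2, which can be solved for gamma. The sketch below applies this classical quantile-based point estimator to simulated data; it is an illustration of estimating the location parameter, not the exact estimator proposed in the paper.

```python
import numpy as np

# Simulate a three-parameter lognormal: X = gamma + exp(mu + sigma*Z)
rng = np.random.default_rng(11)
gamma, mu, sigma = 5.0, 1.0, 0.5        # true parameters (illustrative)
x = gamma + np.exp(mu + sigma * rng.standard_normal(100_000))

# Quantile-based location estimate: since z_{0.95} = -z_{0.05},
# (q1 - g)(q3 - g) = (q2 - g)^2  =>  g = (q1*q3 - q2^2) / (q1 + q3 - 2*q2)
q1, q2, q3 = np.quantile(x, [0.05, 0.50, 0.95])
gamma_hat = (q1 * q3 - q2**2) / (q1 + q3 - 2 * q2)
```

The estimate lands close to the true threshold of 5.0; the technical difficulty the paper addresses is that, unlike here, exact interval statements about gamma are hard to obtain because the likelihood is unbounded as gamma approaches the sample minimum.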