477 results for causality probability
Abstract:
Principal Topic: Entrepreneurship is key to employment, innovation and growth (Acs & Mueller, 2008), and as such has been the subject of tremendous research in both the economic and management literatures since Solow (1957), Schumpeter (1934, 1943), and Penrose (1959). The presence of entrepreneurs in the economy is a key factor in the success or failure of countries to grow (Audretsch and Thurik, 2001; Dejardin, 2001). Further studies focus on the conditions of existence of entrepreneurship; the influential factors invoked are historical, cultural, social, institutional, or purely economic (North, 1997; Thurik, 1996 & 1999). Of particular interest, beyond the reasons behind the existence of entrepreneurship, are entrepreneurial survival and good ''performance'' factors. Using cross-country firm data analysis, La Porta & Shleifer (2008) confirm that informal micro-businesses provide on average half of all economic activity in developing countries. They find that these are utterly unproductive compared to formal firms, and conclude that the informal sector serves as a social security net ''keep[ing] millions of people alive, but disappearing over time'' (abstract). Robison (1986) and Hill (1996, 1997) posit that the Indonesian government under Suharto always pointed to the lack of indigenous entrepreneurship, thereby motivating the nationalisation of all industries. Furthermore, the same literature also points to the fact that small businesses were mostly left out of development programmes because they were assumed to be less productive and to have less productivity potential than larger ones. Vial (2008) challenges this view and shows that small firms represent about 70% of firms and 12% of total output, but contribute 25% of total factor productivity growth on average over the period 1975-94 in the industrial sector (Table 10, p.316). ---------- Methodology/Key Propositions: A review of the empirical literature points to several under-researched questions.
Firstly, we assess whether there is evidence of small family-business entrepreneurship in Indonesia. Secondly, we examine and present the characteristics of these enterprises, along with the size of the sector and its dynamics. Thirdly, we study whether these enterprises underperform compared to the larger scale industrial sector, as suggested in the literature. We reconsider performance measurements for micro family-owned businesses. We suggest that, besides productivity measures, performance could be appraised by both the survival probability of the firm and the amount of household asset formation. We compare micro family-owned and larger industrial firms' survival probabilities after the 1997 crisis and their capital productivity, then compare the household assets of families involved in business with those of families that are not. Finally, we examine human and social capital as moderators of enterprises' performance. In particular, we assess whether a higher level of education and community participation have an effect on the likelihood of running a family business, and whether they have an impact on households' asset levels. We use the IFLS database compiled and published by the RAND Corporation. The data is a rich panel dataset of communities, households, and individuals in four waves: 1993, 1997, 2000, 2007. We focus on the 1997 and 2000 waves in order to investigate entrepreneurship behaviours in turbulent times, i.e. during the 1997 Asian crisis. We use aggregate individual data, and focus on household data in order to study micro family-owned businesses. The IFLS data covers roughly 7,600 households in 1997 and over 10,000 households in 2000, with about 95% of 1997 households re-interviewed in 2000. Households were interviewed in 13 of the 27 provinces as defined before 2001. Those 13 provinces were targeted because they account for 83% of the population. A full description of the data is provided in Frankenberg and Thomas (2000), and Strauss et al. (2004).
We deflate all monetary values in Rupiah with the World Development Indicators Consumer Price Index, base 100 in 2000. ---------- Results and Implications: We find that in Indonesia, entrepreneurship is widespread and two thirds of households hold one or several family businesses. In rural areas, in 2000, 75% of households ran one or several businesses. The proportion of households holding both a farm and a non-farm business is higher in rural areas, underlining the reliance of rural households on self-employment, especially after the crisis. Those businesses come in various sizes, from very small to larger ones. The median business production value represents less than the annual national minimum wage. Figures show that at least 75% of farm businesses produce less than the annual minimum wage, with non-farm businesses more often producing at least the minimum wage. However, this is only one part of the story, as production is not the only ''output'' or effect of the business. We show that the survival rate of those businesses ranges between 70 and 82% after the 1997 crisis, which contrasts with the 67% survival rate for the formal industrial sector (Ter Wengel & Rodriguez, 2006). While micro family-owned businesses might be relatively small in terms of production, they also provide stability in times of crisis. For those businesses that provide business asset figures, we show that capital productivity is fairly high, with rates that are ten times higher for non-farm businesses. Results show that households running a business have larger family assets, and households are better off in urban areas. We run a panel logit model in order to test the effect of human and social capital on the existence of businesses among households. We find that non-farm businesses are more likely to appear in households with higher human and social capital situated in urban areas.
Farm businesses are more likely to appear in lower human capital and rural contexts, while still being supported by community participation. The estimation of our panel data model confirms that households are more likely to have higher family assets if situated in urban areas; the higher the education level, the larger the assets; and running a business increases the likelihood of having larger assets. This is especially true for non-farm businesses, which have a clearly larger and more significant effect on assets than farm businesses. Finally, social capital in the form of community participation also has a positive effect on assets. Those results confirm the existence of a strong entrepreneurship culture among Indonesian households. Investigating survival rates also shows that those businesses are quite stable, even in the face of a violent crisis such as the 1997 one, and as a result can provide a safety net. Finally, considering household assets (the returns of business to the household) rather than profit or productivity (the returns of business to itself) shows that households running a business are better off. While we demonstrate that human and social capital are key to business existence, survival and performance, those results open avenues for further research regarding the factors that could hamper growth of those businesses in terms of output and employment.
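As a rough illustration of the logit step described above, the following sketch simulates household data and fits a logit by Newton-Raphson. The variable names and coefficients are invented for illustration; they are not the IFLS fields or the thesis's estimates, and a pooled cross-section stands in for the panel structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
educ = rng.normal(0, 1, n)      # standardised years of education (illustrative)
social = rng.normal(0, 1, n)    # community-participation index (illustrative)
urban = rng.integers(0, 2, n)   # 1 = urban household
X = np.column_stack([np.ones(n), educ, social, urban])
true_beta = np.array([-0.5, 0.8, 0.4, 0.6])   # invented coefficients
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)          # 1 = household runs a non-farm business

# Newton-Raphson for the logit maximum-likelihood estimate
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)                     # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # Fisher information
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))  # estimates should be close to true_beta
```

A positive estimated coefficient on education or the urban dummy corresponds to the qualitative finding quoted above for non-farm businesses.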
Abstract:
In this thesis we are interested in financial risk and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, maybe controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling simultaneously time-varying volatility (conditional variance) and skewness. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate semi-parametrically the conditional mean and conditional variance. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
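As a point of reference for the percentile view of VaR described above, here is a minimal moving-window sketch using the empirical percentile (historical simulation, the benchmark mentioned in the abstract) rather than a fitted GLD. The heavy-tailed return series, window length and confidence level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# heavy-tailed daily returns (Student-t, a crude stand-in for real index data)
returns = rng.standard_t(df=4, size=1500) * 0.01

window = 250  # roughly one trading year
alpha = 0.01  # 99% confidence level

# One-day VaR at time t: the negated lower alpha-percentile of the
# returns in the trailing window (a loss threshold, so positive).
var_series = np.array([
    -np.percentile(returns[t - window:t], 100 * alpha)
    for t in range(window, len(returns))
])
print(f"mean 99% VaR: {var_series.mean():.4f}")
```

A GLD-based estimate would replace the empirical percentile with the percentile of a GLD fitted to each window.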
Abstract:
Boards of directors are thought to provide access to a wealth of knowledge and resources for the companies they serve, and are considered important to corporate governance. Under the Resource Based View (RBV) of the firm (Wernerfelt, 1984), boards are viewed as a strategic resource available to firms. As a consequence there has been a significant research effort aimed at establishing a link between board attributes and company performance. In this thesis I explore and extend the study of interlocking directorships (Mizruchi, 1996; Scott, 1991a) by examining the links between directors’ opportunity networks and firm performance. Specifically, I use resource dependence theory (Pfeffer & Salancik, 1978) and social capital theory (Burt, 1980b; Coleman, 1988) as the basis for a new measure of a board’s opportunity network. I contend that directors’ formal company ties and their social ties together determine the opportunity network through which they are able to access and mobilise resources for their firms. This approach is based on recent studies that suggest the measurement of interlocks at the director level, rather than at the firm level, may be a more reliable indicator of this phenomenon. This research uses publicly available data drawn from Australia’s top-105 listed companies and their directors in 1999. I employ Social Network Analysis (SNA) (Scott, 1991b) using the UCINET software to analyse the individual director’s formal and social networks. SNA is used to measure the number of ties a director has to other directors in the top-105 company director network at both one and two degrees of separation, that is, direct ties and indirect (or ‘friend of a friend’) ties. These individual measures of director connectedness are aggregated to produce a board-level network metric for comparison with measures of a firm’s performance using multiple regression analysis. Performance is measured with accounting-based and market-based measures.
Findings indicate that better-connected boards are associated with higher market-based company performance (measured by Tobin’s q). However, weaker and mostly unreliable associations were found for the accounting-based performance measure (ROA). Furthermore, formal (or corporate) network ties are a stronger predictor of market performance than total network ties (comprising social and corporate ties). Similarly, strong ties (connectedness at degree-1) are better predictors of performance than weak ties (connectedness at degree-2). My research makes four contributions to the literature on director interlocks. First, it introduces a new way of measuring a board’s opportunity network based on the director, rather than the company, as the unit of interlock. Second, it establishes evidence of a relationship between market-based measures of firm performance and the connectedness of that firm’s board. Third, it establishes that directors’ formal corporate ties matter more to market-based firm performance than their social ties. Fourth, it establishes that directors’ strong direct ties are more important to market-based performance than weak ties. The thesis concludes with implications for research and practice, including a more speculative interpretation of these results. In particular, I raise the possibility of reverse causality, that is, that networked directors seek to join high-performing companies. Thus, the relationship may be a result of symbolic action by companies seeking to increase the legitimacy of their firms, rather than a reflection of the social capital available to the companies. This is an important consideration worthy of future investigation.
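The degree-1/degree-2 connectedness idea described above can be sketched on a toy adjacency matrix. The four-director network below is invented for illustration and has nothing to do with the top-105 data.

```python
import numpy as np

# Toy director network: an edge means two directors share a board seat.
# Directors 0-1-2-3 form a simple chain.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

direct = A.sum(axis=1)  # degree-1 ties: directly connected directors

# Degree-2 ('friend of a friend') ties: reachable via a 2-step path,
# excluding directors already tied directly and excluding oneself.
two_step = ((A @ A > 0) & (A == 0) & ~np.eye(4, dtype=bool)).astype(int)
indirect = two_step.sum(axis=1)

print(direct, indirect)
```

A board-level metric like the one in the thesis would then aggregate these per-director counts over each company's board.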
Abstract:
The sinking of the Titanic in April 1912 took the lives of 68 percent of the people aboard. Who survived? It was women and children who had a higher probability of being saved, not men. Likewise, people traveling in first class had a better chance of survival than those in second and third class. British passengers were more likely to perish than members of other nations. This extreme event represents a rare case of a well-documented life and death situation where social norms were enforced. This paper shows that economic analysis can account for human behavior in such situations.
Abstract:
During periods of market stress, electricity prices can rise dramatically. Electricity retailers cannot pass these extreme prices on to customers because of retail price regulation. Improved prediction of these price spikes is therefore important for risk management. This paper builds a time-varying-probability Markov-switching model of Queensland electricity prices, aimed particularly at forecasting price spikes. Variables capturing demand and weather patterns are used to drive the transition probabilities. Unlike traditional Markov-switching models that assume normality of the prices in each state, the model presented here uses a generalised beta distribution to allow for the skewness in the distribution of electricity prices during high-price episodes.
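A minimal sketch of the time-varying transition probability idea described above: the probability of switching into the spike state is driven by a demand variable through a logistic link. The coefficients and demand values below are invented for illustration, not the paper's estimates.

```python
import numpy as np

def spike_transition_prob(demand, b0=-6.0, b1=0.8):
    """P(spike state at t | normal state at t-1), driven by demand
    through a logistic link. b0 and b1 are illustrative coefficients."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * demand)))

low = spike_transition_prob(4.0)   # mild demand: spikes unlikely
high = spike_transition_prob(9.0)  # high demand: spikes far more likely
print(low, high)
```

In the full model, a second logistic equation would govern the probability of leaving the spike state, and the prices within the spike state would follow the generalised beta distribution mentioned above.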
Abstract:
This paper illustrates the prediction of opponent behaviour in a competitive, highly dynamic, multi-agent and partially observable environment, namely RoboCup small size league robot soccer. The performance is illustrated in the context of the highly successful robot soccer team, the RoboRoos. The project is broken into three tasks: classification of behaviours; modelling and prediction of behaviours; and integration of the predictions into the existing planning system. A probabilistic approach is taken to dealing with the uncertainty in the observations and to representing the uncertainty in the prediction of the behaviours. Results are shown for a classification system using a Naïve Bayesian Network that determines the opponent’s current behaviour. These results are compared to an expert-designed fuzzy behaviour classification system. The paper illustrates how the modelling system will use the information from behaviour classification to produce probability distributions that model the manner in which the opponents perform their behaviours. These probability distributions are shown to match well with the existing multi-agent planning system (MAPS) that forms the core of the RoboRoos system.
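The classification stage described above can be illustrated with a toy naive Bayes classifier over discretised observations. The behaviours and binary features below are invented examples, not the RoboRoos feature set.

```python
import math
from collections import Counter, defaultdict

# Invented training data: (discretised observation, labelled behaviour)
train = [
    ({"near_ball": 1, "facing_goal": 1}, "shoot"),
    ({"near_ball": 1, "facing_goal": 1}, "shoot"),
    ({"near_ball": 1, "facing_goal": 0}, "dribble"),
    ({"near_ball": 0, "facing_goal": 1}, "position"),
    ({"near_ball": 0, "facing_goal": 0}, "position"),
]

prior = Counter(label for _, label in train)
cond = defaultdict(Counter)  # label -> count of each feature being 1
for feats, label in train:
    for f, v in feats.items():
        cond[label][f] += v

def posterior(feats):
    """Unnormalised log-posterior over behaviours, Laplace-smoothed."""
    scores = {}
    for label, n in prior.items():
        logp = math.log(n / len(train))
        for f, v in feats.items():
            p1 = (cond[label][f] + 1) / (n + 2)  # P(feature=1 | behaviour)
            logp += math.log(p1 if v else 1 - p1)
        scores[label] = logp
    return scores

obs = {"near_ball": 1, "facing_goal": 1}
scores = posterior(obs)
best = max(scores, key=scores.get)
print(best)
```

The paper's system operates on richer, noisier observations, but the posterior-over-behaviours structure is the same.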
Abstract:
Why so many people pay their taxes, even though fines and audit probability are low, is a central question in the tax compliance literature. Positing a homo oeconomicus with a refined motivation structure sheds light on this puzzle. This paper provides empirical evidence for the relevance of conditional cooperation, using survey data from 30 West and East European countries. We find a high correlation between perceived tax evasion and tax morale. The results remain robust after accounting for endogeneity and conducting several robustness tests. We also observe a strong positive correlation between institutional quality and tax morale. Keywords: Tax morale; Tax compliance; Tax evasion; Pro-social behavior; Institutions
Abstract:
Objectives: The objectives of this study were to investigate the differences in culture, attitudes and social networks between Australian and Taiwanese men and women, and to identify the factors that predict midlife men and women’s quality of life in both countries. Methods: A stratified random sample strategy based on probability proportional sampling (PPS) was used to investigate the quality of life of 278 Australian and 398 Taiwanese midlife men and women. Multiple regression modelling and classification and regression trees (CARTs) were performed to examine the potential effects of culture, attitudes, social networks, sociodemographic factors and religion/spirituality on midlife men and women’s quality of life in both Australia and Taiwan. Results: The results of this study suggest that culture serves multiple functions and interacts with attitudes, social networks and individual factors to influence a person’s quality of life. Significant relationships were found involving the interaction between cultural circumstances and a person’s internal and external factors. The research found that good social support networks and a healthy, optimistic disposition may significantly enhance midlife men and women’s quality of life. Conclusion: The study indicated that there is a significant relationship between culture, attitudes, social networks and quality of life in midlife Australian and Taiwanese men and women. People who had higher levels of horizontal individualism and collectivism, positive attitudes and better social support had better psychological, social, physical and environmental health, while vertical individualists with competitive characteristics tended to experience a lower quality of life.
This study has highlighted areas where opportunities exist to further reflect upon contemporary social health policies for Australian and Taiwanese societies and also within the global perspective, in order to provide enhanced quality care for growing midlife populations.
Abstract:
In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
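For contrast with LMVQ, here is a minimal sketch of the LVQ1 baseline mentioned above (Kohonen's algorithm, not the paper's), on invented two-class synthetic data: the nearest prototype is attracted to a correctly classified sample and repelled from a misclassified one.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two well-separated Gaussian clusters, one per class (illustrative data)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

protos = np.array([[-1.0, -1.0], [1.0, 1.0]])  # one prototype per class
proto_y = np.array([0, 1])

lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):
        k = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winning prototype
        sign = 1.0 if proto_y[k] == yi else -1.0            # attract or repel
        protos[k] += sign * lr * (xi - protos[k])           # LVQ1 update

# Classify by nearest trained prototype
pred = proto_y[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
print((pred == y).mean())
```

LMVQ replaces this heuristic attract/repel rule with gradient ascent on an explicit margin objective derived from class-conditional density estimates.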
Abstract:
Spectrum sensing is considered to be one of the most important tasks in cognitive radio. Many sensing detectors have been proposed in the literature, with the common assumption that the primary user is either fully present or completely absent within the window of observation. In reality, there are scenarios where the primary user signal only occupies a fraction of the observed window. This paper aims to analyse the effect of the primary user duty cycle on spectrum sensing performance through the analysis of a few common detectors. Simulations show that the probability of detection degrades severely with reduced duty cycle, regardless of the detection method. Furthermore, we show that reducing the duty cycle degrades performance more than lowering the signal strength does.
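The duty-cycle effect can be reproduced with a small energy-detector simulation: when the primary signal occupies only part of the observation window, less signal energy is collected and the detection probability falls. The signal model, SNR and window length below are invented for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def detection_prob(duty, snr=1.0, n=100, trials=2000, pfa=0.05):
    """Monte-Carlo detection probability of an energy detector when the
    primary signal occupies only a 'duty' fraction of the n-sample window."""
    noise = rng.normal(size=(trials, n))
    occupied = np.arange(n) < int(duty * n)   # signal present in a fraction
    signal = np.sqrt(snr) * occupied          # constant amplitude while present
    energy = ((noise + signal) ** 2).sum(axis=1)
    # threshold from the noise-only energy distribution at the target false alarm
    thresh = np.quantile((rng.normal(size=(trials, n)) ** 2).sum(axis=1), 1 - pfa)
    return (energy > thresh).mean()

p_full = detection_prob(1.0)
p_low = detection_prob(0.3)
print(p_full, p_low)  # detection probability degrades with reduced duty cycle
```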
Abstract:
We have developed a new experimental method for interrogating statistical theories of music perception by implementing these theories as generative music algorithms. We call this method Generation in Context. This method differs from most experimental techniques in music perception in that it incorporates aesthetic judgments. Generation in Context is designed to measure percepts for which the musical context is suspected to play an important role. In particular, the method is suitable for the study of perceptual parameters which are temporally dynamic. We outline a use of this approach to investigate David Temperley’s (2007) probabilistic melody model, and provide some provisional insights as to what is revealed about the model. We suggest that Temperley’s model could be improved by dynamically modulating the probability distributions according to the changing musical context.
Abstract:
Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters.
The recommended approaches for symptom cluster identification using data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
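The factor-retention step mentioned above (scree plot inspection) can be illustrated by computing the eigenvalues of a simulated symptom correlation matrix. The two-factor structure, loadings and sample size below are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 300, 6
f = rng.normal(size=(n, 2))                    # two latent symptom factors
loadings = np.array([[.8, 0], [.7, 0], [.6, 0],
                     [0, .8], [0, .7], [0, .6]])  # invented loadings
symptoms = f @ loadings.T + 0.5 * rng.normal(size=(n, k))  # symptom ratings

# Eigenvalues of the symptom correlation matrix, in descending order:
# these are the values plotted in a scree plot.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(symptoms.T)))[::-1]
print(np.round(eigvals, 2))  # a sharp drop after the second eigenvalue
```

The "elbow" after the second eigenvalue is what a scree plot reader uses to retain two factors; the Minimum Average Partial procedure formalises a similar decision.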
Abstract:
This thesis discusses various aspects of the integrity monitoring of GPS applied to civil aircraft navigation in different phases of flight. These flight phases include en route, terminal, non-precision approach and precision approach. The thesis includes four major topics: the probability problem of GPS navigation service, risk analysis of aircraft precision approach and landing, theoretical analysis of Receiver Autonomous Integrity Monitoring (RAIM) techniques and RAIM availability, and GPS integrity monitoring at a ground reference station. Particular attention is paid to the mathematical aspects of the GPS integrity monitoring system. The research has been built upon the stringent integrity requirements defined by the civil aviation community, and concentrates on the capability and performance investigation of practical integrity monitoring systems with rigorous mathematical and statistical concepts and approaches. Major contributions of this research are:
• Rigorous integrity and continuity risk analysis for aircraft precision approach. Based on the joint probability density function of the affecting components, the integrity and continuity risks of aircraft precision approach with DGPS were computed. This advances the conventional method of allocating the risk probability.
• A theoretical study of RAIM test power. This is the first time a theoretical study of RAIM test power based on probability and statistical theory has been presented, resulting in a new set of RAIM criteria.
• Development of a GPS integrity monitoring and DGPS quality control system based on a GPS reference station. A prototype GPS integrity monitoring and DGPS correction prediction system has been developed and tested, based on the AUSNAV GPS base station on the roof of the QUT ITE Building.
Abstract:
Reliability analysis has several important engineering applications. Designers and operators of equipment are often interested in the probability of the equipment operating successfully to a given age - this probability is known as the equipment's reliability at that age. Reliability information is also important to those charged with maintaining an item of equipment, as it enables them to model and evaluate alternative maintenance policies for the equipment. In each case, information on failures and survivals of a typical sample of items is used to estimate the required probabilities as a function of the item's age, this process being one of many applications of the statistical techniques known as distribution fitting. In most engineering applications, the estimation procedure must deal with samples containing survivors (suspensions or censorings); this thesis focuses on several graphical estimation methods that are widely used for analysing such samples. Although these methods have been current for many years, they share a common shortcoming: none of them is continuously sensitive to changes in the ages of the suspensions, and we show that the resulting reliability estimates are therefore more pessimistic than necessary. We use a simple example to show that the existing graphical methods take no account of any service recorded by suspensions beyond their respective previous failures, and that this behaviour is inconsistent with one's intuitive expectations. In the course of this thesis, we demonstrate that the existing methods are only justified under restricted conditions. We present several improved methods and demonstrate that each of them overcomes the problem described above, while reducing to one of the existing methods where this is justified. Each of the improved methods thus provides a realistic set of reliability estimates for general (unrestricted) censored samples. Several related variations on these improved methods are also presented and justified. 
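To illustrate how failures and suspensions enter a reliability estimate, here is a sketch of the standard Kaplan-Meier estimator for censored samples. It is shown only as a point of reference for the estimation problem described above, not as one of the thesis's improved methods, and the failure and suspension ages are invented.

```python
# Each record is ("f" = failure, "s" = suspension/censoring), age at event.
events = [("f", 50), ("s", 60), ("f", 80), ("s", 90), ("f", 120)]

at_risk = len(events)
reliability = 1.0
curve = []  # (age, estimated reliability just after that failure)
for kind, age in sorted(events, key=lambda e: e[1]):
    if kind == "f":  # a failure steps the reliability estimate down
        reliability *= (at_risk - 1) / at_risk
        curve.append((age, reliability))
    at_risk -= 1     # failures and suspensions both leave the risk set

print(curve)
```

Note that a suspension (e.g. the one at age 60) affects the estimate only by shrinking the risk set for later failures; the thesis's critique is that such methods are not continuously sensitive to the exact ages of suspensions, and its improved methods address precisely that.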
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm. These advantages include a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals. This is performed by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals. This is carried out by computing the tracking model of these coefficients for the stochastic gradient lattice algorithm on average.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property, the polynomial order reducing property of adaptive lattice filters. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that using the proposed algorithms, faster convergence speeds are achieved in the parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness, in comparison to many other algorithms. Also, we discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
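The least-mean p-norm idea underlying the proposed lattice algorithms can be sketched as follows: the LMS error term e is replaced by |e|^(p-1)·sign(e), which bounds the influence of impulsive errors when p < 2. For brevity this sketch uses a transversal (non-lattice) filter, not the thesis's lattice algorithms, and the driving noise is a crude Gaussian-plus-impulse stand-in for an alpha-stable sequence; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n, order, mu, p = 10000, 2, 0.005, 1.2
a_true = np.array([0.5, -0.3])   # AR(2) parameters to estimate (invented)

# Impulsive driving noise: unit Gaussian plus occasional large shocks
noise = rng.normal(size=n) + 10 * rng.normal(size=n) * (rng.random(n) < 0.01)
x = np.zeros(n)
for t in range(2, n):
    x[t] = a_true @ x[t-2:t][::-1] + noise[t]   # AR(2) recursion

w = np.zeros(order)
for t in range(order, n):
    u = x[t-order:t][::-1]          # regressor: the two most recent samples
    e = x[t] - w @ u                # prediction error
    # LMP update: |e|^(p-1)*sign(e) replaces the LMS error term e
    w += mu * np.abs(e) ** (p - 1) * np.sign(e) * u

print(np.round(w, 2))  # estimates should approach a_true
```

With p = 2 the update reduces to ordinary LMS, whose step size is blown up by the impulses; the fractional exponent tames exactly that effect.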