944 results for Quantitative Methods
Abstract:
Re-licensing requirements for professionals who move across borders are widespread. In this paper, we measure the returns to an occupational license using novel data on Soviet-trained physicians who immigrated to Israel. An immigrant re-training assignment rule used by the Israel Ministry of Health provides an exogenous source of variation in re-licensing outcomes. Instrumental variables and quantile treatment effects estimates of the returns to an occupational license indicate excess wages due to occupational entry restrictions and negative selection into licensing status. We develop a model of optimal license acquisition which suggests that the wages of high-skilled immigrant physicians in the nonphysician sector outweigh the lower direct costs that these immigrants face in acquiring a medical license. Licensing thus leads to lower average quality of service. However, the positive earnings effect of entry restrictions far outweighs the lower practitioner-quality earnings effect that licensing induces.
Abstract:
We present simple procedures for the prediction of a real-valued sequence. The algorithms are based on a combination of several simple predictors. We show that if the sequence is a realization of a bounded, stationary and ergodic random process, then the average of squared errors converges, almost surely, to that of the optimum, given by the Bayes predictor. We offer an analogous result for the prediction of stationary Gaussian processes.
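The combination-of-predictors idea can be sketched with a standard exponentially weighted averaging rule. This is only a hedged illustration of the general technique, not the paper's algorithm: the two "simple predictors" (last value and running mean) and the learning rate `eta` are illustrative choices.

```python
import math

def combine_predictions(experts, sequence, eta=0.5):
    """Exponentially weighted average of expert forecasts under squared loss.

    experts: list of functions mapping the observed history to a forecast.
    Returns the sequence of combined forecasts, one per observation.
    """
    log_w = [0.0] * len(experts)          # log-weights, start uniform
    forecasts = []
    history = []
    for y in sequence:
        preds = [f(history) for f in experts]
        m = max(log_w)                    # shift for numerical stability
        w = [math.exp(lw - m) for lw in log_w]
        total = sum(w)
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)) / total)
        # penalize each expert by its squared error on the new value
        log_w = [lw - eta * (p - y) ** 2 for lw, p in zip(log_w, preds)]
        history.append(y)
    return forecasts

# two illustrative "simple predictors": last observed value, running mean
last = lambda h: h[-1] if h else 0.0
mean = lambda h: sum(h) / len(h) if h else 0.0
seq = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
out = combine_predictions([last, mean], seq)
```

Under stationarity and ergodicity, results of this type show the time-averaged squared error of such combinations approaching that of the best achievable (Bayes) predictor.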
Abstract:
It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
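The paper's procedure is bootstrap-based and studentized; purely as a hedged illustration of the generic step-down idea it builds on, here is the classical Holm procedure, which controls the familywise error rate under arbitrary dependence (it ignores the joint dependence structure that the paper's method exploits for extra power).

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm's step-down multiple testing procedure.

    Returns one boolean rejection decision per hypothesis, controlling the
    familywise error rate at level alpha under arbitrary dependence.
    (A classical step-down scheme shown only for illustration; the paper's
    own procedure is bootstrap-based and data-dependent.)
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # compare the (step+1)-th smallest p-value with alpha / (m - step)
        if pvalues[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # once one hypothesis survives, all larger p-values survive
    return reject

decisions = holm_stepdown([0.001, 0.04, 0.03, 0.2], alpha=0.05)
```

In the example, only the smallest p-value (0.001 ≤ 0.05/4) is rejected; the next one, 0.03, fails its threshold 0.05/3, and stepping stops there.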
Abstract:
In this paper we propose a subsampling estimator for the distribution of statistics diverging at either known or unknown rates when the underlying time series is strictly stationary and strong mixing. Based on our results we provide a detailed discussion of how to estimate extreme order statistics with dependent data and present two applications to assessing financial market risk. Our method performs well in estimating Value at Risk and provides a superior alternative to Hill's estimator in operationalizing Safety-First portfolio selection.
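For context on the benchmark mentioned, below is a textbook implementation of Hill's tail-index estimator; the choice of k (the number of upper order statistics) and the Pareto sanity check are illustrative, and this is not the paper's subsampling method.

```python
import math
import random

def hill_estimator(sample, k):
    """Hill estimator of the tail index gamma from the k largest observations:

        gamma_hat = (1/k) * sum_{i=1}^{k} log( X_(n-i+1) / X_(n-k) ),

    where X_(1) <= ... <= X_(n) are the order statistics (all positive).
    """
    x = sorted(sample)
    n = len(x)
    if not 0 < k < n:
        raise ValueError("k must satisfy 0 < k < n")
    threshold = x[n - k - 1]              # the (k+1)-th largest observation
    return sum(math.log(x[n - i] / threshold) for i in range(1, k + 1)) / k

# sanity check on an i.i.d. Pareto sample with tail index 1/2 (alpha = 2):
random.seed(0)
pareto = [random.random() ** -0.5 for _ in range(5000)]
gamma_hat = hill_estimator(pareto, k=500)   # should be close to 0.5
```

With dependent data, as the abstract notes, naive use of this estimator can mislead, which is precisely the gap the subsampling approach addresses.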
Abstract:
We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation on a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
Abstract:
The educational system in Spain is undergoing a reorganization. At present, high-school graduates who want to enroll at a public university must take a set of examinations, the Pruebas de Aptitud para el Acceso a la Universidad (PAAU). A "new formula" (components, weights, type of exam, ...) for university admission is being discussed. The present paper summarizes part of the research done by the author in her PhD thesis. The context for this thesis is the evaluation of large-scale and complex systems of assessment. The main objectives were: to achieve a deep knowledge of the entire university admissions process in Spain, to discover the main sources of uncertainty, and to promote empirical research in the continual improvement of the entire process. Focusing on suitable statistical models and strategies that allow one to highlight the imperfections of the system and reduce them, the paper develops, among other approaches, some applications of multilevel modeling.
Abstract:
The use of simple and multiple correspondence analysis is well-established in social science research for understanding relationships between two or more categorical variables. By contrast, canonical correspondence analysis, which is a correspondence analysis with linear restrictions on the solution, has become one of the most popular multivariate techniques in ecological research. Multivariate ecological data typically consist of frequencies of observed species across a set of sampling locations, as well as a set of observed environmental variables at the same locations. In this context the principal dimensions of the biological variables are sought in a space that is constrained to be related to the environmental variables. This restricted form of correspondence analysis has many uses in social science research as well, as is demonstrated in this paper. We first illustrate the result that canonical correspondence analysis of an indicator matrix, restricted to be related to an external categorical variable, reduces to a simple correspondence analysis of a set of concatenated (or stacked) tables. Then we show how canonical correspondence analysis can be used to focus on, or partial out, a particular set of response categories in sample survey data. For example, the method can be used to partial out the influence of missing responses, which usually dominate the results of a multiple correspondence analysis.
Abstract:
In this work we study older workers' (aged 50-64) labor force transitions after a health/disability shock. We find that the probability of continuing to work decreases with both age and the severity of the shock. Moreover, we find strong interactions between age and severity in the 50-64 age range and none in the 30-49 age range. Regarding demographics, we find that being female and being married reduce the probability of continuing to work. On the contrary, being the main breadwinner, education and skill levels increase it. Interestingly, the effect of some demographic variables changes sign when we look at transitions from inactivity to work; this is the case for being married or having a working spouse. Undoubtedly, leisure complementarities should play a role in the latter case. Since the data we use contain very detailed information on disabilities, we are able to evaluate the marginal effect of each type of disability on the probability of either continuing to work or returning to work. Some of these results may have strong policy implications.
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not say anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
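In display form, the strengthened ("strong minimax") statement described above has the following shape, writing $L(g_n)$ for the probability of error of $g_n$ and $c$ for a universal constant (this is a schematic restatement of the abstract, not a quotation of the paper's theorem):

```latex
% strong minimax lower bound, schematic form
\forall\, \{g_n\}_{n \ge 1}\ \ \exists\ \text{a distribution of } X
\text{ and a concept } C \in \mathcal{C}\ \text{such that}
\qquad
\mathbf{E}\,L(g_n) \;\ge\; \frac{c\,k}{n}
\qquad \text{for infinitely many } n.
```

The crucial difference from the classical bound is the order of quantifiers: the bad distribution-concept pair is fixed once and for all, rather than being allowed to change with $n$.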
Abstract:
Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
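The setting is the standard prediction-with-expert-advice framework. As a hedged sketch of that framework (not of the paper's minimax analysis), here is a randomized exponentially weighted forecaster; the learning rate `eta` and the toy expert setup are illustrative.

```python
import math
import random

def weighted_majority(expert_preds, outcomes, eta=0.5, seed=0):
    """Randomized exponentially weighted forecaster for a binary sequence.

    expert_preds[t][j] is expert j's prediction (0 or 1) at round t;
    outcomes[t] is the true bit. Returns the forecaster's mistake count.
    """
    rng = random.Random(seed)
    n_experts = len(expert_preds[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        # predict 1 with probability equal to the weight mass on "1"
        p_one = sum(w for w, p in zip(weights, preds) if p == 1) / total
        guess = 1 if rng.random() < p_one else 0
        mistakes += int(guess != y)
        # multiplicatively penalize the experts that erred this round
        weights = [w * math.exp(-eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes

# toy run: expert 0 is always right, expert 1 always wrong
m = weighted_majority([[1, 0]] * 50, [1] * 50)
```

Weight mass concentrates exponentially fast on the correct expert, so the forecaster makes only a handful of mistakes over the 50 rounds; the paper's contribution is the exact minimax behaviour of such relative losses for rich expert classes.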
Abstract:
This paper presents an Optimised Search Heuristic that combines a tabu search method with the verification of violated valid inequalities. The solution delivered by the tabu search is partially destroyed by a randomised greedy procedure, and then the valid inequalities are used to guide the reconstruction of a complete solution. An application of the new method to the Job-Shop Scheduling problem is presented.
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
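Since both methods are SVDs of transformed matrices, the margin-weighted logratio idea can be sketched as follows. This is a schematic reconstruction assuming a strictly positive table (zero cells need separate treatment in any logratio method); the function name and the exact centring convention are my own choices for illustration, not necessarily the paper's.

```python
import numpy as np

def ratio_map_coords(N):
    """Margin-weighted logratio decomposition of a positive contingency table.

    The table is converted to proportions, its elementwise log is
    double-centred using the row and column masses as weights, and a
    weighted SVD yields row/column coordinates.
    """
    P = N / N.sum()
    r = P.sum(axis=1)                      # row masses (weights)
    c = P.sum(axis=0)                      # column masses (weights)
    L = np.log(P)                          # requires strictly positive cells
    mean_r = L @ c                         # weighted column-mean per row
    mean_c = r @ L                         # weighted row-mean per column
    grand = r @ L @ c                      # weighted grand mean
    Lc = L - mean_r[:, None] - mean_c[None, :] + grand
    S = np.sqrt(r)[:, None] * Lc * np.sqrt(c)[None, :]
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * s / np.sqrt(r)[:, None]     # principal row coordinates
    cols = Vt.T / np.sqrt(c)[:, None]      # standard column coordinates
    return rows, cols, s

# for an "independent" rank-1 table the centred log matrix vanishes,
# so all singular values are (numerically) zero
rows, cols, s = ratio_map_coords(np.outer([1.0, 2.0, 3.0], [4.0, 5.0]))
```

The rank-1 check illustrates why the weighting is natural: exactly as in correspondence analysis, a table with no row-column association produces a trivial (zero) solution.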
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression model with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
Theorem 1 of Euler's 1737 paper 'Variae Observationes Circa Series Infinitas' states the astonishing result that the series of all unit fractions whose denominators are perfect powers of integers minus unity has sum one. Euler attributes the theorem to Goldbach. The proof is one of those examples of misuse of divergent series to obtain correct results, so frequent during the seventeenth and eighteenth centuries. We examine this proof closely and, with the help of some insight provided by a modern (and completely different) proof of the Goldbach-Euler theorem, we present a rational reconstruction in terms which could be considered rigorous by modern Weierstrassian standards. At the same time, with a few ideas borrowed from nonstandard analysis, we see how the same reconstruction can also be considered rigorous by modern Robinsonian standards. This last approach, though, is completely in tune with Goldbach and Euler's proof. We hope to convince the reader, then, of how a few simple ideas from nonstandard analysis vindicate Euler's work.
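The theorem in question, that the sum of 1/(p-1) over all perfect powers p = 4, 8, 9, 16, 25, 27, ... equals exactly 1 (i.e., 1/3 + 1/7 + 1/8 + 1/15 + ... = 1), is easy to probe numerically. A small sketch, with an illustrative cutoff:

```python
def goldbach_euler_partial_sum(limit):
    """Partial sum of 1/(p-1) over perfect powers p = m**k (k >= 2, p <= limit).

    Each perfect power is counted once (e.g. 16 = 2**4 = 4**2 contributes a
    single term). The Goldbach-Euler theorem says the full series sums to 1.
    """
    powers = set()                 # set dedupes numbers like 16 and 64
    m = 2
    while m * m <= limit:
        p = m * m
        while p <= limit:
            powers.add(p)
            p *= m
        m += 1
    return sum(1.0 / (p - 1) for p in powers)

s = goldbach_euler_partial_sum(10**6)   # approaches 1 as the limit grows
```

With a cutoff of 10**6 the partial sum already lies within about 0.001 of 1, the remaining gap being dominated by the tail of the squares.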
Abstract:
This work presents an application of multilevel analysis techniques to the study of abstention in the 2000 Spanish general election. The interest of the study is both substantive and methodological. From the substantive point of view, the article intends to explain the causes of abstention and analyze the impact of associationism on it. From the methodological point of view, it is intended to analyze the interaction between individual and context with a model that takes into account the hierarchical structure of the data. The multilevel study in this paper validates the single-level results obtained in previous analyses of abstention and shows that only a fraction of the differences in abstention are explained by the individual characteristics of the electors. Another important fraction of these differences is due to the political and social characteristics of the context. Regarding associationism, the data suggest that individual participation in associations decreases the probability of abstention. However, better indicators are needed in order to capture more properly the effect of associationism on electoral behaviour.