77 results for Data Accessibility
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The objective of this paper is to estimate the impact of residential job accessibility on female employment probability in the metropolitan areas of Barcelona and Madrid. Following a “spatial mismatch” framework, we estimate a female employment probability equation that includes variables controlling for personal characteristics, residential segregation and employment potential over the public transport network. The data come from the 2001 Microcensus of the INE (National Institute of Statistics). The research focuses on the treatment of endogeneity problems and the measurement of accessibility variables. Our results show that low job accessibility by public transport negatively affects employment probability, and the intensity of this effect tends to decrease with the individual’s educational attainment. A higher degree of residential segregation also significantly reduces employment probability.
Abstract:
This paper contributes to the empirical literature on the effects of agglomeration and road accessibility on firm productivity by looking at the case of Spain. We approach productivity indirectly through individual wages allocated at the NUTS III level, using a repeated cross-section of individual micro-data for the years 1995, 2002 and 2006. The availability of interprovincial travel time data for each of the three years allows us to control for transport improvements over the period through a market potential variable. Agglomeration is approached by employment density, and we control for localization economies, human capital externalities and a large set of individual and workplace characteristics. Estimating by instrumental variables, our results show a positive and significant effect of market accessibility on wages and a nonlinear effect of employment density.
Abstract:
The most appropriate approach to benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. But manual evaluation has a high cost and is impractical to apply to large websites; in reality, there is no choice but to rely on automated tools when reviewing large websites for accessibility. The question is: to what extent can the results of an automatic evaluation of a website and of individual web pages be used as an approximation of manual results? This paper presents the initial results of an investigation aimed at answering this question. We performed both manual and automatic accessibility evaluations of the web pages of two sites and compared the results. In our data set, the automatically retrieved results could indeed be used as an approximation of the manual evaluation results.
Abstract:
This paper presents research on the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold: first, to establish criteria for evaluating the potential accessibility of mathematical websites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones; second, to propose a data model and a mechanism for publishing evaluation results, making them available to the educational community, who may use them as a quality measure when selecting learning material. The results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, which makes a correct conversion difficult, if not impossible; and the formulae (whether images or text) have been written without regard to mathematical writing standards, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be quite useful in other accessibility assessment scenarios.
Abstract:
In survival analysis, the problem of interval-censored data is usually handled via maximum likelihood estimation. In order to use a simplified expression of the likelihood function, standard methods assume that the conditions producing the censoring do not affect the failure time. In this article we formalize the conditions that ensure the validity of this simplified likelihood. In particular, we make precise different noninformative censoring conditions and define a constant-sum condition analogous to the one derived in the right-censoring context. We also show that inferences obtained with the simplified likelihood are correct when these conditions hold. Finally, we address the identifiability of the failure-time distribution function from the observed information and study the possibility of testing whether the constant-sum condition holds.
Abstract:
Many studies today stress the need to offer methodological and psychological support to learners working autonomously. The aim of this support is to help them develop the skills they need to direct their own learning, as well as a positive attitude and greater awareness of that learning. In short, these two types of preparation are considered essential to help learners become more autonomous and more efficient in their own learning. Nevertheless, while it is common to find studies that exemplify applications of methodological support within their programmes, mainly through strategy training or by helping learners develop a work plan, this is not the case for psychological preparation. With rare exceptions, few studies document how learners’ attitudes and beliefs, also known as metacognitive knowledge (MK), are addressed in programmes that foster learner autonomy. The aims of this paper are twofold: a) to review studies that have used different means to influence learners’ MK, and b) to describe the weaknesses and advantages of the procedures and instruments they use, as assessed in research studies, since this will allow us to establish objective criteria on how and when to use them in programmes that foster self-directed learning.
Abstract:
We explore the determinants of usage of six different types of health care services, using the Medical Expenditure Panel Survey data, years 1996-2000. We apply a number of models for univariate count data, including semiparametric, semi-nonparametric and finite mixture models. We find that the complexity of the model that is required to fit the data well depends upon the way in which the data is pooled across sexes and over time, and upon the characteristics of the usage measure. Pooling across time and sexes is almost always favored, but when more heterogeneous data is pooled it is often the case that a more complex statistical model is required.
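As an aside, the finite-mixture modelling of count data mentioned in this abstract can be illustrated with a minimal sketch (not the authors' code; the function name, initialization, and fixed two-component structure are illustrative assumptions): a two-component Poisson mixture fitted by the EM algorithm.

```python
import numpy as np
from scipy.special import gammaln

def poisson_mixture_em(y, n_iter=300):
    """Illustrative sketch: fit a two-component Poisson mixture by EM."""
    y = np.asarray(y, dtype=float)
    # Crude initialization: rates below and above the sample mean.
    lam = np.array([0.5 * y.mean() + 1e-9, 1.5 * y.mean() + 1e-9])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: log f(y | component k) = y*log(lam_k) - lam_k - log(y!)
        logp = (np.log(pi) + y[:, None] * np.log(lam)
                - lam - gammaln(y + 1)[:, None])
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)        # posterior memberships
        # M-step: closed-form updates for mixing weights and rates.
        pi = w.mean(axis=0)
        lam = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
    return pi, lam
```

A semiparametric or semi-nonparametric specification, as in the paper, would relax the fixed two-component Poisson form; the EM skeleton stays the same.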
Abstract:
This paper provides empirical evidence that continuous time models with one factor of volatility, in some conditions, are able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) by Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and a number of previously available data sets and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors in variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
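The attenuation-bias logic behind the classical errors-in-variables correction mentioned in this abstract can be sketched as follows (an illustrative simulation with made-up numbers, not the paper's meta-estimator): when the regressor is measured with noise, OLS recovers the true slope scaled down by the reliability ratio, so dividing by that ratio undoes the bias.

```python
import numpy as np

def attenuation_corrected_slope(x_obs, y, reliability):
    """Correct an OLS slope for classical measurement error in x.

    Classical EIV model: x_obs = x_true + u, with u independent noise.
    OLS converges to beta * r, where r = var(x_true) / var(x_obs) is
    the reliability ratio, so dividing by r removes the attenuation.
    """
    b_ols = np.cov(x_obs, y, bias=True)[0, 1] / np.var(x_obs)
    return b_ols / reliability

# Simulated check: true slope 0.6, regressor noise with reliability 2/3.
rng = np.random.default_rng(0)
x_true = rng.normal(0.0, 1.0, 100_000)
x_obs = x_true + rng.normal(0.0, np.sqrt(0.5), 100_000)  # var(u) = 0.5
y = 0.6 * x_true + rng.normal(0.0, 1.0, 100_000)
b = attenuation_corrected_slope(x_obs, y, reliability=1.0 / 1.5)
```

Here the naive OLS slope is biased toward roughly 0.4, and the correction recovers a value close to the true 0.6.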
Abstract:
Based on a behavioral equilibrium exchange rate model, this paper examines the determinants of the real effective exchange rate and evaluates the degree of misalignment of a group of currencies since 1980. Within a panel cointegration setting, we estimate the relationship between the exchange rate and a set of economic fundamentals, such as traded-nontraded productivity differentials and the stock of foreign assets. Having ascertained that the variables are integrated and cointegrated, we estimate the long-run equilibrium values of the fundamentals and use them to derive equilibrium exchange rates and misalignments. Although there is statistical homogeneity, some structural differences were found between advanced and emerging economies.
Abstract:
We present experimental and theoretical analyses of the data requirements of haplotype inference algorithms. Our experiments cover a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population of size n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. We complement these experimental findings with theoretical bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes, establishing linear bounds on the required sample size both experimentally and theoretically.
Abstract:
Analyzing the effect of genes and environmental factors on the development of complex diseases is a great statistical and computational challenge. Among the various data-mining methodologies proposed for interaction analysis, one of the most popular is the Multifactor Dimensionality Reduction method, MDR (Ritchie et al. 2001). The strategy of this method is to reduce the multifactor dimension to one by grouping the different genotypes into two risk groups: high and low. Despite its demonstrated usefulness, the MDR method has some drawbacks: the excessive grouping of genotypes may cause some important interactions to go undetected, and the method does not allow adjusting for main effects or confounding variables. In this article we illustrate the limitations of the MDR strategy and of other nonparametric approaches, and we demonstrate the advantage of using parametric methodologies to analyze interactions in case-control studies where adjustment for confounding variables and main effects is required. We propose a new methodology, a parametric version of the MDR method, which we call Model-Based Multifactor Dimensionality Reduction (MB-MDR). The proposed methodology aims to identify specific genotypes associated with the disease and allows adjusting for marginal effects and confounding variables. The new methodology is illustrated with data from the Spanish Bladder Cancer Study.
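The core MDR step this abstract refers to, pooling multi-locus genotype combinations into high- and low-risk groups by comparing each cell's case/control ratio to the overall ratio, can be sketched in a few lines (an illustrative sketch with invented names, not the MDR or MB-MDR software):

```python
from collections import Counter

def mdr_risk_labels(genotypes, status):
    """Illustrative MDR-style step: label each multi-locus genotype
    combination 'high' or 'low' risk by comparing its case/control
    ratio to the overall case/control ratio in the sample."""
    cases, controls = Counter(), Counter()
    for g, s in zip(genotypes, status):
        (cases if s == 1 else controls)[g] += 1
    n_cases = sum(status)
    n_controls = len(status) - n_cases
    threshold = n_cases / n_controls
    labels = {}
    for g in set(genotypes):
        # Crude convention: a cell with no controls counts as high risk.
        ratio = cases[g] / controls[g] if controls[g] else float("inf")
        labels[g] = "high" if ratio > threshold else "low"
    return labels
```

The parametric MB-MDR variant proposed in the paper replaces this raw-ratio rule with a regression model per cell, which is what makes adjustment for main effects and confounders possible.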