908 results for Informal inference
Abstract:
The informal economy is a very important sector of the Indian economy. The National Council of Applied Economic Research estimates that the informal sector - the "unorganised sector" - generates about 62% of GDP and provides about 55% of total employment (ILO 2002, p. 14). This paper studies the characteristics of the workers in the informal economy and whether internal migrants treat this sector as a temporary location before moving on to the organised or formal sector to improve their lifetime income and living conditions. We limit our study to the Indian urban (non-agricultural) sector and study the characteristics of the household heads that belong to the informal sector (self-employed and informal wage workers) and the formal sector. We find that household heads who are less educated, come from poorer households, and/or belong to lower social groups (castes and religions) are more likely to be in the informal sector. In addition, our results show strong evidence that the longer a rural migrant household head has been working in the urban sector, ceteris paribus, the more likely it is that the individual has moved out of the informal wage sector. These results support the hypothesis that, for internal migrants, the informal wage labour market is a stepping stone to a better and more certain life in the formal sector.
Abstract:
The presence of a large informal sector in developing economies poses the question of whether informal activity produces agglomeration externalities. This paper uses data on all the nonfarm establishments and enterprises in Cambodia to estimate the impact of informal agglomeration on the regional economic performance of formal and informal firms. We develop a Bayesian approach for a spatial autoregressive model with an endogenous explanatory variable to address endogeneity and spatial dependence. We find a significantly positive effect of informal agglomeration, with informal firms gaining more strongly than formal firms. Calculating the spatial marginal effects of increased agglomeration, we demonstrate that more accessible regions are more likely than less accessible regions to benefit strongly from informal agglomeration.
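The abstract does not spell out the model, but the class it refers to can be sketched. Below is a generic spatial autoregressive (SAR) specification with one endogenous regressor; the notation is chosen here purely for illustration, and the paper's actual variables, instruments, and priors may differ:

\[
\begin{aligned}
\mathbf{y} &= \rho\, W \mathbf{y} + \alpha\, \mathbf{x} + Z\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \\
\mathbf{x} &= Z\boldsymbol{\delta} + H\boldsymbol{\gamma} + \mathbf{u}, \qquad \operatorname{Corr}(\boldsymbol{\varepsilon}, \mathbf{u}) \neq 0,
\end{aligned}
\]

where y is regional economic performance, W a spatial weight matrix (so the lag term with coefficient rho captures spatial dependence), x the informal-agglomeration measure treated as endogenous, Z exogenous controls, and H excluded instruments. The nonzero error correlation is exactly what a Bayesian estimator has to handle jointly with the spatial lag.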
Abstract:
Since the abolition of the official peg and the introduction of a managed float in April 2012, the Central Bank of Myanmar has operated daily two-way auctions of foreign exchange aimed at smoothing exchange rate fluctuations. Despite the reforms to the foreign exchange regime, however, informal trading of foreign exchange remains pervasive. Using the daily informal exchange rate and Central Bank auction data, this study examines the impact of the auctions on the informal market rate. First, a VAR analysis indicates that the official rate did not Granger-cause the informal rate. Second, GARCH models indicate that the auctions did not reduce the conditional variance of the informal rate returns. Overall, the auctions have had only a modest impact on the informal exchange rate.
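As a concrete illustration of the two econometric steps the abstract mentions, the sketch below (not the authors' code; the data are simulated placeholders) runs a Granger-causality test and fits a GARCH(1,1) model in Python with statsmodels and arch:

# Illustrative sketch only: simulated series stand in for the official auction
# rate and the informal market rate used in the study.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from arch import arch_model

rng = np.random.default_rng(0)
n = 1000
official = pd.Series(1000 + np.cumsum(rng.normal(0, 2, n)), name="official")
informal = pd.Series(1000 + np.cumsum(rng.normal(0, 3, n)), name="informal")

# Granger causality: column order matters; this tests whether the second
# column (official rate) helps predict the first (informal rate).
data = pd.concat([informal, official], axis=1).diff().dropna()
granger_results = grangercausalitytests(data, maxlag=5)

# A plain GARCH(1,1) for the conditional variance of informal-rate returns.
# The paper's test of whether auctions reduced that variance would add
# auction-related terms, which are omitted in this simplified sketch.
returns = 100 * informal.pct_change().dropna()
garch_fit = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
print(garch_fit.summary())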
Abstract:
Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightweight data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) that aims to simplify embedded inference processes by offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications; it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test LIS features in a real application scenario, an ‘Activity Monitor’ has been designed and implemented: a persuasive personal health application that provides feedback on the user’s lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user’s activity level in a timely manner, to decide whether to trigger notifications, and to determine the best interface or channel to deliver these context-aware alerts.
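LIS itself is Java-based and wraps existing ontology and rule tools, and its actual API is not given in the abstract. The Python sketch below, with invented fact and rule names, only illustrates the general pattern described: applications configure rules, sensor facts arrive, and forward-chaining inference derives the context (e.g. activity level and notification channel) they asked for.

# Hypothetical, minimal forward-chaining sketch of rule-based context reasoning.
# Names and thresholds are invented; the real LIS uses ontology-backed Java tools.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: object

Rule = Callable[[set], set]  # a rule maps the current fact set to newly derived facts

def low_activity_rule(facts: set) -> set:
    """If the step count over the last hour is below a threshold, derive low activity."""
    return {Fact(f.subject, "activityLevel", "low")
            for f in facts if f.predicate == "stepsLastHour" and f.value < 300}

def notify_rule(facts: set) -> set:
    """If activity is low and the screen is on, deliver the alert on screen."""
    low = {f.subject for f in facts if f.predicate == "activityLevel" and f.value == "low"}
    screen_on = {f.subject for f in facts if f.predicate == "screenOn" and f.value is True}
    return {Fact(s, "notifyVia", "screen") for s in low & screen_on}

def infer(facts: set, rules: list) -> set:
    """Apply the rules to a fixpoint: keep firing until no new facts are derived."""
    while True:
        derived = set().union(*(rule(facts) for rule in rules)) - facts
        if not derived:
            return facts
        facts = facts | derived

sensor_facts = {Fact("user1", "stepsLastHour", 120), Fact("user1", "screenOn", True)}
print(infer(sensor_facts, [low_activity_rule, notify_rule]))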
Abstract:
In this work, we propose the Seasonal Dynamic Factor Analysis (SeaDFA), an extension of Nonstationary Dynamic Factor Analysis, through which one can deal with dimensionality reduction in vectors of time series in such a way that both common and specific components are extracted. Furthermore, the common factors capture not only regular dynamics (stationary or not) but also seasonal ones, by following a multiplicative seasonal VARIMA(p, d, q) × (P, D, Q)s model. Additionally, a bootstrap procedure that does not require a backward representation of the model is proposed, making it possible to draw inferences about all the parameters of the model. A bootstrap scheme developed for forecasting incorporates the uncertainty due to parameter estimation, allowing enhanced coverage of the forecasting intervals. A challenging application is provided: the proposed model and bootstrap scheme are applied to an innovative subject in electricity markets, the computation of long-term point forecasts and prediction intervals of electricity prices. Several appendices with technical details, an illustrative example, and an additional table are available online as Supplementary Materials.
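In the notation commonly used for dynamic factor models (chosen here for illustration; the paper's own symbols may differ), the SeaDFA structure described above can be sketched as an observation equation plus a multiplicative seasonal VARIMA law for the common factors:

\[
\begin{aligned}
\mathbf{y}_t &= \boldsymbol{\Lambda}\,\mathbf{f}_t + \boldsymbol{\varepsilon}_t, \\
\Phi_P(B^{s})\,\phi_p(B)\,(1-B)^{d}(1-B^{s})^{D}\,\mathbf{f}_t &= \Theta_Q(B^{s})\,\theta_q(B)\,\mathbf{w}_t,
\end{aligned}
\]

where y_t is the observed vector of series, Lambda the loading matrix, epsilon_t the specific components, f_t the common factors, B the backshift operator, s the seasonal period, and w_t a white-noise innovation. The bootstrap then resamples from the fitted model to propagate parameter-estimation uncertainty into the forecast intervals.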
Abstract:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed) even if they are not deterministic. The analysis takes advantage of the pruning operator in order to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a satisfiability testing algorithm that is complete w.r.t. the available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it into the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
Abstract:
When mapping is formulated in a Bayesian framework, the need to specify a prior for the environment arises naturally. However, so far, the use of a particular structure prior has been coupled to working with a particular representation. We describe a system that supports inference with multiple priors while keeping the same dense representation. The priors are rigorously described by the user in a domain-specific language. Even though we work very close to the measurement space, we are able to represent structure constraints with the same expressivity as methods based on geometric primitives. This approach allows the intrinsic degrees of freedom of the environment’s shape to be recovered. Experiments with simulated and real data sets are presented.
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs, such as loops and branches, and embedded component services.
Abstract:
Abstract is not available.