107 results for Subordinated Markov process
Abstract:
The work studies a general multiserver queue in which the service time of an arriving customer and the next interarrival period may depend both on the current waiting time and on the server assigned to the arriving customer. Stability of the system is proved under general assumptions on the predetermined distributions describing the model. The proof combines the Markov property of the workload process with its regenerative structure. The key idea leading to stability is a characterization of the limit behavior of the forward renewal process generated by the regenerations. Extensions of the basic model are also studied.
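To illustrate the kind of workload dynamics studied here, the following is a minimal sketch (not the authors' construction) of a Kiefer-Wolfowitz-style workload recursion for a multiserver queue, in which, for illustration, the service-time distribution is allowed to depend on the arriving customer's waiting time; the function names, parameter values, and the exponential distributions are assumptions for the example.

```python
import random

def simulate_workload(num_servers, num_customers, seed=0):
    """Simulate a multiserver workload vector via the Kiefer-Wolfowitz
    recursion: each arriving customer joins the least-loaded server."""
    rng = random.Random(seed)
    w = [0.0] * num_servers          # workload vector, kept sorted ascending
    for _ in range(num_customers):
        wait = w[0]                  # waiting time of the arriving customer
        # Hypothetical state dependence: mean service time grows with the
        # current waiting time, mimicking the dependence in the model.
        service = rng.expovariate(1.0 / (1.0 + 0.1 * wait))
        interarrival = rng.expovariate(1.0)   # could likewise depend on `wait`
        w[0] += service              # assign the work to the least-loaded server
        w = sorted(max(x - interarrival, 0.0) for x in w)
    return w

print(simulate_workload(num_servers=3, num_customers=10_000))
```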
Abstract:
While much of the literature on immigrants' assimilation has focused on countries with a long tradition of receiving immigrants and with flexible labor markets, very little is known about how immigrants adjust to other types of host economies. With its severe dual labor market and an unprecedented immigration boom, Spain offers a distinctive setting in which to analyze immigrants' assimilation process. Using data from the 2000 to 2008 Labor Force Survey, we find that immigrants are more occupationally mobile than natives, and that much of this greater flexibility is explained by immigrants' assimilation soon after arrival. However, we find little evidence of convergence, especially among women and high-skilled immigrants. This suggests that instead of integrating, immigrants occupationally segregate, providing evidence consistent with both imperfect substitutability and immigrants' human capital being undervalued. Additional evidence on the assimilation of earnings and the incidence of permanent employment across skill levels also supports the hypothesis of segmented labor markets.
Abstract:
The 1990s witnessed the launching of two ambitious trade regionalization plans: the Nafta and EU enlargement to Central and Eastern Europe. In contrast to previous projects for the creation or expansion of regional trade blocs, these two projects concerned states at dramatically different levels of economic development: the Nafta involved the very wealthy economies of Canada and the USA and the significantly poorer economy of Mexico, whereas EU enlargement involved the very wealthy economy of the 15-member-state European Union and the significantly poorer economies of former Communist states in Central and Eastern Europe. Ultimately, the Nafta and EU enlargement are responses to the challenges of globalization. Paradoxically, however, they have been met with radically different societal reactions in the wealthy partners that launched these processes. This paper focuses on the reaction of labor unions on both sides of the Atlantic. I conclude that while labor relations and welfare institutions constrained the trade policy choices made by labor unions in the United States and Europe, they do not tell the whole story. It would seem that United States labor unions were more sensitive to the potential risks for workers associated with the liberalization of trade than were their European counterparts.
Abstract:
This research project gathers several ideas and guidelines on professional development as a teacher. It includes two empirical studies. The first focuses mainly on the teacher and examines the various resources the teacher uses to construct students' knowledge in an English classroom context. The second empirical study focuses on the students: it examines how students learn cooperatively by analyzing their oral production when working in small groups.
Abstract:
This project proposes a skin detection algorithm that incorporates neighborhood information when classifying pixels. We start from an invariant color space learned from multiple views and introduce the influence of the neighborhood by means of Markov random fields. From the experiments carried out, we conclude that including neighborhood information in the pixel classification process significantly improves detection results.
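A minimal sketch of the neighborhood idea (an illustration, not the project's actual implementation): given per-pixel skin probabilities from a color model, an Iterated Conditional Modes (ICM) pass over a Potts-style Markov random field encourages neighboring pixels to share the same skin/non-skin label. The probability map, the smoothing weight `beta`, and the iteration count are assumptions for the example.

```python
import numpy as np

def icm_skin_labels(prob_skin, beta=1.5, iters=5):
    """Refine a per-pixel skin-probability map with a 4-neighborhood
    Potts MRF, optimized by Iterated Conditional Modes (ICM)."""
    eps = 1e-9
    # Unary costs: negative log-likelihood of each label at each pixel.
    unary = np.stack([-np.log(1.0 - prob_skin + eps),  # label 0: non-skin
                      -np.log(prob_skin + eps)])       # label 1: skin
    labels = (prob_skin > 0.5).astype(int)             # initial hard labels
    for _ in range(iters):
        padded = np.pad(labels, 1, mode='edge')
        costs = []
        for lab in (0, 1):
            # Pairwise (Potts) cost: beta per 4-neighbor disagreeing with `lab`.
            disagree = ((padded[:-2, 1:-1] != lab).astype(float) +
                        (padded[2:, 1:-1] != lab) +
                        (padded[1:-1, :-2] != lab) +
                        (padded[1:-1, 2:] != lab))
            costs.append(unary[lab] + beta * disagree)
        labels = np.argmin(np.stack(costs), axis=0)    # greedy label update
    return labels

# Example: smooth a noisy skin-probability map.
rng = np.random.default_rng(0)
prob = np.clip(rng.normal(0.5, 0.3, size=(64, 64)), 0.0, 1.0)
mask = icm_skin_labels(prob)
```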
Abstract:
The aim of this paper is to analyse the effects of human capital, advanced manufacturing technologies (AMT), and new work organizational practices on firm productivity, while taking into account the synergies existing between them. This study expands current knowledge in this area in two ways. First, in contrast with previous works, we focus on AMT rather than ICT (information and communication technologies). Second, we use a unique employer-employee data set for small firms in a particular area of southern Europe (Catalonia, Spain). Using a small-firm data set allows us to analyse the particular case of small and medium enterprises, since we cannot assume they share the characteristics of large firms. The results provide evidence in favor of the complementarity hypothesis between human capital, advanced manufacturing technologies, and new work organization practices, although we show that the complementarity effects depend on which type of work organization practices a firm uses. For small and medium Catalan firms, the only set of work organization practices that improves the benefits of human capital and technology investment comprises the more quality-oriented practices, such as quality circles, problem-solving groups, or total quality management.
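Complementarity hypotheses of this kind are typically tested with interaction terms in a production-function regression. The following is a hedged sketch on simulated data, not the paper's actual specification; the variable names, coefficients, and data-generating process are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
human_capital = rng.normal(size=n)           # e.g., share of skilled workers
amt = rng.binomial(1, 0.4, size=n)           # adoption of advanced mfg. tech.
quality_prac = rng.binomial(1, 0.3, size=n)  # quality-oriented work practices

# Simulated log productivity with a positive synergy (interaction) term.
log_prod = (0.3 * human_capital + 0.2 * amt + 0.1 * quality_prac
            + 0.25 * human_capital * amt * quality_prac
            + rng.normal(scale=0.5, size=n))

# OLS with a triple interaction: a positive coefficient on the last
# column is the kind of complementarity evidence the abstract describes.
X = np.column_stack([np.ones(n), human_capital, amt, quality_prac,
                     human_capital * amt * quality_prac])
beta, *_ = np.linalg.lstsq(X, log_prod, rcond=None)
print(dict(zip(["const", "hc", "amt", "quality", "hc*amt*quality"], beta)))
```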
Abstract:
Construction of a web application from the specifications of an imaginary client. Study and use of the Rational Unified Process, currently the most common method in software construction. Design of a database and implementation of the logical model using a leading DBMS on the market such as Oracle.
Abstract:
Personalization in e-learning allows the adaptation of contents, learning strategies, and educational resources to the competencies, previous knowledge, or preferences of the student. This project takes a multidisciplinary perspective for building standards-based personalization capabilities into virtual e-learning environments, focusing on the concept of the adaptive learning itinerary, using reusable learning objects as the basis of the system, and using ontologies and semantic web technologies.
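As a rough illustration of an adaptive learning itinerary (a sketch under assumptions, not the project's design), the following selects reusable learning objects whose prerequisites are covered by the learner's current competencies; the object names, metadata fields, and greedy selection rule are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    """A reusable learning object with competency metadata (hypothetical schema)."""
    oid: str
    prerequisites: set = field(default_factory=set)  # competencies required
    grants: set = field(default_factory=set)         # competencies acquired

def adaptive_itinerary(objects, competencies, goal):
    """Greedily order learning objects so each step's prerequisites are met,
    stopping once the goal competency is reached."""
    acquired = set(competencies)
    itinerary = []
    pending = list(objects)
    while goal not in acquired and pending:
        ready = [o for o in pending if o.prerequisites <= acquired]
        if not ready:
            break                      # no reachable object: itinerary is stuck
        nxt = ready[0]
        itinerary.append(nxt.oid)
        acquired |= nxt.grants
        pending.remove(nxt)
    return itinerary

los = [LearningObject("intro", set(), {"basics"}),
       LearningObject("advanced", {"basics"}, {"goal"})]
print(adaptive_itinerary(los, competencies=set(), goal="goal"))
# -> ['intro', 'advanced']
```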
Abstract:
As supervisory systems evolve, extracting significant information from process data becomes more important, since it simplifies the particular tasks of the supervision system. It is therefore important to have signal treatment tools capable of obtaining elaborate information from process data. In this paper, a tool that obtains qualitative data about the trends and oscillation of signals is presented, together with an application. In this case, the tool, implemented in a computer-aided control systems design (CACSD) environment, is used to provide input to an expert system for fault detection in a laboratory plant.
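A minimal sketch of qualitative trend extraction (an illustration, not the paper's tool): each sliding window of a signal is labeled increasing, steady, or decreasing from the sign of a least-squares slope; the window size, threshold, and test signal are assumptions.

```python
import numpy as np

def qualitative_trends(signal, window=20, slope_tol=1e-3):
    """Label each window of a signal as 'increasing', 'steady', or
    'decreasing' from the sign of its least-squares slope."""
    t = np.arange(window)
    labels = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        slope = np.polyfit(t, seg, 1)[0]      # first-order fit: slope term
        if slope > slope_tol:
            labels.append("increasing")
        elif slope < -slope_tol:
            labels.append("decreasing")
        else:
            labels.append("steady")
    return labels

# Example: a ramp followed by an oscillating plateau.
t = np.linspace(0, 10, 200)
sig = np.where(t < 5, t, 5 + 0.05 * np.sin(20 * t))
print(qualitative_trends(sig))
```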
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent.

We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration, and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data sampled at minute frequency from the Dow Jones Industrial Average (DJIA).

Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. The concept of pairs trading, or market-neutral strategy, is on the other hand fairly simple; however, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck equation and its variations.

A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor, yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving. This is why we emphasize the calibration of the strategies' parameters to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation.

As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics in this project were implemented in MATLAB from scratch as part of this thesis; no other mathematical or statistical software was used.
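To make the Markov-time point concrete, here is a minimal sketch (in Python rather than the thesis's MATLAB) of a moving-average crossover signal: at each step it uses only prices up to the current time, so the event "the signal fired" is F_t-measurable. The window lengths, the synthetic price path, and the naive backtest are assumptions for illustration.

```python
import numpy as np

def crossover_signal(prices, fast=10, slow=50):
    """Emit +1/-1/0 positions from a moving-average crossover.
    Each decision uses only prices[:t+1], so the entry and exit
    times are Markov times (F_t-measurable)."""
    positions = np.zeros(len(prices))
    for t in range(slow, len(prices)):
        fast_ma = prices[t - fast + 1:t + 1].mean()   # info up to time t only
        slow_ma = prices[t - slow + 1:t + 1].mean()
        positions[t] = 1.0 if fast_ma > slow_ma else -1.0
    return positions

# Synthetic price path (geometric random walk) for illustration.
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=1000)))
pos = crossover_signal(prices)

# Naive backtest: yesterday's position times today's log return.
returns = np.diff(np.log(prices))
pnl = (pos[:-1] * returns).cumsum()
print(f"final log-return of strategy: {pnl[-1]:.4f}")
```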
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zeros is usually understood as "a trace too small to measure", it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
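For reference, the multiplicative replacement described here takes a simple form: each rounded zero is replaced by a small value delta and the non-zero parts are shrunk multiplicatively so the result is still a composition, which leaves the ratios among non-zero parts (and hence the subcompositional covariance structure) untouched. The sketch below is an illustration under the assumption of a composition closed to 1, with delta chosen arbitrarily.

```python
import numpy as np

def multiplicative_replacement(x, delta=1e-3):
    """Replace rounded zeros in a composition x (summing to 1) by delta,
    shrinking the non-zero parts multiplicatively so the result is still
    a composition. Ratios among the non-zero parts are preserved."""
    x = np.asarray(x, dtype=float)
    zeros = x == 0
    total_delta = delta * zeros.sum()        # mass given to the replaced zeros
    return np.where(zeros, delta, x * (1.0 - total_delta))

comp = np.array([0.6, 0.4, 0.0])
r = multiplicative_replacement(comp)
print(r, r.sum())        # ratio 0.6/0.4 preserved; parts still sum to 1
```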