31 results for one-repetition maximum
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The objective of this work was to evaluate the transverse spray deposition of the TTJ60-11004 and TTJ60-11002 twin flat-fan nozzles under different operating conditions. Five samples of each nozzle were used, with each unit considered one repetition. Spray distribution was evaluated on a distribution (patternator) table built according to the ISO 5682-1 standard. The individual distribution profile, the simulated volumetric distribution of the overlapped jets, assessed through the coefficient of variation (CV%) of the collected volumes, the flow rate, and the jet opening angle were evaluated. The operating conditions were working pressures of 200, 300, and 400 kPa, heights of 30, 40, and 50 cm above the target, and nozzle spacings simulated in software (Microsoft Excel) between 45 and 100 cm. The nozzles showed a discontinuous individual profile, with the greatest liquid deposition in the central region and a gradual reduction in volume towards the edges. Increasing the pressure lengthened the profile and widened the application swath. The nozzles provided a uniform combined profile that depended on the spacing between nozzles, with lower CV values at smaller spacings and higher pressures. The flow rate and the jet angle increased with increasing pressure.
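As a minimal sketch of the kind of overlap simulation described, assuming a hypothetical single-nozzle profile sampled in 5 cm channels (the volumes, spacings, and channel width below are illustrative, not the paper's data), the CV% of the combined deposition can be computed as follows:

import numpy as np

# Hypothetical single-nozzle deposition profile (volumes in mL per channel).
profile = np.array([2, 5, 9, 14, 18, 20, 18, 14, 9, 5, 2], dtype=float)

def overlap_cv(profile, spacing_channels, n_nozzles=5):
    """Overlay identical nozzles spaced 'spacing_channels' channels apart
    and return the CV (%) of the volumes in the fully overlapped region."""
    width = len(profile) + spacing_channels * (n_nozzles - 1)
    total = np.zeros(width)
    for k in range(n_nozzles):
        start = k * spacing_channels
        total[start:start + len(profile)] += profile
    central = total[len(profile):width - len(profile)]  # drop the edges
    return 100.0 * central.std() / central.mean()

for spacing in (9, 11, 13):   # 5 cm channels -> 45, 55, 65 cm spacings
    print(spacing * 5, "cm:", round(overlap_cv(profile, spacing), 1), "%")

A lower CV% indicates a more uniform simulated deposition under the boom.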
Abstract:
The standard one-machine scheduling problem consists in scheduling a set of jobs on one machine that can handle only one job at a time, minimizing the maximum lateness. Each job is available for processing at its release date, requires a known processing time and, after processing finishes, is delivered after a certain time. There can also exist precedence constraints between pairs of jobs, requiring that the first job be completed before the second job can start. An extension of this problem consists in assigning a time interval between the processing of the jobs associated with the precedence constraints, known as finish-start time-lags. In the presence of these constraints, the problem is NP-hard even if preemption is allowed. In this work, we consider a special case of the one-machine preemptive scheduling problem with time-lags, where the time-lags have a chain form, and propose a polynomial algorithm to solve it. The algorithm consists in a polynomial number of calls of the preemptive version of the Longest Tail Heuristic. One application of the method is to obtain lower bounds for NP-hard one-machine and job-shop scheduling problems. We present some computational results of this application, followed by some conclusions.
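As an illustrative sketch of the subroutine mentioned (the preemptive Longest Tail Heuristic, not the paper's chain-time-lag algorithm itself): at every instant, run the released, unfinished job with the largest delivery time (tail), preempting whenever a newly released job has a larger tail. With release date r, processing time p, and tail q per job, this yields the optimal preemptive value of max over jobs of (completion time + q).

import heapq

def preemptive_longest_tail(jobs):
    """Preemptive Longest Tail Heuristic on one machine.
    jobs: list of (release r, processing p, tail q).
    Returns max over jobs of (completion time + q)."""
    jobs = sorted(jobs)               # by release date
    ready = []                        # max-heap on tail: (-q, remaining p)
    t, i, obj, n = 0, 0, 0, len(jobs)
    while i < n or ready:
        if not ready:                 # machine idles until the next release
            t = max(t, jobs[i][0])
        while i < n and jobs[i][0] <= t:
            r, p, q = jobs[i]
            heapq.heappush(ready, (-q, p))
            i += 1
        neg_q, rem = heapq.heappop(ready)
        nxt = jobs[i][0] if i < n else float('inf')
        run = min(rem, nxt - t)       # run until finished or next release
        t += run
        if run == rem:
            obj = max(obj, t - neg_q) # completion time plus tail
        else:
            heapq.heappush(ready, (neg_q, rem - run))
    return obj

print(preemptive_longest_tail([(0, 4, 1), (1, 2, 5), (3, 1, 6)]))  # -> 10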
Abstract:
"Beauty-contest" is a game in which participants have to choose, typically, a number in [0,100], the winner being the person whose number is closest to a proportion of the average of all chosen numbers. We describe and analyze Beauty-contest experiments run in newspapers in the UK, Spain, and Germany and find stable patterns of behavior across them, despite the uncontrollability of these experiments. These results are then compared with lab experiments involving undergraduates and game theorists as subjects, in what must be one of the largest empirical corroborations of interactive behavior ever attempted. We claim that all observed behavior, across a wide variety of treatments and subject pools, can be interpreted as iterative reasoning. Level-1, Level-2, and Level-3 reasoning are commonly observed in all the samples, while the equilibrium choice (Level-Maximum reasoning) is prominently chosen only by newspaper readers and theorists. The results show the empirical power of experiments run with large subject pools, and open the door for more experimental work performed on the rich platform offered by newspapers and magazines.
Abstract:
We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface of ≈10 K. In comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).
Abstract:
A detailed mathematical analysis of the q = 1/2 non-extensive maximum entropy distribution of Tsallis is undertaken. The analysis is based upon the splitting of such a distribution into two orthogonal components. One of the components corresponds to the minimum norm solution of the problem posed by the fulfillment of the a priori conditions on the given expectation values. The remaining component takes care of the normalization constraint and is the projection of a constant onto the null space of the "expectation-values transformation".
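A minimal numerical sketch of this kind of orthogonal splitting, with a hypothetical constraint matrix A and expectation values b standing in for the paper's operators: the first component is the minimum-norm solution of the constraints, the second is the projection of a constant vector onto the null space of A.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],          # hypothetical "expectation-values
              [0.0, 1.0, 1.0]])         #  transformation"
b = np.array([1.0, 0.5])                # hypothetical expectation values

A_pinv = np.linalg.pinv(A)
x_min = A_pinv @ b                          # minimum-norm solution of A x = b
P_null = np.eye(A.shape[1]) - A_pinv @ A    # orthogonal projector onto Null(A)
x_const = P_null @ np.ones(A.shape[1])      # projection of a constant vector

# The two components are orthogonal and their sum still satisfies A x = b.
print(np.allclose(A @ (x_min + x_const), b), np.isclose(x_min @ x_const, 0.0))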
Abstract:
We show how certain N-dimensional dynamical systems are able to exploit the full instability capabilities of their fixed points to undergo Hopf bifurcations, and how such behavior produces complex time evolutions based on the nonlinear combination of the oscillation modes that emerged from these bifurcations. For widely different oscillation frequencies, the evolutions describe robust waveform structures, usually periodic, in which self-similarity with respect to both the time scale and the system dimension is clearly appreciated. For closer frequencies, the evolution signals usually appear irregular but are still based on the repetition of complex waveform structures. The study is developed by considering vector fields with a scalar-valued nonlinear function of a single variable that is a linear combination of the N dynamical variables. In this case, the linear stability analysis can be used to design N-dimensional systems in which the fixed points of a saddle-node pair experience up to N−1 Hopf bifurcations with preselected oscillation frequencies. The secondary processes occurring in the phase region where the variety of limit cycles appears may be rather complex and difficult to characterize, but they produce the nonlinear mixing of oscillation modes with relatively generic features.
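A minimal sketch of a vector field of the general form described (the matrix A, vectors b and c, and the nonlinearity f below are illustrative placeholders, not the paper's designed systems):

import numpy as np

def vector_field(x, A, b, c, f=np.tanh):
    """dx/dt = A x + b f(c·x): an N-dimensional system whose only
    nonlinearity is a scalar function of a single linear combination
    of the dynamical variables."""
    return A @ x + b * f(c @ x)

# Forward-Euler integration of a hypothetical 3-D instance.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -1.0]])
b = np.array([0.0, 0.0, 1.0])
c = np.array([1.0, 0.0, 0.0])
x, dt = np.ones(3), 1e-3
for _ in range(10_000):
    x = x + dt * vector_field(x, A, b, c)
print(x)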
Abstract:
This paper is devoted to the study of a type of differential system that usually appears in the study of some Hamiltonian systems with 2 degrees of freedom. We prove the existence of infinitely many periodic orbits on each negative energy level. All these periodic orbits pass near the total collision. Finally, we apply these results to study the existence of periodic orbits in the charged collinear 3-body problem.
Abstract:
This paper analyzes the linkages between the credibility of a target zone regime, the volatility of the exchange rate, and the width of the band within which the exchange rate is allowed to fluctuate. These three concepts should be related, since the band width induces a trade-off between credibility and volatility. Narrower bands should give less scope for the exchange rate to fluctuate, but may make agents perceive a larger probability of realignment, which by itself should increase the volatility of the exchange rate. We build a model in which this trade-off is made explicit. The model is used to understand the reduction in volatility experienced by most EMS countries after their target zones were widened in August 1993. As a natural extension, the model also rationalizes the existence of non-official, implicit target zones (or fear of floating) suggested by some authors.
Abstract:
This paper provides empirical evidence that continuous-time models with one volatility factor are, under some conditions, able to fit the main characteristics of financial data. It also reports the importance of the feedback factor in capturing the strong volatility clustering of the data, caused by a possible change in the pattern of volatility in the last part of the sample. We use the Efficient Method of Moments (EMM) of Gallant and Tauchen (1996) to estimate logarithmic models with one and two stochastic volatility factors (with and without feedback) and to select among them.
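For orientation, a common one-factor logarithmic stochastic-volatility diffusion takes the form below (an illustrative specification only; the paper's exact parameterization, and the way feedback enters, may differ):

dy_t = \mu\,dt + \exp(v_t)\,dW_{1t}, \qquad dv_t = \kappa(\alpha - v_t)\,dt + \sigma\,dW_{2t},

where y_t is the log price, v_t is the latent log-volatility factor, and W_{1t}, W_{2t} are Brownian motions.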
Abstract:
For the many-to-one matching model in which firms have substitutable and quota q-separable preferences over subsets of workers, we show that the workers-optimal stable mechanism is group strategy-proof for the workers. In order to prove this result, we also show that under this domain of preferences (which contains the domain of responsive preferences of the college admissions problem) the workers-optimal stable matching is weakly Pareto optimal for the workers and the Blocking Lemma holds as well. We exhibit an example showing that none of these three results remains true if the preferences of firms are substitutable but not quota q-separable.
Abstract:
Ever since the appearance of the ARCH model [Engle (1982a)], an impressive array of variance specifications belonging to the same class of models has emerged [e.g., Bollerslev's (1986) GARCH and Nelson's (1990) EGARCH]. This recent domain has seen very successful developments. Nevertheless, several empirical studies seem to show that the performance of such models is not always appropriate [Boulier (1992)]. In this paper we propose a new specification: the Quadratic Moving Average Conditional Heteroskedasticity (QMACH) model. Its statistical properties, such as the kurtosis and the symmetry, as well as two estimators (Method of Moments and Maximum Likelihood), are studied. Two statistical tests are presented: the first tests for homoskedasticity and the second discriminates between the ARCH and QMACH specifications. A Monte Carlo study is presented in order to illustrate some of the theoretical results. An empirical study is undertaken for the DM-US exchange rate.
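For reference, the variance specifications cited in the abstract take the well-known forms below (the QMACH specification itself is defined in the paper and is not reproduced here):

\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 \quad \text{(ARCH(q), Engle 1982)}, \qquad
\sigma_t^2 = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2 \quad \text{(GARCH(1,1), Bollerslev 1986)}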
Abstract:
The Hausman (1978) test is based on the vector of differences between two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi maximum likelihood (QML) are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results and empirical results for the utilization of health care services.
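In the standard textbook form (shown here for orientation; the paper's contribution concerns estimating the variance of the difference when neither estimator is efficient), the statistic is

H = (\hat\beta_1 - \hat\beta_2)' \left[\widehat{\operatorname{Var}}(\hat\beta_1 - \hat\beta_2)\right]^{-1} (\hat\beta_1 - \hat\beta_2) \;\overset{d}{\longrightarrow}\; \chi^2_k,

where, under the usual assumption that \hat\beta_1 is fully efficient, \operatorname{Var}(\hat\beta_1 - \hat\beta_2) reduces to \operatorname{Var}(\hat\beta_2) - \operatorname{Var}(\hat\beta_1); without that assumption the covariance between the two estimators must be estimated as well.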
Abstract:
We present a search and matching model with heterogeneous workers (entrants and incumbents) that replicates the stylized facts characterizing the US and Spanish labor markets. Under this benchmark, we find Post-Match Labor Turnover Costs (PMLTC) to be the centerpiece in explaining why the Spanish labor market is as volatile as the US one. The two driving forces governing this volatility are the gaps between entrants and incumbents in terms of separation costs and productivity. We use the model to analyze the cyclical implications of changes in labor market institutions affecting these two gaps. A scenario with a low degree of worker heterogeneity illustrates the model's suitability for understanding why the Spanish labor market has become as volatile as the US one.
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
Conceived in the context of medical technology, the main purpose is to take a series of measurements on different parts of the body in order to establish maximum values that allow us to detect when a patient starts to suffer stress. This requires a measurement process and a data transmission process, and the latter is where the work carried out in this project fits in. "ZigBee aplicado a la transmisión de datos de sensores biomédicos" (ZigBee applied to the transmission of data from biomedical sensors) is intended to carry out the task of transmitting the data from the moment the sensor takes the measurement until the data are monitored and stored. The project report contains the study of the wireless transmission medium used (ZigBee), the analysis of the eZ430-RF2500 kit compatible with this medium, and finally the implementation of the project. All this work concludes with the successful reception of the data measured by our biomedical sensor (an oximeter) in the personal application programmed in Visual Basic.
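As a minimal sketch of the PC-side reception step only (assuming, as with the stock eZ430-RF2500 demo firmware, that the access point enumerates as a virtual serial port; the port name, baud rate, and line format are placeholders, and the project's actual application was written in Visual Basic):

import serial  # pyserial

# Placeholder port and baud rate; adjust to the access point's actual settings.
with serial.Serial("COM3", 9600, timeout=1) as port, \
        open("oximeter_log.csv", "a") as log:
    while True:
        line = port.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                # read timed out with no data
        print(line)                 # monitor
        log.write(line + "\n")      # store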