957 results for simple game
Abstract:
This work analyses the position that the defendant must take with regard to the complaint and its claims in modern civil procedure, from the perspective of Ecuadorian doctrine and legislation as well as comparative law, especially Latin American procedural law. On that basis, and through an analysis of the ambiguous Ecuadorian case law on the subject, it rediscovers and defines the true legal nature of the simple and plain denial of the grounds of the complaint, misunderstood for years in judicial practice, and describes the scope and treatment that entering such a denial receives from Ecuadorian justice operators. It then examines current approaches and trends regarding the treatment of the denial in contemporary civil procedure, principally with respect to the burden of proof once it is raised and the consequent application of the principle of congruence in the rulings that analyse it, so that those rulings consolidate the constitutionally guaranteed legal certainty, stressing the urgent need for a paradigm shift in Ecuadorian civil procedural law and among its users. The study constitutes an excellent reference for lawyers when choosing and executing a suitable, duly grounded defence strategy for their clients.
Abstract:
This Working Document by Daniel Gros presents a simple model that incorporates two types of sovereign default cost: first, a lump-sum cost incurred because the country does not service its debt fully and is recognised as being in default, by rating agencies for example; second, a cost that increases with the size of the losses (or haircut) imposed on creditors, whose resistance grows with the proportional loss inflicted upon them. One immediate implication of the model is that under some circumstances creditors have a (collective) interest in forgiving some debt in order to induce the country not to default. The model exhibits a potential for multiple equilibria, given that a higher interest rate charged by investors increases the debt-service burden and thus the temptation to default. At very high debt levels, credit rationing can set in, as the feedback loop between higher interest rates and the stronger incentive to default can become explosive. The introduction of uncertainty makes multiple equilibria less likely and reduces their range.
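As a rough illustration of the two-cost structure described above, the sketch below lets a sovereign choose the haircut that maximises its net gain and compares that with voluntary debt relief by creditors. The functional forms and the values of B, F and k are illustrative assumptions, not taken from the Working Document.

```python
# Illustrative numerical sketch of the two-cost structure: the sovereign
# picks the haircut h that maximises debt relief net of the lump-sum
# default cost and the creditor-resistance cost.

import numpy as np

B = 100.0   # face value of debt
F = 8.0     # lump-sum cost of being recognised as in default
k = 0.8     # steepness of the creditor-resistance cost in the haircut

def default_gain(h):
    """Sovereign's net gain from imposing haircut h."""
    relief = h * B                  # debt relief obtained
    resistance = k * h ** 2 * B     # cost rising with the proportional loss
    return relief - resistance - F

h_grid = np.linspace(0.01, 1.0, 100)
gains = default_gain(h_grid)
h_star, best = h_grid[np.argmax(gains)], gains.max()
print(f"optimal haircut if defaulting: {h_star:.2f}, net gain: {best:.1f}")

# Collective creditor logic: relief just large enough to make the sovereign
# indifferent to defaulting costs creditors less than the haircut itself.
print(f"voluntary relief {best:.1f} vs. loss under default {h_star * B:.1f}")
```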
Abstract:
The euro area today consists of a competitive, moderately leveraged North and an uncompetitive, over-indebted South. Its main macroeconomic challenge is to carry out the adjustment required to restore the competitiveness of its southern part and eliminate its excessive public and private debt burden. This paper investigates the relationship between fiscal and competitiveness adjustment in a stylised model with two countries in a monetary union, North and South. To restore competitiveness, South implements a more restrictive fiscal policy than North. We consider two scenarios. In the first, monetary policy aims at keeping inflation constant in the North. The South therefore needs to deflate to regain competitiveness, which worsens its debt dynamics. In the second, monetary policy aims at keeping inflation constant in the monetary union as a whole. This results in more monetary stimulus, inflation in the North is higher, and this in turn helps the debt dynamics in the South. Our main findings are:
• The differential fiscal stance between North and South determines real exchange rate changes. South therefore needs to tighten more; there is no escape from relative austerity.
• If monetary policy aims at keeping inflation stable in the North and the initial debt is above a certain threshold, the debt dynamics are perverse: fiscal retrenchment is self-defeating.
• If monetary policy targets average inflation instead, which implies higher inflation in the North, the initial debt threshold above which the debt dynamics become perverse is higher. Accepting more inflation at home is therefore a way for the North to contribute to restoring debt sustainability in the South.
• Structural reforms in the South improve the debt dynamics if the initial debt is not too high. Again, targeting average inflation rather than inflation in the North strengthens the favourable effects of structural reforms.
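The threshold behaviour in these findings can be illustrated with a minimal debt-dynamics recursion. The sketch below is an assumption-laden toy, not the paper's model: the interest rate, growth rate, surplus and Southern inflation paths are invented, and the two regimes differ only in the inflation rate they allow the South.

```python
# Toy debt-dynamics recursion for the South under the two monetary regimes
# discussed above. All numbers are illustrative assumptions.

def debt_path(b0, surplus, r, g, pi_south, years=10):
    """Debt-to-GDP recursion b' = b * (1 + r) / (1 + g + pi) - surplus.
    The fixed point b* = surplus / (f - 1), with f the growth factor of
    debt, is the threshold: debt starting above b* explodes, below it falls."""
    b = b0
    for _ in range(years):
        b = b * (1 + r) / (1 + g + pi_south) - surplus
    return b

b0, r, g, s = 1.2, 0.04, 0.01, 0.03   # 120% debt, 4% rate, 1% growth, 3% surplus

# Regime 1: inflation held constant in the North, so the South must deflate.
north_target = debt_path(b0, s, r, g, pi_south=-0.01)
# Regime 2: average inflation targeted, so the South escapes deflation.
avg_target = debt_path(b0, s, r, g, pi_south=0.01)

print(f"debt after 10y, North-inflation target  : {north_target:.2f}")
print(f"debt after 10y, average-inflation target: {avg_target:.2f}")
```

With these invented numbers the debt threshold is 75% of GDP under the first regime (so debt keeps rising from 120%) but about 153% under the second (so debt falls), mirroring the paper's qualitative point.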
Abstract:
In the first year and a half of its existence, the EEAS and its head have become the target of extensive criticism for the shortcomings of EU foreign policy; shortcomings that in fact date back to the creation of the European Union. The EU’s diplomatic service has been blamed variously for ‘lacking clarity’, ‘acting too slowly’ and ‘being unable to bridge the institutional divide’. In this Commentary, author Hrant Kostanyan argues that the EEAS’ discretionary power in the Eastern Partnership multilateral framework is restricted by decision-making procedures involving a wide range of stakeholders: the member states and the partner countries, as well as the EU institutions, international organisations and the Civil Society Forum. Since this decision-making process places a substantial number of brakes on the discretionary power of the EEAS, any responsible analysis or critique of the service should take these constraints into consideration. Ultimately, the EEAS is only able to craft EU foreign policy insofar as it is allowed to do so.
Abstract:
In an attempt to understand why the Greek economy is collapsing, this Commentary points out two key aspects that are often overlooked: the country’s large fiscal multiplier and its poor export performance. When combined with the need for a large fiscal adjustment, these factors help explain how fiscal consolidation in Greece has been associated with such a large drop in GDP.
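The mechanism can be summarised in one line of arithmetic; the multiplier and consolidation figures below are purely illustrative assumptions, not the Commentary's estimates.

```python
# One-line arithmetic behind the argument: a large multiplier times a large
# consolidation means a large output loss. Figures are illustrative only.

multiplier = 1.5          # assumed fiscal multiplier
consolidation = 0.10      # assumed fiscal adjustment, 10% of GDP

print(f"implied GDP contraction: {multiplier * consolidation:.0%}")  # 15%
```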
Abstract:
The paper describes an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and their own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
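A minimal sketch of the underlying idea, assuming a generic decomposition into camera height, tilt and focal length (the paper's exact components are not specified here): project ground-plane points and let an operator tune the parameters until the overlay looks right.

```python
# Hypothetical sketch of the tool's idea: describe the camera with a few
# intuitive quantities (height, tilt, focal length) and tune them until a
# projected ground grid visually matches the scene. The decomposition and
# all values below are generic assumptions, not the paper's exact ones.

import numpy as np

def project_ground_point(x, y, cam_height=5.0, tilt_deg=15.0,
                         focal_px=800.0, cx=320.0, cy=240.0):
    """Pixel coordinates of the ground-plane point (x, y, 0) seen by a
    camera at height cam_height, pitched down by tilt_deg."""
    t = np.radians(tilt_deg)
    depth = y * np.cos(t) + cam_height * np.sin(t)   # distance along view axis
    drop = cam_height * np.cos(t) - y * np.sin(t)    # below the optical axis
    return cx + focal_px * x / depth, cy + focal_px * drop / depth

# An operator would adjust cam_height / tilt_deg / focal_px interactively,
# checking the projected grid against their perception of the scene.
for y in (5.0, 10.0, 20.0):
    u, v = project_ground_point(2.0, y)
    print(f"ground point (2, {y:4.1f}) -> pixel ({u:6.1f}, {v:6.1f})")
```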
Abstract:
The Turing Test, originally configured for a human to distinguish between an unseen man and an unseen woman through a text-based conversational measure of gender, is the ultimate test for thinking: so conceived Alan Turing when he replaced the woman with a machine. He asserted that once a machine deceived a human judge into believing that it was the human, that machine should be attributed with intelligence. But is the Turing Test nothing more than a mindless game? We present results from recent Loebner Prizes, a platform for the Turing Test, and find that machines in the contest appear conversationally worse rather than better from 2004 to 2006, showing a downward trend in the highest scores awarded to them by human judges. Thus the machines are not thinking in the way an intelligent human would.
Abstract:
There are at least three distinct time scales that are relevant for the evolution of atmospheric convection. These are the time scale of the forcing mechanism, the time scale governing the response to a steady forcing, and the time scale of the response to variations in the forcing. The last of these, tmem, is associated with convective life cycles, which provide an element of memory in the system. A highly simplified model of convection is introduced, which allows for investigation of the character of convection as a function of the three time scales. For short tmem, the convective response is strongly tied to the forcing as in conventional equilibrium parameterization. For long tmem, the convection responds only to the slowly evolving component of forcing, and any fluctuations in the forcing are essentially suppressed. At intermediate tmem, convection becomes less predictable: conventional equilibrium closure breaks down and current levels of convection modify the subsequent response.
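The role of tmem can be mimicked with a two-stage relaxation toy, sketched below. The equations and all parameter values are assumptions for illustration, not the paper's simplified convection model: convection relaxes toward a running memory of the forcing, so a short memory tracks fluctuations while a long memory suppresses them.

```python
# Toy two-stage relaxation illustrating the role of t_mem (all values assumed).

import numpy as np

dt = 0.01
t = np.arange(0.0, 50.0, dt)
forcing = 1.0 + 0.5 * np.sin(2 * np.pi * t / 5.0)   # fluctuating forcing

def convective_response(t_mem, tau_adjust=1.0):
    """Convection relaxes toward a running memory of the forcing."""
    memory = conv = forcing[0]
    out = np.empty_like(t)
    for i, f in enumerate(forcing):
        memory += dt * (f - memory) / t_mem          # memory of past forcing
        conv += dt * (memory - conv) / tau_adjust    # adjustment toward it
        out[i] = conv
    return out

for t_mem in (0.1, 2.0, 20.0):
    c = convective_response(t_mem)[len(t) // 2:]     # discard spin-up
    print(f"t_mem={t_mem:5.1f}: response amplitude {0.5 * (c.max() - c.min()):.2f}")
```

Running this shows the short-memory case responding strongly to the fluctuating forcing and the long-memory case responding only to its slow mean, consistent with the two limits described above.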
Abstract:
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
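A minimal sketch of the state-augmentation idea, under invented settings: a 1D advected bed profile with an uncertain propagation speed, an augmented state [bed; speed], and a 3D-Var-style gain built from an assumed background covariance whose state-parameter cross-covariances do the work of updating the parameter.

```python
# Minimal sketch of parameter estimation by state augmentation with a
# 3D-Var-style analysis. The model (1D advected bed profile), covariance
# choices and every numerical value are illustrative assumptions; the
# paper's scheme and error covariances are necessarily more careful.

import numpy as np

rng = np.random.default_rng(42)
n, dx, dt = 40, 1.0, 0.5
x = np.arange(n) * dx

def step(bed, c):
    """Upwind, periodic advection of the bed profile at speed c: a crude
    stand-in for a 1D bed-form propagation model."""
    return bed - c * dt / dx * (bed - np.roll(bed, 1))

c_true, c_guess = 0.8, 0.3                    # true and first-guess speeds
truth = np.sin(2 * np.pi * x / 20.0)
model = truth.copy()

sig_b, sig_c, corr = 0.2, 0.4, 0.7            # assumed error statistics
R = 0.05 ** 2 * np.eye(n)                     # observation error covariance
H = np.hstack([np.eye(n), np.zeros((n, 1))])  # only the bed is observed

for _ in range(30):
    truth = step(truth, c_true)
    model = step(model, c_guess)
    y = truth + 0.05 * rng.standard_normal(n)

    # Augmented background covariance for [bed; c]. The crucial piece is
    # the state-parameter cross-covariance, built here from the forecast's
    # sensitivity to c; it is what lets bed observations correct the speed.
    sens = -dt / dx * (model - np.roll(model, 1))
    B = np.zeros((n + 1, n + 1))
    B[:n, :n] = sig_b ** 2 * np.eye(n)
    B[:n, n] = B[n, :n] = corr * sig_b * sig_c * sens
    B[n, n] = sig_c ** 2

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # 3D-Var / BLUE gain
    z = np.concatenate([model, [c_guess]]) + K @ (y - model)
    model, c_guess = z[:n], z[n]

print(f"estimated propagation speed: {c_guess:.2f} (truth {c_true})")
```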
Abstract:
We report on a numerical study of the impact of short, fast inertia-gravity waves on the large-scale, slowly evolving flow with which they co-exist. A nonlinear quasi-geostrophic numerical model of a stratified shear flow is used to simulate, at reasonably high resolution, the evolution of a large-scale mode which grows due to baroclinic instability and equilibrates at finite amplitude. Ageostrophic inertia-gravity modes are filtered out of the model by construction, but their effects on the balanced flow are incorporated using a simple stochastic parameterization of the potential vorticity anomalies which they induce. The model simulates a rotating, two-layer annulus laboratory experiment, in which we recently observed systematic inertia-gravity wave generation by an evolving, large-scale flow. We find that the impact of the small-amplitude stochastic contribution to the potential vorticity tendency on the model balanced flow is generally small, as expected. In certain circumstances, however, the parameterized fast waves can exert a dominant influence. In a flow which is baroclinically unstable to a range of zonal wavenumbers, and in which there is a close match between the growth rates of the multiple modes, the stochastic waves can strongly affect wavenumber selection. This is illustrated by a flow in which the parameterized fast modes dramatically re-partition the probability-density function for the equilibrated large-scale zonal wavenumber. In a second case study, the stochastic perturbations are shown to force spontaneous wavenumber transitions in the large-scale flow, which do not occur in their absence. These phenomena are due to a stochastic resonance effect. They add to the evidence that deterministic parameterizations in general circulation models, of subgrid-scale processes such as gravity wave drag, cannot always adequately capture the full details of the nonlinear interaction.
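The stochastic resonance mechanism invoked above can be illustrated with the standard double-well toy problem, standing in for two competing large-scale states: weak periodic forcing alone cannot trigger transitions, but adding noise does. This is a generic textbook sketch, not the annulus model.

```python
# Generic stochastic-resonance illustration: noise lets a bistable system
# (a stand-in for two competing large-scale wavenumbers) hop between states
# that weak deterministic forcing alone cannot connect. All values assumed.

import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 200_000

def transitions(noise_amp):
    """Count hops between the wells of V(x) = -x**2/2 + x**4/4 under weak
    periodic forcing plus additive noise of the given amplitude."""
    noise = rng.standard_normal(steps)
    x, hops, target = 1.0, 0, -1.0        # start in the right-hand well
    sqdt = np.sqrt(dt)
    for i in range(steps):
        forcing = 0.1 * np.sin(2 * np.pi * i * dt / 100.0)
        x += (x - x ** 3 + forcing) * dt + noise_amp * sqdt * noise[i]
        if x * target > 0.5:              # crossed into the other well
            hops += 1
            target = -target
    return hops

for amp in (0.0, 0.3, 0.6):
    print(f"noise amplitude {amp:.1f}: {transitions(amp)} well transitions")
```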
Abstract:
In the Eady model, where the meridional potential vorticity (PV) gradient is zero, perturbation energy growth can be partitioned cleanly into three mechanisms: (i) shear instability, (ii) resonance, and (iii) the Orr mechanism. Shear instability involves two-way interaction between Rossby edge waves on the ground and lid, resonance occurs as interior PV anomalies excite the edge waves, and the Orr mechanism involves only interior PV anomalies. These mechanisms have distinct implications for the structural and temporal linear evolution of perturbations. Here, a new framework is developed in which the same mechanisms can be distinguished for growth on basic states with nonzero interior PV gradients. It is further shown that the evolution from quite general initial conditions can be accurately described (peak error in perturbation total energy typically less than 10%) by a reduced system that involves only three Rossby wave components. Two of these are counterpropagating Rossby waves—that is, generalizations of the Rossby edge waves when the interior PV gradient is nonzero—whereas the other component depends on the structure of the initial condition and its PV is advected passively with the shear flow. In the cases considered, the three-component model outperforms approximate solutions based on truncating a modal or singular vector basis.
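A schematic sketch of the counterpropagating Rossby wave (CRW) block at the heart of the reduced system described above: two interacting edge-wave amplitudes whose coupling must overcome their counterpropagation for shear instability. The coefficients are illustrative, not derived from the Eady problem.

```python
# Schematic CRW interaction: two edge-wave amplitudes coupled across the
# shear layer. Coefficients are illustrative, not the Eady values. The
# reduced system described above adds a third, passively advected PV
# component for general initial conditions; it leaves this 2x2 block alone.

import numpy as np

k = 1.0                 # zonal wavenumber
c1, c2 = 0.4, -0.4      # counterpropagation: intrinsic phase speeds
gamma = 0.5             # mutual interaction strength of the two waves

# d/dt [y1, y2] = A [y1, y2]; growth requires the interaction to overcome
# the counterpropagation, i.e. gamma > k * |c1| here.
A = np.array([[1j * k * c1, gamma],
              [gamma, 1j * k * c2]])

growth_rate = np.linalg.eigvals(A).real.max()
print(f"growth rate of the phase-locked CRW pair: {growth_rate:.3f}")
```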
Abstract:
The Phosphorus Indicators Tool provides a catchment-scale estimation of diffuse phosphorus (P) loss from agricultural land to surface waters using the most appropriate indicators of P loss. The Tool provides a framework that may be applied across the UK to estimate P loss, which is sensitive not only to land use and management but also to environmental factors such as climate, soil type and topography. The model complexity incorporated in the P Indicators Tool has been adapted to the level of detail in the available data and the need to reflect the impact of changes in agriculture. Currently, the Tool runs on an annual timestep and at a 1 km² grid scale. We demonstrate that the P Indicators Tool works in principle and that its modular structure provides a means of accounting for P loss from one layer to the next, and ultimately to receiving waters. Trial runs of the Tool suggest that modelled P delivery to water approximates measured water quality records. The transparency of the structure of the P Indicators Tool means that identification of poorly performing coefficients is possible, and further refinements of the Tool can be made to ensure it is better calibrated and subsequently validated against empirical data, as it becomes available.
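A schematic sketch of the layered accounting structure, with invented coefficient names and values (the Tool's calibrated coefficients are not reproduced here): each layer passes a fraction of the previous layer's phosphorus onward, which is why a poorly performing coefficient shows up directly in the layer-to-layer budgets.

```python
# Schematic of the layered accounting: each module passes a fraction of
# the phosphorus from the previous layer onward, ending at the receiving
# water. Names and values are invented placeholders, not the Tool's.

def annual_p_delivery(p_surplus_kg_per_ha, mobilisation_coeff,
                      delivery_coeff, cell_area_ha=100.0):
    """Annual P load to water (kg) from one 1 km^2 cell (100 ha)."""
    mobilised = p_surplus_kg_per_ha * mobilisation_coeff   # leaves the soil pool
    delivered = mobilised * delivery_coeff                 # reaches the stream
    return delivered * cell_area_ha

# Hypothetical cell: 5 kg P/ha surplus, 20% mobilised, 30% of that delivered.
print(f"{annual_p_delivery(5.0, 0.20, 0.30):.0f} kg P per cell per year")
```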
Abstract:
This paper discusses the dangers inherent in attempting to simplify something as complex as development. It does this by exploring the Lynn and Vanhanen theory of deterministic development, which asserts that the varying levels of economic development seen between countries can be explained by differences in 'national intelligence' (national IQ). Assuming that intelligence is genetically determined, and as different races have been shown to have different IQs, they argue that economic development (measured as GDP/capita) is largely a function of race and that interventions to address imbalances can only have a limited impact. The paper presents the Lynn and Vanhanen case and critically discusses the data and analyses (linear regression) upon which it is based. It also extends the cause-effect basis of Lynn and Vanhanen's theory for economic development into human development by using the Human Development Index (HDI). It is argued that while there is nothing mathematically incorrect with their calculations, there are concerns over the data they employ. Even more fundamentally, it is argued that statistically significant correlations between the various components of the HDI and national IQ can occur via a host of cause-effect pathways, and hence the genetic determinism theory is far from proven. The paper ends by discussing the dangers involved in the use of over-simplistic measures of development as a means of exploring cause-effect relationships. While the creators of development indices such as the HDI have good intentions, simplistic indices can encourage simplistic explanations of under-development.
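The point that significant correlations can arise through indirect pathways is easy to demonstrate with synthetic data: below, a single latent factor drives both variables, so the correlation is strong although neither variable causes the other. The numbers are entirely invented, for illustration only.

```python
# Synthetic demonstration that a strong IQ-GDP correlation need not imply
# a direct causal link: one latent factor (think institutions or historical
# investment in schooling and health) drives both. All numbers invented.

import numpy as np

rng = np.random.default_rng(0)
n = 80                                       # hypothetical countries
latent = rng.standard_normal(n)              # shared confounder
measured_iq = 100 + 8 * latent + 4 * rng.standard_normal(n)
log_gdp_pc = 9 + 0.9 * latent + 0.5 * rng.standard_normal(n)

r = np.corrcoef(measured_iq, log_gdp_pc)[0, 1]
print(f"correlation(measured IQ, log GDP/capita) = {r:.2f}")  # strong, not causal
```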
Abstract:
Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values for soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981-August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R² values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R² values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for addition to catchment-scale models.
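A sketch in the spirit of the model description, with assumed equations and invented parameter values: daily relaxation of soil temperature toward air temperature, an extra apparent heat capacity near 0 °C to represent freezing and thawing, and exponential damping of the response under snow.

```python
# Assumed-form sketch of a daily soil temperature update: conduction toward
# air temperature, extra apparent heat capacity near 0 degC for freezing and
# thawing, and exponential snow damping. The exact equations and values are
# illustrative assumptions, not the calibrated Finnish-site parameters.

import math

K_T   = 5.0e4   # average soil thermal conductivity term (illustrative units)
C_A   = 1.0e6   # specific heat capacity of soil
C_ICE = 4.0e6   # extra apparent heat capacity due to freezing and thawing
F_S   = 5.0     # empirical snow damping parameter (per metre of snow)

def step_soil_temp(t_soil, t_air, snow_depth_m, depth_m=0.2):
    """Advance the soil temperature at depth_m (metres) by one day."""
    c_app = C_A + (C_ICE if -6.0 < t_soil < 0.0 else 0.0)   # assumed freezing band
    t_new = t_soil + K_T / (c_app * (2.0 * depth_m) ** 2) * (t_air - t_soil)
    return t_new * math.exp(-F_S * snow_depth_m)            # snow holds it near 0

# Ten days at -15 degC under 20 cm of snow: the snowpack keeps the soil
# at 20 cm depth only slightly below freezing.
t = 2.0
for _ in range(10):
    t = step_soil_temp(t, t_air=-15.0, snow_depth_m=0.2)
print(f"soil temperature at 20 cm after 10 days: {t:.1f} degC")
```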