Abstract:
Corruption remains one of the main challenges to the rule of law in the twenty-first century. Its incidence reduces the effectiveness of investment, raises the price of goods and services, undermines the competitiveness of firms, erodes citizens' trust in the legal system and, above all, condemns to poverty the very people who should be the beneficiaries of public policy. However, the fight that many governments and judicial officials have waged against this phenomenon has changed the forms in which it appears: the direct appropriation of public funds or the payment of bribes to officials is increasingly rare, with far subtler methods being preferred, such as cost overruns, massive subcontracting, or the creation of elaborate corporate structures in which public officials or their families hold stakes. This book is an effort at the legal and criminological study of corruption and crimes against public administration in Europe and Latin America. It gathers the most relevant criminal-law topics from the doctoral thesis of Professor Carlos Guillermo Castro Cuenca, entitled "Aproximación a la Corrupción en la contratación pública" and defended at the Universidad de Salamanca in February 2008, where it earned the unanimous grade of outstanding (sobresaliente).
Abstract:
Adaptive least mean square (LMS) filters with and without training sequences, known respectively as training-based and blind detectors, have been formulated to counter interference in CDMA systems. The convergence characteristics of these two LMS detectors are analyzed and compared in this paper. We show that the blind detector is superior to the training-based detector with respect to convergence rate. On the other hand, the training-based detector performs better in the steady state, giving a lower excess mean-square error (MSE) for a given adaptation step size. A novel decision-directed LMS detector is proposed that achieves both the low excess MSE of the training-based detector and the superior convergence performance of the blind detector.
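As a rough illustration of the distinction the abstract draws, here is a minimal sketch, not taken from the paper, of an LMS detector that switches from a training phase (error computed against known symbols) to a decision-directed phase (error computed against its own hard decisions). The spreading code, step size mu, phase lengths, and noise level are all illustrative assumptions; interfering users are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chips, mu, n_train, n_total = 16, 0.01, 500, 2000

# Desired user's spreading code, normalized to unit energy (an assumption).
code = rng.choice([-1.0, 1.0], n_chips) / np.sqrt(n_chips)
w = code.copy()                       # initialize detector at the matched filter

for k in range(n_total):
    b = rng.choice([-1.0, 1.0])                        # transmitted BPSK symbol
    r = b * code + 0.3 * rng.normal(size=n_chips)      # received chip vector
    y = w @ r                                          # detector output
    d = b if k < n_train else np.sign(y)               # training vs. decision-directed
    w += mu * (d - y) * r                              # LMS stochastic-gradient step
```

Once the training phase has driven the output close to the true symbols, the hard decisions np.sign(y) are usually correct, so the decision-directed phase can keep adapting with the small excess MSE of a trained detector while no further training symbols are spent.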
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal is used to modulate synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward, but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal one is solved by means of synaptic eligibility traces. In contrast to temporal difference (TD) based approaches to reinforcement learning, our rule is explicit with regard to the assumed biophysical mechanisms. Neurotransmitter concentrations determine plasticity and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system but may also depend on past events. The performance of the model is assessed by studying three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, and reward is only delivered at the last action, as is the case in any board game. The third task is the inspection game that has been studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields a learning behavior which is consistent with behavioral data from humans and monkeys, itself revealing properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery, and also with the learning of mixed strategies in two-opponent games.
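The rule itself is specified in the paper; as a rough illustration of the general shape of such a rule, here is a minimal sketch of reward-gated plasticity with synaptic eligibility traces in a population of stochastic neurons, using a score-function (REINFORCE-style) gradient estimate. The population feedback signal is omitted for brevity, the reward criterion is a toy placeholder, and all names and parameters (tau_e, lr, episode length) are assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, tau_e, lr = 20, 5, 10.0, 0.05
W = 0.1 * rng.normal(size=(n_out, n_in))  # synaptic weights
elig = np.zeros_like(W)                   # synaptic eligibility traces

for trial in range(200):
    elig[:] = 0.0
    for t in range(30):                   # one episode of 30 time steps
        x = (rng.random(n_in) < 0.2).astype(float)    # presynaptic spikes
        p = 1.0 / (1.0 + np.exp(-(W @ x)))            # firing probabilities
        s = (rng.random(n_out) < p).astype(float)     # stochastic postsynaptic spikes
        # The eligibility trace accumulates the per-synapse score function
        # (s - p) * x and decays with time constant tau_e, so credit for a
        # reward delivered late can still reach earlier spike decisions.
        elig = (1.0 - 1.0 / tau_e) * elig + np.outer(s - p, x)
    # Delayed reward arrives only at the episode's end (toy criterion: more
    # than half the population fired on the final step).
    R = 1.0 if s.sum() > n_out / 2 else 0.0
    W += lr * R * elig                    # reward-gated stochastic-gradient step
```

Because the update is the product of a global reward signal and a purely local trace, no synapse needs to know the network-wide outcome at the moment it fires; this is the sense in which eligibility traces handle the temporal half of the credit assignment problem described above.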