918 results for election observers
Abstract:
Genetic Programming can be effectively used to create emergent behavior for a group of autonomous agents. In the process we call Offline Emergence Engineering, the behavior is first bred in a Genetic Programming environment and then deployed to the agents in the real environment. In this article we briefly describe our approach, introduce an extended behavioral rule syntax, and discuss the impact of the expressiveness of the behavioral description on the generation success, comparing two scenarios: the election problem and the distributed critical section problem. We evaluate the results and formulate criteria for the applicability of our approach.
Abstract:
Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous interconnection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process whose solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. In this way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs in this evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and was, in most cases, superior to the other representations.
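The evolutionary loop sketched in this abstract (randomized fitness evaluation, selection, crossover, mutation) can be illustrated with a minimal Python sketch. This is a toy stand-in, not the thesis's actual framework: the genome is a flat list of integers rather than one of the six program representations, and the objective is an arbitrary callable rather than a network simulation.

import random

# Toy stand-in for a distributed-program representation: a flat list of integers.
def random_program(length=8):
    return [random.randint(0, 9) for _ in range(length)]

def crossover(a, b):
    cut = random.randrange(1, len(a))     # single-point crossover
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randint(0, 9) if random.random() < rate else g for g in p]

def fitness(program, objective, repetitions=5):
    # In the thesis, the objective rates how closely the program approximates the
    # specified global behavior over several randomized network simulations.
    # Here it is just a callable scoring the genome (lower is better).
    return sum(objective(program) for _ in range(repetitions)) / repetitions

def evolve(objective, pop_size=50, generations=100):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, objective))
        parents = population[:pop_size // 2]                  # keep the most promising half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=lambda p: fitness(p, objective))

# Toy usage: drive the genome toward all zeros.
best = evolve(lambda p: sum(p))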
Abstract:
The design, reformulation, and final signing of Plan Colombia by the then US President, Bill Clinton, on 13 July 2000 ushered in a new era of the US State's involvement in supposedly sovereign-territorial issues of Colombian politics. The implementation of Plan Colombia thereafter brought about a major realignment of political-military scales and terrains of conflict that has renewed discourses concerning the contemporary imperialist interests of key US-based but transnationally projected social forces, leading to arguments that stress the invigorated geo-political dimension of present-day strategies of capitalist accumulation. With the election of Álvaro Uribe Vélez as Colombian President in May 2002, his pledge to strengthen the national military campaign against the region's longest-surviving insurgent guerrilla group, Las FARC-EP, as well as other guerrilla factions, and a new focus on establishing the State project of “Democratic Security”, the military realm of governance and attempts to ensure property security and expanding capitalist investment have attained precedence in Colombia's national political domains. This working paper examines the interrelated nature of Plan Colombia, as a binational and indeed regional security strategy, and Uribe's Democratic Security project, as a means of showing how they have worked to pave the way for the implementation of a new “total market” regime of accumulation, based on large-scale agro-industrial investment and accelerated through processes of accumulation via dispossession. As such, the political and social reconfigurations involved manifest the multifarious scales of governance that become intertwined in incorporating neoliberalism in specific regions of the world economy. Furthermore, the militarisation-securitisation of such policies also illustrates the explicit contradictions of neoliberalism in a peripheral context, where coercion seems to prevail, which leads to a profound questioning of the extent to which neoliberalism can be thought of as a hegemonic politico-economic project.
Abstract:
Low perceptual familiarity with relatively rare left-handed as opposed to more common right-handed individuals may result in athletes' poorer ability to anticipate the former's action intentions. Part of such left-right asymmetry in visual anticipation could be due to an inefficient gaze strategy when confronting left-handed individuals. For example, owing to the predominant exposure to right-handed actions, observers may not mirror their gaze when viewing left- vs. right-handed actions but may preferentially fixate on an opponent's right body side, irrespective of the opponent's handedness. So far, however, empirical verification of this assumption has been lacking. Here we report an experiment in which team-handball goalkeepers' and non-goalkeepers' gaze behavior was recorded while they predicted the throw direction of left- and right-handed 7-m penalties shown as videos on a computer monitor. As expected, goalkeepers were considerably more accurate than non-goalkeepers, and prediction was better against right- than left-handed penalties. However, there was no indication of differences in gaze measures (i.e., number of fixations, overall and final fixation duration, time course of horizontal or vertical fixation deviation) as a function of skill group or the penalty-takers' handedness. The findings suggest that inferior anticipation of left-handed compared with right-handed individuals' action intentions may not be associated with misaligned gaze behavior. Rather, despite similar gaze patterns, accuracy differences could be due to observers' differential ability to pick up and interpret the visual information provided by left- vs. right-handed movements.
Abstract:
This thesis takes an interdisciplinary approach to the study of color vision, focussing on the phenomenon of color constancy formulated as a computational problem. The primary contributions of the thesis are (1) the demonstration of a formal framework for lightness algorithms; (2) the derivation of a new lightness algorithm based on regularization theory; (3) the synthesis of an adaptive lightness algorithm using "learning" techniques; (4) the development of an image segmentation algorithm that uses luminance and color information to mark material boundaries; and (5) an experimental investigation into the cues that human observers use to judge the color of the illuminant. Other computational approaches to color are reviewed and some of their links to psychophysics and physiology are explored.
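Contribution (2) builds on regularization theory; in its standard (Tikhonov) form, such a lightness estimate is obtained by minimizing a functional of the generic shape below (the specific operators used in the thesis may differ):

\[
\min_{z}\; \| A z - y \|^2 + \lambda \| P z \|^2
\]

where y is the observed image data (e.g., log luminance), A maps the unknown lightness field z to the data, P is a stabilizing operator such as a derivative, and \lambda balances fidelity to the data against smoothness of the solution.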
Abstract:
The biplot has proved to be a powerful descriptive and analytical tool in many areas of application of statistics. For compositional data the necessary theoretical adaptation has been provided, with illustrative applications, by Aitchison (1990) and Aitchison and Greenacre (2002). These papers were restricted to the interpretation of simple compositional data sets. In many situations the problem has to be described in some form of conditional modelling. For example, in a clinical trial where interest is in how patients' steroid metabolite compositions may change as a result of different treatment regimes, the compositions after treatment must be related to the compositions before treatment and to the nature of the treatments applied. Studying this through a biplot technique requires the development of some form of conditional compositional biplot, which is the purpose of this paper. We choose as a motivating application an analysis of the 1992 US Presidential Election, where interest may be in how the three-part composition, the percentage division of the presidential vote in each state among the three candidates (Bush, Clinton and Perot), depends on the ethnic composition and on the urban-rural composition of the state. The methodology of conditional compositional biplots is first developed and a detailed interpretation of the 1992 US Presidential Election provided. We use a second application, involving the conditional variability of tektite mineral compositions with respect to major oxide compositions, to demonstrate some hazards of simplistic interpretation of biplots. Finally, we conjecture on further possible applications of conditional compositional biplots.
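For reference, the compositional biplots of Aitchison and Greenacre are built from a singular value decomposition of the centered log-ratio (clr) transformed and centered data; for a D-part composition x = (x_1, ..., x_D):

\[
\operatorname{clr}(x) = \left( \ln\frac{x_1}{g(x)}, \ldots, \ln\frac{x_D}{g(x)} \right), \qquad g(x) = \left( \prod_{i=1}^{D} x_i \right)^{1/D}
\]

The conditional biplot developed in this paper extends this construction; its conditional details are not reproduced here.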
Abstract:
The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
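The IIA property referred to above can be made concrete with the standard multinomial logit share equations (the notation here is generic, not taken from the paper):

\[
s_i = \frac{\exp(x'\beta_i)}{\sum_{j} \exp(x'\beta_j)}, \qquad
\frac{s_i}{s_k} = \exp\!\big(x'(\beta_i - \beta_k)\big)
\]

Because the share ratio depends only on alternatives i and k, adding or deleting other outcomes leaves it unchanged; this is the invariance exploited by the usual zero replacement procedure, and it is exactly what the nested logit model relaxes.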
Abstract:
Children presenting with acute abdominal pain account for one of the main demands for care in paediatric emergency departments, making it a challenge for whoever performs the initial assessment to decide which patients warrant additional studies to rule out a surgical condition. Considering that appendicitis is the leading surgical emergency in children and that its complications are a frequent problem, it is essential that clinicians know the usefulness and benefit offered by the available diagnostic tools for reaching a more accurate diagnosis. Objective: To describe the management of patients under 18 years of age with suspected appendicitis at the FCI and to determine the sensitivity and specificity of ultrasound in this population. Methods: Diagnostic test study. Expedited review of children assessed at the FCI with clinical suspicion of acute appendicitis over a 4-month period. Data were collected on ultrasound findings for appendicitis (positive, negative, or indeterminate) and on the outcome of having appendicitis or not. Inference was then performed using a contingency table, with confidence intervals, to determine the sensitivity, specificity, PPV, NPV, and accuracy of the test based on the final diagnosis. Results: Surgical pathology was suspected in 52% of patients consulting for abdominal pain, the group in which abdominal ultrasound was used as a diagnostic tool to rule out appendicitis, yielding: sensitivity 63% (95% CI, 48.6-75.5), specificity 82.7% (95% CI, 76-87.8), and accuracy 78.2% (95% CI, 72-83.4). The prevalence of the disease was 22.8%, with a positive post-test probability of 88.3% (95% CI, 82.1-92.6). Conclusions: The results of this study show a lower performance of ultrasound as a diagnostic test for appendicitis in children in our setting than that described in the literature. This discrepancy is probably determined by the biases inherent to a historical study such as this one, in which the lack of uniformity in data recording in the clinical record by the observers (clinicians and radiologists), as well as the limitations in following up all patients who were not operated on, may alter the true operating performance of the test. Hence the importance of carrying out a concurrent study in the future.
Abstract:
Purpose: To establish the inter-observer variability in the categorisation of mammographic microcalcifications according to the current classification (BI-RADS 4th edition) between two evaluators by means of a consistency agreement study, and to determine the likelihood ratios and the positive predictive value of each microcalcification category. Materials and methods: This study includes 108 mammographic films from 107 patients, women aged 31 to 90 years who had undergone stereotactically guided biopsy for breast calcifications at the FCI IC between November 2004 and November 2008 and whose mammographic records were available for analysis. A data collection sheet was designed with which two examiners rated the calcifications as described in the BI-RADS 4th edition without knowing the pathology result or the other examiner's rating (double blind). A weighted Kappa analysis was then applied to determine the level of agreement beyond chance between the two examiners. Positive predictive values and likelihood ratios were calculated for each calcification category.
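For reference, the weighted Kappa statistic used in such an analysis takes the standard form (the symbols here are generic, not the study's own notation):

\[
\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, o_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}}
\]

where o_{ij} and e_{ij} are the observed and chance-expected proportions of ratings in cell (i, j) of the two examiners' cross-classification, and w_{ij} are disagreement weights, so that larger disagreements are penalized more heavily.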
Abstract:
The turn the international system took after the attacks of 11 September 2001 was of enormous importance for Iran's foreign policy and its positioning in the international system. Indeed, the war on terrorism launched by the United States, which would ultimately lead to the armed incursions into Afghanistan and Iraq (both countries bordering Iran) in 2001 and 2003 respectively, together with the Bush administration's inclusion of Iran in 2002 in the so-called Axis of Evil, generated in the Iranian imaginary a sense of international threat and, with it, the need to grow strong in order to preserve its territorial and governmental integrity. Thus, a complex economic and social situation, added to a strong coalition of the Iranian conservative party (one of the strongest in the country, backed by Ayatollah Ali Khamenei, supreme authority of the new Republic), brought a new conservative leader to the presidency: Mahmoud Ahmadinejad, a controversial and problematic figure who from his first presidential campaign in 2002 would defend the Republic's right to develop nuclear arms for peaceful purposes. The real difficulty arises when the West, led by the United States together with European powers such as Germany, France, and Great Britain, casts doubt on this last claim. This research seeks to examine each of the elements mentioned above and to establish how China and Russia, countries that view the handling of nuclear arsenals differently, manage to position themselves and lend credibility to this challenge before the international community.
Abstract:
Introduction: The gold standard for refractive error is retinoscopy. When evaluating students, optometry educators accept a difference of ±0.50 D in the refraction, but whether this is adequate for low and high ametropias has not been assessed statistically. The objective was to quantify the degree of inter-observer agreement in static retinoscopy between educators and students for high and low ametropias. Methodology: Agreement study among 4 observers on 40 eyes, 20 with high ametropias and 20 with low; non-probabilistic convenience sampling. Statistical analysis with the intraclass correlation coefficient, 95% confidence, 90% power, and with the graphical method of 95% limits of agreement. Results: Agreement for the spherical equivalent was 0.96 among educators and 0.56 among students. Among students, agreement was 0.89 for low refractive errors; among educators, 0.96 for high errors. Agreement among the four examiners was 0.78 overall, 0.86 for low errors and 0.67 for high errors. The margin of error was ±0.87 D among educators and ±3.15 D among students; for low errors it was ±0.61 D for educators and ±0.80 D for students, and for high errors ±1.10 D and ±4.22 D respectively. Discussion: Retinoscopy was more reliable among experienced professionals. Because agreement was compared between educators and students, it may be lower than that described by other studies that compared professionals, even though the students had been chosen for their good grades. Teaching strategies should be formulated to reduce the margins of error obtained and to improve agreement between educators and students.
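For reference, the 95% limits of agreement used in the graphical analysis follow the standard Bland-Altman form (symbols generic, not the study's own notation):

\[
\bar{d} \pm 1.96\, s_d
\]

where \bar{d} is the mean difference between two observers' retinoscopy results (e.g., spherical equivalents) and s_d is the standard deviation of those differences; the ±D margins of error reported above are presumably intervals of this kind.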
Abstract:
The speed of fault isolation is crucial for the design and reconfiguration of fault-tolerant control (FTC). In this paper the fault isolation problem is stated as a constraint satisfaction problem (CSP) and solved using constraint propagation techniques. The proposed method is based on constraint satisfaction techniques and on refining the uncertainty space of interval parameters. In comparison with other approaches based on adaptive observers, the major advantage of the presented method is that isolation is fast even when taking into account uncertainty in parameters, measurements, and model errors, and without the monotonicity assumption. A case study of a nonlinear dynamic system is presented to illustrate the proposed approach.
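The core idea, refining interval-valued parameters until a fault hypothesis either remains consistent with the measurements or is ruled out, can be hinted at with a minimal Python sketch. The one-parameter model y = a * u and all names below are illustrative assumptions, not the paper's nonlinear case study.

def contract_parameter(a_box, u_box, y_box):
    # One constraint-propagation step for the toy model y = a * u, where the
    # parameter a and the measurements u, y are intervals (lo, hi) with
    # positive bounds for simplicity.
    a_lo, a_hi = a_box
    u_lo, u_hi = u_box
    y_lo, y_hi = y_box
    induced_lo, induced_hi = y_lo / u_hi, y_hi / u_lo   # from y = a * u: a in [y_lo/u_hi, y_hi/u_lo]
    lo, hi = max(a_lo, induced_lo), min(a_hi, induced_hi)
    if lo > hi:
        return None          # empty intersection: this hypothesis cannot explain the data
    return (lo, hi)          # contracted (refined) uncertainty interval for a

# A nominal parameter interval stays consistent with the measured intervals ...
print(contract_parameter((0.8, 1.2), (2.0, 2.1), (1.9, 2.3)))
# ... while a faulty-parameter hypothesis that cannot reproduce them is rejected.
print(contract_parameter((0.2, 0.4), (2.0, 2.1), (1.9, 2.3)))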
Abstract:
This paper deals with fault detection and isolation problems for nonlinear dynamic systems. Both problems are stated as constraint satisfaction problems (CSP) and solved using consistency techniques. The main contribution is an isolation method based on consistency techniques and on refining the uncertainty space of interval parameters. The major advantage of this method is that the isolation speed is fast even when taking into account uncertainty in parameters, measurements, and model errors. Interval calculations bring independence from the monotonicity assumption made by several observer-based approaches to fault isolation. An application to a well-known alcoholic fermentation process model is presented.
Abstract:
What are the fundamental entities in social networks, and what information is contained in social graphs? We will discuss some selected concepts in social network analysis, such as one- and two-mode networks, prestige and centrality, and cliques, clans and clubs. Readings: Web tool predicts election results and stock prices, J. Palmer, New Scientist, 07 February (2008) [Protected Access]. Optional: Social Network Analysis: Methods and Applications, S. Wasserman and K. Faust (1994).
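As a small illustration of these concepts (not part of the listed readings), the following sketch uses the Python networkx library to compute centrality measures and maximal cliques on a toy graph.

import networkx as nx

# A small undirected (one-mode) friendship graph.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])

print(nx.degree_centrality(G))        # centrality: fraction of possible ties each node has
print(nx.betweenness_centrality(G))   # nodes that lie on many shortest paths between others
print(list(nx.find_cliques(G)))       # maximal cliques, e.g. the triangle a-b-c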
Abstract:
Objectives: To evaluate the operating characteristics of the physical examination in the diagnosis of pneumonia and to assess its inter-observer agreement. Background: Studies that have evaluated the physical examination as a diagnostic test for pneumonia are methodologically deficient. Design: Blinded cross-sectional study for the evaluation of a diagnostic test. Patients: Adults presenting to the emergency and inpatient services of the FCI with acute respiratory symptoms or an exacerbation of such symptoms. Measurements: Physical examination by two independent observers, chest radiography, and reading by an expert radiologist. Data needed to calculate the Pneumonia Severity Index (PSI) were collected. Results: Of 198 patients, 85 (42%) had radiographic pneumonia. The operating characteristics for examiner 1 were: sensitivity 63.2%, specificity 54.1%, LR(+) = 1.36, LR(-) = 0.68; for examiner 2: sensitivity 34.3%, specificity 71.7%, LR(+) = 1.17, LR(-) = 0.92. Agreement between the clinical diagnoses was k = -0.052 for pleural effusion, not significant (p = 0.445), and k = 0.25 for pneumonia, significant (p = 0.022). When pneumonia severity was measured by PSI, sensitivity increased with severity stratum (II: sensitivity 40%; III: sensitivity 57%; IV: sensitivity 75%; V: sensitivity 80%). Conclusions: The physical examination is neither sensitive nor specific in the diagnosis of pneumonia. There is a weak level of agreement in the chest physical examination for the diagnosis of pleural effusion and pneumonia. The clinical diagnosis of pneumonia becomes more likely as PSI severity increases.
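For reference, the likelihood ratios reported above follow the standard definitions:

\[
LR(+) = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
LR(-) = \frac{1 - \text{sensitivity}}{\text{specificity}}
\]

For examiner 1, for instance, 0.632 / (1 - 0.541) ≈ 1.38 and (1 - 0.632) / 0.541 ≈ 0.68, consistent (up to rounding of the underlying counts) with the values reported in the abstract.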