998 results for variable-stepsize implementation
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages offers promise to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues, etc. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
Abstract:
Bonuses – which are often used to mitigate principal-agent problems and to encourage employees to work harder – have increased tremendously in the financial sector during the last decade, and have often been seen as a contributing factor to the financial crisis of 2008. The recent European Union (EU) action to adopt a policy that restricts bonuses paid to bankers may seem promising at first, but this does not address the real issues behind variable rewards. Compensation policies should be changed to encourage responsible risk-taking and decision-making through the implementation of broader performance metrics, forfeitable holdbacks and hybrid bonds. Furthermore, a change in organisational culture is needed to improve ethical behaviour leading to a re-balancing of stakeholders’ interests in the financial sector.
Abstract:
We describe a generalization of the cluster-state model of quantum computation to continuous-variable systems, along with a proposal for an optical implementation using squeezed-light sources, linear optics, and homodyne detection. For universal quantum computation, a nonlinear element is required. This can be satisfied by adding to the toolbox any single-mode non-Gaussian measurement, while the initial cluster state itself remains Gaussian. Homodyne detection alone suffices to perform an arbitrary multimode Gaussian transformation via the cluster state. We also propose an experiment to demonstrate cluster-based error reduction when implementing Gaussian operations.
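For orientation, the standard formalism behind such Gaussian cluster states (not specific to this proposal) is the following: with unit-weight $\hat{C}_Z = e^{i\hat{x}\otimes\hat{x}}$ gates acting on $p$-squeezed modes, an ideal continuous-variable cluster state on a graph with neighbourhoods $N(a)$ is characterised by its nullifiers,

$$\hat{\delta}_a \;=\; \hat{p}_a \;-\; \sum_{b\in N(a)} \hat{x}_b, \qquad \operatorname{Var}\bigl(\hat{\delta}_a\bigr)\;\longrightarrow\;0 \quad\text{in the limit of infinite squeezing},$$

and measurement-based Gaussian operations then correspond to homodyne measurements of rotated quadratures $\hat{x}\sin\theta+\hat{p}\cos\theta$ on the cluster nodes, consistent with the claim above that homodyne detection alone suffices for arbitrary multimode Gaussian transformations.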
Abstract:
We review the field of quantum optical information from elementary considerations to quantum computation schemes. We illustrate our discussion with descriptions of experimental demonstrations of key communication and processing tasks from the last decade and also look forward to the key results likely in the next decade. We examine both discrete (single-photon) processing and schemes that employ continuous-variable manipulations. The mathematical formalism is kept to the minimum needed to understand the key theoretical and experimental results.
Abstract:
An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm is also realized, using the ATHAPASCAN package as the parallel programming environment. The computational results demonstrate high parallel efficiency of the presented algorithm and show that a good solution is obtained when the almost optimal density function is used as the transition density.
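To make the branching-process idea concrete, the following is a minimal sketch (not the paper's algorithm or its ATHAPASCAN implementation) of an unbiased branching estimator for a toy quadratically non-linear equation u(x) = f(x) + λ ∫₀¹ k(x,y) u(y)² dy; the kernel, free term, stopping probability and uniform transition density are illustrative choices, not the paper's almost optimal density.

```python
# Hypothetical toy example: branching Monte Carlo for
#   u(x) = f(x) + lam * integral_0^1 k(x, y) * u(y)**2 dy.
# On survival the walk branches into two independent offspring, one for each
# factor of u(y)**2, which keeps the estimator unbiased.
import random

LAM = 0.3        # strength of the non-linearity
P_STOP = 0.6     # termination probability (keeps the branching subcritical)


def f(x):
    return 1.0                      # free term of the toy equation


def k(x, y):
    return x * y                    # kernel of the toy equation


def estimate(x):
    """One unbiased random sample of u(x)."""
    if random.random() < P_STOP:
        return f(x) / P_STOP
    y = random.random()                        # transition density q(y|x) = 1 on [0, 1]
    weight = LAM * k(x, y) / (1.0 - P_STOP)    # importance weight of the branch
    return weight * estimate(y) * estimate(y)  # two offspring give the u(y)**2 factor


def solve(x, n_samples=200_000):
    """Monte Carlo average approximating u(x)."""
    return sum(estimate(x) for _ in range(n_samples)) / n_samples


if __name__ == "__main__":
    print(solve(0.5))
```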
Abstract:
The “Nash program” initiated by Nash (Econometrica 21:128–140, 1953) is a research agenda aiming at representing every axiomatically determined cooperative solution to a game as a Nash outcome of a reasonable noncooperative bargaining game. The L-Nash solution, first defined by Forgó (Interactive Decisions. Lecture Notes in Economics and Mathematical Systems, vol 229. Springer, Berlin, pp 1–15, 1983), is obtained as the limiting point of the Nash bargaining solution when the disagreement point goes to negative infinity in a fixed direction. In Forgó and Szidarovszky (Eur J Oper Res 147:108–116, 2003), the L-Nash solution was related to the solution of multicriteria decision making and two different axiomatizations of the L-Nash solution were also given in this context. In this paper, finite bounds are established for the penalty of disagreement in certain special two-person bargaining problems, making it possible to apply all the implementation models designed for Nash bargaining problems with a finite disagreement point to obtain the L-Nash solution as well. For another set of problems where this method does not work, a version of Rubinstein’s alternating offers game (Econometrica 50:97–109, 1982) is shown to asymptotically implement the L-Nash solution. If the penalty is internalized as a decision variable of one of the players, then a modification of Howard’s game (J Econ Theory 56:142–159, 1992) also implements the L-Nash solution.
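In the standard notation assumed here (feasible set $S$, disagreement point $d$, fixed direction $t$), the Nash bargaining solution and the limiting construction referred to above are

$$N(S,d) \;=\; \arg\max_{u\in S,\;u\ge d}\;(u_1-d_1)(u_2-d_2), \qquad \text{L-Nash}(S,t) \;=\; \lim_{\lambda\to\infty} N\bigl(S,\,d_0-\lambda t\bigr),$$

whenever this limit exists.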
Abstract:
Policy/program implementation, i.e., the process of fulfilling policy/program directives, is fundamentally tied to change. Implementation studies have examined the process and identified many critical organizational variables, although it is individuals who perform the activities. Many of the studies are predicated on the rational, goal-oriented model of organizations and therefore present only the goal-oriented view of implementation. Organizational change and resistance to it, however, are not fully explained by the rational model of organizations. Other schools of thought provide different views of organizations from which explanation may emerge. Bolman and Deal (1984, 1991a, 1994) provide a different perspective for examining organizations: they argue that organizations should be viewed through four different frames or lenses. Framing and reframing organizational action captures the complexity of action and provides a better understanding of organizational processes. Understanding of policy/program implementation will also benefit from the use of the four-frame approach. The goal of this research is to provide a better understanding of the implementation process by examining individual attitudes toward change, the dependent variable of this research, and studying the relationship between the dependent variable and frame. The research was conducted in two phases. In Phase One, a survey was sent to 306 school administrators and teachers in magnet programs in Dade County, Florida. The survey instrument was composed of 55 questions, including six from Bolman and Deal's Leadership Orientation Survey (1988) and 38 questions about organizational change. In Phase Two, a more in-depth analysis of four schools was conducted to further explore the relationship between frame and attitude toward change. The results revealed that frame was a factor in explaining differences in personal Attitude Toward Change and Comfort Level with Change. Individuals using the symbolic frame had more positive attitudes toward change and were also more comfortable with change. The results of Phase Two partially supported this finding in that the most fully implemented program was the product of an administrator who had chosen the symbolic frame.
Abstract:
1. Genomewide association studies (GWAS) enable detailed dissections of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS includes the process of repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model and at the same time is computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method we present here was successful in estimating permanent environmental effects from simulated repeated measures data. Additionally, we found that especially for variable phenotypes having large variation between years, the repeated measurements model has a substantial increase in power compared to a model using average phenotypes as a response. 4. The method is available in the R package RepeatABEL. It increases the power in GWAS having repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
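In a conventional notation (assumed here rather than quoted from the paper), the repeated-measures GWAS model and its generalized least squares step can be written as

$$\mathbf{y} \;=\; \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{W}\mathbf{p} + \mathbf{e},\qquad \mathbf{a}\sim N(\mathbf{0},\sigma_a^2\mathbf{A}),\quad \mathbf{p}\sim N(\mathbf{0},\sigma_p^2\mathbf{I}),\quad \mathbf{e}\sim N(\mathbf{0},\sigma_e^2\mathbf{I}),$$

$$\mathbf{V} \;=\; \sigma_a^2\,\mathbf{Z}\mathbf{A}\mathbf{Z}^{\top} + \sigma_p^2\,\mathbf{W}\mathbf{W}^{\top} + \sigma_e^2\,\mathbf{I}, \qquad \hat{\boldsymbol{\beta}} \;=\; \bigl(\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{X}\bigr)^{-1}\mathbf{X}^{\top}\mathbf{V}^{-1}\mathbf{y},$$

where $\mathbf{X}$ holds the fixed effects including the SNP genotype, $\mathbf{A}$ is the relationship matrix correcting for population structure, $\mathbf{a}$ and $\mathbf{p}$ are the polygenic and permanent environmental effects, and $\mathbf{V}$ is estimated once and then reused for each SNP in the GLS fit.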
Abstract:
Traffic demand increases are pushing aging ground transportation infrastructures to their theoretical capacity. The result of this demand is traffic bottlenecks that are a major cause of delay on urban freeways. In addition, the queues associated with those bottlenecks increase the probability of a crash while adversely affecting environmental measures such as emissions and fuel consumption. With limited resources available for network expansion, traffic professionals have developed active traffic management systems (ATMS) in an attempt to mitigate the negative consequences of traffic bottlenecks. Among these ATMS strategies, variable speed limits (VSL) and ramp metering (RM) have been gaining international interest for their potential to improve safety, mobility, and environmental measures at freeway bottlenecks. Though previous studies have shown the tremendous potential of VSL and VSL paired with ramp metering (VSLRM) control, little guidance has been developed to assist decision makers in the planning phase of a congestion mitigation project that is considering VSL or VSLRM control. To address this need, this study has developed a comprehensive decision/deployment support tool for the application of VSL and VSLRM control in recurrently congested environments. The decision tool will assist practitioners in deciding the most appropriate control strategy at a candidate site, which candidate sites have the most potential to benefit from the suggested control strategy, and how to most effectively design the field deployment of the suggested control strategy at each implementation site. To do so, the tool comprises three key modules: (1) a Decision Module, (2) a Benefits Module, and (3) a Deployment Guidelines Module. Each module uses commonly known traffic flow and geometric parameters as inputs to statistical models and empirically based procedures to provide guidance on the application of VSL and VSLRM at each candidate site. These models and procedures were developed from the outputs of simulated experiments, calibrated with field data. To demonstrate the application of the tool, a list of real-world candidate sites was selected from the Maryland State Highway Administration Mobility Report. Field data from each candidate site were then input into the tool to illustrate the step-by-step process required for efficient planning of VSL or VSLRM control. The output of the tool includes the suggested control system at each site, a ranking of the sites based on the expected benefit-to-cost ratio, and guidelines on how to deploy the VSL signs, ramp meters, and detectors at the deployment site(s). This research has the potential to assist traffic engineers in the planning of VSL and VSLRM control, thus enhancing the procedure for allocating limited resources for mobility and safety improvements on highways plagued by recurrent congestion.
Abstract:
Global Network for the Molecular Surveillance of Tuberculosis 2010: A. Miranda (Tuberculosis Laboratory of the National Institute of Health, Porto, Portugal)
Abstract:
This Bachelor's Thesis (TFG) is related to the internship carried out with the Guardia Municipal of San Sebastián (Basque Country) and aims to analyze the protection orders granted to immigrant and national victims of gender-based violence, in order to determine whether there is any difference in how they are applied. It also seeks to analyze the procedure followed by this police force and by social services in these cases, as well as the profile of the victim and of the aggressor. In addition, controversial aspects of the Ley Orgánica 1/2004 de Medidas de Protección Integral de la Violencia de Género are examined. To this end, a mixed methodology will be used: on the qualitative side, four interviews (with two victims, a police officer and a social worker) will be conducted to explore the subject in greater depth; on the quantitative side, the Guardia Municipal's database on gender-based violence will be explored in order to carry out a statistical analysis. Finally, the conclusions reached in this work will be presented, and improvements will be proposed both for future research and for the operational practice of the Guardia Municipal.
Abstract:
A variable width pulse generator featuring more than 4-V peak amplitude and less than 10-ns FWHM is described. In this design the width of the pulses is controlled by means of the control signal slope. Thus, a variable transition time control circuit (TTCC) is also developed, based on the charge and discharge of a capacitor by means of two tunable current sources. Additionally, it is possible to activate/deactivate the pulses when required, therefore allowing the creation of any desired pulse pattern. Furthermore, the implementation presented here can be electronically controlled. In conclusion, due to its versatility, compactness and low cost it can be used in a wide variety of applications.
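Although the abstract gives no equations, the textbook relation behind a slope-controlled TTCC of this kind is the constant-current charging of the timing capacitor: with capacitance $C$, tunable source current $I$ and voltage swing $\Delta V$,

$$\frac{dV}{dt} \;=\; \frac{I}{C}, \qquad t_{\mathrm{transition}} \;=\; \frac{C\,\Delta V}{I},$$

so adjusting the two current sources adjusts the control-signal slope and hence the width of the generated pulses.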
Abstract:
Organizations today face strong pressure to adapt to a competitive world marked by declining profits and constant uncertainty in their cash flow. These circumstances force organizations into continuous improvement, seeking new ways to manage their processes and resources. For service organizations in the telecommunications sector, one of the most important competitive advantages to be gained is productivity, because their earnings depend directly on the number of activities each employee can execute. The challenge is to do more with less and with better quality. To achieve this, the need to manage human resources effectively arises, and this is where compensation systems take on an important role. The objective of this work is to design and apply a variable remuneration model for a professional services company in the telecommunications sector and, in doing so, to contribute to the study of performance management and human talent management in Colombia. Carrying it out made it possible to document the design and application of the variable remuneration model in a telecommunications project in Colombia. The design drew on current trends in compensation programs and on performance management theories to produce an integral model that enables sustained long-term growth and motivates the organization's most important resource, its human talent. Its application also allowed the documentation of problems and successes encountered in implementing models of this kind.
Abstract:
Isolated DC-DC converters play a significant role in fast charging and in maintaining the variable output voltage required for EV applications. This study aims to investigate the different isolated DC-DC converters for onboard and offboard chargers; then, once the topology is selected, to study the control techniques; and, finally, to achieve a real-time converter model to accomplish Hardware-In-The-Loop (HIL) results. Among the different isolated DC-DC topologies, the Dual Active Bridge (DAB) converter has the advantage of allowing bidirectional power flow, which enables operating in both Grid-to-Vehicle (G2V) and Vehicle-to-Grid (V2G) modes. Recently, DAB converters have been used in offboard chargers for high-voltage applications thanks to SiC and GaN MOSFETs; this new technology also allows the utilization of higher switching frequencies. By employing soft-switching techniques to reduce switching losses, higher switching frequency operation is possible in the DAB. There are four phase-shift control techniques for the DAB converter: single phase shift, extended phase shift, dual phase shift, and triple phase shift control. This thesis considers two control strategies, single-phase and dual-phase shift, to understand the circulating currents, power losses, and output capacitor size reduction in the DAB. Hardware-In-The-Loop (HIL) experiments are carried out for both controls at high switching frequencies using the PLECS software tool and the RT Box that supports PLECS. The Root Mean Square Error is also calculated for steady-state values of the output voltage with different sampling frequencies in both controls to identify the achievable sampling frequency in real time. A DSP implementation is also executed to emulate the optimized DAB converter design, and final real-time simulation results are discussed for both the single-phase and dual-phase shift controls.
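For context (a standard result under the usual idealized assumptions, not quoted from the thesis), the power transferred by a DAB under single phase shift control is

$$P \;=\; \frac{n\,V_1 V_2}{2\pi f_{\mathrm{sw}} L}\,\varphi\!\left(1-\frac{|\varphi|}{\pi}\right),$$

where $n$ is the transformer turns ratio, $L$ the series (leakage) inductance, $f_{\mathrm{sw}}$ the switching frequency and $\varphi$ the bridge-to-bridge phase shift in radians, with maximum power at $\varphi=\pm\pi/2$; extended, dual and triple phase shift control add further degrees of freedom for reducing circulating currents.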
Abstract: