991 results for temporal logic programming


Relevance: 30.00%

Publisher:

Abstract:

An overview of programming and software development.

Relevance: 30.00%

Publisher:

Abstract:

The transmission network planning problem is a non-linear integer mixed programming (NLIMP) problem. Most of the algorithms used to solve it rely on a linear programming (LP) subroutine to solve the LP problems arising in the planning algorithm, and the resolution of these LPs can represent a major computational effort. A particularity of these LPs is that, at the optimal solution, only a few of the inequality constraints are binding. This work transforms the LP into an equivalent problem with a single equality constraint (the power flow equation) and many inequality constraints, and uses a dual simplex algorithm together with a relaxation strategy to solve the LPs. The optimisation process starts with only the equality constraint and, at each step, the most violated inequality constraint is added. The logic is similar to a proposal for electric systems operation planning. The results show higher performance of the algorithm when compared with primal simplex methods.
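The relaxation strategy described above amounts to delayed constraint generation. As a minimal sketch, assuming SciPy's HiGHS dual simplex solver ("highs-ds") and placeholder problem data rather than the paper's actual network model, one version of the loop could look like this:

```python
import numpy as np
from scipy.optimize import linprog

def relaxation_dual_simplex(c, A_eq, b_eq, A_ub, b_ub, bounds=(0, None), tol=1e-8):
    """Delayed constraint generation: solve with the equality constraint(s)
    only, then repeatedly add the most violated inequality and re-solve."""
    A_ub, b_ub = np.asarray(A_ub), np.asarray(b_ub)
    active = []                                     # inequality rows added so far
    while True:
        res = linprog(c,
                      A_ub=A_ub[active] if active else None,
                      b_ub=b_ub[active] if active else None,
                      A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds,
                      method="highs-ds")            # HiGHS dual simplex
        assert res.status == 0, res.message         # sketch: assume feasible/bounded
        violation = A_ub @ res.x - b_ub             # positive entries are violated rows
        worst = int(np.argmax(violation))
        if violation[worst] <= tol:
            return res                              # every inequality satisfied
        active.append(worst)
```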

Relevance: 30.00%

Publisher:

Abstract:

This work seeks to explain, from a historical perspective, the construction of Western temporality, presenting the concepts of morphogenesis and morphodynamics as contemporary representations within geographical science of the synthesis forged around time. In this sense, it examines the perspective of time in the Middle Ages, its implications and social substance, as counterposed to the notion of time erected by the rise of a merchant class in Europe; we seek to present the social conception of this category in the West, arguing that it was not constituted as a supra-social element, but was built by the very organization and internal contradictions of European society. Finally, we consider the delineations drawn by geography, set out in propositions about the dynamics of nature and society, these being the most recent concepts for assimilating the prevailing logic of time. To this end, we use a literature review and a comparison of temporal models from different periods to trace the contours of the research...

Relevance: 30.00%

Publisher:

Abstract:

This article proposes ARBot, a system whose main objective is to present concepts of logic to elementary and secondary school students. The system was developed using Augmented Reality (AR), a technology that makes it possible to complement the real environment around the user by adding virtual objects. In this scenario, the AR is used through a virtual game interface that presents cognitive challenges. To solve these challenges, users must program three-dimensional virtual characters using a visual language. As a result, concepts of algorithms and programming are assimilated by users in a playful way. In addition, the system enables two users to interact in a cooperative game mode, in which it focuses on collaborative learning, since it allows users to jointly solve the cognitive challenge presented by the system.
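The abstract does not specify ARBot's visual language; purely as an illustration of the sequential-command style of programming it describes, here is a toy interpreter with hypothetical command names and grid semantics:

```python
# Hypothetical command set for steering a virtual character on a grid;
# the real ARBot visual language is not described in the abstract.
MOVES = {"FORWARD": (0, 1), "BACK": (0, -1), "LEFT": (-1, 0), "RIGHT": (1, 0)}

def run_program(program, start=(0, 0)):
    """Execute a sequence of movement commands and return the final position."""
    x, y = start
    for command in program:
        dx, dy = MOVES[command]
        x, y = x + dx, y + dy
    return (x, y)

# A learner's solution to a challenge such as "reach cell (1, 2)":
assert run_program(["FORWARD", "FORWARD", "RIGHT"]) == (1, 2)
```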

Relevance: 30.00%

Publisher:

Abstract:

The Lucia jig is a technique that promotes neuromuscular reprogramming of the masticatory system and allows stabilization of the mandible without the interference of dental contacts, maintaining the mandibular position in harmony with the musculature in normal subjects or in patients with temporomandibular dysfunction (TMD). This study aimed to analyze electromyographically the activity (RMS) of the masseter and temporal muscles in normal subjects (control group) during the use of an anterior programming device, the Lucia jig, in place for 0, 5, 10, 20 and 30 minutes, to demonstrate its effect on the stomatognathic system. Forty-two healthy dentate individuals (aged 21 to 40 years) with normal occlusion and without parafunctional habits or temporomandibular dysfunction (RDC/TMD) were evaluated on the basis of the electromyographic activity of the masseter and temporal muscles before placement of a neuromuscular reprogramming device, the Lucia jig, on the upper central incisors. There were no statistically significant differences (p < 0.05) in the electromyographic activity of the masticatory muscles across the time periods. The Lucia jig changed the electromyographic activity by promoting neuromuscular reprogramming: in most time periods it decreased the activation of the masticatory muscles, showing that this device has wide applicability in dentistry. The use of a Lucia jig over 0, 5, 10, 15, 20 and 30 minutes did not promote any statistically significant increase in muscle activity despite differences in the data, showing that this intra-oral device can be used in dentistry.

Relevance: 30.00%

Publisher:

Abstract:

Objectives: To report the results of cochlear implantation via the middle fossa approach in 4 patients, discuss the complications, and present a detailed description of the programming specifications in these cases. Study Design: Retrospective case review. Setting: Tertiary-care referral center with a well-established cochlear implant program. Patients: Four patients with bilateral canal wall down mastoid cavities who underwent the middle fossa approach for cochlear implantation. Interventions: Cochlear implantation and subsequent rehabilitation. A middle fossa approach with cochleostomy was successfully performed on the most superficial part of the apical turn in 4 patients. A Nucleus 24 cochlear implant system was used in 3 patients and a MED-EL Sonata Medium device in 1 patient. The single electrode array was inserted through a cochleostomy from the cochlear apex and occupied the apical, middle, and basal turns. Telemetry and intraoperative impedance recordings were performed at the end of surgery. A CT scan of the temporal bones was performed to document electrode insertion for all of the patients. Main Outcome Measures: Complications, hearing thresholds, and speech perception outcomes were evaluated. Results: Neural response telemetry showed present responses in all but 1 patient, who demonstrated facial nerve stimulation during the test. Open-set speech perception varied from 30% to 100%, despite the frequency allocation order of the MAP. Conclusion: Cochlear implantation via the middle cranial fossa is a safe approach, although it is a challenging procedure, even for experienced surgeons.

Relevance: 30.00%

Publisher:

Abstract:

Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition makes it possible to sample multiple channels of the PMT simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application-Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a robust architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment lent by Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic of the current DAQ board discussed in this thesis.
As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations, meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN in Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam; those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also made it possible to estimate the resolution of the pixel sensor, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
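For reference, pitch/sqrt(12) is the standard binary-readout resolution of a segmented sensor; a quick numeric check, with a 50 um pitch assumed purely for illustration (the abstract does not state APSEL-4D's pitch):

```python
import math

pitch_um = 50.0                      # assumed pixel pitch, for illustration only
sigma_um = pitch_um / math.sqrt(12)  # expected binary-readout resolution
print(f"sigma = {sigma_um:.1f} um")  # sigma = 14.4 um
```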

Relevance: 30.00%

Publisher:

Abstract:

Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) w.r.t. a goal, and provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (the theory). However, when we deal with logic programming we need an efficient algorithm to find such implications. In this work we use a logic rather similar to human logic. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion depending on not-definitely-true premises belonging to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed by Nute for efficient defeasible reasoning (see Chapter 2). These kinds of applications are useful in the area of law, especially if they offer an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, if the theory is the set of laws, a key claim is the conclusion that one of the parties wants to prove (and the other wants to defeat), and we add dynamic assertion of rules, namely facts put forward by the parties, then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players. Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a meta-level containing different meta-evaluators. The first has been explained above, the second is needed to perform the game model, and the last will be used to change the game execution and tree derivation strategies.
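As a rough, non-authoritative illustration of the inference involved, the following toy propositional approximation treats a conclusion as defeasibly provable when some applicable rule supports it and every applicable rule for its negation is beaten under the superiority relation. Real Nute-style defeasible logic also distinguishes strict rules, defeaters and proof tags, none of which are modelled here:

```python
def provable(goal, facts, rules, superior, depth=0):
    """Toy defeasible provability: 'rules' are (id, body, head) triples,
    literals are strings like "flies" or "~flies", and 'superior' is a
    set of (winner_id, loser_id) pairs."""
    if depth > 50:                      # naive guard against cyclic theories
        return False
    if goal in facts:
        return True
    negation = goal[1:] if goal.startswith("~") else "~" + goal
    applicable = lambda body: all(provable(b, facts, rules, superior, depth + 1)
                                  for b in body)
    for rule_id, body, head in rules:
        if head != goal or not applicable(body):
            continue
        attackers = [a_id for a_id, a_body, a_head in rules
                     if a_head == negation and applicable(a_body)]
        if all((rule_id, a_id) in superior for a_id in attackers):
            return True                 # supported, and every attacker is beaten
    return False

# The classic penguin example: r2 ("penguins don't fly") is superior to r1.
facts = {"bird", "penguin"}
rules = [("r1", ["bird"], "flies"), ("r2", ["penguin"], "~flies")]
superior = {("r2", "r1")}
assert not provable("flies", facts, rules, superior)
assert provable("~flies", facts, rules, superior)
```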

Relevance: 30.00%

Publisher:

Abstract:

We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed), even if they are not deterministic (because they call other predicates that can produce more than one solution). Applications of such determinacy information include detecting programming errors, performing certain high-level program transformations for improving search efficiency, optimizing low-level code generation and parallel execution, and estimating tighter upper bounds on the computational costs of goals and on data sizes, which can be used for program debugging, resource consumption and granularity control, etc. We have implemented the analysis and integrated it in the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
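To illustrate the mutual-exclusion condition on which this determinacy criterion rests, the toy check below represents each clause test as an interval constraint on a single numeric argument and verifies that no two tests can succeed together; the actual analysis works over inferred modes and types on real Herbrand and arithmetic tests, not on this simplified encoding:

```python
def mutually_exclusive(guards):
    """True if no two clause tests can succeed for the same input.
    Each guard is an interval (lo, hi) meaning lo <= X < hi."""
    spans = sorted(guards)
    return all(hi_a <= lo_b for (_, hi_a), (lo_b, _) in zip(spans, spans[1:]))

# An abs/2-style predicate: one clause tests X < 0, the other X >= 0.
# The tests are exclusive, so at most one clause succeeds and the
# predicate is detected as deterministic.
inf = float("inf")
assert mutually_exclusive([(-inf, 0.0), (0.0, inf)])
assert not mutually_exclusive([(-inf, 1.0), (0.0, inf)])  # overlapping tests
```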

Relevance: 30.00%

Publisher:

Abstract:

Abstract is not available.

Relevance: 30.00%

Publisher:

Abstract:

Abstract is not available.

Relevance: 30.00%

Publisher:

Abstract:

In an advanced program development environment, such as that discussed in the introduction of this book, several tools may coexist which handle both the program and information about the program in different ways. These tools may also interact among themselves and with the user. Thus, the different tools and the user need some way to communicate. It is our design principle that such communication be performed in terms of assertions. Assertions are syntactic objects which allow expressing properties of programs. Several assertion languages have been used in the past in different contexts, mainly related to program debugging. In this chapter we propose a general language of assertions which is used in different tools for validation and debugging of constraint logic programs in the context of the DiSCiPl project. The assertion language proposed is parametric w.r.t. the particular constraint domain and the properties of interest used in each different tool. The language is quite general in that it places few restrictions on the kinds of properties which may be expressed. We believe the assertion language we propose is of practical relevance and appropriate for the different uses required in the tools considered.
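The assertion language itself targets constraint logic programs; as a rough analogue only, this Python sketch conveys the general idea of attaching call (precondition) and success (postcondition) properties to a procedure and checking them at run time. The decorator name and semantics are invented for this illustration:

```python
from functools import wraps

def assertion(call, success):
    """Attach a call property and a success property to a procedure
    and check both at run time (a rough analogue of pred assertions)."""
    def decorate(f):
        @wraps(f)
        def wrapper(*args):
            assert call(*args), f"call assertion violated for {f.__name__}"
            result = f(*args)
            assert success(result), f"success assertion violated for {f.__name__}"
            return result
        return wrapper
    return decorate

@assertion(call=lambda xs: all(isinstance(x, int) for x in xs),
           success=lambda ys: ys == sorted(ys))
def qsort(xs):
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (qsort([x for x in rest if x < pivot]) + [pivot]
            + qsort([x for x in rest if x >= pivot]))

assert qsort([3, 1, 2]) == [1, 2, 3]
```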

Relevance: 30.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
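In its simplest conceivable form, the calibration idea can be sketched as: profile once, on the target platform, a benchmark with a known resolution count; derive a seconds-per-resolution parameter; and scale the statically inferred cost-bound functions by it. The paper's actual cost model uses several calibration parameters rather than the single constant assumed below, and the benchmark and bound here are invented:

```python
import time

def calibrate(benchmark, known_resolutions):
    """One-time profiling: estimate the platform's seconds-per-resolution."""
    t0 = time.perf_counter()
    benchmark()
    return (time.perf_counter() - t0) / known_resolutions

def time_bound(cost_bound_fn, secs_per_resolution):
    """Turn a static cost bound (in resolutions) into a time-bound function."""
    return lambda n: cost_bound_fn(n) * secs_per_resolution

# Illustrative only: a stand-in benchmark, and an O(n^2/2) cost bound of
# the kind the analysis might infer for naive list reversal.
k = calibrate(lambda: sum(range(1_000_000)), known_resolutions=1_000_000)
nrev_time = time_bound(lambda n: n * n / 2, k)
print(f"predicted time for n=1000: {nrev_time(1000):.4f} s")
```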

Relevance: 30.00%

Publisher:

Abstract:

The relationship between abstract interpretation and partial deduction has received considerable attention, and (partial) integrations have been proposed starting from both the partial deduction and the abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as a starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of these parameters correspond to existing algorithms for program analysis and specialization. At the same time, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.
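To give a flavour of the unfolding ingredient named above, this toy sketch specializes a recursive procedure w.r.t. a statically known argument, the basic move of partial deduction; the success propagation contributed by abstract interpretation is not modelled, and the power example is invented for illustration:

```python
def specialize_power(n):
    """Unfold power(x, n) for a known exponent n, producing a residual
    program with the recursion removed (a toy partial deduction step)."""
    if n == 0:
        return lambda x: 1
    inner = specialize_power(n - 1)    # unfold the recursive call
    return lambda x: x * inner(x)

cube = specialize_power(3)             # residual program: x * (x * (x * 1))
assert cube(2) == 8
```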