823 results for Boolean Computations
Abstract:
Optimization seeks the best possible value of an objective function, and continuous optimization does so over real-valued domains. There are many global and local search techniques: global search techniques aim for the global optimum of the optimization problem, whereas local search techniques, which are more widely used, seek a locally optimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms that cover the solution sets of the constraints with sets of interval boxes, i.e., Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we look for a convenient way to combine the advantages of CCSP branch-and-prune with the local search techniques of global optimization, applied locally over each pruned branch of the CCSP. We apply continuous-optimization local search over the pruned boxes output by the CCSP techniques, mainly the steepest descent technique with different characteristics such as penalty calculation and step length, and we implement two main local search algorithms. We use “Procure”, a constraint reasoning and global optimization framework, to implement our techniques, and report our results over a set of benchmarks.
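As an illustration of the kind of local step applied inside a pruned box, here is a minimal sketch of steepest descent restricted to an interval box, with a quadratic penalty for constraint violation; the objective, constraint, bounds, and fixed step length are hypothetical placeholders, not the ones used in the thesis.

```python
import numpy as np

def steepest_descent_in_box(grad, lo, hi, x0, step=0.05, iters=200):
    """Minimal steepest-descent sketch restricted to the interval box [lo, hi]."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)   # descend, then project back into the box
    return x

# Hypothetical objective f(x1, x2) = (x1 - 1)^2 + (x2 - 2)^2 with a quadratic
# penalty for violating the constraint x1 + x2 <= 2.
def grad(x, penalty=5.0):
    g = 2.0 * (x - np.array([1.0, 2.0]))
    violation = max(0.0, x[0] + x[1] - 2.0)
    return g + 2.0 * penalty * violation * np.array([1.0, 1.0])

# An interval box as it might come out of branch-and-prune (illustrative bounds).
lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])
print(steepest_descent_in_box(grad, lo, hi, x0=[0.5, 0.5]))
```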
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging many integrated, computationally simple cores to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL™ computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. We evaluate the newly developed framework with several well-known benchmarks, such as Saxpy and N-Body, comparing not only its performance against the existing framework when executing on the co-processor, but also the performance of the Xeon Phi against a multi-GPU environment.
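To illustrate what an algorithmic skeleton hides from the programmer (this sketch does not reproduce Marrow's actual C++/OpenCL API), a minimal data-parallel map skeleton can be written as follows: the skeleton owns the worker pool and the decomposition, while the programmer supplies only the per-element computation, here the Saxpy operation.

```python
from concurrent.futures import ThreadPoolExecutor

def map_skeleton(fn, data, workers=4):
    """Minimal map skeleton: the skeleton manages the workers and the work
    distribution; the programmer only supplies the per-element function."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data))

# Saxpy expressed through the skeleton (a = 2.0, y[i] = 1.0 for all i).
a, y_i = 2.0, 1.0
print(map_skeleton(lambda x: a * x + y_i, range(8)))
```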
Abstract:
OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. At the platform's core, a compiler and a deployment service transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, the full application was the only compilation and deployment unit: when the developer published an application, even if only a very small part of it had changed, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer makes small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from a parallel execution model, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
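A minimal sketch of the two ideas combined (not OutSystems' actual implementation): compilation units are cached by a content hash so that unchanged units are reused, and independent units are handed to a task-driven scheduler that compiles them in parallel.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

cache = {}  # content hash -> compiled artifact

def compile_unit(name, source):
    """Compile one unit, reusing the cached artifact when the source is unchanged."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in cache:
        cache[key] = f"compiled({name})"   # placeholder for the real compilation step
    return cache[key]

def incremental_deploy(units, workers=4):
    """Task-driven scheduler: independent units are compiled in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {name: pool.submit(compile_unit, name, src) for name, src in units.items()}
        return {name: f.result() for name, f in futures.items()}

# First publish compiles everything; a second publish with one changed unit
# reuses the cached artifacts of the untouched units.
app = {"ScreenA": "v1", "ScreenB": "v1", "Logic": "v1"}
incremental_deploy(app)
app["ScreenA"] = "v2"
incremental_deploy(app)
```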
Abstract:
Master's dissertation in Informatics Engineering
Abstract:
Doctoral thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
Dataflow programs are widely used. Each program is a directed graph where nodes are computations and edges indicate the flow of data. In prior work, we reverse-engineered legacy dataflow programs by deriving their optimized implementations from a simple specification graph using graph transformations called refinements and optimizations. In MDE-speak, our derivations were PIM-to-PSM mappings. In this paper, we show how extensions complement refinements, optimizations, and PIM-to-PSM derivations to make the process of reverse engineering complex legacy dataflow programs tractable. We explain how optional functionality in transformations can be encoded, thereby enabling us to encode product lines of transformations as well as product lines of dataflow programs. We describe the implementation of extensions in the ReFlO tool and present two non-trivial case studies as evidence of our work's generality.
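As a purely illustrative sketch (the graph encoding and the names are hypothetical, not ReFlO's), a refinement replaces an abstract node of a dataflow graph with a subgraph that implements it, rerouting the incoming and outgoing edges.

```python
# A dataflow program as a directed graph: nodes are computations, edges carry data.
graph = {
    "nodes": {"read": "Source", "sort": "AbstractSort", "write": "Sink"},
    "edges": [("read", "sort"), ("sort", "write")],
}

def refine(g, node, sub_nodes, sub_edges, entry, exit_):
    """Refinement: replace abstract `node` with an implementing subgraph.
    Incoming edges are rerouted to `entry`; outgoing edges now leave from `exit_`."""
    g["nodes"].pop(node)
    g["nodes"].update(sub_nodes)
    g["edges"] = [(a if a != node else exit_, b if b != node else entry)
                  for (a, b) in g["edges"]] + sub_edges
    return g

# Replace the abstract sort with a split / merge-sort / merge implementation.
refine(graph, "sort",
       sub_nodes={"split": "Split", "msort": "MergeSort", "merge": "Merge"},
       sub_edges=[("split", "msort"), ("msort", "merge")],
       entry="split", exit_="merge")
print(graph["edges"])
```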
Abstract:
We study the problem of privacy-preserving proofs on authenticated data, where a party receives data from a trusted source and is requested to prove computations over the data to third parties in a correct and private way, i.e., the third party learns no information on the data but is still assured that the claimed proof is valid. Our work particularly focuses on the challenging requirement that the third party should be able to verify the validity with respect to the specific data authenticated by the source — even without having access to that source. This problem is motivated by various scenarios emerging from several application areas such as wearable computing, smart metering, or general business-to-business interactions. Furthermore, these applications also demand any meaningful solution to satisfy additional properties related to usability and scalability. In this paper, we formalize the above three-party model, discuss concrete application scenarios, and then we design, build, and evaluate ADSNARK, a nearly practical system for proving arbitrary computations over authenticated data in a privacy-preserving manner. ADSNARK improves significantly over state-of-the-art solutions for this model. For instance, compared to corresponding solutions based on Pinocchio (Oakland’13), ADSNARK achieves up to 25× improvement in proof-computation time and a 20× reduction in prover storage space.
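To make the three-party model concrete, the following is a structural sketch only, with placeholder authentication and a placeholder proof rather than ADSNARK's cryptographic constructions: the trusted source authenticates the data, the prover computes over it and emits a result with a proof, and the verifier checks the claim without ever seeing the underlying data.

```python
from dataclasses import dataclass
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

@dataclass
class AuthenticatedData:          # produced by the trusted source
    value: float
    tag: str                      # placeholder tag binding the value to the source's key

def source_authenticate(value, source_key="source-secret"):
    return AuthenticatedData(value, tag=h(source_key, value))

def prover_prove(data, computation):
    # The prover knows the data; it outputs only the result and a proof.
    result = computation(data.value)
    proof = {"claim": h("claim", data.tag, result)}   # placeholder, NOT a SNARK
    return result, proof

def verifier_verify(result, proof):
    # The verifier sees only result + proof, never the data. A real scheme checks
    # the proof against the source's public key; this stub only checks its shape.
    return isinstance(proof.get("claim"), str)

data = source_authenticate(41.0)
result, proof = prover_prove(data, lambda x: x + 1)
print(result, verifier_verify(result, proof))
```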
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, component orientation, component error, and component weld. The research was conducted in three directions: image acquisition, image filtering/recognition, and software development. Image acquisition studied the process of forming and digitizing images and some fundamental aspects of human visual perception. The importance of choosing the right camera and illumination system for a given type of problem has been highlighted. Probably the most important step towards image recognition is image filtering. Filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform the filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
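As a minimal illustration of the convolution filtering mentioned above (the kernel and the toy image are made up and are not the project's code), the following applies a 3×3 Laplacian-style edge-detection kernel to a small grayscale image.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D filtering (valid region only) for small kernels. The Laplacian
    kernel below is symmetric, so the kernel flip of true convolution is omitted."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0          # a bright 2x2 "component" on a dark board
print(convolve2d(image, laplacian))   # strong responses along the component's edges
```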
Abstract:
Vectorial Boolean function, almost bent, almost perfect nonlinear, affine equivalence, CCZ-equivalence
Abstract:
The author proves that the equation

$$\begin{vmatrix} \sum y & n & \sum Z^x \\ \sum y\,Z^x & \sum Z^x & \sum Z^{2x} \\ \sum xy\,Z^x & \sum x\,Z^x & \sum x\,Z^{2x} \end{vmatrix} = 0,$$

where $Z = 10^{-cq}$ and $q$ is a numerical constant, used by Pimentel Gomes and Malavolta in several articles for the interpolation of Mitscherlich's equation $y = A\,[\,1 - 10^{-c(x+b)}\,]$ by the least squares method, always has a zero of order three at $Z = 1$. Therefore, the polynomial equation $A_0 Z^m + A_1 Z^{m-1} + \cdots + A_m = 0$ obtained from that determinant can be divided by $(Z-1)^3$. This property provides a good test for the correctness of the computations and facilitates the solution of the equation.
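The stated property is easy to check numerically. The sketch below uses SymPy with made-up data points and with the determinant arranged as reconstructed above (an assumption about the original layout), and counts the multiplicity of the root Z = 1.

```python
import sympy as sp

Z = sp.symbols('Z')
xs = [0, 1, 2, 3, 4]          # made-up dose levels
ys = [5, 8, 9, 10, 10]        # made-up yields
n = len(xs)
S = lambda term: sum(term(x, y) for x, y in zip(xs, ys))

det = sp.Matrix([
    [S(lambda x, y: y),          n,                       S(lambda x, y: Z**x)],
    [S(lambda x, y: y * Z**x),   S(lambda x, y: Z**x),    S(lambda x, y: Z**(2*x))],
    [S(lambda x, y: x*y * Z**x), S(lambda x, y: x * Z**x), S(lambda x, y: x * Z**(2*x))],
]).det()

# Count how many times (Z - 1) divides the expanded determinant.
expr, mult = sp.expand(det), 0
while expr != 0 and expr.subs(Z, 1) == 0:
    expr = sp.cancel(expr / (Z - 1))
    mult += 1
print("multiplicity of the root Z = 1:", mult)   # expected to be at least 3
```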
Abstract:
The authors carried out 3 experiments on the sampling of sugar cane for technological determinations, one with each of the varieties Co 419, CB 40-69 and CB 41-58, in Piracicaba, State of São Paulo, Brazil. The main intent of the project was to compare 2 methods of sampling, namely: 1) Method A, where the sample is a hill (CATANI et al., 1959) or, more generally, 20 stalks taken together at a randomly selected point of the furrow; 2) Method B, where 20 stalks are taken from 20 points evenly spread over the whole plot. For the 3 varieties studied and for the data on Brix, pol, coefficient of purity, available sucrose and weight, analyses of variance were carried out. Further computations led to the following coefficients of variation for 20-stalk samples:

Variety    Characteristic           20 stalks per hill   1 stalk per hill
CB 40-69   Brix                     4.8%                 1.9%
           Pol                      6.4%                 2.5%
           Coefficient of purity    2.1%                 0.83%
           Available sucrose        7.3%                 2.7%
           Weight                   6.6%                 6.9%
Co 419     Brix                     5.3%                 1.8%
           Pol                      7.6%                 2.6%
           Coefficient of purity    2.9%                 1.0%
           Available sucrose        8.6%                 3.0%
           Weight                   21.2%                6.5%
CB 41-58   Brix                     2.8%                 1.4%
           Pol                      4.1%                 1.9%
           Coefficient of purity    1.8%                 0.8%
           Available sucrose        5.0%                 2.2%
           Weight                   10.9%                6.2%

For available sucrose, which is probably the most important characteristic studied, the average coefficient of variation for the 3 varieties was 2.7% for method B, that is, 20-stalk samples with one stalk per hill. Assuming this coefficient of variation, in a trial with 5 treatments and 6 replications in randomised blocks, the least significant difference among treatment means, at the 5% level, would be 4.7% of available sucrose by Tukey's test, and 3.3% by the t test. For method A the average coefficient of variation is 7.0% and, in similar conditions, the least significant difference would be 15.1% by Tukey's test, and 12.1% by the t test. Since differences in available sucrose among treatments in experiments with fertilizers are seldom higher than 3 or 4% of the mean (PIMENTEL GOMES & CARDOSO, 1958), method B with a 20-stalk sample per plot gives roughly the minimum amount of cane to be sampled for technological determinations. In experiments with varieties, however, where differences may be assumed to be higher, a sample of 10 to 20 stalks, one per hill, can be enough.
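For illustration, the least significant differences quoted for method B can be reproduced from the coefficient of variation alone if the LSD is expressed as a percentage of the treatment mean; the sketch below applies the usual formulas for the t test and for Tukey's test in a randomised block design with 5 treatments and 6 replications (this is an assumption about how the quoted figures were obtained).

```python
import math
from scipy.stats import t, studentized_range

cv = 2.7                       # coefficient of variation for method B (% of the mean)
treatments, blocks = 5, 6
df_error = (treatments - 1) * (blocks - 1)   # error d.f. in randomised blocks

# LSD by the t test, as a percentage of the mean:  t * sqrt(2/r) * CV
lsd_t = t.ppf(0.975, df_error) * math.sqrt(2 / blocks) * cv

# Honest significant difference by Tukey's test:  q * CV / sqrt(r)
hsd = studentized_range.ppf(0.95, treatments, df_error) * cv / math.sqrt(blocks)

print(f"t test: {lsd_t:.1f}%   Tukey: {hsd:.1f}%")   # roughly 3.3% and 4.7%
```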
Abstract:
Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including orienting attention and disentangling the complex acoustic patterns reaching our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child or being overloaded by sounds in crowded public places. Yet the brain bears a large capacity for reorganization following damage and/or learning. This phenomenon is referred to as plasticity and is believed to underlie post-lesional functional recovery as well as learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to modeling the anatomo-functional organization of auditory spatial processing in healthy subjects. We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow for transformations into the coordinate frames required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation to neglect symptoms within the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of these studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.
Abstract:
In this paper we consider a representative a priori unstable Hamiltonian system with 2 + 1/2 degrees of freedom, to which we apply the geometric mechanism for diffusion introduced in Delshams et al., Mem. Amer. Math. Soc., 2006, and generalized in Delshams and Huguet, Nonlinearity, 2009, and provide explicit, concrete and easily verifiable conditions for the existence of diffusing orbits. The simplification of the hypotheses allows us to perform the computations along the proof explicitly, which helps to present the geometric mechanism of diffusion in an easily understandable way. In particular, we fully describe the construction of the scattering map and the combination of two types of dynamics on a normally hyperbolic invariant manifold.
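For orientation, the standard illustrative form of an a priori unstable Hamiltonian with 2 + 1/2 degrees of freedom used in the cited works couples a pendulum to a rotor through a small time-periodic perturbation (this generic form is given here for context and is not necessarily the exact system treated in the paper):

$$H_\varepsilon(p, q, I, \varphi, t) \;=\; \underbrace{\tfrac{1}{2}p^2 + \cos q - 1}_{\text{pendulum}} \;+\; \underbrace{\tfrac{1}{2}I^2}_{\text{rotor}} \;+\; \varepsilon\, h(p, q, I, \varphi, t), \qquad h \text{ periodic in } \varphi \text{ and } t.$$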
Abstract:
Owing to the large number of transistors per mm² found in today's conventional GPUs, in recent years these devices have been used for general-purpose computing, since they offer higher performance for parallel computation. This project implements the sparse matrix-vector product on OpenCL. The first chapters review the theoretical background needed to understand the problem. We then cover the fundamentals of OpenCL and of the hardware on which the developed libraries run. The following chapter describes the kernel code and its data flow. Finally, the software is evaluated through comparisons against the CPU.
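For reference, the computation being offloaded is the sparse matrix-vector product; below is a minimal sketch over the CSR (compressed sparse row) format in plain Python/NumPy, written for clarity rather than as an OpenCL kernel (the project's kernels are not reproduced).

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in CSR form (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                               # each row is independent,
        start, end = row_ptr[i], row_ptr[i + 1]           # which is what data-parallel
        y[i] = values[start:end] @ x[col_idx[start:end]]  # GPU kernels exploit
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]  stored in CSR:
values  = np.array([4.0, 1.0, 2.0, 3.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # [5. 2. 8.]
```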
Abstract:
The functional architecture of the occipital cortex is being studied in increasing detail. Functional and structural MR-based imaging are altering views about the organisation of the human visual system. Recent advances have ranged from comparative studies with non-human primates to predictive scanning. The latter multivariate technique describes, with sub-voxel resolution, patterns of activity that are characteristic of specific visual experiences: one can deduce what a subject experienced visually from the pattern of cortical activity recorded. The challenge for the future is to understand visual functions in terms of cerebral computations at a mesoscopic level of description and to relate this information to electrophysiology. The principal medical application of this new knowledge has focused to a large extent on plasticity and the capacity for functional reorganisation. Cross-modality visual-sensory interactions and cross-correlations between visual and other cerebral areas in the resting state are areas of considerable current interest. The lecture will review findings over the last two decades and reflect on possible roles for imaging studies in the future.