997 results for Computational architecture
Abstract:
We discuss from a practical point of view a number of issues involved in writing Internet and WWW applications using LP/CLP systems. We describe PiLLoW, an Internet and WWW programming library for LP/CLP systems which we argue significantly simplifies the process of writing such applications. PiLLoW provides facilities for generating HTML structured documents, producing HTML forms, writing form handlers, accessing and parsing WWW documents, and accessing code posted at HTTP addresses. We also describe the architecture of some application classes, using a high-level model of client-server interaction, active modules. Finally we describe an architecture for automatic LP/CLP code downloading for local execution, using generic browsers. The PiLLoW library has been developed in the context of the &-Prolog and CIAO systems, but it has been adapted to a number of popular LP/CLP systems, supporting most of its functionality.
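PiLLoW itself is a Prolog-level library; purely as an analogous illustration of the underlying idea (generating HTML pages and form handlers from a structured term representation), the Python sketch below is a hypothetical stand-in and does not reflect the actual PiLLoW API.

```python
# Hypothetical sketch: rendering a structured document term into HTML,
# loosely analogous to PiLLoW's term-based HTML generation (not its real API).

def render(node):
    """Render a (tag, attributes, children) tuple, a list, or a string as HTML."""
    if isinstance(node, str):
        return node
    if isinstance(node, list):
        return "".join(render(child) for child in node)
    tag, attrs, children = node
    attr_text = "".join(f' {k}="{v}"' for k, v in attrs.items())
    return f"<{tag}{attr_text}>{render(children)}</{tag}>"

# A form handler would build such a structure and print it after the HTTP header.
page = ("html", {}, [
    ("h1", {}, "Query results"),
    ("form", {"method": "post", "action": "/cgi-bin/handler"}, [
        ("input", {"type": "text", "name": "query"}, ""),
    ]),
])
print("Content-type: text/html\n")
print(render(page))
```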
Abstract:
In this work, we propose the Networks of Evolutionary Processors (NEP) [2] as a computational model to solve problems related to biological phenomena. In our first approximation, we simulate biological processes related to cellular signaling and their implications for metabolism, using an architecture based on NEP (NEP architecture) and its specializations: Networks of Polarized Evolutionary Processors (NPEP) [1] and NEP Transducers (NEPT) [3]. In particular, we use this architecture to simulate the interplay between cellular processes related to metabolism, such as the Krebs cycle and the malate-aspartate shuttle pathway (MAS), both of which are altered by calcium signaling.
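The cited NEP variants have precise formal definitions that the abstract does not reproduce; the toy Python sketch below only illustrates the general flavor of such a network, with nodes applying rewriting rules and exchanging strings through filters (the rules, filters and calcium-signalling example are invented for illustration).

```python
# Toy sketch of a network of evolutionary processors (illustrative, not the formal NEP model).

class Processor:
    def __init__(self, rules, accepts):
        self.rules = rules          # list of (old, new) substitution rules
        self.accepts = accepts      # predicate deciding which incoming strings are kept
        self.strings = set()

    def evolve(self):
        """Evolution step: apply each rule once to every string."""
        new_strings = set(self.strings)
        for s in self.strings:
            for old, new in self.rules:
                if old in s:
                    new_strings.add(s.replace(old, new, 1))
        self.strings = new_strings

def communicate(network, edges):
    """Communication step: copy strings along edges if the target node accepts them."""
    for src, dst in edges:
        for s in list(network[src].strings):
            if network[dst].accepts(s):
                network[dst].strings.add(s)

# Example: a two-node network rewriting a calcium signal into a metabolic symbol.
network = {
    "signal":     Processor(rules=[("Ca", "Ca*")], accepts=lambda s: True),
    "metabolism": Processor(rules=[("Ca*", "ATP")], accepts=lambda s: "Ca*" in s),
}
network["signal"].strings.add("Ca")
for _ in range(3):
    for node in network.values():
        node.evolve()
    communicate(network, [("signal", "metabolism")])
print(network["metabolism"].strings)
```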
Abstract:
In this paper we propose the use of Networks of Bio-inspired Processors (NBP) to model some biological phenomena within a computational framework. In particular, we propose the use of an extension of NBP, named Network Evolutionary Processors Transducers, to simulate chemical transformations of substances. Within a biological process, chemical transformations of substances are basic operations in the change of the state of the cell. Previously, it has been proved that NBP are computationally complete, that is, they are able to solve NP-complete problems in linear time using massively parallel computations. In addition, we propose a multilayer architecture that will allow us to design models of biological processes related to cellular communication, as well as their implications for metabolic pathways. Subsequently, these models can be applied not only to biological-cellular instances but possibly also to configure instances of interactive processes in many other fields, such as population interactions, ecological trophic networks, industrial ecosystems, etc.
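In the same illustrative spirit, a transducer-style node can be pictured as a table of substrate-to-product rewritings applied to a multiset of substances; the sketch below is a hypothetical toy, not the formal NEPT definition, and the transformation table is an arbitrary example.

```python
# Toy sketch of a transducer-style processor that rewrites substrate symbols
# into product symbols, illustrating chemical transformations as rewriting rules
# (hypothetical names; not the formal NEPT definition).

TRANSFORMATIONS = {
    "glucose": ["glucose-6-phosphate"],
    "glucose-6-phosphate": ["fructose-6-phosphate"],
}

def transduce(substrates, steps):
    """Apply the transformation table repeatedly to a list of substances."""
    state = list(substrates)
    for _ in range(steps):
        next_state = []
        for s in state:
            next_state.extend(TRANSFORMATIONS.get(s, [s]))
        state = next_state
    return state

print(transduce(["glucose", "water"], steps=2))
# ['fructose-6-phosphate', 'water']
```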
Abstract:
Emotion is generally argued to be an influence on the behavior of life systems, largely concerning flexibility and adaptivity. The way in which life systems act in response to particular situations in the environment has revealed the decisive and crucial importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that more of them are integrated into our daily life each day. They have grown in complexity, and a consequence is an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others; all of these are questions that raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous System project, a.k.a. the ASys-Project. Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of Engineering and Emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that has grounded this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning has formed the general objective. Both the starting question and the general objective have underpinned the study of emotion: its influence on system behavior, the key foundations that justify this feature in life systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. By assuming essential differences concerning structure, purpose and operation between life and artificial systems, the essential motivation has been to explore what emotion solves in nature and then to analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts all interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, but one related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis of the state of the art around the most relevant theories of emotion and technical systems, in order to obtain the required support for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a Model-based Cognitive agent that embodies an inner agent, ESys, which is responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in a federated manner and aimed at forming a mutual feedback loop between AGSys and ESys. Within this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment. This inner environment is built on the basis of the relevances that might occur inside AGSys in its interaction with the external environment. This allows separate, high-quality reasoning concerning mission goals defined in AGSys and emotional goals defined in ESys, and thereby provides a possible path for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals might be assessed according to the situational state of that stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogy of a well-known theory in emotion science. The operation of this model is described through a recursive process, labeled R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect that this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that sustains high-level reasoning. The extent to which this reasoning might be taken further is only delineated and left as future work. This thesis has drawn on a wide range of fields of knowledge. This knowledge can be structured into two main groups: (a) psychology, cognitive science, neurology and the biological sciences, in order to obtain an understanding of the problem of emotional phenomena, and (b) a number of computer science branches, such as Autonomic Computing (AC), Self-adaptive software, Self-X systems, Model Integrated Computing (MIC) and the models@runtime paradigm, among others, in order to obtain knowledge about tools for designing each part of the solution. The final approach has been performed mainly on the basis of the entire acquired knowledge, and is described in terms of Artificial Intelligence, Model-Based Systems (MBS), and additional mathematical formalizations that provide precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations of the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be regarded as artificial emotion.
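As a very rough structural reading of the architecture described above (all class and method names below are illustrative assumptions, not the thesis's models), the mutual feedback loop between the outer agent and its inner emotional agent might be outlined as follows:

```python
# Illustrative sketch of a mutual feedback loop between an outer agent (AGSys-like)
# and an inner emotional appraiser (ESys-like). Names, goals and values are
# hypothetical; they only mirror the structure described in the abstract.

class InnerAppraiser:
    """Operates on an inner environment of relevances and emotional goals."""
    def __init__(self, emotional_goals):
        self.emotional_goals = emotional_goals   # goal name -> desired value

    def appraise(self, relevances):
        # Situational state: how far each emotional goal is from its desired value.
        return {g: relevances.get(g, 0.0) - target
                for g, target in self.emotional_goals.items()}

class OuterAgent:
    """Interacts with the external environment and consumes the inner appraisal."""
    def __init__(self, appraiser):
        self.appraiser = appraiser

    def step(self, observation):
        relevances = {"integrity": observation.get("damage", 0.0),
                      "energy": observation.get("battery", 1.0)}
        appraisal = self.appraiser.appraise(relevances)
        # Mission-level decision biased by the emotional appraisal.
        return "retreat" if appraisal["integrity"] > 0.0 else "continue_mission"

agent = OuterAgent(InnerAppraiser({"integrity": 0.0, "energy": 0.5}))
print(agent.step({"damage": 0.3, "battery": 0.8}))   # -> "retreat"
```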
Abstract:
In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
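As a sketch of the kind of parallel pipeline described (the processing stages and device assignment are placeholders, not the actual system), each camera stream can be handled by its own worker while a shared fusion stage builds the global representation:

```python
# Minimal sketch of a multi-camera pipeline: one worker per camera performs the
# per-frame vision stages, and a fusion stage consumes their results.
# Frame acquisition, GPU assignment and the vision stages are placeholders.

from multiprocessing import Process, Queue

def camera_worker(camera_id, device_id, out_queue, n_frames=5):
    for frame_idx in range(n_frames):            # stand-in for frame acquisition
        # Segmentation / representation / motion analysis would run on device_id here.
        result = {"camera": camera_id, "frame": frame_idx, "objects": []}
        out_queue.put(result)
    out_queue.put(None)                           # end-of-stream marker

def fusion_stage(out_queue, n_cameras):
    finished = 0
    while finished < n_cameras:
        item = out_queue.get()
        if item is None:
            finished += 1
            continue
        # Build the global scene representation from per-camera results here.
        print("fused", item["camera"], item["frame"])

if __name__ == "__main__":
    queue = Queue()
    workers = [Process(target=camera_worker, args=(cam, cam % 2, queue))
               for cam in range(4)]
    for w in workers:
        w.start()
    fusion_stage(queue, n_cameras=4)
    for w in workers:
        w.join()
```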
Abstract:
The sustainability strategy in urban spaces arises from reflecting on how to achieve a more habitable city and is materialized in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone without exception and regardless of their ability. Modern communication technologies open up new opportunities to analyze the efficiency of the use of urban spaces from several points of view: adequacy of facilities, usability, and social integration capabilities. The research presented in this paper proposes a method to perform an analysis of movement accessibility in sustainable cities based on radio frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
Abstract:
We obtained an analytical expression for the computational complexity of many-layered committee machines with a finite number of hidden layers (L < 8), using the generalization complexity measure introduced by Franco et al (2006) IEEE Trans. Neural Netw. 17 578. Although our result is valid in the large-size limit and for an overlap synaptic matrix that is ultrametric, it provides a useful tool for inferring the appropriate architecture a network must have to reproduce an arbitrary realizable Boolean function.
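The analytical expression itself is not reproduced in the abstract; the sketch below only illustrates the flavor of boundary-counting complexity measures for Boolean functions (an approximation in that spirit, not the cited generalization complexity definition):

```python
# Rough sketch: a boundary-counting complexity estimate for a Boolean function,
# in the spirit of generalization-complexity measures (not the exact definition
# used in the cited work).

from itertools import product

def first_order_complexity(f, n):
    """Fraction of (input, neighbor) pairs at Hamming distance 1 with different outputs."""
    changes, total = 0, 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            neighbor = x[:i] + (1 - x[i],) + x[i + 1:]
            changes += int(fx != f(neighbor))
            total += 1
    return changes / total

parity = lambda x: sum(x) % 2              # hardest case: every bit flip changes the output
majority = lambda x: int(sum(x) > len(x) / 2)
print(first_order_complexity(parity, 4))    # 1.0
print(first_order_complexity(majority, 4))  # lower value
```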
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. To date, research on implementing membrane systems has not yet reached the massively parallel character of this computational model. The best published approaches achieve a distributed architecture denominated “partially parallel evolution with partially parallel communication”, in which several membranes are allocated to each processor, proxies are used to communicate with membranes allocated to different processors, and a policy of access control to the communications is mandatory. With these approaches, processor-level parallelism is obtained in the application of evolution rules and in the internal communication among membranes allocated within each processor. However, external communications, which share the common communication line needed for communication among membranes placed in different processors, remain sequential. In this work, we present a new hierarchical architecture that achieves parallel external communication among processors and substantially increases parallelization in the application of evolution rules and in internal communications. Consequently, the time needed for each evolution step is reduced. With all of this, the new distributed hierarchical architecture comes close to the massively parallel character required by the model.
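Purely as an illustrative sketch (the paper defines its architecture at the level of processors, proxies and communication lines, none of which are reproduced here), distributing membranes over several workers and alternating parallel evolution and communication phases might be outlined as follows; the rule and topology are arbitrary toy choices:

```python
# Toy sketch: membranes partitioned across workers; each evolution step applies
# a placeholder rule in parallel, then objects sent between membranes are
# delivered in a communication phase.

from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

membranes = {m: {"a": 2} for m in range(8)}            # membrane id -> multiset of objects
placement = defaultdict(list)                          # worker id -> its membranes
for m in membranes:
    placement[m % 4].append(m)

def run_worker(worker_id):
    """Apply the placeholder rule 'a -> (b, out)' in every membrane of this worker."""
    messages = []
    for m in placement[worker_id]:
        produced = membranes[m].pop("a", 0)
        target = (m + 1) % len(membranes)              # ring topology for the example
        messages.append((target, "b", produced))
    return messages

def evolution_step():
    with ThreadPoolExecutor(max_workers=len(placement)) as pool:
        all_messages = list(pool.map(run_worker, placement))
    # Communication phase: deliver objects; a hierarchical layout lets disjoint
    # subtrees exchange their messages in parallel rather than on one shared line.
    for messages in all_messages:
        for target, symbol, count in messages:
            membranes[target][symbol] = membranes[target].get(symbol, 0) + count

evolution_step()
print(membranes[1])                                    # {'b': 2}
```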
Abstract:
Никола Вълчанов, Тодорка Терзиева, Владимир Шкуртов, Антон Илиев - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various domains such as accounting, e-learning/testing, simulation environments, etc. They work with computational libraries that are specific to the scope of the system. Even though such systems may be perfect and work flawlessly, they become outdated if they are not maintained. In this work we describe a mechanism that uses computation libraries dynamically and decides at run time (intelligently or interactively) how and when they are to be used. The purpose of this article is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns in order to provide extensibility and reduce complexity.
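As a minimal sketch of the mechanism described (module names and the selection policy are illustrative assumptions), a computation-driven system can load computational libraries dynamically and choose among them at run time through a simple registry pattern:

```python
# Minimal sketch: computational libraries are loaded at run time, and a policy
# decides which implementation handles each request. The registered module is a
# stand-in; real systems would register their own computation libraries.

import importlib

class ComputationRegistry:
    def __init__(self):
        self._backends = {}

    def load(self, name, module_path, attribute):
        """Dynamically import a computation backend and register it under a name."""
        module = importlib.import_module(module_path)
        self._backends[name] = getattr(module, attribute)

    def run(self, task, *args, policy=None):
        """Pick a backend (via a policy, or the first registered one) and execute the task."""
        name = policy(task, list(self._backends)) if policy else next(iter(self._backends))
        return self._backends[name](*args)

registry = ComputationRegistry()
registry.load("stdlib-sqrt", "math", "sqrt")            # real module, used as a stand-in
print(registry.run("square root", 2.0))                  # 1.4142...
```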
Abstract:
The technology of Organic Light-Emitting Diodes has reached such a high level of reliability that it can be used in various applications. The required light-emission efficiency can be achieved by transforming the triplet excitons into singlet states through Reverse InterSystem Crossing (RISC), which is the main process of a general mechanism called thermally activated delayed fluorescence (TADF). In this thesis, we theoretically analyzed two carbazole-benzonitrile (donor-acceptor) derivatives, 2,5-di(9H-carbazol-9-yl)benzonitrile (p-2CzBN) and 2,3,4,5,6-penta(9H-carbazol-9-yl)benzonitrile (5CzBN), and addressed the problem of how donor-acceptor (D-A) or donor-acceptor-donor (D-A-D) flexible molecular architectures influence the nature of the excited states and the emission intensity. Furthermore, we analyzed the RISC rates as a function of the conformation of the carbazole lateral groups, considering the first electronic states, S0, S1, T1 and T2, involved in the TADF process. The two prototype molecules, p-2CzBN and 5CzBN, have a similar energy gap between the first singlet and triplet states (∆EST, a key parameter in the RISC rate) but different TADF performances. Therefore, other parameters must be considered to explain their different behavior. The oscillator strength of p-2CzBN, never tested as an emitter in OLEDs, is similar to that of 5CzBN, which is an active TADF molecule. We also note that the presence of a second triplet state, T2, lower in energy than S1 only in 5CzBN, and the reorganization energies associated with the RISC processes involving T1 and T2 are important factors in differentiating the rates in p-2CzBN and 5CzBN. For p-2CzBN, the RISC rate from T2 to S1 is surprisingly higher than that from T1 to S1, in disagreement with El-Sayed rules, due to a large reorganization energy associated with the T1 to S1 process, while the contrary occurs for 5CzBN. These insights are important for designing new TADF emitters based on the benzo-carbazole architecture.
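For context, RISC rates of this kind are commonly discussed within a semiclassical Marcus-type framework, which makes explicit why both the singlet-triplet gap and the reorganization energy enter the rate; the expression below is that generic form, stated as an assumption rather than the exact formula used in the thesis.

```latex
% Semiclassical Marcus-type rate often used for reverse intersystem crossing:
% H_{SOC} is the spin-orbit coupling, \lambda the reorganization energy,
% \Delta E_{ST} the singlet-triplet gap, T the temperature.
k_{\mathrm{RISC}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{\mathrm{SOC}}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
\exp\!\left[-\,\frac{\left(\Delta E_{ST}+\lambda\right)^{2}}{4\lambda k_{B}T}\right]
```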
Abstract:
Polymerases and nucleases are enzymes that process DNA and RNA. They are involved in processes crucial for cell life, performing the extension and the cleavage of nucleic acid chains during genome replication and maintenance. Additionally, both enzymes are often associated with several diseases, including cancer. In order to catalyze the reaction, most of them operate via the two-metal-ion mechanism. For this, despite showing relevant differences in structure, function and catalytic properties, they share common catalytic elements, which comprise the two catalytic ions and their first-shell acidic residues. Notably, recent studies of different metalloenzymes revealed the recurrent presence of additional elements surrounding the active site, thus suggesting an extended two-metal-ion-centered architecture. However, whether these elements have a catalytic function, and what their role is, remains unclear. In this work, using state-of-the-art computational techniques, second- and third-shell elements are shown to act in metallonucleases, favoring substrate positioning and leaving-group release. In particular, in hExo1 a transient third metal ion is recruited and positioned near the two-metal-ion site by a structurally conserved acidic residue to assist the leaving-group departure. Similarly, in hFEN1 second- and third-shell Arg/Lys residues operate the phosphate steering mechanism through (i) substrate recruitment, (ii) precise cleavage localization, and (iii) leaving-group release. Importantly, structural comparisons of hExo1, hFEN1 and other metallonucleases suggest that similar catalytic mechanisms may be shared by other enzymes. Overall, the results obtained provide an extended view of the parallel strategies adopted by metalloenzymes, which employ divalent metal ions or positively charged residues to ensure efficient and specific catalysis. Furthermore, these outcomes may have implications for de novo enzyme engineering and/or drug design to modulate nucleic acid processing.
Abstract:
Cancer is a challenging disease that involves multiple types of biological interactions at different time and space scales. Computational modelling often faces problems that, at the current technology level, are impracticable to represent in a single space-time continuum. To handle this sort of problem, complex orchestrations of multiscale models are frequently used. PRIMAGE is a large EU project that aims to support personalized childhood cancer diagnosis and prognosis. The goal is to do so by predicting the growth of the solid tumour using multiscale in-silico technologies. The project proposes an open cloud-based platform to support decision making in the clinical management of paediatric cancers. The orchestration of predictive models is in general complex and requires a software framework that supports and facilitates such a task. The present work proposes the development of an updated framework, referred to herein as VPH-HFv3, as part of the PRIMAGE project. This framework, a complete re-writing with respect to the previous versions, aims to orchestrate several models, which are under concurrent development, using an architecture that is as simple as possible, easy to maintain and highly reusable. This sort of problem generally requires unfeasible execution times. To overcome this, a strategy of particularisation, which maps the upper-scale model results onto a smaller number of lower-scale instances, and of homogenisation, which performs the inverse mapping, was developed, and the accuracy of this approach was analysed.
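As a schematic sketch only (model functions and names are placeholders, not the VPH-HFv3 interfaces), the particularisation/homogenisation strategy can be pictured as an orchestrator that fans the upper-scale state out to a reduced set of lower-scale instances and aggregates their outputs back:

```python
# Schematic sketch of multiscale orchestration with particularisation (upper-scale
# results mapped to a small number of lower-scale instances) and homogenisation
# (lower-scale outputs mapped back). The models below are trivial placeholders.

def tissue_model(state):
    """Upper-scale placeholder: tumour volume grows with an aggregate growth rate."""
    return {"volume": state["volume"] * (1.0 + state["growth_rate"])}

def cell_model(local_condition):
    """Lower-scale placeholder: local growth response to oxygen availability."""
    return 0.02 * local_condition["oxygen"]

def particularise(state, n_instances):
    """Map the upper-scale state onto a reduced number of representative instances."""
    return [{"oxygen": (i + 1) / n_instances} for i in range(n_instances)]

def homogenise(local_outputs):
    """Aggregate lower-scale outputs back into a single upper-scale parameter."""
    return sum(local_outputs) / len(local_outputs)

state = {"volume": 1.0, "growth_rate": 0.0}
for step in range(3):
    local_conditions = particularise(state, n_instances=4)
    local_outputs = [cell_model(c) for c in local_conditions]
    state["growth_rate"] = homogenise(local_outputs)
    state = {**tissue_model(state), "growth_rate": state["growth_rate"]}
print(state)
```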
Abstract:
When it comes to designing a structure, architects and engineers want to join forces in order to create and build the most beautiful and efficient building. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between both professions. In architecture, there has always been a particular interest in creating new shapes and types of structure inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking about and designing structures. Through successive studies, this mindset led to the current best practices in construction. However, at a certain point both disciplines were limited by traditional manufacturing constraints. Over the last decades, much progress has been made from a technological point of view, making it possible to go beyond today's manufacturing constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Both technologies allow for exploring and building unusual and complex structural shapes, in addition to a reduction of costs and environmental impacts. Through this study, the author makes use of the previously mentioned technologies and assesses their potential, first to design an aesthetically appreciated tree-like column, and second to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalog of the latter and methods to establish it are then proposed and discussed. Finally, the buckling behavior of this cross-section is assessed considering standard steel and WAAM material properties.
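As a toy illustration of the parametric approach (geometry rules, angles and dimensions are invented example values, not taken from the thesis), a dendriform column can be generated by recursively branching from a trunk segment:

```python
# Toy parametric generator for a dendriform (tree-like) column: starting from a
# trunk, each level splits into branches rotated around the vertical axis.
# All dimensions and angles are arbitrary example values.

import math

def branch(start, direction, length, level, n_branches=3, spread=math.radians(25)):
    """Return line segments (start, end) describing a recursively branching column."""
    end = tuple(s + d * length for s, d in zip(start, direction))
    segments = [(start, end)]
    if level == 0:
        return segments
    for k in range(n_branches):
        azimuth = 2 * math.pi * k / n_branches
        new_direction = (
            math.sin(spread) * math.cos(azimuth),
            math.sin(spread) * math.sin(azimuth),
            math.cos(spread),
        )
        segments += branch(end, new_direction, length * 0.7, level - 1, n_branches, spread)
    return segments

column = branch(start=(0.0, 0.0, 0.0), direction=(0.0, 0.0, 1.0), length=3.0, level=2)
print(len(column), "segments")        # 1 + 3 + 9 = 13 segments
```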
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
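The stimulus-generation pipeline described above (forward transform, band-pass on the coefficients, inverse transform) can be sketched generically; the transform functions below are hypothetical placeholders standing in for a Fourier-Bessel implementation, which is not given in the abstract:

```python
# Generic sketch of the filtering pipeline: transform an image to a coefficient
# domain, keep only a band of components, and transform back. The two transform
# functions are placeholders (an FFT stands in for the Fourier-Bessel expansion).

import numpy as np

def fourier_bessel_transform(image):             # placeholder for the FB expansion
    return np.fft.fft2(image)

def inverse_fourier_bessel_transform(coeffs):    # placeholder for the inverse FB transform
    return np.real(np.fft.ifft2(coeffs))

def band_pass(coeffs, low, high):
    """Zero out coefficients whose radial index falls outside [low, high]."""
    n, m = coeffs.shape
    yy, xx = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(m) * m, indexing="ij")
    radius = np.hypot(yy, xx)
    mask = (radius >= low) & (radius <= high)
    return coeffs * mask

image = np.random.rand(64, 64)                   # stand-in for a face image
filtered = inverse_fourier_bessel_transform(
    band_pass(fourier_bessel_transform(image), low=11.3, high=16.0))
print(filtered.shape)                            # (64, 64)
```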
Abstract:
This paper proposes an architecture for machining process and production monitoring to be applied in machine tools with open Computer Numerical Control (CNC). A brief description of the advantages of using open CNC for machining process and production monitoring is presented, with an emphasis on the CNC architecture using a personal computer (PC)-based human-machine interface. The proposed architecture uses the CNC data and sensors to gather information about the machining process and production. It allows the development of different levels of monitoring systems with minimum investment, minimum need for sensor installation, and low intrusiveness to the process. Successful examples of the utilization of this architecture in a laboratory environment are briefly described. In conclusion, it is shown that a wide range of monitoring solutions can be implemented in production processes using the proposed architecture.
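As a sketch of the general idea (the data-access functions are hypothetical placeholders; an open CNC exposes such data through vendor-specific interfaces not shown here), a PC-based monitor can poll CNC variables and sensor channels each cycle and route them to different monitoring levels:

```python
# Sketch of a polling-based monitoring loop on a PC-based HMI: CNC data and an
# external sensor channel are read each cycle and dispatched to simple monitoring
# levels. The read_* functions are placeholders for vendor-specific APIs.

import random
import time

def read_cnc_data():
    """Placeholder for reading spindle load, feed rate, etc. from the open CNC."""
    return {"spindle_load": random.uniform(20, 90), "feed_rate": 1200.0}

def read_sensor():
    """Placeholder for an additional low-cost sensor channel (e.g. vibration RMS)."""
    return {"vibration_rms": random.uniform(0.1, 2.5)}

def process_level_monitor(sample, load_limit=80.0):
    if sample["spindle_load"] > load_limit:
        print("process alarm: spindle load", round(sample["spindle_load"], 1))

def production_level_monitor(history):
    print("cycles monitored:", len(history))

history = []
for _ in range(5):                           # a few polling cycles for the example
    sample = {**read_cnc_data(), **read_sensor()}
    history.append(sample)
    process_level_monitor(sample)
    time.sleep(0.01)
production_level_monitor(history)
```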