437 results for reconfigurable computing
Abstract:
This work presents a hardware implementation of an OFDMA baseband processor for the LTE downlink. LTE (Long Term Evolution) is the latest stage in the development of 3G (third-generation mobile systems), offering higher data rates and greater efficiency and flexibility in transmission through advanced antenna and multicarrier techniques. Its physical layer applies OFDMA (Orthogonal Frequency Division Multiple Access) for signal generation and the mapping of physical resources in the downlink, and is theoretically grounded in the OFDM (Orthogonal Frequency Division Multiplexing) multicarrier technique. With the recent completion of the LTE specifications, different hardware solutions have been developed, mainly at the symbol-processing level, where the implementation of an OFDMA baseband processor is commonly considered because it also serves as a basic architecture for other important applications. For the processor implementation, the reconfigurable hardware offered by devices such as FPGAs is adopted, which not only helps to meet the high flexibility and adaptability requirements of LTE but also enables a fast and efficient implementation. The processor implemented in reconfigurable hardware meets the LTE physical-layer specifications and has the flexibility needed to support other standards and applications that use an OFDMA processor as the basic architecture of their systems. The results obtained through simulation and functional verification of the system confirm the functionality and flexibility of the implemented processor.
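At the heart of such a processor is the OFDM symbol generation chain (subcarrier mapping, IFFT and cyclic-prefix insertion). The sketch below illustrates this chain in software under assumed, illustrative parameters; the FFT size, cyclic-prefix length and number of occupied subcarriers are not taken from the thesis or the LTE specification.

```python
import numpy as np

def ofdm_symbol(qam_symbols, n_fft=2048, n_cp=144):
    """Map modulated symbols onto subcarriers, apply the IFFT and prepend a cyclic prefix."""
    grid = np.zeros(n_fft, dtype=complex)
    n = len(qam_symbols)
    # occupy subcarriers around DC, leaving the DC bin and the guard band empty
    grid[1:n // 2 + 1] = qam_symbols[:n // 2]          # positive frequencies
    grid[-(n - n // 2):] = qam_symbols[n // 2:]        # negative frequencies
    time_signal = np.fft.ifft(grid) * np.sqrt(n_fft)   # OFDM modulation
    return np.concatenate([time_signal[-n_cp:], time_signal])  # cyclic prefix + symbol

# usage: one OFDM symbol carrying 600 random QPSK symbols
data = (np.random.choice([-1, 1], 600) + 1j * np.random.choice([-1, 1], 600)) / np.sqrt(2)
tx = ofdm_symbol(data)
```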
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications, those that are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware platforms, requiring knowledge of the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context-provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications through the definition of semantic workflows containing the abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance whose activities are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform also supports execution adaptation in case of service failure, user mobility and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications from the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, comparing it with the benefits provided, as well as the efficiency of OpenCOPI's selection and adaptation mechanisms.
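As an illustration of the workflow-to-execution-plan step described above, the following hypothetical sketch maps each abstract activity of a semantic workflow onto the best-scored concrete service that realizes it; the class, field and service names are invented for this example and do not come from the OpenCOPI implementation.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    activity: str    # semantic activity the service realizes
    quality: float   # aggregated QoS score

def build_execution_plan(abstract_workflow, registry):
    """Select, for each abstract activity, the best-scored concrete service that realizes it."""
    plan = []
    for activity in abstract_workflow:
        candidates = [s for s in registry if s.activity == activity]
        if not candidates:
            raise LookupError(f"no service available for activity '{activity}'")
        plan.append(max(candidates, key=lambda s: s.quality))
    return plan

registry = [Service("TempSensorA", "read-temperature", 0.8),
            Service("TempSensorB", "read-temperature", 0.9),
            Service("AlarmService", "raise-alarm", 0.7)]
plan = build_execution_plan(["read-temperature", "raise-alarm"], registry)
print([s.name for s in plan])  # ['TempSensorB', 'AlarmService']
```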
Abstract:
In this work, the Differential Cryptanalysis technique, introduced in 1990 by Biham and Shamir, is applied to the Papílio cryptosystem, developed by Karla Ramos, in order to test it and, most importantly, to prove its relevance, as has been done for other block ciphers such as DES, Blowfish and FEAL-N(X). This technique is based on the analysis of the differences between plaintexts and their respective ciphertexts, in search of patterns that assist in the discovery of the subkeys and, consequently, of the master key. These differences are obtained through XOR operations. Through this analysis, in addition to obtaining the difference patterns of Papílio, we also seek to identify its main characteristics and its behavior throughout its 16 rounds, identifying and replacing, when necessary, factors that can be improved in accordance with its pre-established definitions, thus providing greater security in the use of the algorithm.
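The basic step of the technique can be illustrated in a few lines: for a fixed input difference, count how often each output difference appears at the output of a nonlinear component. The sketch below uses a 4-bit S-box (row 0 of DES S1) purely as an example; it is not Papílio's actual round function.

```python
from collections import Counter

# 4-bit S-box used only as an example (row 0 of DES S1), not Papílio's round function
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def difference_distribution(input_diff):
    """Count how often each output difference occurs for a fixed input (XOR) difference."""
    counts = Counter()
    for x in range(16):
        counts[SBOX[x] ^ SBOX[x ^ input_diff]] += 1
    return counts

# high-probability (input difference -> output difference) pairs guide subkey recovery
print(difference_distribution(0xB).most_common(3))
```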
Abstract:
With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all the requirements of an application. To fulfill such requirements, a composition of services that aggregates services provided by different cloud platforms may be necessary instead of a single service. In order to generate added value for the user, this composition of services provided by several Cloud Computing platforms requires a platform-integration solution, which involves handling a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services, taking into account metadata about the services, such as QoS (Quality of Service), price, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of its service composition, selection and adaptation processes, as well as the potential of using this middleware in heterogeneous cloud computing scenarios.
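As a rough illustration of the QoS-aware selection the middleware performs, the sketch below scores candidate services from different platforms with a simple weighted sum over their metadata; the attributes, weights and service names are assumptions made for this example.

```python
def score(service, weights):
    """Weighted sum: availability is a benefit criterion, price a cost criterion."""
    return (weights["availability"] * service["availability"]
            - weights["price"] * service["price"])

candidates = [
    {"name": "storage@cloudA", "availability": 0.999, "price": 0.10},
    {"name": "storage@cloudB", "availability": 0.990, "price": 0.05},
]
weights = {"availability": 0.7, "price": 0.3}
best = max(candidates, key=lambda s: score(s, weights))
print(best["name"])
```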
Abstract:
Reverberation is caused by the reflection of sound off surfaces close to the sound source during its propagation to the listener. The impulse response of an environment represents its reverberation characteristics. Because it depends on the environment, reverberation conveys to the listener characteristics of the space where the sound originated, and its absence usually does not sound "natural". When recording sounds, the desired reverberation characteristics of an environment are not always available; therefore, methods for artificial reverberation have been developed, always seeking implementations that are more efficient and more faithful to real environments. This work presents an implementation in FPGAs (Field Programmable Gate Arrays) of a classic digital audio reverberation structure, based on a proposal by Manfred Schroeder, using sets of all-pass and comb filters. The developed system exploits the use of reconfigurable hardware as a platform for the development and implementation of digital audio effects, focusing on modularity and reuse.
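For reference, a minimal software sketch of the Schroeder structure mentioned above (parallel feedback comb filters followed by all-pass filters in series) is given below; the delay lengths and gains are illustrative, and the thesis implements the equivalent structure in FPGA hardware rather than in software.

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - delay]."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, g):
    """Schroeder all-pass filter: y[n] = -g * x[n] + x[n - delay] + g * y[n - delay]."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        x_d = x[n - delay] if n >= delay else 0.0
        y_d = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

def schroeder_reverb(x):
    wet = sum(comb(x, d, 0.75) for d in (1557, 1617, 1491, 1422))  # parallel combs
    for d, g in ((225, 0.7), (556, 0.7)):                          # series all-passes
        wet = allpass(wet, d, g)
    return wet
```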
Abstract:
This work aims to understand how cloud computing fits into the government IT decision agenda, in the light of the multiple streams model, considering the current status of public IT policies, the dynamics of agenda setting for the area, the interface between the various institutions, and existing initiatives on the use of cloud computing in government. To this end, a qualitative study was conducted through interviews with two groups, one of policy makers and the other of IT managers. As analysis techniques, this work made use of content analysis and document analysis, with some results presented as word clouds. Among the main results, the area was found to be over-regulated, with rules usually scattered across various agencies of the federal government, which hinders the work of managers. A lack of knowledge of standards, government programs, regulations and guidelines was identified. Among these stood out the lack of understanding of the TI Maior program, the perceived ineffectiveness of the National Broadband Plan in the view of the respondents, and the influence of the Brazilian Internet Civil Framework (Marco Civil da Internet) as an element that can hold back advances in the use of cloud computing in the Brazilian government. Also noteworthy is the bureaucracy involved in the acquisition of IT goods and services, which in many cases limits technological advances. Regarding the influence of the actors, it was not possible to identify the presence of a policy entrepreneur, and a lack of political force was noticed. The politics stream was affected only by changes within the government. Fragmentation was a major factor in weakening the formation of an agenda around the theme. Information security was pointed out by the respondents as the main limitation, coupled with the lack of training of public servants. In terms of benefits, savings of resources stand out, followed by improved efficiency. Finally, the discussion about cloud computing needs to advance within the public sphere, given that international experience is already far ahead, framing cloud computing as an element responsible for improving processes and services and for saving public resources.
Abstract:
This work proposes the use of a behavioral model of the hysteresis loop of the ferroelectric capacitor as a new alternative to the usually costly techniques for computing nonlinear functions in artificial neurons implemented on a reconfigurable hardware platform, in this case an FPGA device. Initially, the proposal was validated by implementing Boolean logic with digital models of two artificial neurons: the Perceptron and a variation of the Integrate-and-Fire Spiking Neuron model, both using a digital model of the hysteresis loop of the ferroelectric capacitor as their basic nonlinear unit for computing the neuron outputs. Finally, the analog model of the ferroelectric capacitor was used with the goal of verifying its effectiveness and possibly reducing the number of logic elements required when implementing the artificial neurons in an integrated circuit. The implementations were carried out as Simulink models and synthesized with the DSP Builder software from Altera Corporation.
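A speculative software sketch of the idea follows: a Perceptron whose nonlinearity is a hysteresis loop, here reduced to a Schmitt-trigger-like model standing in for the ferroelectric-capacitor behavioral model used in the thesis. The thresholds, weights and loop shape are assumptions made only for illustration.

```python
class HysteresisNeuron:
    """Perceptron-like neuron whose activation is a two-state hysteresis loop."""
    def __init__(self, weights, bias, high=0.5, low=-0.5):
        self.w, self.b = weights, bias
        self.high, self.low = high, low   # switching thresholds of the loop
        self.state = -1.0                 # remembered "polarization" state

    def activate(self, u):
        # the output only switches when the drive crosses a loop threshold
        if u > self.high:
            self.state = 1.0
        elif u < self.low:
            self.state = -1.0
        return self.state

    def __call__(self, x):
        u = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return self.activate(u)

# example: an AND-like gate on bipolar inputs
neuron = HysteresisNeuron(weights=[0.6, 0.6], bias=-0.6)
print([neuron((a, b)) for a, b in ((-1, -1), (1, -1), (1, 1))])  # [-1.0, -1.0, 1.0]
```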
Abstract:
The main objective of this thesis is to implement a supporting architecture for autonomic hardware systems, capable of managing the hardware running on reconfigurable devices. The proposed architecture implements manipulation, generation and communication functionalities, using the Context-Oriented Active Repository approach. The solution consists of a hardware/software architecture called the Autonomic Hardware Manager (AHM), which contains an Active Repository of hardware components. Using the repository, the architecture is able to manage the connected systems at run time, allowing the implementation of autonomic features such as self-management, self-optimization, self-description and self-configuration. The proposed architecture also contains a meta-model that allows the representation of the operating context of hardware systems. This meta-model is used as the basis for the context-sensing modules needed in the Active Repository architecture. To demonstrate the functionalities of the proposed architecture, experiments were designed and implemented to prove the thesis hypothesis and confirm that the objectives were achieved. Three experiments were planned and implemented: the Reconfigurable Hardware Filter, an application that implements digital filters using reconfigurable hardware; the Autonomic Image Segmentation Filter, which covers the design and implementation of an autonomic image-processing application; and, finally, the Autonomic Autopilot, an autopilot for unmanned aerial vehicles. In this work, the application architectures were organized into modules according to their functionalities. Some modules were implemented in HDL and synthesized in hardware, while others were kept in software. The applications were then integrated with the AHM to allow their adaptation to different operating contexts, making them autonomic.
Abstract:
This work presents an application of a hybrid Fuzzy-ELECTRE-TOPSIS multicriteria approach to a Cloud Computing service selection problem. The research was exploratory, using a case study based on the actual requirements of professionals in the field of Cloud Computing. The results were obtained by conducting an experiment aligned with the case study, using the distinct profiles of three decision makers; the Fuzzy-TOPSIS and Fuzzy-ELECTRE-TOPSIS methods were both applied so that their results could be compared. The solution incorporates fuzzy set theory so that it can handle imprecise or subjective information, thus facilitating the interpretation of the decision makers' judgments in the decision-making process. The results show that both methods were able to rank the alternatives of the problem as expected, but the Fuzzy-ELECTRE-TOPSIS method was able to attenuate the compensatory character present in the Fuzzy-TOPSIS method, resulting in a different ranking of alternatives. The attenuation of the compensatory character stood out positively in the ranking of the alternatives, because it prioritized more balanced alternatives than the Fuzzy-TOPSIS method, a factor shown to be important in the validation of the case study: for the composition of a mix of services, balanced alternatives form a more consistent mix when working with constraints.
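For context, the compensatory ranking that both methods build on is the classical TOPSIS closeness coefficient. The sketch below shows the crisp TOPSIS step only (the fuzzy sets and the ELECTRE outranking phase are omitted), with an invented decision matrix and weights.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (crisp TOPSIS)."""
    m = np.asarray(matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)            # vector-normalize each criterion
    v = m * np.asarray(weights)                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness coefficient, higher is better

# three cloud services scored on cost (lower is better) and performance (higher is better)
closeness = topsis([[100, 0.8], [80, 0.6], [120, 0.9]],
                   weights=[0.4, 0.6], benefit=[False, True])
print(closeness.argsort()[::-1])                 # alternatives from best to worst
```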
Abstract:
Launching centers are designed for scientific and commercial activities with aerospace vehicles. Rocket Tracking Systems (RTS) are part of the infrastructure of these centers and are responsible for collecting and processing the trajectory data of the vehicles. Generally, Parabolic Reflector Radars (PRRs) are used in RTS. However, it is possible to use radars with antenna arrays, or phased arrays (PAs), the so-called Phased Array Radars (PARs). In this case, the excitation signal of each radiating element of the array can be adjusted to perform electronic control of the radiation pattern in order to improve the functionality and maintenance of the system. Therefore, in PAR implementation and reuse projects, the modeling is subject to various combinations of excitation signals, producing a complex optimization problem due to the large number of available solutions. In this case, it is possible to use offline optimization methods, such as Genetic Algorithms (GAs), to compute the problem solutions, which are stored for online applications. Hence, the Genetic Algorithm with Maximum-Minimum Crossover (GAMMC) optimization method was used to develop the GAMMC-P algorithm, which optimizes the modeling step of radiation-pattern control for planar PAs. Unlike a GA with conventional crossover, the GAMMC crosses the fittest individuals with the least fit ones in order to enhance genetic diversity. Thus, the GAMMC prevents premature convergence, increases population fitness and reduces the processing time. The GAMMC-P therefore uses a reconfigurable algorithm with multiple objectives, a different encoding and the MMC genetic operator. The test results show that the GAMMC-P met the proposed requirements for different operating conditions of a planar phased array.
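The Maximum-Minimum Crossover pairing described above can be sketched as follows: the fittest individual is crossed with the least fit one, the second fittest with the second least fit, and so on. The representation, crossover operator and toy objective below are assumptions for illustration, not the GAMMC-P encoding.

```python
import random

def mmc_pairs(population, fitness):
    """Pair the fittest with the least fit: (best, worst), (2nd best, 2nd worst), ..."""
    ranked = sorted(population, key=fitness, reverse=True)
    return [(ranked[i], ranked[-1 - i]) for i in range(len(ranked) // 2)]

def one_point_crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

population = [[random.random() for _ in range(8)] for _ in range(10)]
fitness = lambda ind: -sum((x - 0.5) ** 2 for x in ind)   # toy objective
offspring = [child for p, q in mmc_pairs(population, fitness)
             for child in one_point_crossover(p, q)]
```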
Abstract:
The use of flexibility in the design of façades makes them adaptable to adverse weather conditions, resulting both in the minimization of environmental discomfort and in the improvement of energy efficiency. The present study highlights the potential of flexible façades as a resource to reduce the rigidity and formal repetition usually found in condominiums of standardized houses; as such, the work presented herein contributes to the field of study of architectural design strategies for adapting and integrating buildings within the local climate context. Two façade options were designed using bionics and kinetics as references, as well as their applications to architectural constructions. This resulted in two lightweight, dynamic structures that meet comfort constraints through combinations of movements that control the impact of solar radiation and of cooling on the environment. The efficacy and technical functionality of the façades were tested with comfort-analysis and computer-graphics software, as well as with physical models. Thus, the current research contributes to the improvement of architectural solutions aimed at using passive energy strategies in order to offer both better quality for the users and greater sustainability for the planet.
Abstract:
Interdisciplinarity is the answer in the search for solutions to complex problems, concerning the interfaces between fields of knowledge while respecting their borders. This paradigm is essential in the author's work, and the research presented here is based on this perspective, using interdisciplinary working groups composed of professionals from Computer Science, Social Communication, Architecture and Urbanism, Pedagogy, Psychopedagogy, Nutritional Science, Endocrinology, Occupational Therapy and Nursing; an 8-year-old child, daughter of one of the participating professionals, was also part of the group. This thesis presents the course of the investigation, analyzing the interdisciplinary action of Occupational Therapy and Nutrition in promoting the learning of nutritional concepts through educational-nutritional games in order to prevent childhood obesity in an educational context. The research was analytical, interventionist and quasi-experimental. It took place in a public school in Fortaleza, Ceará, Brazil, between August and December 2004. A non-probabilistic convenience sample of 200 children born between 1994 and 1996 was selected. To analyze the results, triangulation was used, combining quantitative and qualitative approaches. Data collection took place through games specially produced for this research: video games, board games, memory games, puzzles, word scrambles, word searches and interactive primers. Semi-structured interviews, direct and structured observations and focus groups were also carried out. The effectiveness of educational-nutritional games in the learning process was observed, leading to a change of attitude towards eating choices. The games gave similar results with respect to the compared variables (preferences, experience and attitudes, the latter observed through the games) and to the comparison categories (the possibility of learning by playing, fantasy in the learning process, learning nutritional education concepts, and the need for help in the learning process, i.e., mediation). It was shown that educational-nutritional games can be used to teach nutritional concepts through the interdisciplinary action of Occupational Therapy and Nutrition in schools. The simultaneous application of these games led to the optimization of the children's learning process. The need for studies on the adaptation of tools used in children's nutritional education, supported by interdisciplinary action, should be emphasized, since no single discipline, working in a fragmented way, can answer complex problems and help change reality effectively and decisively.
Abstract:
This thesis develops the robustness and stability analysis of a variable-structure model-reference adaptive controller in the presence of disturbances and unmodeled dynamics. The controller is applied to uncertain, monovariable, linear time-invariant plants with relative degree one, and its development is based on indirect adaptive control. In the direct approach, well known in the literature, the switching laws are designed for the controller parameters. In the indirect one, they are designed for the plant parameters and, thus, the selection of the relay upper bounds becomes more intuitive, since they are related to physical parameters whose uncertainties are easier to bound, such as resistances, capacitances, moments of inertia and friction coefficients. Two versions of the controller algorithm, together with their stability analyses, are presented. Global asymptotic stability with respect to a compact set is guaranteed in both cases. Simulation results under adverse operating conditions are presented to verify the theoretical results and to show the performance and robustness of the proposed controller. Moreover, for practical purposes, some simplifications of the original algorithm are developed.
Abstract:
In order to guarantee database consistency, a database system must synchronize the operations of concurrent transactions. The database component responsible for this synchronization is the scheduler. A scheduler synchronizes operations belonging to different transactions by means of concurrency control protocols. Concurrency control protocols may exhibit different behaviors: in general, a scheduler's behavior can be classified as aggressive or conservative. This work presents the Intelligent Transaction Scheduler (ITS), which is able to synchronize the execution of concurrent transactions in an adaptive manner. This scheduler adapts its behavior (aggressive or conservative) according to the characteristics of the computing environment in which it is inserted, using an expert system based on fuzzy logic. The ITS can implement different correctness criteria, such as conventional (syntactic) serializability and semantic serializability. In order to evaluate the performance of the ITS in relation to schedulers with exclusively aggressive or conservative behavior, it was applied in a dynamic environment, a Mobile Database Community (MDBC). An MDBC simulator was developed and many sets of tests were run. The experimental results presented herein demonstrate the efficiency of the ITS in synchronizing transactions in a dynamic environment.
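A toy sketch of the kind of fuzzy decision such a scheduler makes is shown below: triangular membership functions over observed conflict and abort rates drive the choice between an aggressive and a conservative behavior. The rule base and membership functions are invented for illustration and are not those of the ITS expert system.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def scheduler_behavior(conflict_rate, abort_rate):
    low_conflict  = tri(conflict_rate, -0.5, 0.0, 0.5)
    high_conflict = tri(conflict_rate,  0.5, 1.0, 1.5)
    low_abort     = tri(abort_rate, -0.5, 0.0, 0.5)
    high_abort    = tri(abort_rate,  0.5, 1.0, 1.5)
    aggressive   = min(low_conflict, low_abort)    # rule: few conflicts and few aborts
    conservative = max(high_conflict, high_abort)  # rule: many conflicts or many aborts
    return "aggressive" if aggressive >= conservative else "conservative"

print(scheduler_behavior(conflict_rate=0.2, abort_rate=0.1))  # -> aggressive
```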
Abstract:
The objective of this thesis is to propose a method for a mobile robot to build a hybrid map of an indoor, semi-structured environment. The topological part of this map deals with the spatial relationships among rooms and corridors. It is a topology-based map, in which the nodes of the graph are rooms or corridors and each edge between two distinct nodes represents a door. The metric part of the map consists of a set of parameters describing a geometric figure that adapts to the free space of the local environment. This figure is computed from a set of points that sample the boundaries of the local free space. These points are obtained with range sensors and with knowledge of the robot's pose. A method based on the generalized Hough transform is applied to this set of points in order to obtain the geometric figure. The building of the hybrid map is an incremental procedure, accomplished while the robot explores the environment. Each room is associated with a local metric map and, consequently, with a node of the topological map. During the mapping procedure, the robot may use recent metric information about the environment to improve its global or relative pose.
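As a simplified illustration of the metric-map step, the sketch below fits a rectangle to boundary points sampled by range sensors, using a plain search over candidate orientations and the resulting bounding extents; this stands in for, and is not, the generalized Hough transform formulation used in the thesis.

```python
import numpy as np

def fit_room_rectangle(points, n_angles=90):
    """Fit an oriented rectangle to boundary points of the local free space."""
    pts = np.asarray(points, dtype=float)
    best_angle, best_area = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        rot = pts @ np.array([[c, -s], [s, c]])
        # a good orientation makes the axis-aligned bounding box of the rotated points tight
        area = np.ptp(rot[:, 0]) * np.ptp(rot[:, 1])
        if area < best_area:
            best_angle, best_area = theta, area
    c, s = np.cos(best_angle), np.sin(best_angle)
    rot = pts @ np.array([[c, -s], [s, c]])
    return best_angle, rot.min(axis=0), rot.max(axis=0)  # orientation and extents
```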