Abstract:
The work described in this thesis aims to support the distributed design of integrated systems, considering specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers, and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such a structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based on the joint application of two homonymous technologies: CAD frameworks and object-oriented frameworks. The former concept was coined in the late 1980s within the electronic design automation community and comprises a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model for building extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. This object-oriented framework is included within a CAD framework, where it plays important roles in typical CAD framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD framework, named Cave2, followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches:
- Object-oriented frameworks are extensible by design, and this also holds for the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations still inherit the architectural and functional aspects implemented in the object-oriented framework foundation.
- The design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows different visualization strategies for a given design data set, giving collaborating parties the flexibility to choose individual visualization settings.
- The control of the consistency between semantics and visualization, a particularly important issue in a design environment with multiple views of a single design, is also included in the foundations of the object-oriented framework. This mechanism is generic enough to be used by further extensions of the design data model, as it is based on the inversion of control between view and semantics: the view receives the user input and propagates it as an event to the semantic model, which evaluates whether a state change is possible; if so, it triggers the change of state of both semantics and view (a minimal sketch of this mechanism is given after this abstract). Our approach took advantage of this inversion of control and added a layer between semantics and view to account for multi-view consistency.
- To optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated in an event object, which may be propagated to the design semantics, and thus to other possible views, according to the consistency policy in use. Furthermore, the use of event pools allows a late synchronization between view and semantics in case of unavailability of a network connection between them.
- The use of proxy objects significantly raised the level of abstraction of the integration of design automation resources, as both remote and local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracts the network location of such resources, allowing resource addition and removal at runtime.
- The implemented CAD framework is completely based on Java technology, relying on the Java Virtual Machine as the layer which grants independence between the CAD framework and the operating system.
All these improvements contributed to a higher abstraction of the distribution of design automation resources and also introduced a new paradigm for remote interaction between designers. The resulting CAD framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second extends the foundations of the implemented object-oriented framework to support interface-based design; these extensions (design representation primitives and tool blocks) are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study concerns the integration of multimedia metadata into the design data model, a possibility explored in the context of an online educational and training platform.
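To make the inversion of control concrete, the following is a minimal Java sketch of the view/semantics protocol described in this abstract; the class and method names are hypothetical and do not come from the actual Cave2 code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch (hypothetical names, not the actual Cave2 API) of the
// inversion of control described above: views never change themselves;
// they forward each user interaction as an event to the semantic model,
// which validates it and, if the change is legal, commits it and then
// notifies every registered view, local or remote.
class DesignEvent {
    final String property;
    final Object newValue;
    DesignEvent(String property, Object newValue) {
        this.property = property;
        this.newValue = newValue;
    }
}

interface DesignView {
    void refresh(DesignEvent committed);   // update the visualization only
}

class SemanticModel {
    private final List<DesignView> views = new ArrayList<>();

    void attach(DesignView v) { views.add(v); }

    /** Entry point for user input propagated by any view. */
    void propose(DesignEvent e) {
        if (isStateChangePossible(e)) {               // consistency check first
            apply(e);                                 // change the semantics
            for (DesignView v : views) v.refresh(e);  // then update all views
        }
    }

    private boolean isStateChangePossible(DesignEvent e) { return true; } // stub
    private void apply(DesignEvent e) { /* update the design data */ }
}
```

Because views only render changes that the semantic model has already validated, any number of local or remote views can be attached without duplicating the consistency logic.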
Abstract:
Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above the tabletop. This dissertation contributes tCAD, a 3D modeling tool combining fiducial marker tracking, finger tracking and depth sensing in a single system. It presents the technical details of how these features were integrated, attesting to their viability through the design, development and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above a table surface.
Abstract:
This thesis seeks to contextualize the theoretical debate between the implementation of the Federal Government's public education policy focused on distance learning and the legal foundations for its enforcement, in order to raise questions and comments on the topic. Its importance lies in providing scientific input that the academy, particularly at UFRN, and segments of society can use to question and rethink the complex relationship between socio-economic and geographic conditions and access to higher education. It consists of a descriptive study on the institutionalization of distance education at UFRN as a mechanism for expanding access to higher education. To that end, the research seeks to understand whether the distance undergraduate courses offered by the UAB system and implemented at UFRN promote the expansion of access to higher education, since it is during implementation that rules, routines and social processes are converted from intentions into action. The discussion in this study moved between two opposing views of implementation models: top-down and bottom-up. It is worth noting that the PNE and PDE documents and the UAB and REUNI programs reflect positively on the improvement of the educational level of the country's population. It is a qualitative study, based on bibliographic research, document analysis and a field study, in which four interviews were conducted in 2010 with the SEDIS/UAB management staff at UFRN. The data were analyzed through document analysis and content analysis techniques. The results show that the process of implementation of distance education at UFRN is in progress. According to our results, the research objective was achieved, but there is a need to rethink the infrastructure conditions of the learning centers, the structure of the academic calendar and the management of SEDIS at UFRN with regard to the expansion of existing vacancies and the supply of new courses, given the need to redesign the Secretariat's capacity to sustain the undergraduate courses offered by the Federal Government to be implemented at the institution. It was also found that dropout levels still present a challenge to this teaching model. Given this context, we conclude that the greatest contribution of UAB, and consequently of UFRN through distance undergraduate courses (in Mathematics, Physics, Chemistry, Geography and Biological Sciences, in addition to bachelor's degrees in Business and Public Administration), is related to increasing the number of vacancies and the accessibility of a population that was previously deprived of access to university.
Abstract:
Uncertain systems have recently attracted much attention from the academic community, from the standpoint of both scientific research and practical applications. A series of mathematical approaches has emerged in order to deal with the uncertainties of real physical systems. In this context, the work presented here focuses on the application of control theory to a nonlinear dynamical system subject to parametric variations, with robustness in mind. As the practical application of this work, we used a Quanser coupled-tank system, in a configuration whose mathematical model is represented by a second-order single-input, single-output (SISO) system. Control is performed by PID controllers, designed by various techniques, aiming to achieve robust performance and stability when subjected to parameter variations. Other controllers are designed with the intention of comparing the performance and robust stability of such systems. The results are obtained and compared through simulations in MATLAB/Simulink.
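As orientation for the reader, a discrete-time PID controller of the kind applied to such a SISO plant can be sketched as follows; the positional form below is generic, and the gains and sample time are illustrative, not the values tuned in the work:

```java
/** Minimal discrete PID controller (positional form) for a SISO plant.
 *  Gains kp, ki, kd and sample time dt are illustrative placeholders. */
public class Pid {
    private final double kp, ki, kd, dt;
    private double integral = 0.0, prevError = 0.0;

    public Pid(double kp, double ki, double kd, double dt) {
        this.kp = kp; this.ki = ki; this.kd = kd; this.dt = dt;
    }

    /** One control step: u = Kp*e + Ki*integral(e) + Kd*de/dt. */
    public double step(double setpoint, double measurement) {
        double error = setpoint - measurement;
        integral += error * dt;                       // rectangular integration
        double derivative = (error - prevError) / dt; // backward difference
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
}
```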
Abstract:
The increasing number of attacks on computer networks has been addressed by increasing the resources applied directly to the active routing equipment of these networks. In this context, firewalls have become established as essential elements in the process of controlling the input and output of packets in a network. With the advent of intrusion detection systems (IDS), efforts have been made to incorporate packet filtering based on signatures into traditional firewalls. This integration adds IDS functions (such as signature-based filtering, until then a passive element) to the functions already existing in the firewall. Despite the efficiency gained by this incorporation in blocking attacks with known signatures, application-level filtering introduces a natural delay in the analyzed packets and can reduce the machine's capacity to filter the remaining packets, owing to the machine resources demanded by this level of filtering. This work presents models for treating this problem based on re-routing packets for analysis by a sub-network with specific filters. The suggested implementation of this model aims to reduce the performance problem and open space for scenarios in which other non-conventional filtering solutions (spam blocking, P2P traffic control/blocking, etc.) can be inserted into the filtering sub-network without overloading the main firewall of a corporate network.
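The re-routing model can be illustrated with a toy dispatch rule: the main firewall applies only cheap header checks and diverts traffic that needs expensive analysis to the filtering sub-network. The sketch below uses hypothetical class names and an arbitrary selection policy; it is a schematic reading of the model, not code from the work:

```java
// Toy model (hypothetical classes, not a real firewall API) of the
// re-routing policy described above: the main firewall keeps only cheap
// header checks and diverts selected traffic to a filtering sub-network,
// where heavier analyses (signatures, spam, P2P control) are performed.
class PacketHeader {
    final String protocol;
    final int dstPort;
    PacketHeader(String protocol, int dstPort) {
        this.protocol = protocol;
        this.dstPort = dstPort;
    }
}

class MainFirewall {
    /** Decide the next hop for each packet (policy is illustrative). */
    String route(PacketHeader p) {
        boolean needsDeepAnalysis =
                p.dstPort == 25                 // SMTP -> spam filtering
             || p.dstPort == 80                 // HTTP -> signature-based IDS
             || "p2p".equals(p.protocol);       // P2P  -> traffic control
        // Divert expensive inspection; keep the main firewall's fast path light.
        return needsDeepAnalysis ? "filtering-subnet-gateway" : "default-gateway";
    }
}
```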
Abstract:
This work proposes a building automation security system whose main objective is the creation of a system capable of easily supervising the installations of a building, in order to meet personal and patrimonial security needs, aiming at portability, low cost and ease of use. An alarm central and access controller was designed, with digital and analog inputs for sensors and outputs for a buzzer, telephone dialing and an electronic lock. The system is supervised by software that requests information from the alarm central through a computer's serial port (RS-232). The supervisory software was developed on the LabVIEW platform and displays the received data on a graphical interface, reporting the current states of the sensors distributed throughout the building and system events such as alarm occurrences. The system can also be viewed over the Internet by users registered by the building security system administrator.
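The supervisory polling cycle can be sketched as follows. The work implements it in LabVIEW; the Java version below only illustrates the request/response idea over the RS-232 streams, and the framing (one poll opcode, then one status byte per sensor) is an assumed protocol, not the alarm central's actual one:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch of the supervisory polling cycle over RS-232 streams.
// The framing used here (one poll opcode, one status byte per sensor)
// is an assumption for illustration, not the central's real protocol.
public class AlarmPoller {
    private static final byte POLL_COMMAND = 0x05;   // hypothetical opcode

    /** Request and decode the state of each sensor from the alarm central. */
    public static boolean[] pollSensors(InputStream in, OutputStream out,
                                        int sensorCount) throws IOException {
        out.write(POLL_COMMAND);
        out.flush();
        boolean[] triggered = new boolean[sensorCount];
        for (int i = 0; i < sensorCount; i++) {
            int b = in.read();                       // one status byte per sensor
            if (b < 0) throw new IOException("alarm central stopped responding");
            triggered[i] = (b != 0);                 // nonzero means alarm
        }
        return triggered;
    }
}
```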
Abstract:
Relevant research has been growing on electric machines without bearings, generally named bearingless motors (in Portuguese, "motor mancal"). This work makes an introductory presentation of the bearingless motor and its peripheral devices, focusing on the design and implementation of the sensors and interfaces needed to control the radial positioning and rotation of the machine's rotor. The signals from the machine are conditioned at the analog inputs of a TMS320F2812 DSP and used in the control program. The purpose of this work is to design and build a system of sensors and interfaces suited to the inputs and outputs of the TMS320F2812 DSP to control a bearingless motor, bearing in mind modularity, circuit simplicity, a small number of power supplies, good noise immunity and a good frequency response above 10 kHz. The system is tested on an ordinary 3.7 kVA induction motor modified to be used as a bearingless motor with divided windings.
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. The acquisition, processing and interpretation of seismic data are the stages that constitute a seismic study. Seismic processing, in particular, is focused on producing the imaging that represents the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades due to the demands of the oil industry, and also due to hardware advances that achieved higher storage and digital information processing capabilities, enabling the development of more sophisticated processing algorithms, such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, owing to the heuristics of the mathematical algorithms and the extensive amount of data input and output involved in the process; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impracticable. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm, using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, analyses such as speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
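The work parallelizes the RTM core with OpenMP; as a language-neutral illustration of the same idea, the sketch below parallelizes the cross-correlation imaging condition (a typical RTM hot spot) over the spatial grid using Java's parallel streams, which play the role of an OpenMP parallel-for over the spatial loop. The kernel shape is a common textbook simplification, not the thesis's actual code:

```java
import java.util.stream.IntStream;

/** Schematic RTM imaging condition: for each grid point, cross-correlate
 *  the source and receiver wavefields over time. Parallelized over grid
 *  points, analogous to an OpenMP parallel-for on the spatial loop. */
public class RtmKernelSketch {
    public static void applyImagingCondition(float[][] src, float[][] rcv,
                                             float[] image) {
        int nt = src.length;          // number of time steps
        int n = image.length;         // number of grid points
        IntStream.range(0, n).parallel().forEach(i -> {
            float acc = 0f;
            for (int t = 0; t < nt; t++)
                acc += src[t][i] * rcv[t][i];   // zero-lag cross-correlation
            image[i] += acc;                    // each i is written by one thread
        });
    }
}
```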
Abstract:
Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of to the entire database, reduces the computational costs, owing to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. These methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the position the neurons occupy in the data space after training the network. The goal is thus to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets, as well as real-world data sets. The results obtained were compared with those of a number of well-known methods from the literature.
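One plausible reading of the gravitational post-processing is sketched below: each trained SOM prototype is given a "mass" proportional to its local pattern density, and prototypes attract each other with a Newton-like force, so that prototypes belonging to the same cluster condense together. The formulation is an assumption for illustration, not the exact algorithm proposed in the work:

```java
/** Minimal sketch (assumed formulation): one step of gravitational
 *  condensation of trained SOM prototypes. w[i] is the weight vector of
 *  neuron i and mass[i] its local pattern density ("mass"). */
public class SomGravity {
    public static void gravitationalStep(double[][] w, double[] mass, double g) {
        int n = w.length, d = w[0].length;
        double[][] delta = new double[n][d];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dist2 = 1e-12;               // avoid division by zero
                for (int k = 0; k < d; k++) {
                    double diff = w[j][k] - w[i][k];
                    dist2 += diff * diff;
                }
                double force = g * mass[i] * mass[j] / dist2; // Newton-like law
                double dist = Math.sqrt(dist2);
                for (int k = 0; k < d; k++)         // pull prototype i toward j
                    delta[i][k] += force * (w[j][k] - w[i][k]) / dist;
            }
        for (int i = 0; i < n; i++)                 // apply all moves at once
            for (int k = 0; k < d; k++) w[i][k] += delta[i][k];
    }
}
```

Iterating this step makes dense groups of prototypes collapse into tight clumps, which visually sharpens the cluster structure before partitioning.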
Abstract:
The aim of this study is to investigate the development of written Interlanguage in English as an Additional Language (AL) by students in the 2nd grade of Ensino Fundamental I in a bilingual school in the city of Natal-RN. For this purpose, two research questions guided the study: (a) which hypotheses can be inferred from the writing development of the bilingual learners of English as an AL? and (b) what is the impact of the type of input, monomodal or multimodal, on the Interlanguage development in the AL of bilingual learners? The 38 learners were divided into a control group, with 21 learners exposed to monomodal input, and an experimental group, with 17 learners exposed to multimodal input, and pre- and post-tests were applied to both groups. A mixed-methods research design was adopted (DÖRNYEI, 2007), involving both qualitative and quantitative data collection and analysis. The qualitative aspect comprised descriptive characteristics that interpreted the central cognitive processes in the acquisition of writing in the AL by the learners. Through these interpretations, it was possible to understand the constitution of written Interlanguage (SELINKER, 1972) according to the data generated by the learners. The quantitative data were presented as the results generated by the experimental design. They thus narrowed down the relations between the dependent variable, the writing development (that is, how close it is to the target form), which was modified throughout the process by the independent variable, the quality of input (VAN PATTEN, 2002; GASS, 1997; SCHMIDT, 1986; PARADIS, 2009, 2010; ELLIS, 1995), which, being monomodal or multimodal, possibly altered the route of acquisition. The quantitative results pointed towards significant gains by the experimental group, which was exposed to multimodal input, suggesting that the learners in this group seem to have been more able to cognitively register (SCHMIDT, 1990) aspects of learning than the learners in the control group.
Abstract:
In almost all cases, the goal of the design of automatic control systems is to obtain the parameters of the controllers, which are described by differential equations. In general, the controller is artificially built, so it is possible to update its initial conditions. In the design of optimal quadratic regulators, the initial conditions of the controller can be changed in an optimal way and can improve the performance of the controlled system. Following this idea, an LMI-based design procedure to update the initial conditions of PI controllers, considering nonlinear plants described by Takagi-Sugeno fuzzy models, is presented. The importance of the proposed method is that it also allows other specifications, such as the decay rate and constraints on the control input and output. An application to the control of an inverted pendulum illustrates the effectiveness of the proposed method.
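As background, the standard Takagi-Sugeno setting with parallel distributed compensation and a decay-rate specification can be written as LMIs. The formulation below is the classical Tanaka-Wang one, given for orientation only; it is not necessarily the exact set of conditions derived in this work:

$$
\dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\bigl(A_i x(t) + B_i u(t)\bigr),
\qquad
u(t) = -\sum_{i=1}^{r} h_i(z(t))\, F_i\, x(t),
$$

and a decay rate $\alpha > 0$ (that is, $\dot V \le -2\alpha V$ for $V = x^{\top} X^{-1} x$) is guaranteed if there exist $X \succ 0$ and matrices $M_i$ such that

$$
-X A_i^{\top} - A_i X + M_i^{\top} B_i^{\top} + B_i M_i - 2\alpha X \succ 0,
\qquad F_i = M_i X^{-1},
$$

for all $i$ (plus the analogous cross conditions for the pairs $i < j$ whose rules can be active simultaneously). Constraints on the control input and output are then added as extra LMIs in the same variables $X$ and $M_i$.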
Abstract:
Many studies on the quality of environmental ecosystems related to polycyclic aromatic hydrocarbons (PAHs) have been carried out routinely, owing to their ubiquitous presence worldwide and to their potential toxicity after biotransformation. PAHs may be introduced into the environment by natural and anthropogenic processes, from direct runoff and discharges and from indirect atmospheric deposition. Sources of naturally occurring PAHs include natural fires, natural oil seepage and recent biological or diagenetic processes. Anthropogenic sources of PAHs, acute or chronic, are the combustion of organic matter (petroleum, coal, wood) and the waste and releases/spills of petroleum and derivatives (river runoff, sewage outfalls, maritime transport, pipelines). Besides the coexistence of multiple sources of PAHs in environmental samples, these compounds are subject to many processes that determine their geochemical fate (physico-chemical transformation, biodegradation and photo-oxidation) and alter their composition. All these facts make the identification of the hydrocarbon sources, whether petrogenic, pyrolytic or natural, a challenge. One of the objectives of this study is to establish tools to identify the origin of hydrocarbons in environmental samples. PAH diagnostic ratios and principal component analysis (PCA) of PAHs were tested on a critical area: the sediments of Guanabara Bay. Guanabara Bay is located in a complex urban area of Rio de Janeiro with high anthropogenic influence, being an endpoint of chronic pollution from Greater Rio, and it was the scene of an acute oil release event in January 2000. Thirty-eight compounds, parent and alkylated PAHs, were quantified in 21 sediment samples collected in two surveys, in 2000 and 2003. The PAH levels varied from 400 to 58439 ng g-1. Both techniques tested for the origin identification of hydrocarbons showed their applicability, being able to discriminate the PAH sources for the majority of the samples analysed. The bay sediments were separated into two large clusters: sediments with a clear pattern of petrogenic introduction of hydrocarbons (from the intertidal area) and sediments with combustion characteristics (from the subtidal region). Only a minority of the samples did not display a clear contribution of petrogenic or pyrolytic input. The diagnostic ratios that exhibited the highest ability to distinguish combustion- and petroleum-derived PAH inputs for Guanabara Bay sediments were (Phenanthrene+Anthracene)/(Phenanthrene+Anthracene+C1-Phenanthrenes); Fluoranthene/(Fluoranthene+Pyrene); and Σ(other 3-6 ring PAHs)/Σ(5 alkylated PAH series). The PCA results proved to be a useful tool for PAH source identification in the environment, corroborating the diagnostic indexes. Regarding the temporal evaluation carried out in this study, no significant changes were verified in the predominant source class of the samples. This result indicates that the hydrocarbons present in the Guanabara Bay sediments are mainly related to long-term anthropogenic input and not directly related to acute events such as the oil spill of January 2000. These findings were similar to those of various international estuarine sites. Finally, this work had the complementary objective of evaluating the level of exposure of the aquatic organisms of Guanabara Bay to hydrocarbons. It was a preliminary study in which 12 individual biliary metabolites of PAHs were quantified in four demersal fish representing three different families.
The analysed metabolites were 1-hydroxynaphthalene, 2-hydroxynaphthalene, 1-hydroxyphenanthrene, 9-hydroxyphenanthrene, 2-hydroxyphenanthrene, 1-hydroxypyrene, 3-hydroxybiphenyl, 3-hydroxyphenanthrene, 1-hydroxychrysene, 9-hydroxyfluorene, 4-hydroxyphenanthrene and 3-hydroxybenzo(a)pyrene. The metabolite concentrations were found to be high, ranging from 13 to 177 µg g-1; however, they were similar to those of regions worldwide under high anthropogenic input. Besides the metabolites established by the protocol used, it was possible to verify high concentrations of three other compounds not yet reported in the literature, related to the pyrolytic PAH contribution to the Guanabara Bay aquatic biota: isomers of 1-hydroxypyrene and 3-hydroxybenzo(a)pyrene.
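For orientation, the first two diagnostic ratios have the form below; the interpretation thresholds shown for Fluoranthene/(Fluoranthene+Pyrene) are the ones commonly cited in the literature (e.g., Yunker et al., 2002), not necessarily the exact cutoffs adopted in this study:

$$
R_{1} = \frac{\mathrm{Phe} + \mathrm{Ant}}{\mathrm{Phe} + \mathrm{Ant} + \mathrm{C1}\text{-}\mathrm{Phe}},
\qquad
R_{2} = \frac{\mathrm{Fl}}{\mathrm{Fl} + \mathrm{Py}},
$$

$$
R_{2} < 0.4 \;\Rightarrow\; \text{petrogenic},
\qquad
0.4 \le R_{2} \le 0.5 \;\Rightarrow\; \text{petroleum combustion},
\qquad
R_{2} > 0.5 \;\Rightarrow\; \text{biomass/coal combustion}.
$$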
Abstract:
The next generation of computers is expected to consist of architectures with multiple processors and/or multicore processors. In this sense, there are challenges related to interconnection features, operating frequency, on-chip area, power dissipation, performance and programmability. The interconnection and communication mechanism considered ideal for this type of architecture is the network-on-chip, owing to its scalability, reusability and intrinsic parallelism. Communication in networks-on-chip is accomplished by transmitting packets that carry data and instructions representing requests and responses between the processing elements interconnected by the network. Packets are transmitted in a pipelined fashion between the routers of the network, from the source to the destination of the communication, even allowing simultaneous communications between different source-destination pairs. Based on this fact, we propose to transform the entire communication infrastructure of the network-on-chip, with its routing, arbitration and storage mechanisms, into a high-performance parallel processing system. In this proposal, the packets are formed by the instructions and data that represent the applications, and the instructions are executed on the routers as the packets are transmitted, exploiting the pipelined and parallel transmission of communications. Traditional processors are not used; there are only simple cores that control access to memory. An implementation of this idea, called IPNoSys (Integrated Processing NoC System), has its own programming model and a routing algorithm that guarantees the execution of all instructions in the packets, preventing deadlock, livelock and starvation. The architecture provides mechanisms for input and output, interruption and operating system support. As a proof of concept, a programming environment and a simulator for this architecture were developed in SystemC, allowing the configuration of various parameters and the collection of several results to evaluate it.
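A toy model of "execution in transit" may help visualize the proposal: a packet carries a queue of instructions plus its operands, and each router executes the head instruction before forwarding the packet one hop. The sketch below uses assumed semantics and names, not the real IPNoSys instruction set (and the real system is modeled in SystemC rather than Java):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Toy model (assumed semantics, not the real IPNoSys ISA): a packet
 *  carries a queue of instructions and a stack of operands; each router
 *  executes the head instruction and forwards the packet one hop. */
class Packet {
    final Deque<String> instructions = new ArrayDeque<>(); // e.g. "ADD", "MUL"
    final Deque<Integer> operands = new ArrayDeque<>();
    int hopsToDestination;
}

class Router {
    /** Execute one instruction "in transit", then hand the packet on. */
    Packet process(Packet p) {
        if (!p.instructions.isEmpty()) {
            String op = p.instructions.poll();
            int b = p.operands.pop(), a = p.operands.pop();
            if ("ADD".equals(op))      p.operands.push(a + b);
            else if ("MUL".equals(op)) p.operands.push(a * b);
            else throw new IllegalArgumentException("unknown op " + op);
        }
        p.hopsToDestination--;     // routing itself is abstracted away here
        return p;
    }
}
```

A chain of such routers behaves like a pipeline: while one packet has an instruction executed at hop k, another packet can be executing at hop k-1, which is the source of the architecture's parallelism.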
Abstract:
This dissertation aims at extending the JCircus tool, a translator from formal specifications into code, which receives a Circus specification as input and translates it into Java code. Circus is a formal language whose syntax is based on the syntax of Z and CSP. The code generated by JCircus uses JCSP, a Java API that implements CSP primitives. As JCSP does not implement all of CSP's primitives, the translation strategy from Circus to Java is not trivial: some CSP primitives, like parallelism, external choice, communication and multi-synchronization, are only partially implemented. As an additional scope, this dissertation also develops a tool for testing JCSP programs, called JCSPUnit, which will also be included in the new version of JCircus. The extended version of JCircus will be called JCircus 2.0.
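For readers unfamiliar with JCSP, the snippet below shows the kind of building blocks JCircus targets: CSP-style processes communicating over a channel and composed in parallel. It assumes the `org.jcsp.lang` channel-factory API and is a hand-written illustration, not actual JCircus output:

```java
import org.jcsp.lang.*;

// A minimal JCSP sketch: two processes communicating over an int channel
// and composed in parallel, mirroring the CSP process P || Q with events
// c!i (write) and c?x (read).
public class ProducerConsumer {
    public static void main(String[] args) {
        final One2OneChannelInt chan = Channel.one2oneInt();
        CSProcess producer = () -> {
            for (int i = 0; i < 5; i++) chan.out().write(i);        // c!i
        };
        CSProcess consumer = () -> {
            for (int i = 0; i < 5; i++)
                System.out.println("received " + chan.in().read()); // c?x
        };
        new Parallel(new CSProcess[]{producer, consumer}).run();    // P || Q
    }
}
```

Primitives such as external choice and multi-synchronization have no direct one-to-one counterpart in this API, which is precisely why the Circus-to-Java translation strategy is non-trivial.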
Abstract:
This research aims at developing a variable structure adaptive backstepping controller (VS-ABC) using state observers, for SISO (single-input, single-output), linear and time-invariant systems with relative degree one. To this end, the filters were replaced by a Luenberger adaptive observer, and the control algorithm uses switching laws. The presented simulations compare the controller's performance when the state variables are estimated by an observer with the case in which the variables are available for measurement. Even with numerous performance advantages, adaptive backstepping controllers still have very complex algorithms, especially when the system state variables are not measured, since the use of filters on the plant input and output is not trivial. As an attempt to make the controller design more intuitive, an adaptive observer can be used as an alternative to the commonly used K-filters. Furthermore, since the state variables are considered known, the controller's design becomes less dependent on the unknown plant parameters. Also, switching laws can be used in the controller instead of the traditional integral adaptive laws, because they improve the system's transient performance and increase robustness against external disturbances at the plant input.
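To make the last point concrete, a generic form of the two adaptation schemes is sketched below, in the spirit of variable-structure adaptive control (e.g., VS-MRAC); this is a standard textbook form, not necessarily the exact law designed in this work. Here $e$ is the output tracking error, $\phi_i$ the $i$-th regressor signal, $\gamma_i > 0$ an adaptation gain and $\theta_i^{*}$ the ideal (matching) value of the $i$-th parameter:

$$
\dot{\theta}_i = -\gamma_i\, e\, \phi_i \quad \text{(integral adaptive law)}
\qquad \longrightarrow \qquad
\theta_i = -\bar{\theta}_i\, \operatorname{sgn}(e\, \phi_i),
\quad \bar{\theta}_i > \lvert \theta_i^{*} \rvert \quad \text{(switching law)}.
$$

The parameter estimate is thus replaced by a relay whose amplitude bound dominates the unknown ideal parameter, which tends to yield faster transients and stronger disturbance rejection, at the cost of control chattering.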