928 results for Boolean Networks Complexity Measures Automatic Design Robot Dynamics
Abstract:
This paper generates and organizes stylized facts related to the dynamics of self-employment activities in Brazil. The final purpose is to help the design of policies to assist micro-entrepreneurial units. The first part of the paper uses as its main analytical tool transition data constructed from household surveys. The longitudinal information covers three transition horizons: 1-month, 12-month and 5-year periods. Quantitative flow analysis assesses the main origins, destinations and various types of risks assumed by micro-entrepreneurial activities. Complementarily, logistic regressions provide evidence on the main characteristics and resources of micro-entrepreneurial units. In particular, we use movements from self-employment to employer activities as measures of entrepreneurial success. We also use these transitions as measures of employment-creation intensity within the self-employed segment. The second part of the paper explores various data sources. First, we attempt to analyze the life-cycle trajectories and determinants of self-employment. We use cohort data constructed from PME and qualitative data on financial and work-history factors related to the opening of small businesses from the informal firms survey implemented during 1994. Second, we apply a standard Mincerian wage-equation approach to self-employment profits. This exercise attempts to capture the correlation patterns between micro-entrepreneurial performance and a variety of firm-level variables present in the 1994 Informal Survey.
Finally, we use a survey of the poor entrepreneurs of the Rocinha favela as a laboratory to study poor entrepreneurs' resources and behavior. In sum, the main questions pursued in the paper are: i) who are the Brazilian self-employed?; ii) in particular, what is the relative importance among the self-employed of subsistence activities versus activities with growth and capital-accumulation potential?; iii) what are the main static and dynamic determinants of micro-entrepreneurial success?; iv) what is the degree of risk associated with micro-entrepreneurial activities in Brazil?; v) what is the life-cycle profile of self-employment?; vi) what are the main constraints on poor entrepreneurs' activities?
Abstract:
This document is a doctoral thesis defended at the Brazilian School of Public and Business Administration of the Getulio Vargas Foundation (EBAPE/FGV), developed through the elaboration of three articles. The research that resulted in the articles falls within the scope of the project entitled “Windows of opportunities and knowledge networks: implications for catch-up in developing countries”, funded by the Support Programme for Research and Academic Production of Faculty (ProPesquisa) of the Brazilian School of Public and Business Administration (EBAPE) of the Getulio Vargas Foundation.
Abstract:
In the last decade, mobile wireless communications have witnessed explosive growth in user penetration rates and widespread deployment around the globe. This tendency is expected to continue with the convergence of fixed wired Internet networks with mobile ones and with the evolution to the full-IP architecture paradigm. Mobile wireless communications will therefore be of paramount importance to the development of the information society of the near future. In particular, a research topic of special relevance in telecommunications nowadays is the design and implementation of 4th-generation (4G) mobile communication systems. 4G networks will be characterized by the support of multiple radio access technologies in a core network fully compliant with the Internet Protocol (all-IP paradigm). Such networks will sustain the stringent quality of service (QoS) requirements and the high data rates expected from the multimedia applications of the near future. The approach followed in the design and implementation of current-generation (2G and 3G) mobile wireless networks has been the stratification of the architecture into a communication protocol model composed of a set of layers, each encompassing some set of functionalities. In such a layered protocol model, communication is only allowed between adjacent layers and through specific service interface points. This modular concept eases the implementation of new functionalities, as the behaviour of each layer in the protocol stack is not affected by the others. However, the fact that lower layers in the protocol stack do not use information available from upper layers, and vice versa, degrades the achievable performance. This is particularly relevant if multiple-antenna systems, in a MIMO (Multiple Input Multiple Output) configuration, are implemented.
MIMO schemes introduce another degree of freedom for radio resource allocation: the space domain. Contrary to the time and frequency domains, radio resources mapped into the spatial domain cannot be assumed to be completely orthogonal, due to the interference resulting from users transmitting in the same frequency sub-channel and/or time slots but in different spatial beams. Therefore, the availability of information regarding the state of radio resources, from lower to upper layers, is of fundamental importance in delivering the levels of QoS expected by those multimedia applications. In order to match application requirements to the constraints of the mobile radio channel, in the last few years researchers have proposed a new paradigm for the layered communication architecture: the cross-layer design framework. In general terms, the cross-layer design paradigm refers to a protocol design in which the dependence between protocol layers is actively exploited, breaking the strict rules that restrict communication to adjacent layers in the original reference model and allowing direct interaction among different layers of the stack. Efficient management of the set of available radio resources demands efficient, low-complexity packet schedulers that prioritize users' transmissions according to inputs provided from lower as well as upper layers of the protocol stack, fully compliant with the cross-layer design paradigm. Specifically, efficiently designed packet schedulers for 4G networks should maximize the available capacity while taking into account the limitations imposed by the mobile radio channel and complying with the set of QoS requirements from the application layer. The IEEE 802.16e standard, also known as Mobile WiMAX, seems to comply with the specifications of 4G mobile networks.
Its scalable architecture, low-cost implementation and high data throughput enable efficient data multiplexing and low data latency, attributes essential to broadband data services. Also, the connection-oriented approach of its medium access layer is fully compliant with the quality-of-service demands of such applications. Therefore, Mobile WiMAX seems to be a promising candidate for 4G mobile wireless networks. This thesis proposes the investigation, design and implementation of packet scheduling algorithms for the efficient management of the set of available radio resources in the time, frequency and spatial domains of Mobile WiMAX networks. The proposed algorithms combine input metrics from the physical layer with QoS requirements from upper layers, according to the cross-layer design paradigm. The proposed schedulers are evaluated by means of system-level simulations, conducted in a system-level simulation platform implementing the physical and medium access control layers of the IEEE 802.16e standard.
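To make the cross-layer idea concrete, the sketch below shows one common way a scheduler can combine a physical-layer metric (instantaneous achievable rate) with an upper-layer QoS metric (head-of-line packet delay). The class names, fields, and the multiplicative weighting rule are illustrative assumptions, not the thesis's actual algorithm.

```python
# Hypothetical cross-layer packet-scheduler sketch: per-user priority mixes a
# proportional-fair term (PHY information) with a delay-urgency term (QoS
# information from upper layers). All names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    inst_rate: float      # achievable rate on this sub-channel (bit/s), from PHY
    avg_rate: float       # exponentially averaged served rate (bit/s)
    hol_delay: float      # head-of-line packet delay (s), from upper layers
    delay_budget: float   # QoS delay budget of the service class (s)

def priority(u: User) -> float:
    # Proportional-fair term scaled by urgency: users approaching their
    # delay budget are boosted even if their channel is not the best.
    pf = u.inst_rate / max(u.avg_rate, 1e-9)
    urgency = 1.0 + u.hol_delay / u.delay_budget
    return pf * urgency

def schedule(users):
    # Grant the next slot to the user with the highest cross-layer priority.
    return max(users, key=priority)
```

In a real system the same rule would be evaluated per sub-channel and per spatial beam, which is where the time, frequency and space domains mentioned above come together.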
Abstract:
In this work, the implementation of the SOM (Self-Organizing Maps) algorithm, or Kohonen neural network, is presented in the form of hierarchical structures applied to image compression. The main objective of this approach is to develop a hierarchical SOM algorithm with a static structure, and another with a dynamic structure, to generate codebooks in the process of image Vector Quantization (VQ), reducing processing time and obtaining a good image compression rate with minimal degradation of quality relative to the original image. The two self-organizing neural networks developed here were denominated HSOM, for the static case, and DHSOM, for the dynamic case. In the first, the hierarchical structure is defined beforehand; in the latter, the structure grows automatically according to heuristic rules that explore the data of the training set without the use of external parameters. For the network, the heuristic rules determine the dynamics of growth, the criteria for pruning branches, the flexibility and the size of child maps. The LBG (Linde-Buzo-Gray) algorithm, or k-means, one of the most widely used algorithms for generating codebooks for Vector Quantization, was used together with the Kohonen algorithm in its basic, i.e. non-hierarchical, form as a reference to compare the performance of the algorithms proposed here. A performance analysis between the two hierarchical structures is also carried out. The efficiency of the proposed processing is verified by the reduction in computational complexity compared to the traditional algorithms, as well as through quantitative analysis of the reconstructed images in terms of peak signal-to-noise ratio (PSNR) and mean squared error (MSE).
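As a point of reference for the codebook-generation step, the sketch below implements the LBG/k-means baseline mentioned above: training vectors (e.g. flattened image blocks) are clustered, the cluster centroids form the codebook, and each block is transmitted as a codeword index. Block extraction and the fixed iteration count are simplifying assumptions.

```python
# Minimal LBG/k-means codebook sketch for vector quantization (the baseline
# the work compares against). Initialization and stopping rule are simplified.
import numpy as np

def lbg_codebook(vectors, k, iters=20, seed=0):
    """Train a k-entry codebook on training vectors of shape (n, d)."""
    vectors = np.asarray(vectors, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment under squared Euclidean distortion.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(k):
            cell = vectors[labels == j]
            if len(cell):
                codebook[j] = cell.mean(0)
    return codebook, labels

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    vectors = np.asarray(vectors, dtype=float)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)  # indices are stored/transmitted instead of raw blocks
```

The hierarchical SOM variants pursue the same goal, a small codebook with low distortion, but organize the search for the winning codeword in a tree, which is where the reported reduction in computational complexity comes from.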
Abstract:
This work presents the design, simulation, and analysis of two optical interconnection networks for a dataflow parallel computer architecture. To verify the optical interconnection network's performance on the dataflow architecture, we analyzed the load balancing among the processors during the execution of parallel programs. Load balancing is a very important parameter because it is directly associated with the degree of dataflow parallelism. This article demonstrates that optical interconnection networks designed with simple optical devices can efficiently meet the dataflow requirements of a high-performance communication system.
Abstract:
In this work, we analyze the bifurcational behavior of nonlinear longitudinal flight dynamics, taking the F-8 Crusader aircraft as an example. We deal with the analysis of high angles of attack in order to stabilize oscillations close to the critical angle of the aircraft under the established flight conditions. We propose a linear optimal control design applied to the considered nonlinear aircraft model below the angle of stall, taking into account regions of Hopf and saddle-node bifurcations.
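The linear optimal control step described above can be sketched as a standard LQR design: linearize the dynamics about a trim point below stall, then solve the continuous-time algebraic Riccati equation for the state-feedback gain. The A and B matrices below are a generic double-integrator placeholder, not the F-8 Crusader model from the paper, and the use of SciPy's Riccati solver is an implementation assumption.

```python
# Hedged LQR sketch: u = -K x minimizes the quadratic cost ∫(xᵀQx + uᵀRu)dt
# for the linearized model dx/dt = Ax + Bu. Placeholder matrices, not the F-8.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Return the optimal state-feedback gain K = R⁻¹ Bᵀ P."""
    P = solve_continuous_are(A, B, Q, R)  # stabilizing Riccati solution
    return np.linalg.solve(R, B.T @ P)

# Placeholder linearized model (double integrator), NOT the aircraft dynamics.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

For this placeholder the gain works out analytically to K = [1, √3], which is a useful sanity check on the solver; in the paper's setting A and B would come from the Jacobian of the longitudinal flight equations at the chosen trim condition.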
Abstract:
Feed-forward neural networks (FFNNs) were used to predict the skeletal type of molecules belonging to six classes of terpenoids. A database containing the (13)C NMR spectra of about 5000 compounds was used to train the FFNNs. An efficient representation of the spectra was designed, and the constitution of the best FFNN input vector format resulted from a heuristic approach derived from general considerations on terpenoid structures. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error.
The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
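The design procedure the abstract describes can be illustrated with a toy plug-in estimator for binary signals: for each window pattern observed in the training pairs, pick the output value that occurs most often in the ideal signal, which estimates the optimal window filter from data. The window width and function names are assumptions for illustration, not the paper's notation.

```python
# Illustrative statistical design of a window-based binary filter from
# (observed, ideal) training pairs: a plug-in estimate of the optimal filter.
from collections import Counter, defaultdict

def design_window_filter(observed, ideal, width=3):
    """Map each observed window pattern to the majority ideal value under it."""
    counts = defaultdict(Counter)
    r = width // 2
    for i in range(r, len(observed) - r):
        pattern = tuple(observed[i - r:i + r + 1])
        counts[pattern][ideal[i]] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

def apply_filter(filt, observed, width=3):
    """Slide the window over the signal; unseen patterns pass through."""
    r = width // 2
    out = list(observed)
    for i in range(r, len(observed) - r):
        out[i] = filt.get(tuple(observed[i - r:i + r + 1]), observed[i])
    return out
```

The error decomposition discussed above shows up directly here: a wider window lowers the constraint cost but inflates the design cost, since the number of patterns whose conditional statistics must be estimated grows exponentially with the window width.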
Abstract:
The FitzHugh-Nagumo (FN) mathematical model characterizes the action potential of the membrane. The dynamics of the FitzHugh-Nagumo model have been extensively studied, both with a view to their biological implications and as a test bed for numerical methods that can be applied to more complex models. This paper deals with the dynamics of the FN model. Here, the dynamics are analyzed qualitatively, through stability diagrams for the action potential of the membrane. Furthermore, we also analyze the problem quantitatively through the evaluation of Floquet multipliers. Finally, the nonlinear periodic problem is controlled, based on the Chebyshev polynomial expansion, the Picard iterative method and the Lyapunov-Floquet (L-F) transformation.
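For reference, the FitzHugh-Nagumo equations themselves are small enough to state and integrate in a few lines. The sketch below uses common textbook parameter values and a fixed-step RK4 scheme; these choices are assumptions and need not match the paper's.

```python
# FitzHugh-Nagumo model: a fast membrane-potential variable v and a slow
# recovery variable w, driven by an external current I_ext.
#   dv/dt = v - v³/3 - w + I_ext
#   dw/dt = eps (v + a - b w)
# Parameters a=0.7, b=0.8, eps=0.08 are common textbook values (assumption).
def fn_rhs(v, w, I_ext, a=0.7, b=0.8, eps=0.08):
    dv = v - v**3 / 3.0 - w + I_ext
    dw = eps * (v + a - b * w)
    return dv, dw

def integrate(v0=-1.0, w0=1.0, I_ext=0.5, dt=0.01, steps=5000):
    """Fixed-step RK4 integration; returns the (v, w) trajectory."""
    v, w = v0, w0
    traj = [(v, w)]
    for _ in range(steps):
        k1 = fn_rhs(v, w, I_ext)
        k2 = fn_rhs(v + dt/2*k1[0], w + dt/2*k1[1], I_ext)
        k3 = fn_rhs(v + dt/2*k2[0], w + dt/2*k2[1], I_ext)
        k4 = fn_rhs(v + dt*k3[0], w + dt*k3[1], I_ext)
        v += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        w += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        traj.append((v, w))
    return traj
```

With a moderate driving current the equilibrium loses stability and the trajectory settles onto a relaxation-oscillation limit cycle; the Floquet multipliers mentioned above are computed precisely for such periodic orbits.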
Abstract:
We have recently proposed an extension to Petri nets that makes it possible to deal directly with all aspects of embedded digital systems. This extension is meant to be used as the internal model of our co-design environment. After analyzing relevant related work and presenting a short introduction to our extension as background material, we describe the details of the timing model we use in our approach, which is mainly based on Merlin's time model. We conclude the paper by discussing an example of its usage. © 2004 IEEE.
Abstract:
Severely disabled children have little chance of environmental and social exploration and discovery, and this lack of interaction and independence may lead to the idea that they are unable to do anything by themselves. In an attempt to help children in this situation, educational robotics can offer an aid, since it can provide them a certain degree of independence in exploring the environment. The system developed in this work allows the child to transmit commands to a robot through myoelectric and movement sensors. The sensors are placed on the child's body to obtain information on body inclination and muscle contraction, allowing the child to command, through wireless communication, a mobile entertainment robot to carry out tasks such as playing with objects and drawing. In this paper, the details of the robot design and control architecture are presented and discussed. With this system, disabled children gain better cognitive development and social interaction, offsetting to a certain degree the negative effects of their disabilities. © 2012 IEEE.
Abstract:
Purpose - The purpose of this paper is twofold: to analyze the computational complexity of the cogeneration design problem, and to present an expert system to solve the proposed problem, comparing this approach with the traditional search methods available.
Design/methodology/approach - The complexity of the cogeneration problem is analyzed through a transformation from the well-known knapsack problem. Both problems are formulated as decision problems, and it is proven that the cogeneration problem is NP-complete. Thus, several search approaches, such as population heuristics and dynamic programming, could be used to solve the problem. Alternatively, a knowledge-based approach is proposed by presenting an expert system and its knowledge representation scheme.
Findings - The expert system was executed on two case studies. In the first, a cogeneration plant should meet power, steam, chilled-water and hot-water demands. The expert system presented two different solutions based on high-complexity thermodynamic cycles. In the second case study, the plant should meet only power and steam demands. The system presented three different solutions, one of which had never been considered before by our consultant expert.
Originality/value - The expert system approach is not a "blind" method, i.e. it generates solutions based on actual engineering knowledge rather than the search strategies of traditional methods. This means the system is able to explain its choices, making the design rationale available for each solution. This is the main advantage of the expert system approach over traditional search methods. On the other hand, the expert system quite likely does not provide an actual optimal solution; all it can provide is one or more acceptable solutions.
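Since the abstract grounds the NP-completeness result in the knapsack problem and names dynamic programming among the applicable search methods, the sketch below shows the standard 0/1 knapsack DP. It is pseudo-polynomial in the capacity, which is consistent with the problem's NP-completeness; the function names are illustrative, not from the paper.

```python
# Classic 0/1 knapsack dynamic program (the problem the cogeneration design
# problem is reduced from). Runs in O(n * capacity): pseudo-polynomial time.
def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the weight capacity."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Traverse capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

In the cogeneration analogy, items would correspond to candidate plant components and the capacity to a resource or budget constraint; the expert-system approach trades this exhaustive optimality guarantee for explainable, knowledge-based solutions.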