954 results for Computational tools
Abstract:
This research is based on the premises that teams can be designed to optimize their performance and that appropriate team coordination is a significant factor in team outcome performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, coordination mechanisms, and the job's structural characteristics. The TCM can be used to determine the team design characteristics that most likely lead the team to achieve optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using Java and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. The model validation included the comparison of the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results from the ANOVA have been used to recommend the combination of levels of the experimental factors that optimizes the completion time for a team that runs sailboat races. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposed three new types of dependencies between tasks required to model a job as a stochastic structure: the conditional sequential, single-conditional sequential, and merge dependencies.
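The dependency types introduced here can be illustrated with a short sketch. The following Python fragment is not the TCM (which is a Java/Cybele Pro application); it only mimics a stochastic job structure in which a conditional sequential dependency makes an extra task appear at run time and a merge dependency forces a task to wait for all of its parents. Task names, durations, and the branching probability are invented for illustration.

```python
import random

# Illustrative sketch (not the TCM itself): a stochastic job structure in which
# the set of tasks changes while the job is being executed. Task names,
# durations, and the 0.4 branching probability are hypothetical.

def run_job(seed=None):
    rng = random.Random(seed)
    completed, clock = [], 0.0

    def do(task, duration):
        nonlocal clock
        clock += duration
        completed.append(task)

    do("prepare_boat", 5.0)

    # Conditional sequential dependency: "repair_sail" is required only if the
    # stochastic outcome of "inspect_sail" reveals damage.
    do("inspect_sail", 2.0)
    if rng.random() < 0.4:          # damage found -> an extra task appears
        do("repair_sail", 6.0)

    # Merge dependency: "start_race" can begin only after every parent task
    # generated so far has completed.
    parents = {"prepare_boat", "inspect_sail"}
    assert parents.issubset(completed)
    do("start_race", 1.0)

    return completed, clock

if __name__ == "__main__":
    tasks, completion_time = run_job(seed=1)
    print(tasks, f"completion time = {completion_time:.1f}")
```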
Abstract:
Research macroeconomists have witnessed remarkable methodological developments in mathematical, statistical, and computational tools during the last two decades. The three essays in this dissertation take advantage of these advances to analyze important macroeconomic issues. The first essay, "Habit Formation, Adjustment Costs, and International Business Cycle Puzzles," analyzes the extent to which incorporating habit formation and adjustment costs in investment in a one-good, two-country general equilibrium model helps overcome some of the international business cycle puzzles. Unlike standard results in the literature, the model generates persistent, cyclical adjustment paths in response to shocks. It also yields positive cross-country correlations in consumption, employment, investment, and output. Cross-country correlations in output are higher than those in consumption, which is qualitatively consistent with the stylized facts. These results are particularly striking given the negative correlations in investment, employment, and output typically predicted in the literature. The second essay, "Comparison Utility, Endogenous Time Preference, and Economic Growth," uses World War II as a natural experiment to analyze the degree to which a model where consumers' preferences exhibit comparison-based utility and endogenous discounting improves upon existing models in mimicking the transitional dynamics of an economy after a shock that destroys part of its capital stock. The model outperforms existing ones in replicating the behavior of the saving rate (both on impact and along the transient paths) after this historical event. This result brings additional support to the endogenous rate of time preference being a crucial element in growth models. The last essay, "Monetary Policy under Fear of Floating: Modeling the Dominican Economy," presents a small-scale macroeconomic model for a country (the Dominican Republic) characterized by a strong presence of fear of floating (reluctance to allow a flexible exchange rate regime) in the conduct of monetary policy. The dynamic responses of this economy to external shocks of interest for monetary policy purposes are analyzed under two alternative interest rate policy rules: the standard Taylor rule and a rule that also responds explicitly to deviations of the exchange rate from its long-term trend.
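The two policy rules compared in the third essay can be sketched as follows. The coefficient values below are generic textbook numbers, not the ones estimated in the dissertation, and the exchange-rate term is written simply as a linear response to the gap between the exchange rate and its long-term trend.

```python
def taylor_rule(r_neutral, inflation, inflation_target, output_gap,
                phi_pi=1.5, phi_y=0.5):
    """Standard Taylor rule: react to the inflation gap and the output gap.
    The coefficients 1.5 and 0.5 are textbook values, not estimates from the essay."""
    return r_neutral + phi_pi * (inflation - inflation_target) + phi_y * output_gap


def fear_of_floating_rule(r_neutral, inflation, inflation_target, output_gap,
                          exchange_rate_gap, phi_pi=1.5, phi_y=0.5, phi_e=0.3):
    """Augmented rule: also react to the deviation of the exchange rate from its
    long-term trend. phi_e is purely illustrative."""
    return (taylor_rule(r_neutral, inflation, inflation_target, output_gap,
                        phi_pi, phi_y)
            + phi_e * exchange_rate_gap)


# Example: a 1% depreciation relative to trend raises the policy rate by
# phi_e percentage points under the fear-of-floating rule.
print(taylor_rule(4.0, 5.0, 4.0, 0.5))
print(fear_of_floating_rule(4.0, 5.0, 4.0, 0.5, exchange_rate_gap=1.0))
```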
Abstract:
Recent technological developments have made it possible to design various microdevices in which fluid flow and heat transfer are involved. For the proper design of such systems, the governing physics needs to be investigated. Because of the difficulty of studying complex geometries at micro scales using experimental techniques, computational tools are developed to analyze and simulate flow and heat transfer in microgeometries. However, conventional numerical methods based on the Navier-Stokes equations fail to predict some aspects of microflows, such as the nonlinear pressure distribution, increased mass flow rate, slip flow, and temperature jump at the solid boundaries. This necessitates the development of new computational methods, based on kinetic theory, that are both accurate and computationally efficient. In this study, the lattice Boltzmann method (LBM) was used to investigate flow and heat transfer in micro-sized geometries. The LBM is based on the Boltzmann equation, which is valid over the whole range of rarefaction regimes observed in microflows. Results were obtained for isothermal channel flows at Knudsen numbers higher than 0.01 at different pressure ratios. LBM solutions for micro-Couette and micro-Poiseuille flow were found to be in good agreement with the analytical solutions valid in the slip flow regime (0.01 < Kn < 0.1) and with direct simulation Monte Carlo solutions valid in the transition regime (0.1 < Kn < 10) for the pressure distribution and velocity field. The isothermal LBM was further extended to simulate flows including heat transfer. The method was first validated for continuum channel flows with and without constrictions by comparing the thermal LBM results against accurate solutions obtained from analytical equations and the finite element method. Finally, the capability of the thermal LBM was improved by adding the effect of rarefaction, and the method was used to analyze the behavior of gas flow in microchannels. The major finding of this research is that the newly developed particle-based method described here can be used as an alternative numerical tool to study non-continuum effects observed in micro-electro-mechanical systems (MEMS).
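A minimal sketch of the core LBM update (D2Q9 lattice, BGK collision) is shown below. It uses periodic boundaries and a fixed relaxation time, so it reproduces only the collide-and-stream cycle; the slip-flow boundary conditions and the Knudsen-number-dependent relaxation used in this study are not included.

```python
import numpy as np

# Minimal D2Q9 lattice Boltzmann (BGK) update on a periodic grid: only the
# collide-and-stream cycle of the method is sketched here.

nx, ny, tau = 64, 32, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)              # start from a decaying shear wave

for step in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau          # BGK collision
    for i in range(9):                                   # streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("shear-wave amplitude after 100 steps:", np.abs(ux).max())
```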
Abstract:
This paper presents a study of the integration of filters and microstrip antennas, yielding devices known as filtennas, for applications in wireless communication systems. The design of these structures starts from the observation of filtennas based on the integration of horn antennas and frequency selective surfaces (FSS) used in the X band. The choice of microstrip line structures for the development of a new filtenna configuration is justified by the wide application of these transmission lines in recent decades, which consistently results in planar circuit structures that are lightweight, compact, low cost, easy to construct, and particularly easy to integrate with other microwave circuits. The antenna structure considered for the composition of the filtennas consists of a planar microstrip monopole with microstrip filters integrated into the antenna feed line. In particular, elliptical microstrip monopoles (operating in the ultra-wideband, UWB) and microstrip filters (in structures with sections associated in series and/or coupled) are considered. The microstrip monopole has a suitable bandwidth and an omnidirectional radiation pattern, such that its integration with microstrip filters reduces the bandwidth while only slightly changing the radiation pattern. The monopoles, filters, and filtennas were analyzed with the method of moments and the finite element method, using the commercial software Ansoft Designer and Ansoft HFSS, respectively. Specifically, the main characteristics of the filtennas, such as radiation pattern, gain, and bandwidth, were analyzed. Several filtenna structures were designed, built, and measured to validate the simulated results. Computer-aided design (CAD) tools were also used in the process of building prototypes of the planar monopoles, filters, and filtennas. The prototypes were built on glass-fiber (FR4) substrates. Measurements were performed at the Telecommunications Laboratory of UFRN. Comparisons between simulated and measured results showed good agreement in the cases considered.
Abstract:
The integration between architectural design and structural systems constitutes, in academic education, one of the main challenges for architectural design teaching. Recent studies point to the relevance of computational tools in academic settings as an important strategy for such integration. Although in recent years teaching experiences using BIM (Building Information Modeling) have been incorporated by architecture schools, there remains a need for further didactic and pedagogical practices that promote the integrated teaching of architectural design and structures. This paper analyzes experiences developed at UFRN and UFPB, seeking to identify the tools, processes, and products used, and pointing out limitations and potentials in the subjects taught at these institutions. The research begins with a literature review on BIM teaching and on aspects related to the integration of architectural design and structure. Data collection techniques in the design studio included direct observation, questionnaires, and interviews with students and teachers, combined with a mixed qualitative and quantitative analysis. At UFRN, the Integrated Workshop, a compulsory subject in the curriculum, favors the integration of the disciplines studied here, as it brings teachers from different disciplines into the same project studio. Regarding the use of BIM, the students become initial users, BIM modelers, able to extract quantities automatically and speed up production, gaining quality in their products; however, learning the tool while designing in parallel causes some difficulties. At UFPB, the lack of required courses on BIM leads to a lack of knowledge of, and confidence in, the tool and its processes among most students. Thus, greater efforts by the schools to adopt BIM skills and training are needed. There is also a need to strengthen the BIM concept, in order to promote the BIM process and a consequently better use of the tools, avoiding the devaluation of the technology as merely a tool. The inclusion of specific subjects with more advanced BIM skills is considered, through partnerships with engineering programs and the promotion of transdisciplinary integration favoring the exchange between the different cultures of the academic environment.
Abstract:
Human development requires a broad balance between ecological, social, and economic factors in order to ensure its own sustainability. In this sense, the search for new sources of energy generation, with low deployment and operation costs, that cause the least possible impact on the environment has been the focus of attention of all segments of society. To that end, reducing the exploration of fossil fuels and encouraging the use of renewable energy resources for distributed generation have proved to be interesting alternatives for expanding the energy matrix of various countries in the world. In this context, wind energy has acquired an increasingly significant role, presenting increasing rates of power grid penetration and highlighting technological innovations such as the use of permanent magnet synchronous generators (PMSG). In Brazil, this fact has also been noted and, as a result, the impact of the inclusion of this source in the distribution and sub-transmission power grids has been a major concern of utilities and agents connected to the Brazilian electrical sector. Thus, the development of appropriate computational tools is relevant, allowing detailed predictive studies of the dynamic behavior of wind farms, either operating with an isolated load or connected to the main grid, and taking into account the implementation of control strategies for active/reactive power generation and the maintenance of adequate voltage and frequency levels. This work fits in this context, since it comprises the mathematical and computational development of a complete wind energy conversion system (WECS) equipped with a PMSG, using the time-domain techniques of the Alternative Transients Program (ATP), which enjoys a recognized reputation among the scientific and academic communities as well as among electricity professionals in Brazil and elsewhere. The modeling procedures performed allowed the elaboration of blocks representing each of the elements of a real WECS, comprising the primary source (the wind), the wind turbine, the PMSG, the frequency converter, the step-up transformer, the load composition, and the power grid equivalent. Special attention is also given to the implementation of wind turbine control techniques, mainly the pitch control, responsible for keeping the generator at the maximum power operating point, and the vector control theory that adjusts the active/reactive power flow between the wind turbine and the power grid. Several simulations are performed to investigate the dynamic behavior of the wind farm when subjected to different operating conditions and/or to variations in wind intensity. The results have shown the effectiveness of both the mathematical and the computational modeling developed for the wind turbine and the associated controls.
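The aerodynamic part of such a WECS model can be sketched as follows. The snippet uses a commonly cited empirical Cp(lambda, beta) approximation to show how pitching the blades sheds power; it is not the ATP implementation developed in this work, and all parameter values are generic.

```python
import numpy as np

# Illustrative sketch of the aerodynamic block of a WECS model: turbine power as
# a function of wind speed, rotor speed, and pitch angle, using a widely used
# empirical Cp(lambda, beta) approximation. Parameter values are generic.

rho, R = 1.225, 40.0          # air density [kg/m^3], rotor radius [m]

def power_coefficient(lam, beta):
    """Empirical Cp curve often used for variable-speed wind turbines."""
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i) + 0.0068 * lam

def turbine_power(wind_speed, rotor_speed, beta):
    lam = rotor_speed * R / wind_speed            # tip-speed ratio
    cp = max(power_coefficient(lam, beta), 0.0)
    return 0.5 * rho * np.pi * R**2 * cp * wind_speed**3

# Pitching the blades sheds aerodynamic power once the rated value is exceeded.
for beta in (0.0, 5.0, 10.0):
    print(f"beta = {beta:4.1f} deg -> P = {turbine_power(12.0, 2.0, beta)/1e6:.2f} MW")
```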
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
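A toy version of this formulation, built on an off-the-shelf least-squares solver, is sketched below. It refines only the 3D points, with the camera poses and focal length held fixed for brevity, and places a Gaussian prior on each point's height; all numeric values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the core idea: bundle-adjustment-style refinement of 3D points with
# a Gaussian prior on the points' height, solved with a generic least-squares
# solver. Camera poses and focal length are held fixed here for brevity; the
# full formulation also estimates the camera parameters.

rng = np.random.default_rng(0)
f = 800.0                                   # focal length in pixels (assumed known)
cams = np.array([[0.0, 0.0, 0.0],           # two camera centres, axes aligned,
                 [1.0, 0.0, 0.0]])          # looking down +Z (a simplification)

pts_true = np.column_stack([rng.uniform(-2, 2, 10),
                            np.full(10, 1.7),            # "head height" in metres
                            rng.uniform(6, 15, 10)])

def project(pts, cam):
    rel = pts - cam
    return f * rel[:, :2] / rel[:, 2:3]

obs = [project(pts_true, c) + rng.normal(0, 1.0, (10, 2)) for c in cams]

def residuals(x, prior_mean=1.7, prior_sigma=0.1, pixel_sigma=1.0):
    pts = x.reshape(-1, 3)
    reproj = [((project(pts, c) - o) / pixel_sigma).ravel()
              for c, o in zip(cams, obs)]                 # data terms
    prior = (pts[:, 1] - prior_mean) / prior_sigma        # height prior term
    return np.concatenate(reproj + [prior])

x0 = (pts_true + rng.normal(0, 0.5, pts_true.shape)).ravel()
sol = least_squares(residuals, x0)
print("mean 3D error:", np.linalg.norm(sol.x.reshape(-1, 3) - pts_true, axis=1).mean())
```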
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
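The idea of warping features instead of images can be sketched as follows. Each channel of a HOG-like feature tensor is resampled with the same homography; the homography and the random feature map below are placeholders, and the orientation re-binning that real HOG channels require is not shown.

```python
import cv2
import numpy as np

# Sketch of applying an inverse-perspective-style warp to a feature map rather
# than to the image: each channel of a HOG-like tensor is resampled with the
# same homography. The homography here is a placeholder, not a pose-derived
# mapping, and orientation channels are not re-binned.

def warp_feature_map(features, H, out_shape):
    """features: (rows, cols, channels) float array; H: 3x3 homography."""
    rows, cols = out_shape
    warped = np.empty((rows, cols, features.shape[2]), dtype=features.dtype)
    for ch in range(features.shape[2]):
        warped[:, :, ch] = cv2.warpPerspective(features[:, :, ch], H, (cols, rows))
    return warped

if __name__ == "__main__":
    feats = np.random.rand(60, 80, 9).astype(np.float32)   # dummy 9-bin cell grid
    H = np.array([[1.0, 0.1, 0.0],                          # placeholder: mild tilt
                  [0.0, 1.0, 0.0],
                  [0.0, 0.002, 1.0]], dtype=np.float64)
    print(warp_feature_map(feats, H, (60, 80)).shape)
```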
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancer. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, some characteristics of these pathways, such as redundancy, feedback, and drug resistance, reduce the efficacy of single-target therapy and necessitate the employment of more than one drug to target multiple nodes in the system. However, choosing multiple targets with a high therapeutic index poses further challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, building these models requires experimentally measured initial concentrations of the components and reaction rates, which are difficult to obtain and, in very large networks, may not be available at the moment. Fortunately, there exist other modeling tools, though not as powerful as ordinary differential equations, that do not need the rates and initial conditions to model signaling pathways. Petri nets and graph theory are among these tools. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. In addition, an algorithm is developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of the ErbB1-Ras-MAPK pathway and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology uses only the structural information of the signaling pathways and does not need initial conditions and dynamical rates, it can be applied to larger networks.
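The prioritization step can be sketched with standard graph tooling. The toy network below stands in for the ErbB1-Ras-MAPK model, and the candidate list is assumed to have come from the siphon analysis; only the centrality-based ranking is illustrated.

```python
import networkx as nx

# Sketch of the second stage of the methodology: ranking candidate targets with
# graph centrality measures. The toy network and candidate set are hypothetical,
# and the Petri net siphon analysis is assumed to have already produced the
# candidates.

signaling = nx.DiGraph([
    ("EGF", "EGFR"), ("EGFR", "Ras"), ("Ras", "Raf"),
    ("Raf", "MEK"), ("MEK", "ERK"), ("EGFR", "PI3K"), ("PI3K", "ERK"),
])
candidates = ["EGFR", "Ras", "MEK", "PI3K"]   # e.g., output of siphon analysis

scores = {
    "betweenness": nx.betweenness_centrality(signaling),
    "degree": nx.degree_centrality(signaling),
}

for name, score in scores.items():
    ranked = sorted(candidates, key=lambda n: score[n], reverse=True)
    print(name, "->", ranked)
```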
Abstract:
LEÃO, Adriano de Castro; DÓRIA NETO, Adrião Duarte; SOUSA, Maria Bernardete Cordeiro de. New developmental stages for common marmosets (Callithrix jacchus) using mass and age variables obtained by K-means algorithm and self-organizing maps (SOM). Computers in Biology and Medicine, v. 39, p. 853-859, 2009
Abstract:
The constant evolution of technology has now made available computational tools that were mere expectations 10 years ago. The increase in computational power applied to numerical models that simulate the atmosphere has broadened the study of atmospheric phenomena through the use of high-performance computing tools. This work proposed the development of algorithms based on SIMT architectures and the application of parallelism techniques, using the OpenACC tool, to process numerical forecast data from the Weather Research and Forecasting (WRF) model. The proposal has a strong interdisciplinary character, seeking interaction between the areas of atmospheric modeling and scientific computing. The influence of the cloud microphysics computation on the model's execution-time degradation was tested. Because the input data for execution on the GPU were not large enough, the time required to transfer data from the CPU to the GPU was longer than executing the computation on the CPU. Another determining factor was the addition of CUDA code within an MPI context, causing resource contention among the processors and again degrading the execution time. Using directives to apply high-performance computing in a CUDA structure seems very promising, but it still needs to be used with great caution in order to produce good results. A hybrid MPI + CUDA implementation was tested, but the results were not conclusive.
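The transfer-versus-compute trade-off described above can be illustrated with a small timing sketch. It uses CuPy as a stand-in for the OpenACC/CUDA code (the original work instrumented WRF's cloud microphysics, not a toy kernel), requires a CUDA-capable GPU with the cupy package installed, and its array sizes and timings are purely illustrative.

```python
import time
import numpy as np
import cupy as cp

# Toy illustration of why small inputs can run slower on the GPU: the GPU
# timing below deliberately includes the host-to-device and device-to-host
# copies, which dominate when the array is small.

def cpu_kernel(a):
    return np.sqrt(a) * np.sin(a)

def gpu_kernel_with_transfer(a):
    d = cp.asarray(a)                      # host -> device copy
    out = cp.sqrt(d) * cp.sin(d)
    cp.cuda.Device().synchronize()
    return cp.asnumpy(out)                 # device -> host copy

for n in (10_000, 10_000_000):
    a = np.random.rand(n).astype(np.float32)
    t0 = time.perf_counter(); cpu_kernel(a); t_cpu = time.perf_counter() - t0
    t0 = time.perf_counter(); gpu_kernel_with_transfer(a); t_gpu = time.perf_counter() - t0
    print(f"n={n:>11,d}  cpu={t_cpu*1e3:7.2f} ms  gpu(incl. transfers)={t_gpu*1e3:7.2f} ms")
```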
Abstract:
Phosphorylation is among the most crucial and well-studied post-translational modifications. It is involved in multiple cellular processes, which makes phosphorylation prediction vital for understanding protein functions. However, wet-lab techniques are labour and time intensive, so computational tools are required for efficiency. This project aims to provide a novel way to predict phosphorylation sites from protein sequences by adding flexibility and the Sezerman Grouping amino acid similarity measure to previous methods, since new protein sequences are discovered at a greater rate than protein structures are determined. The predictor, NOPAY, relies on Support Vector Machines (SVMs) for classification. The features include amino acid encoding, amino acid grouping, predicted secondary structure, predicted protein disorder, predicted protein flexibility, solvent accessibility, hydrophobicity, and volume. As a result, we have managed to improve phosphorylation prediction accuracy by 3% for Homo sapiens and by 6.1% for Mus musculus. Sensitivity at 99% specificity was also increased by 6% for Homo sapiens and by 5% for Mus musculus on independent test sets. Given enough data, future versions of the software may also be able to make predictions for other organisms.
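The classification setup can be sketched as follows: an SVM trained on encoded sequence windows centred on candidate residues. Only a one-hot encoding of the window is shown; NOPAY's additional features (grouping, predicted structure, disorder, flexibility, accessibility, hydrophobicity, volume) are omitted, and the data are random placeholders rather than real phosphorylation sites.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch of an SVM phosphorylation-site classifier on one-hot encoded sequence
# windows. The windows and labels below are random placeholders.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_window(window):
    """Encode a sequence window (e.g. 15 residues) as a flat one-hot vector."""
    vec = np.zeros(len(window) * len(AMINO_ACIDS))
    for pos, aa in enumerate(window):
        if aa in AA_INDEX:
            vec[pos * len(AMINO_ACIDS) + AA_INDEX[aa]] = 1.0
    return vec

rng = np.random.default_rng(0)
windows = ["".join(rng.choice(list(AMINO_ACIDS), 15)) for _ in range(200)]
X = np.array([one_hot_window(w) for w in windows])
y = rng.integers(0, 2, size=200)            # placeholder phospho / non-phospho labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, probability=True).fit(X_tr, y_tr)
print("placeholder test accuracy:", clf.score(X_te, y_te))
```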
Abstract:
This thesis aims to develop new numerical and computational tools to study electrochemical transport and diffuse charge dynamics at small scales. Previous efforts at modeling electrokinetic phenomena at scales where the noncontinuum effects become significant have included continuum models based on the Poisson-Nernst-Planck equations and atomic simulations using molecular dynamics algorithms. Neither of them is easy to use or conducive to electrokinetic transport modeling in strong confinement or over long time scales. This work introduces a new approach based on a Langevin equation for diffuse charge dynamics in nanofluidic devices, which incorporates features from both continuum and atomistic methods. The model is then extended to include steric effects resulting from finite ion size, and applied to the phenomenon of double layer charging in a symmetric binary electrolyte between parallel-plate blocking electrodes, between which a voltage is applied. Finally, the results of this approach are compared to those of the continuum model based on the Poisson-Nernst-Planck equations.
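A stripped-down sketch of the Langevin approach is given below: overdamped ion dynamics between blocking electrodes, integrated with an Euler-Maruyama step under a uniform applied field. The self-consistent electrostatic coupling between ions and the steric corrections developed in the thesis are omitted, and all quantities are in arbitrary nondimensional units.

```python
import numpy as np

# Sketch of a Langevin description of ion motion between parallel blocking
# electrodes: overdamped dynamics under a uniform applied field, integrated with
# Euler-Maruyama. Ion-ion electrostatics and steric effects are omitted; all
# parameters are nondimensional and illustrative.

rng = np.random.default_rng(1)
n_ions, L, D, E, dt, steps = 2000, 1.0, 1.0, 5.0, 1e-5, 20_000
z = rng.choice([-1.0, 1.0], size=n_ions)          # valences of a binary electrolyte
x = rng.uniform(0.0, L, size=n_ions)              # positions between the plates

for _ in range(steps):
    drift = D * z * E                             # mobility * force (Einstein relation, kT = 1)
    x += drift * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=n_ions)
    x = np.clip(x, 0.0, L)                        # blocking electrodes: no flux through the walls

# Net charge accumulated in thin layers next to each electrode.
layer = 0.05 * L
print("charge near x=0:", z[x < layer].sum(), " near x=L:", z[x > L - layer].sum())
```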
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
Abstract:
We are currently witnessing the automation of procedures in organizations, due to the introduction of new technologies, and more specifically of computational tools, into the organizational environment. This automation simplifies the decision-making process, the handling of data, and the management of information overload. In its day-to-day activity, the Guarda Nacional Republicana faces a high diversity and complexity of incidents, which motivates the use of an information system that constitutes the first line of incident management, allowing the commander to manage his resources judiciously in the decision-making process. This research work aims to describe how the Situation Room Management System (Sistema de Gestão das Salas de Situação) can influence decision making. To this end, a case study was carried out, bringing together a group of seven Situation Room Chiefs in a panel of experts to apply the Delphi method, in order to infer the strengths of the system and the vulnerabilities experienced by its operators. Questionnaire surveys were also applied to the system's operators in order to understand their perception of the usefulness of this tool in resource management. The analysis of the results showed that the Situation Room Management System is an important tool for incident management, providing the information needed for the commander's decision making, although it has shortcomings that need to be mitigated so that its potential can be fully exploited. This research concludes that the Situation Room Management System facilitates decision making by providing information about active incidents in the Unit's area of operation, as well as about the available patrols. This system should be complemented with the use of the SIRESP resource management information system, so that the commander can perceive the location of the available patrols. In this way, the commander can make better-grounded decisions, allowing the rapid mobilization of police resources, aiming at operational excellence and increased effectiveness of police action.