866 results for Structure-based model
Abstract:
To address actuator lag and time-varying dynamics in full-envelope maneuvering flight of rotorcraft flying robots, this paper proposes an incremental stable predictive control method, based on model-difference analysis, for handling uncertain dynamic models. The method first establishes an incremental stable predictive process model to cope with actuator output lag and with uncertainty in the steady-state model and the system operating point, thereby improving the robustness of the control system. An adaptive set-membership filter is then used to estimate online the deviation between the system's transient dynamics and the nominal model, compensating for the effect of the time-varying model on the tracking performance of the nominal controller during full-envelope flight. Finally, actual flight tests verify that the method effectively resolves actuator lag and time-varying dynamics in the heading and vertical channels during full-envelope flight, and that it is practical for full-envelope autonomous heading and vertical flight control of rotorcraft.
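As a rough illustration of the two ingredients described above, an incremental prediction model plus an online estimate of the deviation from the nominal model, here is a minimal sketch using a scalar first-order plant; a simple recursive deviation estimator stands in for the paper's adaptive set-membership filter, and every model parameter and gain below is a hypothetical placeholder.

# Hypothetical nominal first-order model: x[k+1] = a*x[k] + b*u[k].
a_nom, b_nom = 0.9, 0.5
# The "true" plant differs from the nominal model (standing in for time-varying dynamics).
a_true, b_true = 0.8, 0.4

gain_bias = 0.2        # learning rate of the simple recursive deviation estimator
x, u, bias = 0.0, 0.0, 0.0
x_ref = 1.0            # constant reference to track

for k in range(100):
    # Incremental control: choose the increment du so that the one-step nominal
    # prediction, corrected by the estimated deviation, lands on the reference:
    #   a_nom*x + b_nom*(u + du) + bias = x_ref
    du = (x_ref - a_nom * x - b_nom * u - bias) / b_nom
    u += du                                   # actuate incrementally

    x_next = a_true * x + b_true * u          # plant response
    pred = a_nom * x + b_nom * u + bias       # corrected nominal prediction
    bias += gain_bias * (x_next - pred)       # update the deviation estimate online
    x = x_next

print("final state %.3f (reference %.1f), estimated deviation %.3f" % (x, x_ref, bias))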
Abstract:
For a class of nonlinear systems, a neural-network model reference control scheme is proposed. When training the network that models the plant and the network that implements the controller, the training samples are generated from the state equations. Simulation experiments on an inverted pendulum system verify the effectiveness of the control scheme and of the sample-generation strategy; in these experiments, different initial states are used to verify the generalization ability of the trained neural networks.
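A minimal sketch of the sample-generation step described above: training pairs are produced by integrating assumed inverted-pendulum state equations from random initial states and inputs (the dynamics, parameter values and step size are placeholders, not those used in the paper); the resulting (state, input, next-state) triples would then be used to train the plant-model network.

import numpy as np

# Assumed inverted-pendulum dynamics (angle theta, angular rate omega):
#   theta_dot = omega
#   omega_dot = (g/l)*sin(theta) - c*omega + u/(m*l**2)
g, l, m, c, dt = 9.81, 0.5, 0.2, 0.1, 0.02

def step(state, u):
    theta, omega = state
    omega_dot = (g / l) * np.sin(theta) - c * omega + u / (m * l**2)
    return np.array([theta + dt * omega, omega + dt * omega_dot])  # forward Euler

rng = np.random.default_rng(0)
samples = []
for _ in range(2000):
    state = rng.uniform([-np.pi / 6, -1.0], [np.pi / 6, 1.0])  # random initial state
    u = rng.uniform(-2.0, 2.0)                                  # random control input
    samples.append((state, u, step(state, u)))                  # (x[k], u[k], x[k+1])

print(len(samples), "training samples generated from the state equations")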
Abstract:
The relationship between management and culture was explored from the perspective of social norm theory. Specifically, differences in the hierarchical structure of social norms among Chinese, Japanese and American respondents were studied systematically using interviews, case studies, questionnaire surveys and structural equation modeling. The results were: (1) The two basic types of social norms were the same for the Chinese, the Americans and the Japanese: external control norms and internal control norms. The basic dimensions of the two types of norms, composed of moral principles, value orientation, laws and rules, and social customs, were consistent across the three countries. Furthermore, the dimensions of social norms were ordered hierarchically through the functioning of the different cultures, forming a hierarchical structural system. (2) Although the three countries shared the same dimensions, the contents of these dimensions included both common norms that transcend any specific culture and particular norms that depend on it. (3) The basic structures of social norms in China and Japan were the same: internal control norms played the main role and external control norms were auxiliary. Within the internal norms of the Chinese, moral principles were the main force and value orientation was supplementary; within the external norms, laws and rules were the main force and social customs were supplementary. The relationship between the external and internal dimensions for the Japanese was the reverse of that for the Chinese. (4) The structure of American social norms differed from the Chinese: for the Americans, external control norms played the main role while internal control norms were auxiliary. Within the external control dimension, laws and rules were the major aspect and social customs secondary; within the internal control structure, value orientation guided behavior while moral principles played the secondary role. (5) Inventory and case studies identified social norms related to management performance, including work responsibility, organizational commitment, meeting decision-making, communication style, work duty and interpersonal conflict. Managers from China, Japan and America differed significantly in the attention they paid to these management norms. In short, cultural differences in social norms are the fundamental reason for management conflict.
Abstract:
This thesis presents a learning-based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach first builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of the training examples it receives. The second part of this thesis looks at how one can select high-quality examples for function approximation learning tasks. We propose an active learning formulation for function approximation, and show, for three specific approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation, and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems.
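The following toy sketch contrasts active example selection with random sampling on a simple function-approximation task. It is only a schematic analogue of the formulation in the thesis: the query-by-committee style disagreement criterion, the sine target function and the polynomial model class are assumptions introduced here for illustration.

import numpy as np

rng = np.random.default_rng(1)
target = lambda x: np.sin(3 * x)                 # "unknown" function to approximate
pool = np.linspace(-1, 1, 201)                   # candidate query locations

def fit(xs, ys, deg=3):
    return np.poly1d(np.polyfit(xs, ys, deg))

def active_learn(n_queries=20, committee=8):
    xs = list(rng.uniform(-1, 1, 6))             # small random seed set
    ys = [target(x) for x in xs]
    for _ in range(n_queries):
        # Committee of models fitted to bootstrap resamples of the data;
        # query the pool point where their predictions disagree the most.
        preds = []
        for _ in range(committee):
            idx = rng.integers(0, len(xs), len(xs))
            preds.append(fit(np.array(xs)[idx], np.array(ys)[idx])(pool))
        x_new = pool[np.argmax(np.var(preds, axis=0))]
        xs.append(x_new)
        ys.append(target(x_new))
    model = fit(np.array(xs), np.array(ys))
    return np.mean((model(pool) - target(pool)) ** 2)

def random_learn(n_queries=20):
    xs = rng.uniform(-1, 1, 6 + n_queries)       # same budget, random locations
    model = fit(xs, target(xs))
    return np.mean((model(pool) - target(pool)) ** 2)

print("active MSE %.4g  vs  random MSE %.4g" % (active_learn(), random_learn()))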
Abstract:
This paper describes ARLO, a representation language loosely modelled after Greiner and Lenat's RLL-1. ARLO is a structure-based representation language for describing structure-based representation languages, including itself. A given representation language is specified in ARLO by a collection of structures describing how its descriptions are interpreted, defaulted, and verified. This high-level description is compiled into Lisp code and ARLO structures whose interpretation fulfills the specified semantics of the representation. In addition, ARLO itself, as a representation language for expressing and compiling partial and complete language specifications, is described and interpreted in the same manner as the languages it describes and implements. This self-description can be extended or modified to expand or alter the expressive power of ARLO's initial configuration. Languages which, like ARLO, describe themselves provide powerful media for systems which perform automatic self-modification, optimization, debugging, or documentation. AI systems implemented in such a self-descriptive language can reflect on their own capabilities and limitations, applying general learning and problem-solving strategies to enlarge or alleviate them.
Abstract:
Conjugative plasmids play a vital role in bacterial adaptation through horizontal gene transfer. Explaining how plasmids persist in host populations, however, is difficult, given the high costs often associated with plasmid carriage. Compensatory evolution to ameliorate this cost can rescue plasmids from extinction. In a recently published study, we showed that compensatory evolution repeatedly targeted the same bacterial regulatory system, GacA/GacS, in populations of plasmid-carrying bacteria evolving across a range of selective environments. Mutations in these genes arose rapidly and completely eliminated the cost of plasmid carriage. Here we extend our analysis using an individual-based model to explore the dynamics of compensatory evolution in this system. We show that mutations which ameliorate the cost of plasmid carriage can prevent both the loss of plasmids from the population and the fixation of accessory traits on the bacterial chromosome. We discuss how the outcome of compensatory evolution depends on the strength and availability of such mutations and on the rate at which beneficial accessory traits integrate into the host chromosome.
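To make the verbal model concrete, here is a heavily simplified individual-based sketch of the dynamics discussed above (segregational plasmid loss, compensatory amelioration of the plasmid cost, and chromosomal capture of the accessory trait). The cell types, rates and fitness values are illustrative assumptions, not parameters from the study.

import numpy as np

rng = np.random.default_rng(2)

# Cell types: 0 = plasmid-free, 1 = plasmid (costly), 2 = plasmid + compensatory
# mutation (cost ameliorated), 3 = accessory trait captured on the chromosome.
fitness = np.array([1.00, 0.90, 1.00, 1.02])    # illustrative relative fitness
seg_loss, comp_rate, capture_rate = 0.01, 1e-3, 1e-4
N = 10_000                                      # constant population size

pop = np.ones(N, dtype=int)                     # start with costly plasmid carriers
for gen in range(500):
    # Selection: resample the population in proportion to fitness.
    w = fitness[pop]
    pop = pop[rng.choice(N, size=N, p=w / w.sum())]
    # Segregational loss of the plasmid (types 1 and 2 revert to plasmid-free).
    lose = np.isin(pop, (1, 2)) & (rng.random(N) < seg_loss)
    pop[lose] = 0
    # Compensatory mutation in costly carriers; chromosomal capture in carriers.
    pop[(pop == 1) & (rng.random(N) < comp_rate)] = 2
    pop[np.isin(pop, (1, 2)) & (rng.random(N) < capture_rate)] = 3

counts = np.bincount(pop, minlength=4)
print("free, costly, compensated, chromosomal:", counts)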
Abstract:
Wydział Chemii (Faculty of Chemistry)
Abstract:
Postgraduate project/dissertation submitted to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Pharmaceutical Sciences.
Abstract:
This paper explores reasons for the high degree of variability in the sizes of ASes that has recently been observed, and the processes by which this variable distribution develops. The AS size distribution is important for a number of reasons. First, when modeling network topologies, an AS size distribution assists in labeling routers with an associated AS. Second, AS size has been found to be positively correlated with the degree of the AS (its number of peering links), so understanding the distribution of AS sizes has implications for AS connectivity properties. Our model accounts for AS births, growth, and mergers. We analyze two models: one incorporates only the growth of hosts and ASes, and a second extends that model to include mergers of ASes. We show analytically that, given reasonable assumptions about the nature of mergers, the resulting size distribution exhibits a power-law tail with an exponent independent of the details of the merging process. We estimate parameters of the models from measurements obtained from Internet registries and from BGP tables. We then compare the models' solutions to empirical AS size distributions taken from the Mercator and Skitter datasets, and find that the simple growth-based model yields general agreement with the empirical data. Our analysis of the model in which mergers occur in a manner independent of the size of the merging ASes suggests that more detailed analysis of merger processes is needed.
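As a sanity check of the growth-plus-mergers intuition, the sketch below simulates AS births, size-proportional host growth and size-independent random mergers, then reports the largest AS sizes; a heavy upper tail is expected. The rates are placeholders, not the parameters estimated from registry or BGP data in the paper.

import numpy as np

rng = np.random.default_rng(3)
p_birth, p_merge = 0.01, 0.001       # placeholder rates per arriving host
sizes = [1]                          # one AS with one host

for _ in range(50_000):              # each step: one new host arrives
    if rng.random() < p_birth:
        sizes.append(1)              # the host starts a new AS
    else:
        # Growth: the host joins an existing AS with probability
        # proportional to that AS's current size.
        s = np.array(sizes, dtype=float)
        sizes[rng.choice(len(sizes), p=s / s.sum())] += 1
    if rng.random() < p_merge and len(sizes) > 1:
        i, j = rng.choice(len(sizes), size=2, replace=False)  # size-independent merger
        sizes[i] += sizes[j]
        sizes.pop(j)

print("largest AS sizes:", sorted(sizes, reverse=True)[:10])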
Abstract:
Current building regulations are generally prescriptive in nature. It is widely accepted in Europe that this form of building regulation is stifling technological innovation and leading to inadequate energy efficiency in the building stock. This has increased the motivation to move design practice towards a more ‘performance-based’ model in order to curb the inflated levels of energy use in the building stock. A performance-based model assesses the interaction of all building elements and the resulting impact on holistic building energy use. However, this is a difficult task because building energy use is affected by a myriad of heterogeneous agents. Accordingly, it is imperative that appropriate methods, tools and technologies are employed for energy prediction, measurement and evaluation throughout the project’s life cycle. This research also considers it imperative that these data be universally accessible to all stakeholders, and it explores the use of a centrally based product model for the exchange of building information. This research describes the development and implementation of a new building energy-use performance assessment methodology. Termed the Building Effectiveness Communications ratios (BECs) methodology, this performance-based framework is capable of translating complex definitions of sustainability for energy efficiency and depicting universally understandable views at all stages of the Building Life Cycle (BLC) for the project’s stakeholders. The enabling yardsticks of building energy-use performance, termed Ir and Pr, provide continuous design and operations feedback to aid the building’s decision makers. Utilised effectively, the methodology is capable of delivering quality assurance throughout the BLC by providing project teams with quantitative measurements of energy efficiency. Armed with these enabling tools for project stakeholder communication, it is envisaged that project teams will be better placed to build a knowledge base and generate more efficient additions to the building stock.
Abstract:
We propose and experimentally validate a first-principles based model for the nonlinear piezoelectric response of an electroelastic energy harvester. The analysis herein highlights the importance of modeling inherent piezoelectric nonlinearities that are not limited to higher order elastic effects but also include nonlinear coupling to a power harvesting circuit. Furthermore, a nonlinear damping mechanism is shown to accurately restrict the amplitude and bandwidth of the frequency response. The linear piezoelectric modeling framework widely accepted for theoretical investigations is demonstrated to be a weak presumption for near-resonant excitation amplitudes as low as 0.5 g in a prefabricated bimorph whose oscillation amplitudes remain geometrically linear for the full range of experimental tests performed (never exceeding 0.25% of the cantilever overhang length). Nonlinear coefficients are identified via a nonlinear least-squares optimization algorithm that utilizes an approximate analytic solution obtained by the method of harmonic balance. For lead zirconate titanate (PZT-5H), we obtained a fourth order elastic tensor component of c_1111^p = -3.6673 × 10^17 N/m^2 and a fourth order electroelastic tensor value of e_3111 = 1.7212 × 10^8 m/V. © 2010 American Institute of Physics.
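For readers unfamiliar with the identification step, the sketch below computes the steady-state amplitude of a generic Duffing-type oscillator by one-term harmonic balance, solving the cubic amplitude equation at each drive frequency. It only illustrates the method named in the abstract; in the paper this approximate solution is embedded in a nonlinear least-squares fit, and the oscillator parameters used here are invented rather than those of the PZT-5H bimorph.

import numpy as np

# Generic single-mode model: x'' + 2*zeta*wn*x' + wn^2*x + alpha*x^3 = F*cos(w*t).
wn, zeta, alpha, F = 2 * np.pi * 50.0, 0.02, 5e7, 1.0   # invented parameters

def hb_amplitude(w):
    """One-term harmonic balance: the amplitude a satisfies
       ((wn^2 - w^2 + 0.75*alpha*a^2)^2 + (2*zeta*wn*w)^2) * a^2 = F^2."""
    beta, delta, gamma = 0.75 * alpha, wn**2 - w**2, 2 * zeta * wn * w
    # Cubic in A = a^2:  beta^2*A^3 + 2*beta*delta*A^2 + (delta^2 + gamma^2)*A - F^2 = 0
    roots = np.roots([beta**2, 2 * beta * delta, delta**2 + gamma**2, -F**2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sqrt(real[real > 0])          # one or three physical response branches

for f in (45.0, 50.0, 55.0):
    a = hb_amplitude(2 * np.pi * f)
    print("%.0f Hz -> steady-state amplitude(s) in metres: %s" % (f, np.round(a, 6)))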
Abstract:
Proteins are essential components of cells and are crucial for catalyzing reactions, signaling, recognition, motility, recycling, and structural stability. This diversity of function suggests that nature is only scratching the surface of protein functional space. Protein function is determined by structure, which in turn is determined predominantly by amino acid sequence. Protein design aims to explore protein sequence and conformational space to design novel proteins with new or improved function. The vast number of possible protein sequences makes exploring the space a challenging problem.
Computational structure-based protein design (CSPD) allows for the rational design of proteins. Because of the large search space, CSPD methods must balance search accuracy and modeling simplifications. We have developed algorithms that allow for the accurate and efficient search of protein conformational space. Specifically, we focus on algorithms that maintain provability, account for protein flexibility, and use ensemble-based rankings. We present several novel algorithms for incorporating improved flexibility into CSPD with continuous rotamers. We applied these algorithms to two biomedically important design problems. We designed peptide inhibitors of the cystic fibrosis agonist CAL that were able to restore function of the vital cystic fibrosis protein CFTR. We also designed improved HIV antibodies and nanobodies to combat HIV infections.
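As a schematic of the underlying combinatorial problem (not the provable, flexibility-aware algorithms developed in this work), the snippet below enumerates rigid-rotamer assignments against precomputed singleton and pairwise energies and returns the global minimum-energy conformation; the energies and problem size are made up.

import itertools
import numpy as np

rng = np.random.default_rng(4)
n_pos, n_rot = 6, 4                       # toy problem: 6 design positions, 4 rotamers each

# Made-up singleton and pairwise rotamer energies (in a real design these come
# from an energy function evaluated on the protein structure).
E_single = rng.normal(size=(n_pos, n_rot))
E_pair = rng.normal(size=(n_pos, n_pos, n_rot, n_rot))

def total_energy(assignment):
    e = sum(E_single[i, r] for i, r in enumerate(assignment))
    e += sum(E_pair[i, j, assignment[i], assignment[j]]
             for i in range(n_pos) for j in range(i + 1, n_pos))
    return e

best = min(itertools.product(range(n_rot), repeat=n_pos), key=total_energy)
print("minimum-energy rotamer assignment:", best, "energy %.3f" % total_energy(best))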
Abstract:
All biological phenomena depend on molecular recognition, which is either intermolecular, as in ligand binding to a macromolecule, or intramolecular, as in protein folding. As a result, understanding the relationship between the structure of proteins and the energetics of their stability and of their binding to other (bio)molecules is of great interest in biochemistry and biotechnology. It is essential to the engineering of stable proteins and to the structure-based design of pharmaceutical ligands. The parameter generally used to characterize the stability of a system (for example, the folded and unfolded states of a protein) is the equilibrium constant (K) or the free energy (ΔG°), which is the sum of enthalpic (ΔH°) and entropic (ΔS°) terms. These parameters are temperature dependent through the heat capacity change (ΔCp). The thermodynamic parameters ΔH° and ΔCp can be derived from spectroscopic experiments, using the van't Hoff method, or measured directly using calorimetry. Along with isothermal titration calorimetry (ITC), differential scanning calorimetry (DSC) is a powerful method, less often described than ITC, for measuring directly the thermodynamic parameters which characterize biomolecules. In this article, we summarize the principal thermodynamic parameters, describe the DSC approach and review some systems to which it has been applied. DSC is widely used for the study of the stability and the folding of biomolecules, but it can also be applied to the understanding of biomolecular interactions and can thus be an interesting technique in the process of drug design.
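For reference, under the common assumption of a temperature-independent ΔCp, the DSC-measured quantities determine the full protein stability curve through the standard two-state relations (textbook thermodynamics, not results from this article), where T_m is the midpoint temperature at which ΔG°(T_m) = 0:

\Delta H^{\circ}(T) = \Delta H^{\circ}(T_m) + \Delta C_p\,(T - T_m)

\Delta S^{\circ}(T) = \frac{\Delta H^{\circ}(T_m)}{T_m} + \Delta C_p \ln\frac{T}{T_m}

\Delta G^{\circ}(T) = \Delta H^{\circ}(T_m)\left(1 - \frac{T}{T_m}\right) + \Delta C_p\left[(T - T_m) - T\ln\frac{T}{T_m}\right]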
Abstract:
There is concern in the Cross-Channel region of Nord-Pas-de-Calais (France) and Kent (Great Britain) regarding the extent of atmospheric pollution detected in the area from emitted gaseous (VOC, NOx, SO2) and particulate substances. In particular, the air quality of the Cross-Channel or "Trans-Manche" region is highly affected by the heavily industrialised area of Dunkerque, in addition to transportation sources linked to cross-channel traffic in Kent and Calais, posing threats to the environment and human health. In the framework of the cross-border EU Interreg IIIA activity, the joint Anglo-French project ATTMA has been commissioned to study Aerosol Transport in the Trans-Manche Atmosphere. Using ground monitoring data from UK and French networks, and with the assistance of satellite images, the project aims to determine dispersion patterns and identify the sources responsible for the pollutants. The findings of this study will increase awareness and have a bearing on future air quality policy in the region; public interest is evident from the presence of local authorities on both sides of the English Channel as collaborators. The research is based on pollution transport simulations using (a) Lagrangian Particle Dispersion (LPD) models and (b) an Eulerian receptor-based model. This paper is concerned with part (a), the LPD models. Lagrangian Particle Dispersion models are often used to numerically simulate the dispersion of a passive tracer in the planetary boundary layer by calculating the Lagrangian trajectories of thousands of notional particles. In this contribution, the project investigated the use of two widely used particle dispersion models: the Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model and the model FLEXPART. In both models, forward-tracking and inverse (or receptor-based) modes are possible. Certain distinct pollution episodes were selected from the monitoring database EXPER/PF and from UK monitoring stations, and their likely trajectories predicted using prevailing weather data. Global meteorological datasets were downloaded from the ECMWF MARS archive. Part of the difficulty in identifying pollution sources arises from the fact that much of the pollution originates outside the monitoring area: for example, heightened particulate concentrations are thought to originate from sand storms in the Sahara or from volcanic activity in Iceland or the Caribbean, and this work identifies such long-range influences. The output of the simulations shows that there are notable differences between the formulations of FLEXPART and HYSPLIT, although both models used the same meteorological data and source input, suggesting that the identification of the primary emissions during air pollution episodes may be rather uncertain.
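To illustrate what an LPD model does numerically, the toy sketch below advects a cloud of notional particles with a constant mean wind plus Gaussian turbulent velocity fluctuations; it is not the physics of HYSPLIT or FLEXPART, and all values are arbitrary. Reversing the sign of the time step (and of the winds) gives the backward, receptor-based mode mentioned above.

import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 5000, 200, 60.0      # 5000 particles, 200 one-minute steps
mean_wind = np.array([5.0, 1.0])                # m/s, eastward and northward components
sigma_turb = 1.5                                # std of turbulent velocity fluctuations, m/s

pos = np.zeros((n_particles, 2))                # all particles released at the source
for _ in range(n_steps):
    u_turb = rng.normal(0.0, sigma_turb, size=pos.shape)
    pos += (mean_wind + u_turb) * dt            # Lagrangian step: mean wind + turbulence

centre = pos.mean(axis=0) / 1000.0
spread = pos.std(axis=0) / 1000.0
print("plume centre after %.1f h: %s km, spread: %s km"
      % (n_steps * dt / 3600.0, np.round(centre, 1), np.round(spread, 1)))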
Abstract:
Here we describe a new trait-based model for cellular resource allocation that we use to investigate the relative importance of different drivers for small cell size in phytoplankton. Using the model, we show that increased investment in nonscalable structural components with decreasing cell size leads to a trade-off between cell size, nutrient and light affinity, and growth rate. Within the most extreme nutrient-limited, stratified environments, resource competition theory then predicts a trend toward larger minimum cell size with increasing depth. We demonstrate that this explains observed trends using a marine ecosystem model that represents selection and adaptation of a diverse community defined by traits for cell size and subcellular resource allocation. This framework for linking cellular physiology to environmental selection can be used to investigate the adaptive response of the marine microbial community to environmental conditions and the adaptive value of variations in cellular physiology.
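The core trade-off described above can be sketched with a toy allocation calculation: a fixed-thickness, nonscalable envelope occupies a growing fraction of the cell's volume as the radius shrinks, while nutrient affinity per unit biomass rises with the surface-to-volume ratio, so a simple saturating growth law peaks at an intermediate size. The functional forms and constants below are illustrative assumptions, not the model equations from the paper.

import numpy as np

radii = np.logspace(-1, 1.3, 40)            # cell radius in micrometres (~0.1 to 20 um)
t_env = 0.05                                 # assumed nonscalable envelope thickness, um

for r in radii[::6]:
    vol = (4.0 / 3.0) * np.pi * r**3
    env = vol - (4.0 / 3.0) * np.pi * max(r - t_env, 0.0)**3   # envelope volume
    functional_frac = 1.0 - env / vol        # volume fraction left for scalable machinery
    affinity = 3.0 / r                       # nutrient affinity ~ surface-to-volume ratio
    growth = functional_frac * affinity / (1.0 + affinity)     # toy saturating growth law
    print("r = %5.2f um  functional %.2f  affinity %5.2f  relative growth %.2f"
          % (r, functional_frac, affinity, growth))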