834 results for Measurement based model identification
Abstract:
The active appearance model (AAM) is a powerful generative method for modeling deformable objects. The model decouples the shape and texture variations of objects and pairs them with an efficient gradient-based model fitting method. Owing to its flexible and simple framework, AAM has been widely applied in computer vision. However, difficulties arise when it is applied to practical problems, and these have led to many prominent improvements to the model. Nevertheless, these difficulties and improvements have not been studied systematically, which motivates us to review the recent advances of AAM. This paper focuses on the improvements in the literature in terms of the problems AAM suffers in practical applications. The algorithms are therefore summarized from three aspects: efficiency, discrimination, and robustness. Additionally, some applications and implementations of AAM are enumerated. The main purpose of this paper is to serve as a guide for further research.
Abstract:
Motivated by skyline recognition for unmanned aerial vehicles (UAVs), an accurate, real-time skyline recognition algorithm is proposed, from which attitude angles are estimated. Taking practical conditions into account, an energy-functional model of the skyline is established, and the corresponding partial differential equation is derived via the variational principle. For real-time performance in practical applications, a straight-line constraint is introduced to simplify the model, and the skyline is then identified in a coarse-to-fine manner. First, the image is preprocessed and partitioned vertically; the simplified horizontal straight-line model then gives a coarse recognition of the skyline, with the coarse result obtained by fitting; finally, the skyline is accurately identified under the constraint of an open-curve model combining gradient and region information, and the UAV's roll and pitch attitude angles are estimated from it. Experimental results show that the algorithm recognizes the skyline with good robustness, accuracy, and real-time performance.
Abstract:
Addressing actuator lag and time-varying dynamics in full-envelope maneuvering flight of a rotorcraft flying robot, this paper proposes an incremental steady-state predictive control method based on model-difference analysis to handle uncertain dynamic models. The method first builds an incremental steady-state predictive process model to cope with actuator output lag and with uncertainty in the steady-state model and the system operating point, improving the robustness of the control system. An adaptive set-membership filter then estimates online the deviation of the system's transient dynamics from the nominal model, compensating for the effect of the time-varying model on the nominal controller's tracking performance during full-envelope flight. Finally, actual flight tests verify that the method effectively resolves actuator lag and time-varying dynamics in the heading and vertical channels during full-envelope flight, and that it is practical for full-envelope autonomous flight control of the rotorcraft robot's heading and vertical channels.
Abstract:
For a class of nonlinear systems, a neural-network model reference control scheme is proposed. When training the network that models the plant and the network that implements the controller, training samples are generated from the state equations. Simulation experiments on an inverted-pendulum system verify the effectiveness of the control scheme and of the sample-generation strategy, and the generalization ability of the trained neural networks is verified in simulation with different initial states.
Abstract:
This paper applies stochastic state-space model identification to power system load forecasting. A state-space model of the load is first built from a series of historical load data, and a filtering algorithm then produces the next-day load forecast. Finally, forecasts computed on a PDP-11/23 computer with actual grid data give fairly satisfactory results.
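The build-a-state-space-model-then-filter scheme described above can be sketched with a scalar Kalman filter on a local-level (random-walk) model of the load. This is a minimal illustration, not the paper's method; the noise variances `q` and `r` are assumptions.

```python
def kalman_forecast(loads, q=1.0, r=4.0):
    """One-step-ahead load forecasts from a local-level (random-walk)
    state-space model via a scalar Kalman filter.
    q and r are assumed process/measurement noise variances,
    not values from the paper."""
    x, p = loads[0], 1.0      # initial state estimate and variance
    preds = []
    for z in loads[1:]:
        p += q                # predict: variance grows by process noise
        preds.append(x)       # one-step-ahead forecast (random-walk mean)
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update state with the new measurement
        p *= (1 - k)          # update variance
    return preds
```

With a flat load history the forecast simply reproduces the level; with a trending history the forecast lags it, which is the expected behaviour of a local-level model.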
Abstract:
Guided by modern oil-gas accumulation theory and the petroleum-system concept, and starting from the basic geologic conditions, this paper analyzes typical oil pools and identifies the shapes of lithologic traps and the geologic factors that control them. The process by which oil and gas migrate from source rock to lithologic trap is reconstructed, and an accumulation model for the oil pools is established. Through the comprehensive application of seismic, geologic, and log data, and with attention to the methods and technology used to recognize lithologic accumulations, promising structural-lithofacies zones are identified and the distribution rules of the various lithologic accumulations are derived. Biomarker compounds are used to correlate the different reservoirs: the oil and gas in HeiDimiao come from the Nenjiang Group source rocks; those in SaErTu from the QingShankou and Nenjiang Groups; and those in PuTaohua, GaoTaizi, and FuYang from the QingShankou Group. Based on the development and distribution of effective source rocks, the oil distribution, and correlations in the southern SongLiao basin, and considering the basin's structural characteristics and reservoir distribution, the middle-upper reservoirs of the southern SongLiao basin are divided into two petroleum systems and one complex petroleum system. Owing to their migration and accumulation characteristics, the two petroleum systems can be further divided into 6-7 sub-petroleum systems each, 20 sub-petroleum systems in all. Because of differences in migration characteristics, accumulation conditions, and position within the petroleum system, the degree and model of accumulation differ, and three accumulation mechanisms and six basic accumulation models of lithologic traps are derived.
The distribution of lithologic pools is highly regular: oil and gas around the generating sag are distributed on favorable structural-lithofacies zones, and the type of lithologic pool varies regularly from the core of the sandstone body to its updip zone. On the basis of the regional structure and sedimentary evolution, the main factors controlling trap formation are identified, and the critical-factor method is used to discern lithologic traps. After extensive exploration, 700 km² of potential traps were delineated and geologic reserves of 18391.86 × 10⁴ tons were calculated. The plan-view oil-water distribution rule of updip pinch-out pools is established: the hydrocarbon-bearing formation can be divided into four zones, namely a bottom edge-water zone, a lower oil-and-water zone, a middle pure-oil zone, and an upper residual-water zone. The first well should be sited in the middle pure-oil zone or above it, so that the exploration value of this type of reservoir can be recognized correctly. In accordance with the seismic and geologic characteristics of low-permeability thin sandstone-mudstone alternations, a set of reservoir-prediction techniques is applied: (1) seismic multi-parameter model identification; (2) use of the strata's absorption and attenuation information to predict a reservoir's anomalous hydrocarbon-bearing range. From analysis of the remaining resource potential and study of the two petroleum systems and the accumulation models, promising objective zones are predicted scientifically; the main exploration targets are the DaRngZi bore in the west of the ChangLin basin and the YingTai-SiFangZi middle-upper assembly on the Honggang terrace.
Abstract:
This thesis presents a learning based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach first builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of training examples it receives. The second part of this thesis looks at how one can select high quality examples for function approximation learning tasks. We propose an "active learning" formulation for function approximation, and show for three specific approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation, and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems.
Abstract:
Conjugative plasmids play a vital role in bacterial adaptation through horizontal gene transfer. Explaining how plasmids persist in host populations, however, is difficult, given the high costs often associated with plasmid carriage. Compensatory evolution to ameliorate this cost can rescue plasmids from extinction. In a recently published study we showed that compensatory evolution repeatedly targeted the same bacterial regulatory system, GacA/GacS, in populations of plasmid-carrying bacteria evolving across a range of selective environments. Mutations in these genes arose rapidly and completely eliminated the cost of plasmid carriage. Here we extend our analysis using an individual-based model to explore the dynamics of compensatory evolution in this system. We show that mutations which ameliorate the cost of plasmid carriage can prevent both the loss of plasmids from the population and the fixation of accessory traits on the bacterial chromosome. We discuss how the outcome of compensatory evolution depends on the strength and availability of such mutations and on the rate at which beneficial accessory traits integrate on the host chromosome.
Abstract:
Postgraduate project/dissertation submitted to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences.
Abstract:
Recent measurement-based studies reveal that most Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of connections carry a large portion of the traffic (elephants). A careful study of the TCP protocol shows that, without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth. This is because short connections do not gain detailed knowledge of the network state, and are therefore doomed to be less competitive due to the conservative nature of the TCP congestion control algorithm. Inspired by the Differentiated Services (Diffserv) architecture, we propose to give preferential treatment to short connections inside the bottleneck queue, so that short connections experience a lower packet drop rate than long connections. This is done by employing the RIO (RED with In and Out) queue management policy, which uses different drop functions for different classes of traffic. Our simulation results show that: (1) in a highly loaded network, preferential treatment is necessary to provide short TCP connections with better response time and fairness without hurting the performance of long TCP connections; (2) the proposed scheme still delivers packets in FIFO manner at each link, so it maintains the statistical multiplexing gain and does not misorder packets; (3) choosing a smaller default initial timeout value for TCP can help the performance of short TCP flows, though not as effectively as our scheme and at the risk of congestion collapse; (4) in the worst case, our proposal works as well as a regular RED scheme in terms of response time and goodput.
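The preferential treatment described above rests on RED-style drop curves with class-specific parameters. A minimal sketch, where the threshold and probability values are illustrative assumptions, not the paper's actual RIO parameters:

```python
def red_drop_prob(avg_q, min_th, max_th, max_p):
    """Classic linear RED drop probability for an average queue length:
    zero below min_th, ramping to max_p at max_th, 1.0 beyond."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def rio_drop_prob(avg_q, is_short):
    """RIO-style classing: short ('In') traffic gets a gentler drop
    curve than long ('Out') traffic, so short flows see fewer drops.
    All parameter values are hypothetical."""
    if is_short:
        return red_drop_prob(avg_q, min_th=40, max_th=80, max_p=0.02)
    return red_drop_prob(avg_q, min_th=20, max_th=60, max_p=0.10)
```

At any moderate queue length the short-connection class receives a strictly lower drop probability, which is the mechanism behind result (1), while packets of both classes still leave the queue in FIFO order.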
Abstract:
This thesis describes work carried out on the design of new routes to a range of bisindolylmaleimide and indolo[2,3-a]carbazole analogs, and an investigation of their potential as anti-cancer agents. Following initial investigation of classical routes to indolo[2,3-a]pyrrolo[3,4-c]carbazole aglycons, a new strategy employing base-mediated condensation of thiourea and guanidine with a bisindolyl β-ketoester intermediate afforded novel 5,6-bisindolylpyrimidin-4(3H)-ones in moderate yields. Chemical diversity within this H-bonding scaffold was then studied by substitution with a panel of biologically relevant electrophiles, and by reductive desulfurisation. Optimisation of difficult heterogeneous literature conditions for oxidative desulfurisation of thiouracils was also accomplished, enabling a mild route to a novel 5,6-bisindolyluracil pharmacophore to be developed within this work. The oxidative cyclisation of selected acyclic bisindolyl systems to form a new planar class of indolo[2,3-a]pyrimido[5,4-c]carbazoles was also investigated. Successful conditions for this transformation, as well as the limitations currently prevailing for this approach, are discussed. Synthesis of 3,4-bisindolyl-5-aminopyrazole as a potential isostere of bisindolylmaleimide agents was accomplished, along with a comprehensive derivatisation study, in order to probe the chemical space for potential protein backbone H-bonding interactions. Synthesis of a related 3,4-arylindolyl-5-aminopyrazole series was also undertaken, based on identification of potent kinase inhibition within a closely related heterocyclic template. Following synthesis of approximately 50 novel compounds with a diversity of H-bonding enzyme-interacting potential within these classes, biological studies confirmed significant topo II inhibition for 9 lead compounds, in previously unseen pyrazolo[1,5-a]pyrimidine, indolo[2,3-c]carbazole and branched S,N-disubstituted thiouracil derivative series.
NCI-60 cancer cell line growth inhibition data for 6 representative compounds also revealed interesting selectivity differences between each compound class, while a new pyrimido[5,4-c]carbazole agent strongly inhibited cancer cell division at 10 µM, with appreciable cytotoxic activity observed across several tumour types.
Abstract:
This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
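The hedging step above can be illustrated with a toy threshold-update loop. This is a simplified online rule for exposition, not the paper's exact primal-dual update, and the initial threshold and step size are assumptions:

```python
def hedge(beliefs, labels, tau0=0.5, eta=0.1):
    """Flag a point when its belief falls below threshold tau, then
    nudge tau using the end user's feedback: raise it after a missed
    anomaly, lower it after a false alarm. Illustrative sketch only;
    tau0 and eta are hypothetical parameters."""
    tau = tau0
    flags = []
    for b, is_anomaly in zip(beliefs, labels):
        flagged = b < tau
        flags.append(flagged)
        if is_anomaly and not flagged:
            tau += eta        # missed anomaly: be more sensitive
        elif flagged and not is_anomaly:
            tau -= eta        # false alarm: be less sensitive
    return flags
```

The data-adaptive threshold means the detector tracks slowly varying distributions: after a miss the threshold rises, so a similarly implausible point is flagged next time.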
Abstract:
There have been few genuine success stories about industrial use of formal methods. Perhaps the best known and most celebrated is the use of Z by IBM (in collaboration with Oxford University's Programming Research Group) during the development of CICS/ESA (version 3.1). This work was rewarded with the prestigious Queen's Award for Technological Achievement in 1992 and is especially notable for two reasons: 1) because it is a commercial, rather than safety- or security-critical, system and 2) because the claims made about the effectiveness of Z are quantitative as well as qualitative. The most widely publicized claims are: less than half the normal number of customer-reported errors and a 9% savings in the total development costs of the release. This paper provides an independent assessment of the effectiveness of using Z on CICS based on the set of public domain documents. Using this evidence, we believe that the case study was important and valuable, but that the quantitative claims have not been substantiated. The intellectual arguments and rationale for formal methods are attractive, but their widespread commercial use is ultimately dependent upon more convincing quantitative demonstrations of effectiveness. Despite the pioneering efforts of IBM and PRG, there is still a need for rigorous, measurement-based case studies to assess when and how the methods are most effective. We describe how future similar case studies could be improved so that the results are more rigorous and conclusive.
Abstract:
Here we describe a new trait-based model for cellular resource allocation that we use to investigate the relative importance of different drivers for small cell size in phytoplankton. Using the model, we show that increased investment in nonscalable structural components with decreasing cell size leads to a trade-off between cell size, nutrient and light affinity, and growth rate. Within the most extreme nutrient-limited, stratified environments, resource competition theory then predicts a trend toward larger minimum cell size with increasing depth. We demonstrate that this explains observed trends using a marine ecosystem model that represents selection and adaptation of a diverse community defined by traits for cell size and subcellular resource allocation. This framework for linking cellular physiology to environmental selection can be used to investigate the adaptive response of the marine microbial community to environmental conditions and the adaptive value of variations in cellular physiology.
Abstract:
Chlorophyll-a satellite products are routinely used in oceanography, providing a synoptic and global view of phytoplankton abundance. However, these products lack information on the community structure of the phytoplankton, which is crucial for ecological modelling and ecosystem studies. To assess the usefulness of existing methods for differentiating phytoplankton functional types (PFT) or phytoplankton size classes from satellite data, in-situ phytoplankton samples collected on the Western Iberian coast, in the North-East Atlantic, were analysed for pigments and absorption spectra. Water samples were collected at five different locations, four near the shore and one in an open-ocean seamount region. Three different modelling approaches for deriving phytoplankton size classes were applied to the in-situ data. The approaches tested provide phytoplankton size-class information from pigment data (Brewin et al., 2010), absorption spectra (Ciotti et al., 2002), or both (Uitz et al., 2008). Following Uitz et al. (2008), results revealed high variability in microphytoplankton chlorophyll-specific absorption coefficients, ranging from 0.01 to 0.09 m² (mg chl)⁻¹ between 400 and 500 nm. This spectral analysis suggested, in one of the regions, the existence of small cells (< 20 μm) in the fraction of phytoplankton presumed to be microphytoplankton (based on diagnostic pigments). The Ciotti et al. (2002) approach yielded the largest differences between modelled and measured absorption spectra at the locations where samples had high variability in community structure and cell size. The Brewin et al. (2010) pigment-based model was adjusted, and a set of model coefficients is presented and recommended for future studies in offshore waters of the Western Iberian coast.
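Pigment-based approaches of this kind start from a diagnostic-pigment weighting. A minimal sketch using the widely cited weights of Uitz et al. (2006), not the adjusted coefficients this study derives; the function is illustrative, not the authors' implementation:

```python
def size_class_fractions(fuco, perid, hexfuco, butfuco, allo, tchlb, zea):
    """Fractions of micro-, nano- and picophytoplankton from seven
    diagnostic pigment concentrations, using the commonly cited
    Uitz et al. (2006) weights. Illustrative sketch only."""
    weights = [1.41, 1.41, 1.27, 0.35, 0.60, 1.01, 0.86]
    pigments = [fuco, perid, hexfuco, butfuco, allo, tchlb, zea]
    dp = sum(w * p for w, p in zip(weights, pigments))  # total weighted DP
    micro = (1.41 * fuco + 1.41 * perid) / dp           # diatoms, dinoflagellates
    nano = (1.27 * hexfuco + 0.35 * butfuco + 0.60 * allo) / dp
    pico = (1.01 * tchlb + 0.86 * zea) / dp             # green algae, cyanobacteria
    return micro, nano, pico
```

By construction the three fractions sum to one; studies such as Brewin et al. (2010) then fit size-class chlorophyll as functions of total chlorophyll on top of this kind of pigment index.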