Abstract:
As a comparatively recently invented PKM with over-constraints in its kinematic chains, the Exechon has attracted extensive attention from the research community. In contrast to its well-understood kinematics, the stiffness characteristics of the Exechon remain a challenge due to its structural complexity. In order to achieve a thorough understanding of the stiffness characteristics of the Exechon PKM, this paper proposes an analytical kinetostatic model based on the substructure synthesis technique. The whole PKM system is decomposed into a moving platform subsystem, three limb subsystems and a fixed base subsystem, connected to one another sequentially through the corresponding joints. Each limb body is modeled as a spatial beam with a uniform cross-section constrained by two sets of lumped springs. The equilibrium equation of each individual limb assemblage is derived through a finite element formulation and combined with that of the moving platform, derived with the Newtonian method, to construct the governing kinetostatic equations of the system once the deformation compatibility conditions between the moving platform and the limbs are introduced. By extracting the 6 × 6 block matrix from the inverse of the governing compliance matrix, the stiffness of the moving platform is formulated. The stiffness of the Exechon PKM is computed efficiently, both at a typical configuration and throughout the workspace, with a piece-by-piece partition algorithm. The numerical simulations reveal a strong position dependency of the PKM's stiffness, which is symmetric with respect to a work plane owing to structural features. Finally, the effects of design variables such as structural, dimensional and stiffness parameters on system rigidity are investigated with the purpose of providing useful information for the structural optimization and performance enhancement of the Exechon PKM. It is worth mentioning that the proposed stiffness modeling methodology can also be applied to other overconstrained PKMs and can evaluate global rigidity over the workspace efficiently with minor revisions.
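As a concrete illustration of the final extraction step, the following minimal NumPy sketch (not the authors' code; the matrix is a random positive-definite stand-in and the DOF ordering is an assumption) inverts a governing compliance matrix and reads off the 6 × 6 platform stiffness block:

```python
import numpy as np

# Random symmetric positive definite stand-in for the assembled governing
# compliance matrix of the whole system (platform + limbs + base).
rng = np.random.default_rng(0)
n = 24                                  # hypothetical total number of DOFs
A = rng.standard_normal((n, n))
C_gov = A @ A.T + n * np.eye(n)

# Stiffness formulation: invert the governing compliance matrix and read
# off the 6 x 6 block associated with the moving-platform coordinates
# (assumed here to occupy the first six rows/columns).
K_full = np.linalg.inv(C_gov)
K_platform = K_full[:6, :6]

# Scalar rigidity along a unit wrench direction w is w^T K w,
# e.g. translation along z:
w = np.zeros(6)
w[2] = 1.0
print(K_platform.shape, float(w @ K_platform @ w))
```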
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
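To make the notion of inference on a credal network concrete, here is a hedged toy example (ours, not the paper's): tight bounds on a marginal in a two-node binary credal chain under strong independence, obtained by brute-force enumeration of the interval endpoints. The hardness results above say precisely that this kind of exhaustive optimization does not scale to general trees.

```python
from itertools import product

# Interval-valued parameters of a binary chain X1 -> X2.
p_x1 = (0.3, 0.5)            # P(X1 = 1) lies in [0.3, 0.5]
p_x2_given_1 = (0.7, 0.9)    # P(X2 = 1 | X1 = 1)
p_x2_given_0 = (0.1, 0.2)    # P(X2 = 1 | X1 = 0)

# Under strong independence the bounds on P(X2 = 1) are attained at
# extreme points of the credal sets, so sweeping interval endpoints is
# exact for this toy model (but exponential in general).
values = [a * b + (1 - a) * c
          for a, b, c in product(p_x1, p_x2_given_1, p_x2_given_0)]
print(f"P(X2 = 1) lies in [{min(values):.2f}, {max(values):.2f}]")  # [0.28, 0.55]
```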
Abstract:
Semi-qualitative probabilistic networks (SQPNs) merge two important graphical model formalisms: Bayesian networks and qualitative probabilistic networks. They provide a very general modeling framework by allowing the combination of numeric and qualitative assessments over a discrete domain, and can be compactly encoded by exploiting the same factorization of joint probability distributions that underlies Bayesian networks. This paper explores the computational complexity of semi-qualitative probabilistic networks, taking polytree-shaped networks as its main target. We show that the inference problem is coNP-complete for binary polytrees with multiple observed nodes. We also show that inferences can be performed in linear time if there is a single observed node, which is a relevant practical case. Because our proof is constructive, we obtain an efficient linear-time algorithm for SQPNs under these assumptions. To the best of our knowledge, this is the first exact polynomial-time algorithm for SQPNs. Together, these results provide a clear picture of the inferential complexity of polytree-shaped SQPNs.
Abstract:
We study transitionless quantum driving in an infinite-range many-body system described by the Lipkin-Meshkov-Glick model. Although the correlation length is always infinite, the closing of the gap at the critical point makes the driving Hamiltonian increasingly complex in this case as well. We therefore develop a hybrid strategy combining shortcuts to adiabaticity and optimal control that allows us to achieve remarkably good performance in suppressing defect production across the phase transition.
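For readers unfamiliar with the model, the following sketch (assuming one common convention for the LMG Hamiltonian, H = -(2/N) Sx^2 - 2h Sz in the maximal-spin sector, which is not necessarily the paper's normalization) shows the gap closing at the critical field h = 1 as N grows, the effect that makes the driving Hamiltonian increasingly complex:

```python
import numpy as np

def lmg_gap(N, h):
    """Gap E1 - E0 of H = -(2/N) Sx^2 - 2h Sz in the spin-j = N/2 sector."""
    j = N / 2.0
    m = np.arange(-j, j + 1)                      # Sz eigenvalues
    # <m+1| S+ |m> = sqrt(j(j+1) - m(m+1)) on the subdiagonal
    coef = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    Splus = np.diag(coef, k=-1)
    Sx = 0.5 * (Splus + Splus.T)
    Sz = np.diag(m)
    H = -(2.0 / N) * Sx @ Sx - 2.0 * h * Sz
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

# At the critical field h = 1 the gap shrinks as N grows (roughly as
# N^(-1/3)), so adiabatic crossing becomes ever harder for larger systems.
for N in (20, 100, 400):
    print(N, round(lmg_gap(N, h=1.0), 4))
```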
Abstract:
The increasing complexity and scale of cloud computing environments, driven by widespread data centre heterogeneity, make measurement-based evaluations highly difficult to achieve. The use of simulation tools to support decision making in cloud computing environments is therefore an increasing trend. However, the data required to model cloud computing environments with an appropriate degree of accuracy is typically voluminous, very difficult to collect without some form of automation, often unavailable in a suitable format, and time-consuming to gather manually. In this research, an automated method for cloud computing topology definition, data collection and model creation is presented, within the context of a suite of tools that have been developed and integrated to support these activities.
Secure D2D Communication in Large-Scale Cognitive Cellular Networks: A Wireless Power Transfer Model
Abstract:
In this paper, we investigate secure device-to-device (D2D) communication in energy harvesting large-scale cognitive cellular networks. The energy-constrained D2D transmitter harvests energy from multi-antenna power beacons (PBs) and communicates with the corresponding receiver using the spectrum of the primary base stations (BSs). We introduce a power transfer model and an information signal model to enable wireless energy harvesting and secure information transmission. In the power transfer model, three wireless power transfer (WPT) policies are proposed: 1) cooperative power beacons (CPB) power transfer, 2) best power beacon (BPB) power transfer, and 3) nearest power beacon (NPB) power transfer. To characterize the power transfer reliability of the three proposed policies, we derive new expressions for the exact power outage probability. Moreover, the analysis of the power outage probability is extended to the case where PBs are equipped with large antenna arrays. In the information signal model, we present a new comparative framework with two receiver selection schemes: 1) best receiver selection (BRS), where the receiver with the strongest channel is selected; and 2) nearest receiver selection (NRS), where the nearest receiver is selected. To assess the secrecy performance, we derive new analytical expressions for the secrecy outage probability and the secrecy throughput of the two receiver selection schemes under the proposed WPT policies. We present Monte Carlo simulation results to corroborate our analysis and show that: 1) secrecy performance improves with increasing densities of PBs and D2D receivers due to a larger multiuser diversity gain; 2) CPB achieves better secrecy performance than BPB and NPB but consumes more power; and 3) BRS achieves better secrecy performance than NRS but demands more instantaneous feedback and overhead. A pivotal conclusion is that, with an increasing number of antennas at the PBs, NPB offers secrecy performance comparable to that of BPB but with lower complexity.
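As a hedged illustration of the kind of Monte Carlo used to corroborate such an analysis (simplified single-slope path loss, Rayleigh fading, and hypothetical parameters; not the paper's exact system model), the sketch below estimates the power outage probability of the NPB policy with PBs drawn from a Poisson point process:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: PB density (per m^2), region radius (m),
# PB transmit power, energy conversion efficiency, path-loss exponent,
# and the harvested-power threshold below which a power outage occurs.
lam, radius = 1e-3, 200.0
P_tx, eta, alpha, P_th = 10.0, 0.5, 3.0, 1e-6

trials, outages = 20_000, 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * radius**2)     # PBs from a Poisson point process
    if n == 0:
        outages += 1                             # no beacon to harvest from
        continue
    r_nearest = (radius * np.sqrt(rng.random(n))).min()  # nearest PB distance
    gain = rng.exponential(1.0)                  # Rayleigh power fading
    if eta * P_tx * gain * r_nearest**(-alpha) < P_th:
        outages += 1

print("Estimated NPB power outage probability:", outages / trials)
```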
Abstract:
The contemporary world is crowded with large, interdisciplinary, complex systems made up of other systems, personnel, hardware, software, information, processes, and facilities. The Systems Engineering (SE) field proposes an integrated, holistic approach to tackling these socio-technical systems, which is crucial to take proper account of their multifaceted nature and numerous interrelationships and provides the means to enable their successful realization. Model-Based Systems Engineering (MBSE) is an emerging paradigm in the SE field and can be described as the formalized application of modelling principles, methods, languages, and tools to the entire lifecycle of those systems, enhancing communication and knowledge capture, shared understanding, design precision and integrity, and development traceability, while reducing development risks. This thesis is devoted to the application of the MBSE paradigm to the Urban Traffic & Environment domain. The proposed system, GUILTE (Guiding Urban Intelligent Traffic & Environment), addresses a challenging present-day problem on the agenda of world leaders, national governments, local authorities, research agencies, academia, and the general public. The main purposes of the system are to provide an integrated development framework for municipalities and to support the (short-term and real-time) operation of urban traffic through Intelligent Transportation Systems, highlighting two fundamental aspects: the evaluation of the related environmental impacts (in particular, air pollution and noise), and the dissemination of information to citizens, endorsing their involvement and participation. These objectives are related to the high-level, complex challenge of developing sustainable urban transportation networks. The development process of the GUILTE system is supported by a new methodology, LITHE (Agile Systems Modelling Engineering), which aims to lighten the complexity and burden of existing methodologies by emphasizing agile principles such as continuous communication, feedback, stakeholder involvement, short iterations and rapid response. These principles are realized through a universal and intuitive SE process, the SIMILAR process model (redefined in light of modern international standards), a lean MBSE method, and a coherent System Model developed with the benchmark graphical modeling languages SysML and OPDs/OPL. The main contributions of the work are, in essence, models, and can be summarized as: a revised process model for the SE field, an agile methodology for MBSE development environments, a graphical tool to support the proposed methodology, and a System Model for the GUILTE system. The comprehensive literature reviews provided for the main scientific field of this research (SE/MBSE) and for the application domain (Traffic & Environment) can also be seen as a relevant contribution.
Abstract:
We present an improved, biologically inspired and multiscale keypoint operator. Models of single- and double-stopped hypercomplex cells in area V1 of the mammalian visual cortex are used to detect stable points of high complexity at multiple scales. Keypoints represent line and edge crossings, junctions and terminations at fine scales, and blobs at coarse scales. They are detected by applying first and second derivatives to responses of complex cells in combination with two inhibition schemes to suppress responses along lines and edges. A number of optimisations make our new algorithm much faster than previous biologically inspired models, achieving real-time performance on modern GPUs and competitive speeds on CPUs. In this paper we show that the keypoints exhibit state-of-the-art repeatability in standardised benchmarks, often yielding best-in-class performance. This makes them interesting both in biological models and as a useful detector in practice. We also show that keypoints can be used as a data selection step, significantly reducing the complexity in state-of-the-art object categorisation.
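As a rough illustration of the derivative-plus-inhibition idea (a generic single-scale Hessian-based stand-in, emphatically not the authors' V1 model), the sketch below marks blob- and junction-like points and inhibits responses that are elongated along lines and edges:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def toy_keypoints(img, sigma=2.0, edge_ratio=10.0, thresh=1e-4):
    # Second-derivative (Hessian) responses: a high determinant marks
    # blob- and junction-like structure.
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    det = Lxx * Lyy - Lxy**2
    tr = Lxx + Lyy
    # Inhibition along lines/edges: reject elongated Hessians, whose
    # squared trace dominates the determinant.
    edge_like = tr**2 >= edge_ratio * np.where(det > 0, det, np.inf)
    local_max = det == maximum_filter(det, size=5)
    return np.argwhere(local_max & (det > thresh) & ~edge_like)

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0          # one bright blob
print(toy_keypoints(img))        # prints detected keypoint coordinates
```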
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical Engineering, Energy branch.
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by a significant wind energy resource. The numerical options tested were grid nudging and the integration time of the simulations. Since the goal is to simulate near-surface wind, the physical parameterization schemes under evaluation were those concerning the boundary layer. The influences of local terrain complexity and simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Overall, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably well simulated, especially in wind regimes with a clear dominant sector, but in the presence of low wind speeds the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, the results show that the use of grid nudging in simulations with an integration time of no more than 2 days is the best numerical configuration, and that the parameterization set composed of the MM5, Yonsei University and Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve model performance. The results suggest that error minimization in wind simulation can be achieved by testing and choosing a numerical and physical configuration suited to the region of interest, together with the use of high-resolution terrain data, if available.
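For reference, here is a minimal sketch of the three evaluation metrics named above, computed on synthetic observed and simulated wind speed series (the SDE definition used, std(sim) - std(obs), is one common convention and is an assumption here):

```python
import numpy as np

obs = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.7])   # measured wind speed (m/s)
sim = np.array([4.6, 5.8, 4.1, 6.5, 5.2, 6.0])   # simulated values at the same times

err = sim - obs
bias = err.mean()                         # negative bias => underestimation
rmse = np.sqrt((err**2).mean())
sde = sim.std(ddof=1) - obs.std(ddof=1)   # assumed SDE convention

print(f"Bias = {bias:.2f} m/s, RMSE = {rmse:.2f} m/s, SDE = {sde:.2f} m/s")
```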
Abstract:
Unethical conduct in the workplace has recently been a focus of the literature and the media. Unethical pro-organizational behavior (UPB) refers to unethical conduct that employees engage in to benefit their organization. Given the complexity of UPB, there is an increasing need to understand how and under what conditions this behavior originates within organizations. Based on a sample of 167 employees from seven organizations, the results support the moderated mediation model: ethical leadership increases employees' affective organizational commitment, which in turn increases the likelihood of engaging in UPB. However, this indirect relationship diminishes when employees feel authentic at work.
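For readers unfamiliar with moderated mediation, here is an illustrative sketch on synthetic data (variable roles, effect sizes, and the moderated path are assumptions for illustration; this is not the study's dataset or exact specification): ethical leadership (X) raises affective commitment (M), which raises UPB (Y), while felt authenticity (W) weakens the M-to-Y path, so the conditional indirect effect a*(b_M + b_MW*W) shrinks as W grows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data mimicking the hypothesized structure (all names and
# effect sizes are illustrative assumptions, not the study's data).
rng = np.random.default_rng(42)
n = 167
X = rng.normal(size=n)                          # ethical leadership
W = rng.normal(size=n)                          # felt authenticity at work
M = 0.5 * X + rng.normal(size=n)                # affective commitment
Y = 0.4 * M - 0.3 * M * W + rng.normal(size=n)  # UPB
df = pd.DataFrame(dict(X=X, W=W, M=M, Y=Y))

med = smf.ols("M ~ X", df).fit()                # path a: X -> M
out = smf.ols("Y ~ M + X + W + M:W", df).fit()  # paths b_M and b_MW
a, bM, bMW = med.params["X"], out.params["M"], out.params["M:W"]

# Conditional indirect effect a * (b_M + b_MW * W): shrinks as W grows.
for w in (-1.0, 0.0, 1.0):
    print(f"indirect effect at W = {w:+.0f}: {a * (bM + bMW * w):.3f}")
```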
Abstract:
Acute and chronic respiratory failure is one of the major and potentially life-threatening features in individuals with myotonic dystrophy type 1 (DM1). Despite several clinical reports of respiratory problems in DM1 patients, the underlying mechanisms are still not completely understood. This study was designed to investigate whether the DMSXL transgenic mouse model of DM1 exhibits respiratory disorders and, if so, to identify the pathological changes underlying these respiratory problems. Using pressure plethysmography, we assessed the breathing function in control mice and in DMSXL mice generated after large expansions of the CTG repeat in successive generations of DM1 transgenic mice. Statistical analysis of the breathing function measurements revealed a significant decrease in the most relevant respiratory parameters in DMSXL mice, indicating impaired respiratory function. Histological and morphometric analysis showed pathological changes in the diaphragmatic muscle of DMSXL mice, characterized by an increase in the percentage of type I muscle fibers, the presence of central nuclei, partial denervation of end-plates (EPs), and a significant reduction in EP size, shape complexity and density of acetylcholine receptors, all of which reflect a possible breakdown in communication between the diaphragmatic muscle fibers and the nerve terminals. These diaphragm muscle abnormalities were accompanied by an accumulation of mutant DMPK RNA foci in muscle fiber nuclei. Moreover, in DMSXL mice the number of unmyelinated phrenic afferents is significantly lower. In contrast, no significant neuronopathy was detected in these mice in either cervical phrenic motor neurons or brainstem respiratory neurons. Because EPs are involved in the transmission of action potentials and the unmyelinated phrenic afferents exert a modulating influence on the respiratory drive, the pathological alterations affecting these structures might underlie the respiratory impairment detected in DMSXL mice. Understanding the mechanisms of respiratory deficiency should guide pharmaceutical and clinical research towards better therapies for the respiratory deficits associated with DM1.
Abstract:
This thesis contributes to a general theory of design. Set against a demand shaped by the challenges of sustainable development, the main objective of this research is to contribute a theoretical model of design that makes it possible to better situate the use of tools and standards for assessing the sustainability of a project. The fundamental principles of these normative instruments are analyzed along four dimensions: ontological, methodological, epistemological and teleological. Indicators of certain counterproductive effects related, in particular, to compliance with these standards confirm the need for a theory of qualitative judgment. Our main hypothesis builds on the conceptual framework offered by the notion of the "precautionary principle", whose first formulations date back to the early 1970s and whose very purpose was to remedy the shortcomings of traditional scientific assessment tools and methods. The thesis is divided into five parts. Beginning with a historical review of the classical models of design thinking, it focuses on the evolution of the ways in which sustainability has been taken into account. From this perspective, we observe that the theories of "green design" dating from the early 1960s, as well as the theories of "ecological design" of the 1970s and 1980s, ultimately converged with the more recent theories of "sustainable design" from the early 1990s onward. The various approaches to the "precautionary principle" are then examined with respect to the question of project sustainability. Standard risk assessment methods are compared with approaches based on the precautionary principle, revealing certain limits in the design of a project. A first theoretical model of design integrating the main dimensions of the precautionary principle is thus sketched out. This model offers a global vision for judging a project that integrates sustainable development principles, and presents itself as an alternative to traditional risk assessment approaches, which are both deterministic and instrumental. The precautionary-principle hypothesis is then proposed and examined in the specific context of the architectural project. This exploration begins with a presentation of the classical notion of "prudence" as it was historically used to guide architectural judgment. What, then, of the challenges posed by the judgment of architectural projects amid the rise of standardized assessment methods (e.g., Leadership in Energy and Environmental Design, LEED)? The thesis proposes a reinterpretation of design theory as put forward by Donald A. Schön as a way of taking assessment tools such as LEED into account. This exercise, however, reveals an epistemological obstacle that must be addressed in a reformulation of the model. In line with constructivist epistemology, a new theoretical model is then confronted with the study and illustration of three contemporary Canadian architecture competitions that adopted the LEED standardized sustainability assessment method. A preliminary series of "tensions" is identified in the process of designing and judging the projects.
These tensions are then categorized into their conceptual counterparts, constructed at the intersection of the precautionary principle and design theories. They fall into four categories: (1) conceptualization: analogical/logical; (2) uncertainty: epistemological/methodological; (3) comparability: interpretive/analytical; and (4) proposition: universality/contextual relevance. These conceptual tensions are treated as vectors that correlate with, and enrich, the theoretical model, without constituting validations in the positivist sense of the term. These confrontations with reality allow the previously identified epistemological obstacle to be defined more precisely. This thesis thus highlights the generally underestimated impacts of environmental standards on the process of designing and judging projects, taking as a non-restrictive example the examination of Canadian architecture competitions for public buildings. The conclusion underlines the need for a new form of "reflexive prudence" as well as a more critical use of current sustainability assessment tools. It calls for an instrumentation founded on global integration rather than on the opposition of environmental approaches.
Abstract:
This master's thesis presents a new unsupervised approach for detecting and segmenting urban areas in hyperspectral images. The proposed method involves three steps. First, in order to reduce the computational cost of our algorithm, a color image of the spectral content is estimated. To this end, a nonlinear dimensionality reduction step, based on two complementary but conflicting criteria of good visualization, namely accuracy and contrast, is carried out for the color display of each hyperspectral image. Then, to discriminate urban from non-urban areas, the second step consists in extracting a few discriminative (and complementary) features from this color hyperspectral image. We extracted a series of discriminative parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on gray levels, gradient magnitude or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and on local line-segment detection. Finally, in order to further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" that arises when clustering high-dimensional data, we decided to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse, cheaply obtained segmentations with an efficient segmentation-map fusion model. The experiments reported here show that this strategy is visually effective and compares favorably with other methods for detecting and segmenting urban areas in hyperspectral images.
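A simplified stand-in for the last two steps (synthetic feature maps and a trivial re-clustering fusion rule; the thesis uses a more elaborate fusion model) might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for three feature maps (e.g. gradient magnitude,
# a co-occurrence statistic, local line-segment density) on a 64x64 image
# whose right half is "urban".
rng = np.random.default_rng(0)
h, w, n_feat = 64, 64, 3
feats = rng.random((h, w, n_feat))
feats[:, 32:, :] += 1.0

# Step 1: cluster each feature map individually with plain K-means.
coarse = np.stack(
    [KMeans(n_clusters=2, n_init=10, random_state=0)
         .fit_predict(feats[:, :, i].reshape(-1, 1))
     for i in range(n_feat)],
    axis=1,
)                                            # shape (h*w, n_feat)

# Step 2 (trivial fusion rule): one-hot encode the coarse labels and
# re-cluster the per-pixel label vectors into the final segmentation.
onehot = np.concatenate([np.eye(2)[coarse[:, i]] for i in range(n_feat)], axis=1)
fused = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(onehot)
print(fused.reshape(h, w))                   # urban / non-urban mask
```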
Abstract:
This thesis aims to improve automation in Model-Driven Engineering (MDE). MDE is a paradigm that promises to reduce software complexity through the intensive use of models and of automatic model transformations (MT). Put simply, in the MDE vision, specialists use several models to represent a piece of software and produce the source code by automatically transforming these models. Automation is therefore a key factor and a founding principle of MDE. Beyond model transformations, other activities require automation, e.g. the definition of modeling languages and software migration. In this context, the main contribution of this thesis is a general approach for improving the automation of MDE. Our approach is based on metaheuristic search guided by examples. We apply this approach to two important MDE problems: (1) model transformation and (2) the precise definition of modeling languages. For the first problem, we distinguish between transformation in the context of migration and general model-to-model transformations. In the migration case, we propose a software clustering method based on a metaheuristic guided by clustering examples. Similarly, for general transformations, we learn model transformations using a genetic programming algorithm that draws on examples of past transformations. For the precise definition of modeling languages, we propose a metaheuristic-search-based method that derives well-formedness rules for metamodels, with the objective of discriminating well between valid and invalid models. The empirical studies we conducted show that the proposed approaches achieve good results, both quantitative and qualitative, which allows us to conclude that improving the automation of MDE by means of metaheuristic search guided by examples can contribute to a broader adoption of MDE in industry in the years to come.