933 results for Set of Weak Stationary Dynamic Actions
Abstract:
Models are an effective tool for systems and software design. They allow software architects to abstract from non-relevant details. These qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back into operations on the system. We illustrate these concepts with a distributed service update operation.
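As a purely illustrative sketch of the idea of decomposing a management process into model transformations (the function and model names below are hypothetical and not taken from the paper), a service update could be expressed as an observe-transform-act pipeline over a runtime model:

```python
# Illustrative sketch only: a management process decomposed into model
# transformations over a runtime model, represented here as a plain dict.

def build_runtime_model():
    # In the paper, an instrumentation layer builds this model automatically.
    return {"services": {"billing": {"version": "1.0", "node": "host-a"}}}

def plan_update(model, service, new_version):
    # Model-to-model transformation: derive a plan model from the system model.
    node = model["services"][service]["node"]
    return [
        {"action": "stop", "node": node, "service": service},
        {"action": "deploy", "node": node, "service": service, "version": new_version},
        {"action": "start", "node": node, "service": service},
    ]

def apply_plan(plan):
    # The instrumentation layer would translate each planned action to the system.
    for step in plan:
        print(step)

apply_plan(plan_update(build_runtime_model(), "billing", "2.0"))
```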
Abstract:
In this paper, we study the generic hyperbolicity of equilibria of a reaction-diffusion system with respect to nonlinear terms in the set of C^2 functions equipped with the Whitney topology. To accomplish this, we combine Baire's Lemma and the usual Transversality Theorem.
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project involves the development of a topology optimization formulation for the dynamic design of piezoelectric laminated plates, aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior, with specified resonance frequencies and modes and an enhanced electromechanical coupling coefficient (EMCC). The finite element model employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem aims at simultaneously designing an eigenshape, i.e., maximizing or minimizing vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues, enhancing their corresponding eigenshapes and maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
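One common way to quantify the modal EMCC mentioned above (a standard definition, assumed here; the paper may use a different one) is in terms of the open-circuit and short-circuit resonance frequencies of the mode of interest,

k_i^2 = \frac{\omega_{oc,i}^2 - \omega_{sc,i}^2}{\omega_{oc,i}^2},

so that maximizing the EMCC of a mode amounts to maximizing the relative frequency shift between the two electric boundary conditions.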
Abstract:
A critical set in a latin square of order n is a set of entries in a latin square which can be embedded in precisely one latin square of order n. Also, if any element of the critical set is deleted, the remaining set can be embedded in more than one latin square of order n. In this paper we find smallest weak and smallest totally weak critical sets for all the latin squares of orders six and seven. Moreover, we computationally prove that there is no (totally) weak critical set in the back circulant latin square of order five and we find a totally weak critical set of size seven in the other main class of latin squares of order five.
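As a minimal illustration of the definition (an order-2 example, not taken from the paper): the latin square with rows 1 2 / 2 1 has exactly one companion of order 2 (rows 2 1 / 1 2), so the single entry 1 in cell (1,1) forms a critical set of size one; it completes uniquely to the first square, while deleting it leaves the empty set, which embeds in both latin squares of order 2.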
Abstract:
This paper addresses the impact of the CO2 opportunity cost on the wholesale electricity price in the context of the Iberian electricity market (MIBEL), namely on the Portuguese system, for the period corresponding to Phase II of the European Union Emission Trading Scheme (EU ETS). In the econometric analysis, a vector error correction model (VECM) is specified to estimate both the long-run equilibrium relations and the short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The model is estimated using daily spot market prices, and the four commodity prices are jointly modelled as endogenous variables. Moreover, a set of exogenous variables is incorporated in order to account for the electricity demand conditions (temperature) and the electricity generation mix (quantity of electricity traded according to the technology used). The outcomes for the Portuguese electricity system suggest that the dynamic pass-through of carbon prices into electricity prices is strongly significant, and the estimated long-run elasticity (equilibrium relation) is in line with studies conducted for other markets.
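A generic VECM of the kind estimated here (the exact lag order and deterministic terms used in the paper may differ) can be written as

\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + \Phi D_t + \varepsilon_t,

where y_t stacks the electricity, natural gas, coal and carbon prices, \beta' y_{t-1} captures the long-run equilibrium relations, the \Gamma_i capture the short-run interactions, and D_t collects the exogenous variables (temperature and generation mix).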
Abstract:
Due to the growing complexity and dynamism of many embedded application domains (including consumer electronics, robotics, automotive and telecommunications), it is increasingly difficult to react to load variations and adapt the system's performance in a controlled fashion within a useful and bounded time. This is particularly noticeable when intending to benefit from the full potential of an open distributed cooperating environment, where service characteristics are not known beforehand and tasks may exhibit unrestricted QoS inter-dependencies. This paper proposes a novel anytime adaptive QoS control policy in which the online search for the best set of QoS levels is combined with each user's personal preferences on their services' adaptation behaviour. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and to effectively optimise the rate at which the quality of the current solution improves as the algorithms are given more time to run, with minimal overhead compared with their traditional versions.
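To fix ideas on the "anytime" property, the sketch below shows a generic hill-climbing loop; it is not the authors' QoS control policy, and the names and toy utility are invented for illustration:

```python
import random
import time

def anytime_optimize(initial, neighbours, utility, budget_s):
    """Keep a feasible best-so-far solution and improve it while time remains;
    the loop can be interrupted at any moment and still return a valid answer."""
    best, best_u = initial, utility(initial)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = random.choice(neighbours(best))
        if utility(candidate) > best_u:
            best, best_u = candidate, utility(candidate)
    return best

# Toy usage: pick one QoS level per service, maximizing a stand-in utility.
LEVELS = (0, 1, 2, 3)
def neighbours(sol):
    return [sol[:i] + (l,) + sol[i + 1:] for i in range(len(sol)) for l in LEVELS]
def utility(sol):
    return sum(sol)  # placeholder for user preferences and feasibility checks

print(anytime_optimize((0, 0, 0), neighbours, utility, budget_s=0.05))
```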
Abstract:
Sandwich structures with soft cores are widely used in applications where a high bending stiffness is required without compromising the global weight of the structure, as well as in situations where good thermal and damping properties are important parameters to observe. As equivalent single-layer approaches are not the most adequate to realistically describe the kinematics, the stress distributions and the dynamic behaviour of this type of sandwich, where shear deformations and the extensibility of the core can be very significant, layerwise models may provide better solutions. Additionally, and in connection with this multilayer approach, selecting different shear deformation theories according to the nature of the material that constitutes the core and the outer skins can predict the sandwich behaviour more accurately. In the present work the authors consider the use of different shear deformation theories to formulate different layerwise models, implemented through kriging-based finite elements. The viscoelastic material behaviour associated with the sandwich core is modelled using the complex modulus approach, and the dynamic problem is solved in the frequency domain. The outer elastic layers considered in this work may also be made from different nanocomposites. The performance of the models developed is illustrated through a set of test cases.
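In the complex (frequency-domain) approach referred to above, the core material is typically described by a frequency-dependent complex modulus (standard form, assumed here),

G^*(\omega) = G'(\omega)\,[1 + i\,\eta(\omega)],

so that the discretised equations of motion become a complex-valued system [K^*(\omega) - \omega^2 M]\,u(\omega) = f(\omega), solved at each frequency of interest.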
Abstract:
Most of today's systems, especially when related to the Web or to multi-agent systems, are not standalone or independent, but are part of a greater ecosystem, where they need to interact with other entities, react to complex changes in the environment, and act both on their own knowledge base and on the external environment itself. Moreover, these systems are clearly not static, but are constantly evolving due to the execution of self-updates or external actions. Whenever actions and updates are possible, the need emerges to ensure properties regarding the outcome of performing such actions. Originally proposed in the context of databases, transactions solve this problem by guaranteeing atomicity, consistency, isolation and durability for a special set of actions. However, current transaction solutions fail to guarantee such properties in dynamic environments, since they cannot combine transaction execution with reactive features, or with the execution of actions over domains that the system does not completely control (thus making rolling back a non-viable proposition). In this thesis, we investigate which transaction properties can be ensured in these dynamic environments, and how. To achieve this goal, we provide logic-based solutions, based on Transaction Logic, to precisely model and execute transactions in such environments, where knowledge bases can be defined by arbitrary logic theories.
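For illustration, a textbook-style Transaction Logic rule (not an example from the thesis) uses the serial conjunction \otimes to sequence actions that must succeed as a whole:

transfer(Amt, From, To) \leftarrow withdraw(Amt, From) \otimes deposit(Amt, To),

meaning the transfer succeeds only along executions where the withdrawal is followed by the deposit; if no such execution exists, the transaction fails as a unit.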
Abstract:
The sequence of pitches which form a musical melody can be transposed or inverted. Since the 1970s, music theorists have modeled musical transposition and inversion in terms of an action of the dihedral group of order 24. More recently, music theorists have found an intriguing second way that the dihedral group of order 24 acts on the set of major and minor triads. We illustrate both geometrically and algebraically how these two actions are dual. Both actions and their duality have been used to analyze works of music as diverse as those of Hindemith and the Beatles.
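Concretely, identifying pitch classes with Z_12, the first action is generated by transposition and inversion, T_n(x) = x + n (mod 12) and I_n(x) = -x + n (mod 12); for instance, I_0 sends the C-major triad {0, 4, 7} to {0, 8, 5}, the F-minor triad. The second action referred to in the abstract is usually presented through the neo-Riemannian P, L and R operations on the same set of 24 major and minor triads (standard background, added here for orientation).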
Abstract:
The international Functional Annotation Of the Mammalian Genomes 4 (FANTOM4) research collaboration set out to better understand the transcriptional network that regulates macrophage differentiation and to uncover novel components of the transcriptome, employing a series of high-throughput experiments. The primary and unique technique is cap analysis of gene expression (CAGE), sequencing mRNA 5'-ends with a second-generation sequencer to quantify promoter activities even in the absence of gene annotation. Additional genome-wide experiments complement the setup, including short RNA sequencing, microarray gene expression profiling on large-scale perturbation experiments, and ChIP-chip for epigenetic marks and transcription factors. All the experiments are performed in a differentiation time course of the THP-1 human leukemic cell line. Furthermore, we performed a large-scale mammalian two-hybrid (M2H) assay between transcription factors and monitored their expression profiles across human and mouse tissues with qRT-PCR to address combinatorial effects of regulation by transcription factors. These interdependent data have been analyzed individually and in combination with each other and are published in related but distinct papers. We provide all data together with systematic annotation in an integrated view as a resource for the scientific community (http://fantom.gsc.riken.jp/4/). Additionally, we assembled a rich set of derived analysis results, including published predicted and validated regulatory interactions. Here we introduce the resource and its update after the initial release.
Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
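For orientation, under the additive, weak-selection assumptions of the opening result the direction of allele frequency change is given by the familiar condition -c + r b > 0 (the standard statement of Hamilton's rule, given here for reference), where c and b are the phenotypic cost and benefit and r is the genealogical relatedness.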
Abstract:
In earlier work, the present authors have shown that hardness profiles are less dependent on the level of calculation than energy profiles for potential energy surfaces (PESs) having pathological behaviors. At variance with energy profiles, hardness profiles always show the correct number of stationary points. This characteristic has been used to indicate the existence of spurious stationary points on the PESs. In the present work, we apply this methodology to the hydrogen fluoride dimer, a classically difficult case for density functional theory methods.
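For reference, the hardness underlying these profiles is the standard conceptual-DFT quantity (one common convention, not restated in the abstract),

\eta = \frac{1}{2} \left( \frac{\partial^2 E}{\partial N^2} \right)_{v(\mathbf{r})} \approx \frac{I - A}{2},

evaluated point by point along the same coordinate as the energy profile, with I and A the vertical ionization potential and electron affinity.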
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances b_x,y,z, the VOI lengths L_x,y,z, the number of VOIs N_VOI and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm^3 VOI associated with DFOV = 200 mm (L_x,y,z = 64, b_x,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This shows that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
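The 3D NPS estimate behind these measurements typically takes the form (generic definition; the exact normalisation used in the study may differ)

NPS(f_x, f_y, f_z) = \frac{b_x b_y b_z}{L_x L_y L_z} \, \frac{1}{N_{VOI}} \sum_{i=1}^{N_{VOI}} \left| \mathrm{DFT}_{3D}\!\left[ I_i(x, y, z) - \bar{I}_i(x, y, z) \right] \right|^2,

where the b are the sampling distances, the L the VOI lengths in voxels, and \bar{I}_i the (here first-order) fitted trend removed from each VOI.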
Abstract:
Background: The design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results: Based on the GMA canonical representation, we have developed in previous work a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization can be performed on the recast GMA model. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models, which extend the power-law formalism to deal with saturation and cooperativity. Conclusions: Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task.
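In GMA form every flux is a product of power laws, \dot{X}_i = \sum_j \gamma_{ij} \prod_k X_k^{f_{ijk}} (generic notation, not taken from the paper). The recasting step can be illustrated on a saturable rate v = V X / (K + X): introducing the auxiliary variable W = (K + X)^{-1} gives v = V X W together with \dot{W} = -W^2 \dot{X}, and both expressions are again products of power laws, so the recast system is an exact GMA representation of the original kinetics.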
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
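A classic example of a proper pivotal function in this setting (the Anderson-Rubin statistic, given here for illustration) is

AR(\beta_0) = \frac{(y - Y\beta_0)' P_Z (y - Y\beta_0) / k}{(y - Y\beta_0)' M_Z (y - Y\beta_0) / (n - k)},

whose null distribution (exactly F(k, n - k) under Gaussian errors, and asymptotically chi-square based more generally) does not depend on the strength of the instruments Z, so tests and projection-based confidence sets built from it remain valid even under weak identification.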