919 results for Time Trade Off


Relevance:

90.00%

Publisher:

Abstract:

This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxation. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not on taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a very clear trade-off between the optimal inflation rate and its volatility on the one hand and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and its volatility, and the lower the optimal income tax rate and its volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility increase remarkably, to more than 58% and 10%, respectively, while the optimal income tax rate and its volatility decline dramatically. These results matter because in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central goal of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent less volatility, consistent with the literature. In the second article, I show that the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule; this is especially the case for the rule that responds to labor market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that when workers' bargaining power is raised to the Hosios level or above, the welfare costs of the three monetary rules decline significantly, and responding to output or to labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule is not optimal in a monetary model with a cash-in-advance constraint on firms when the government finances its spending with distortionary consumption taxes. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and effective labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, and unlike the cash-credit goods model in which the prices of the two goods are the same, the Friedman rule is optimal even when the wage rates differ. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
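
For readers outside monetary economics, the Friedman rule referred to in the third article is the standard prescription of a zero net nominal interest rate; in generic textbook notation (not the thesis's specific model):

\[
  i_t = 0 \quad\Longleftrightarrow\quad R_t = 1 + i_t = 1,
  \qquad
  1 + i_t = (1 + r_t)\,(1 + \pi_{t+1}) \;\Rightarrow\; \pi_{t+1} = \frac{1}{1 + r_t} - 1 \approx -r_t ,
\]

i.e., prices fall at roughly the real interest rate, driving the opportunity cost of holding money to zero.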

Relevance:

90.00%

Publisher:

Abstract:

Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper demonstrates that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to an 89% reduction in reconfiguration overhead with respect to other state-of-the-art techniques for reconfigurable platforms.
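
Purely to illustrate the kind of information a TCDFD-style task node has to carry at the same time (timing, resources, data dependencies, communication volumes, and pipeline coexistence), a hypothetical Python container could look as follows; the field names are ours and are not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class TcdfdTask:
    """Hypothetical container for one hardware-accelerated task in a TCDFD-like model."""
    name: str
    exec_time: float                                    # worst-case execution time on its accelerator
    area: int                                           # reconfigurable resources (e.g., slices) required
    predecessors: dict = field(default_factory=dict)    # predecessor name -> data volume exchanged (bytes)
    pipelined_with: set = field(default_factory=set)    # tasks that may coexist in pipeline with this one

# Example: two tasks that exchange data and can overlap in a pipeline
decode = TcdfdTask("decode", exec_time=1.2, area=300)
filter_ = TcdfdTask("filter", exec_time=0.8, area=450,
                    predecessors={"decode": 64_000},    # bytes streamed from decode
                    pipelined_with={"decode"})
```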

Relevance:

90.00%

Publisher:

Abstract:

New materials for OLED applications with low singlet–triplet energy splitting have recently been synthesized in order to allow the conversion of triplet into singlet excitons (which emit light) via a Thermally Activated Delayed Fluorescence (TADF) process, which involves excited states with a non-negligible amount of Charge-Transfer (CT) character. The accurate modeling of these states with Time-Dependent Density Functional Theory (TD-DFT), the most widely used method so far because of its favorable trade-off between accuracy and computational cost, is however particularly challenging. We carefully address this issue here by considering materials with a small (large) singlet–triplet gap acting as the emitter (host) in OLEDs and by comparing the accuracy of TD-DFT and the corresponding Tamm-Dancoff Approximation (TDA), which is found to greatly reduce error bars with respect to experiments thanks to better estimates of the lowest singlet–triplet transition. Finally, we quantitatively correlate the singlet–triplet splitting values with the extent of CT, using a simple metric extracted from calculations with double-hybrid functionals, which might be applied in further molecular engineering studies.
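
As background, in the simplest one-electron-pair picture the splitting is twice the exchange integral between the orbitals dominating the transition, which is why pronounced CT character (small spatial overlap) produces a small gap; this is the standard textbook relation, not a result of the paper:

\[
  \Delta E_{\mathrm{ST}} = E(S_1) - E(T_1) \approx 2\,K_{HL},
  \qquad
  K_{HL} = \iint \phi_H(\mathbf{r}_1)\,\phi_L(\mathbf{r}_1)\,
           \frac{1}{r_{12}}\,
           \phi_H(\mathbf{r}_2)\,\phi_L(\mathbf{r}_2)\,
           \mathrm{d}\mathbf{r}_1\,\mathrm{d}\mathbf{r}_2 ,
\]

where \(\phi_H\) and \(\phi_L\) are the (assumed real) orbitals dominating the transition, typically the HOMO and the LUMO; spatially separating them shrinks \(K_{HL}\) and hence the gap.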

Relevance:

90.00%

Publisher:

Abstract:

Random Walk with Restart (RWR) is an appealing measure of proximity between nodes based on graph structure. Since real graphs are often large and subject to minor changes, it is prohibitively expensive to recompute proximities from scratch. Previous methods use LU decomposition and degree reordering heuristics, entailing O(|V|^3) time and O(|V|^2) memory to compute all |V|^2 pairs of node proximities in a static graph. In this paper, a dynamic scheme for assessing RWR proximities is proposed: (1) For a unit update, we characterize the changes to all-pairs proximities as the outer product of two vectors. We notice that the multiplication of an RWR matrix and its transition matrix, unlike general matrix multiplication, is commutative. This greatly reduces the computation of all-pairs proximities from O(|V|^3) to O(|delta|) time for each update without loss of accuracy, where |delta| (<< |V|^2) is the number of affected proximities. (2) To avoid O(|V|^2) memory for all pairs of outputs, we also devise efficient partitioning techniques for our dynamic model, which compute all pairs of proximities segment-wise within O(l|V|) memory and O(|V|/l) I/O costs, where 1 <= l <= |V| is a user-controlled trade-off between memory and I/O costs. (3) For bulk updates, we also devise aggregation and hashing methods, which can further discard unnecessary updates and handle chunks of unit updates simultaneously. Our experimental results on various datasets demonstrate that our methods can be 1–2 orders of magnitude faster than competing methods while preserving scalability and exactness.
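
As a minimal, self-contained illustration of how a unit update reduces to an outer-product correction, the NumPy sketch below applies the classical Sherman-Morrison identity to a dense RWR proximity matrix; the dense representation and the function names are ours, and the sketch does not reproduce the paper's partitioning, I/O, or bulk-update machinery.

```python
import numpy as np

def rwr_proximity(P, c=0.15):
    """Dense all-pairs RWR proximity matrix Pi = c * (I - (1-c) P)^(-1),
    with P a column-stochastic transition matrix and c the restart probability."""
    n = P.shape[0]
    return c * np.linalg.inv(np.eye(n) - (1 - c) * P)

def unit_update(Pi, dP_col, j, c=0.15):
    """Rank-one (Sherman-Morrison) refresh of Pi after a unit update that changes
    only column j of the transition matrix by dP_col (new column minus old column)."""
    A_inv = Pi / c                                   # old (I - (1-c) P)^(-1)
    u = (1 - c) * (A_inv @ dP_col)                   # left vector of the outer product
    v = A_inv[j, :]                                  # right vector (row j of the old inverse)
    denom = 1.0 - (1 - c) * (A_inv[j, :] @ dP_col)
    return Pi + c * np.outer(u, v) / denom

# Quick check on a hypothetical tiny graph:
#   P_new = P.copy(); P_new[:, j] += dP_col
#   np.allclose(rwr_proximity(P_new), unit_update(rwr_proximity(P), dP_col, j))  -> True
```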

Relevance:

90.00%

Publisher:

Abstract:

This article explores Ulrich Beck’s theorisation of risk society by focusing on the way in which the risk of Bt cotton is legitimated by six cultivators in Bantala, a village in Warangal, Andhra Pradesh, in India. The fieldwork for this study was conducted between June 2010 and March 2011, a duration chosen to coincide with a cotton season. The study explores the experience of the cultivators using the ‘categories of legitimation’ defined by Van Leeuwen: authorisation, moral evaluation, rationalisation and mythopoesis. As well as permitting an exploration of the legitimation of Bt cotton by cultivators themselves within the high-risk context of the Indian agrarian crisis, the categories also serve as an analytical framework with which to structure a discourse analysis of participant perspectives. The study examines the complex trade-off that Renn argues is entailed in legitimating ambiguous risk, such as that associated with Bt technology. The research explores the way in which legitimation of the technology is informed by wider normative conceptualisations of development. This highlights that, in a context where indebtedness is strongly linked to farmer suicides, the potential of Bt cotton for poverty alleviation is traded against the uncertainty associated with the technology’s risks, which include its purported links to animal deaths. The study highlights the way in which the wider legitimation of a neoliberal approach to development in Andhra Pradesh serves to reinforce the choice of Bt cotton and results in a depoliticisation of risk in Bantala. The research indicates, however, that this trade-off is subject to change over time, as economic benefits wane and risks accumulate. It also highlights the need for caution in relation to the proposed extension of Bt technology to food crops, such as Bt brinjal (aubergine).

Relevance:

90.00%

Publisher:

Abstract:

In this thesis, a thorough investigation of acoustic noise control systems for realistic automotive scenarios is presented. The thesis is organized in two parts dealing with the two main topics treated: Active Noise Control (ANC) systems and the Virtual Microphone Technique (VMT), respectively. ANC technology increases the driver's and passengers' comfort and safety by exploiting the principle of mitigating a disturbing acoustic noise through the superposition of a secondary sound wave of equal amplitude but opposite phase. Performance analyses of both FeedForward (FF) and FeedBack (FB) ANC systems in experimental scenarios are presented. Since environmental vibration noises within a car cabin are time-varying, most ANC solutions are adaptive. In this work, however, an effective fixed FB ANC system is proposed. Various ANC schemes are considered and compared with each other. In order to find the ANC configuration that optimizes the performance in terms of disturbing-noise attenuation, a thorough study of Key Performance Indicators (KPIs), system parameters and experimental setup design is carried out. In the second part of this thesis, VMT, based on the estimation of specific acoustic channels, is investigated with the aim of generating a quiet acoustic zone around a confined area, e.g., the driver's ears. A performance analysis and comparison of various estimation approaches is presented. Several measurement campaigns were performed in order to acquire microphone signals of sufficient duration and number in a significant variety of driving scenarios and cars. To do this, different experimental setups were designed and their performance compared. Design guidelines are given to obtain a good trade-off between accuracy and equipment costs. Finally, a preliminary analysis of an innovative approach based on Neural Networks (NNs) to improve the current state of the art in microphone virtualization is proposed.
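
To make the feedforward ANC principle concrete, here is a textbook single-channel FxLMS loop; it is an illustrative sketch only (the thesis also studies feedback and fixed, non-adaptive schemes), and all names, filter lengths, and step sizes are assumptions of ours.

```python
import numpy as np

def fxlms(x, d, s_hat, L=64, mu=5e-4):
    """Textbook single-channel feedforward FxLMS sketch (illustrative only).

    x     : reference noise signal (e.g., from a reference microphone or accelerometer)
    d     : disturbance observed at the error microphone
    s_hat : secondary-path impulse response estimate (loudspeaker -> error mic);
            for brevity it is also used here to simulate the physical path
    """
    w = np.zeros(L)                            # adaptive control filter
    x_buf = np.zeros(L)                        # reference delay line
    xf_buf = np.zeros(L)                       # filtered-reference delay line
    y_buf = np.zeros(len(s_hat))               # anti-noise history for the secondary path
    xf = np.convolve(x, s_hat)[: len(x)]       # reference filtered by the path estimate
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                          # anti-noise sample
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] + s_hat @ y_buf            # residual after destructive interference
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[n]
        w -= mu * e[n] * xf_buf                # LMS step on the filtered reference
    return e, w
```

In practice the secondary path would be identified off-line and only its estimate would be available; assuming it to be exact here simply keeps the sketch short.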

Relevance:

90.00%

Publisher:

Abstract:

Sandy coasts represent vital areas whose preservation and maintenance also involve economic and tourist interests. Moreover, these dynamic environments undergo erosion to different degrees depending on their specific characteristics. For this reason, defence interventions are commonly realized by combining engineering solutions with management policies that evaluate their effects over time. Monitoring activities represent the fundamental instrument for obtaining deep knowledge of the investigated phenomenon. Thanks to technological development, several possibilities are available in terms of both geomatic surveying techniques and processing tools, making it possible to reach high performance and accuracy. Nevertheless, when the littoral under study includes both the emerged and the submerged beach, several issues have to be considered. Therefore, the geomatic surveys and all the following steps need to be calibrated for the individual application, with the reference system, accuracy and spatial resolution as primary aspects. This study provides an evaluation of the available geomatic techniques, processing approaches, and derived products, aiming at optimising the entire coastal-monitoring workflow by adopting an accuracy-efficiency trade-off. The presented analyses highlight the balance point at which an increase in performance becomes an added value for the obtained products while ensuring proper data management. This perspective can represent a helpful instrument for properly planning monitoring activities according to the specific purposes of the analysis. Finally, the primary uses of the acquired and processed data in monitoring contexts are presented, also considering possible applications of numerical modelling as a supporting tool. Moreover, the theme of coastal monitoring has been addressed throughout this thesis from a practical point of view, linking it to the activities performed by Arpae (Regional agency for prevention, environment and energy of Emilia-Romagna). Indeed, the Adriatic coast of Emilia-Romagna, where sandy beaches particularly exposed to erosion are present, has been chosen as the case study for all the analyses and considerations.

Relevance:

90.00%

Publisher:

Abstract:

Laser-based Powder Bed Fusion (L-PBF) technology is one of the most commonly used metal Additive Manufacturing (AM) techniques for producing highly customized and value-added parts. The AlSi10Mg alloy has received particular attention in the L-PBF process due to its good printability, high strength-to-weight ratio, corrosion resistance, and relatively low cost. However, a deep understanding of the effect of heat treatments on this alloy's metastable microstructure is still required to develop tailored heat treatments for the L-PBF AlSi10Mg alloy and overcome the limits of the as-built condition. Several authors have already investigated the effects of conventional heat treatment on the microstructure and mechanical behavior of the L-PBF AlSi10Mg alloy, but they often overlooked the peculiarities of the starting supersaturated and ultrafine microstructure induced by rapid solidification. For this reason, the effects of an innovative T6 heat treatment (T6R) on the microstructure and mechanical behavior of the L-PBF AlSi10Mg alloy were assessed. The short solution soaking time (10 min) and the relatively low temperature (510 °C) reduced the typical porosity growth at high temperature and led to a homogeneous distribution of fine globular Si particles in the Al matrix. In addition, it increased the amount of Mg and Si in solid solution available for precipitation hardening during the aging step. The mechanical (at room temperature and 200 °C) and tribological properties of the T6R alloy were evaluated and compared with other solutions, especially with an optimized direct-aged alloy (T5). Results showed that the innovative T6R alloy exhibits the best trade-off between strength and ductility, the highest fatigue strength among the analyzed conditions, and interesting tribological behavior. Furthermore, the high-temperature mechanical performance of the heat-treated L-PBF AlSi10Mg alloy makes it suitable for structural components operating in mild service conditions at 200 °C.

Relevance:

90.00%

Publisher:

Abstract:

Ill-conditioned inverse problems frequently arise in life sciences, particularly in the context of image deblurring and medical image reconstruction. These problems have been addressed through iterative variational algorithms, which regularize the reconstruction by adding prior knowledge about the problem's solution. Despite the theoretical reliability of these methods, their practical utility is constrained by the time required to converge. Recently, the advent of neural networks allowed the development of reconstruction algorithms that can compute highly accurate solutions with minimal time demands. Regrettably, it is well-known that neural networks are sensitive to unexpected noise, and the quality of their reconstructions quickly deteriorates when the input is slightly perturbed. Modern efforts to address this challenge have led to the creation of massive neural network architectures, but this approach is unsustainable from both ecological and economic standpoints. The recently introduced GreenAI paradigm argues that developing sustainable neural network models is essential for practical applications. In this thesis, we aim to bridge the gap between theory and practice by introducing a novel framework that combines the reliability of model-based iterative algorithms with the speed and accuracy of end-to-end neural networks. Additionally, we demonstrate that our framework yields results comparable to state-of-the-art methods while using relatively small, sustainable models. In the first part of this thesis, we discuss the proposed framework from a theoretical perspective. We provide an extension of classical regularization theory, applicable in scenarios where neural networks are employed to solve inverse problems, and we show there exists a trade-off between accuracy and stability. Furthermore, we demonstrate the effectiveness of our methods in common life science-related scenarios. In the second part of the thesis, we initiate an exploration extending the proposed method into the probabilistic domain. We analyze some properties of deep generative models, revealing their potential applicability in addressing ill-posed inverse problems.
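
For concreteness, the iterative variational algorithms mentioned above typically minimize a penalized least-squares functional of the generic form below (a textbook statement, not the thesis's specific formulation):

\[
  x^{\ast} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\lVert A x - y \rVert_2^{2} \;+\; \lambda\, R(x),
\]

where \(A\) is the forward (e.g., blurring or acquisition) operator, \(y\) the measured data, \(R\) the regularizer encoding prior knowledge about the solution, and \(\lambda > 0\) the parameter balancing data fidelity against the prior.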

Relevance:

90.00%

Publisher:

Abstract:

This thesis project studies the agent identity privacy problem in the scalar linear quadratic Gaussian (LQG) control system. For this problem, privacy models and privacy measures have to be established first, and they must account for two characteristics: the agent identity is a binary hypothesis (Agent A or Agent B), and the inference relies on a trajectory of correlated data rather than a single observation. I propose privacy models and corresponding privacy measures that take these two characteristics into account. An eavesdropper is assumed to perform hypothesis testing on the agent identity based on the intercepted environment state sequence. The privacy risk is measured by the Kullback-Leibler divergence between the probability distributions of the state sequences under the two hypotheses. By taking into account both the accumulated control reward and the privacy risk, an optimization problem for the policy of Agent B is formulated. The optimal deterministic privacy-preserving LQG policy of Agent B is a linear mapping, and a sufficient condition is given to guarantee that this policy is time-invariant in the asymptotic regime. Adding an independent Gaussian random variable cannot improve the performance of Agent B. Numerical experiments corroborate the theoretical results and illustrate the reward-privacy trade-off. In summary, based on the privacy model and the LQG control model, I have formulated the mathematical problems for the agent identity privacy problem in LQG control, addressing the two design objectives of maximizing the control reward and minimizing the privacy risk. I have conducted a theoretical analysis of the LQG control policy in the agent identity privacy problem and of the trade-off between the control reward and the privacy risk, and the theoretical results are justified by numerical experiments, whose observations and insights are discussed in the last chapter.
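
For reference, the Kullback-Leibler privacy risk between the trajectory distributions under the two hypotheses is the standard quantity below; the scalarized reward-privacy objective that follows is only one plausible way of writing the trade-off and is not claimed to be the thesis's exact formulation:

\[
  D_{\mathrm{KL}}\!\left(p_A \,\middle\|\, p_B\right)
  \;=\; \int p_A\!\left(x_{0:T}\right)\,
        \log\frac{p_A\!\left(x_{0:T}\right)}{p_B\!\left(x_{0:T}\right)}\,
        \mathrm{d}x_{0:T},
  \qquad\qquad
  \max_{\pi_B}\;\; \mathbb{E}_{\pi_B}\!\left[\sum_{t=0}^{T} r_t\right] \;-\; \lambda\, D_{\mathrm{KL}},
  \quad \lambda > 0,
\]

where \(p_A\) and \(p_B\) denote the densities of the intercepted state trajectory \(x_{0:T}\) under the two identities and \(\lambda\) weights privacy against control reward.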

Relevance:

90.00%

Publisher:

Abstract:

Miniaturized flying robotic platforms, called nano-drones, have the potential to revolutionize the autonomous robot industry thanks to their very small form factor. The nano-drones’ limited payload only allows for a sub-100 mW microcontroller unit for the on-board computations. Therefore, traditional computer vision and control algorithms are too computationally expensive to be executed on board these palm-sized robots, and we are forced to rely on artificial intelligence, trading off accuracy in favor of lightweight pipelines for autonomous tasks. However, relying on deep learning exposes us to the problem of generalization, since the deployment scenario of a convolutional neural network (CNN) often presents different visual cues and different features from those learned during training, leading to poor inference performance. Our objective is to develop and deploy an adaptation algorithm, based on the concept of latent replays, that allows us to fine-tune a CNN to work in new and diverse deployment scenarios. To do so, we start from an existing model for visual human pose estimation, called PULP-Frontnet, which identifies the pose of a human subject in space through its 4 output variables, and we present the design of our novel adaptation algorithm, which features automatic data gathering and labeling and on-device deployment. We then showcase the ability of our algorithm to adapt PULP-Frontnet to new deployment scenarios, improving the R2 scores of the four network outputs, with respect to an unknown environment, from approximately [−0.2, 0.4, 0.0, −0.7] to [0.25, 0.45, 0.2, 0.1]. Finally, we demonstrate how it is possible to fine-tune our neural network in real time (i.e., under 76 seconds), using the target parallel ultra-low-power GAP 8 System-on-Chip on board the nano-drone, and we show how all adaptation operations can take place using less than 2 mWh of energy, a small fraction of the available battery capacity.
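
A generic sketch of the latent-replay idea, in PyTorch, is given below: the frozen early layers produce latent activations, and only the later layers are fine-tuned on a mix of freshly gathered latents and stored replays. The split point, loss, and all names are hypothetical and do not come from the PULP-Frontnet deployment code.

```python
import torch
from torch import nn

def adapt_with_latent_replays(frontend: nn.Module, head: nn.Module,
                              new_loader, replay_latents, replay_targets,
                              epochs: int = 1, lr: float = 1e-3):
    """Latent-replay fine-tuning sketch: frozen frontend, trainable head."""
    frontend.eval()
    for p in frontend.parameters():
        p.requires_grad_(False)                # early layers stay fixed
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                     # pose regression -> MSE on the output variables
    for _ in range(epochs):
        for x_new, y_new in new_loader:
            with torch.no_grad():
                z_new = frontend(x_new)        # latents of freshly gathered, auto-labeled samples
            # mix a random batch of stored replay latents with the new ones
            idx = torch.randint(0, replay_latents.shape[0], (x_new.shape[0],))
            z = torch.cat([z_new, replay_latents[idx]])
            y = torch.cat([y_new, replay_targets[idx]])
            loss = loss_fn(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

Keeping the frontend frozen is what makes the scheme cheap enough for on-device adaptation: only the small head is updated, and the replays are stored as compact latents rather than raw images.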

Relevance:

80.00%

Publisher:

Abstract:

Background: Sigma factors and the alarmone ppGpp control the allocation of RNA polymerase to promoters under stressful conditions. Both ppGpp and the sigma factor sigma(S) (RpoS) are potentially subject to variability across the species Escherichia coli. To find out the extent of strain variation, we measured the levels of RpoS and ppGpp in 31 E. coli strains from the ECOR collection and one reference K-12 strain. Results: Nine ECOR strains had highly deleterious mutations in rpoS, 12 had RpoS protein levels up to 7-fold above that of the reference strain MG1655, and the remainder had comparable or lower levels. Strain variation was also evident in ppGpp accumulation under carbon starvation, and spoT mutations were present in several low-ppGpp strains. Three relationships between RpoS and ppGpp levels were found: isolates with zero RpoS but various ppGpp levels, strains in which RpoS levels were proportional to ppGpp, and a third, unexpected class in which RpoS was present but not proportional to ppGpp concentration. High-RpoS and high-ppGpp strains accumulated rpoS mutations under nutrient limitation, providing a source of polymorphisms. Conclusions: The variance in ppGpp and sigma(S) means that the expression of genes involved in translation, stress and other traits affected by ppGpp and/or RpoS is likely to be strain-specific, and suggests that influential components of regulatory networks are frequently reset by microevolution. Different strains of E. coli have different relationships between ppGpp and RpoS levels, and only some exhibit the proportionality between increasing ppGpp and RpoS levels demonstrated for E. coli K-12.

Relevance:

80.00%

Publisher:

Abstract:

We propose an alternative fidelity measure (namely, a measure of the degree of similarity) between quantum states and benchmark it against a number of properties of the standard Uhlmann-Jozsa fidelity. This measure is a simple function of the linear entropy and the Hilbert-Schmidt inner product between the given states and is thus, in comparison, not as computationally demanding. It also features several remarkable properties such as being jointly concave and satisfying all of Jozsa's axioms. The trade-off, however, is that it is supermultiplicative and does not behave monotonically under quantum operations. In addition, metrics for the space of density matrices are identified and the joint concavity of the Uhlmann-Jozsa fidelity for qubit states is established.
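
For reference, the standard Uhlmann-Jozsa fidelity and the two ingredients mentioned in the abstract (the Hilbert-Schmidt inner product and the linear entropy) are recalled below; the last expression is the so-called superfidelity, a measure built from exactly these ingredients, shown here only as an illustration of the form such a measure can take (the paper's own definition should be consulted for the precise expression):

\[
  F(\rho,\sigma) = \left(\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\right)^{2},
  \qquad
  \langle\rho,\sigma\rangle_{\mathrm{HS}} = \operatorname{Tr}(\rho\sigma),
  \qquad
  S_L(\rho) = 1 - \operatorname{Tr}\rho^{2},
\]
\[
  F_N(\rho,\sigma) = \operatorname{Tr}(\rho\sigma)
  + \sqrt{\bigl(1-\operatorname{Tr}\rho^{2}\bigr)\bigl(1-\operatorname{Tr}\sigma^{2}\bigr)} .
\]

Unlike \(F\), a measure of the \(F_N\) type requires no matrix square roots, which is why it is far less computationally demanding.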

Relevance:

80.00%

Publisher:

Abstract:

In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix that would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
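
The computational benefit of a separable (Kronecker-structured) array can be seen in the generic vec-trick below, the identity this family of fast transforms exploits; it is a standalone NumPy check with made-up dimensions, not the KAT implementation itself.

```python
import numpy as np

# Kronecker identity at the core of fast separable-array transforms:
#   (A kron B) vec(X) = vec(B X A^T), with vec = column-major stacking.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((5, 3))
X = rng.standard_normal((3, 4))

slow = np.kron(A, B) @ X.flatten(order="F")   # forms the large Kronecker product explicitly
fast = (B @ X @ A.T).flatten(order="F")       # two small matrix products instead
assert np.allclose(slow, fast)
```

Because both factors are small, the right-hand side costs two modest matrix products instead of one large dense one, which is the essence of the speed-up for separable geometries.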

Relevance:

80.00%

Publisher:

Abstract:

In this work, a wide-ranging analysis of local search multiuser detection (LS-MUD) for direct-sequence code-division multiple access (DS/CDMA) systems under multipath channels is carried out considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, the near-far effect, the number of Rake receiver fingers and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) versus complexity trade-off is carried out among LS, genetic algorithm (GA) and particle swarm optimization (PSO) detectors. Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for MUD implementation. The computational complexity is expressed in terms of the number of operations required to converge. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independently of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. Together with this, the deterministic strategy and the absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem.
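
As a concrete picture of what a 1-opt local search detector does, the sketch below maximizes the standard CDMA log-likelihood metric by flipping one bit at a time; it is an illustrative reconstruction with names of our own, not the authors' LS-MUD code, and the simplified (s-LS) variants mentioned above presumably replace the full metric recomputation per trial flip with an incremental update.

```python
import numpy as np

def ls_mud(y, R, b0, max_iter=50):
    """1-opt local search multiuser detector sketch (illustrative).

    y  : matched-filter output vector (one entry per user)
    R  : K x K signature cross-correlation matrix
    b0 : initial hard decisions in {-1, +1}, e.g. np.sign(y)

    Maximizes the standard log-likelihood metric 2 b^T y - b^T R b
    by flipping one bit at a time until no flip improves it."""
    b = b0.astype(float).copy()
    cost = 2 * b @ y - b @ R @ b
    for _ in range(max_iter):
        improved = False
        for k in range(len(b)):
            b[k] = -b[k]                       # trial flip of user k's bit
            new_cost = 2 * b @ y - b @ R @ b
            if new_cost > cost:
                cost, improved = new_cost, True
            else:
                b[k] = -b[k]                   # revert the flip
        if not improved:
            break                              # local optimum reached
    return b
```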