975 results for UNIFIED APPROACH
Abstract:
Regional implementation of Integrated Pest Management (IPM) in Bundaberg production horticulture provides a unified approach to pest and disease control on an area-wide basis. The aim is to reduce chemical dependency, increase sustainability and profitability, and enhance biodiversity through detailed investigation, technology application, coordination, monitoring, communication and education.
Abstract:
This paper deals with the interpretation of the discrete-time optimal control problem as a scattering process in a discrete medium. We treat the discrete optimal linear regulator, constrained end-point and servo and tracking problems, providing a unified approach to these problems. This approach results in an easy derivation of the desired results as well as several new ones.
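For readers unfamiliar with the first of these problems, a standard statement of the discrete optimal linear regulator (notation ours, not taken from the abstract) is

    \min_{u_0,\dots,u_{N-1}} \; x_N^\top Q_N x_N + \sum_{k=0}^{N-1}\bigl(x_k^\top Q x_k + u_k^\top R u_k\bigr)
    \quad \text{subject to} \quad x_{k+1} = A x_k + B u_k,

which is solved by the backward Riccati recursion

    P_N = Q_N, \qquad
    P_k = Q + A^\top P_{k+1} A - A^\top P_{k+1} B \,(R + B^\top P_{k+1} B)^{-1} B^\top P_{k+1} A,

with optimal feedback u_k = -(R + B^\top P_{k+1} B)^{-1} B^\top P_{k+1} A \, x_k. The paper's contribution is to reinterpret recursions of this kind as a scattering process in a discrete medium; that reformulation is not reproduced here.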
Abstract:
Quantum cell models for delocalized electrons provide a unified approach to the large NLO responses of conjugated polymers and pi-pi* spectra of conjugated molecules. We discuss exact NLO coefficients of infinite chains with noninteracting pi-electrons and finite chains with molecular Coulomb interactions V(R) in order to compare exact and self-consistent-field results, to follow the evolution from molecular to polymeric responses, and to model vibronic contributions in third-harmonic-generation spectra. We relate polymer fluorescence to the alternation delta of transfer integrals t(1+/-delta) along the chain and discuss correlated excited states and energy thresholds of conjugated polymers.
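As a concrete illustration of the alternation parameter delta (a standard dimerized tight-binding form in our notation, not reproduced from the paper), the transfer integrals t(1 +/- delta) enter a chain Hamiltonian through a term such as

    H_0 = -\sum_{n,\sigma} t\,[1 + (-1)^n \delta]\,\bigl(c^\dagger_{n,\sigma} c_{n+1,\sigma} + \mathrm{h.c.}\bigr),

to which the molecular Coulomb interactions V(R) are added in the correlated models. For the noninteracting infinite chain this alternation opens a band gap of 2|t(1+\delta) - t(1-\delta)| = 4t\delta, the kind of energy threshold to which the polymer fluorescence discussed above is related.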
Abstract:
Uncertainties in complex dynamic systems play an important role in the prediction of a dynamic response in the mid- and high-frequency ranges. For distributed parameter systems, parametric uncertainties can be represented by random fields, leading to stochastic partial differential equations. Over the past two decades, the spectral stochastic finite-element method has been developed to discretize the random fields and solve such problems. On the other hand, for deterministic distributed parameter linear dynamic systems, the spectral finite-element method has been developed to efficiently solve the problem in the frequency domain. Although both approaches use spectral decomposition (one for the random fields and the other for the dynamic displacement fields), very little overlap between them has been reported in the literature. In this paper, the two spectral techniques are unified, with the aim that the unified approach outperform either spectral method on its own. For an exponential autocorrelation function of the random fields, frequency-dependent stochastic element stiffness and mass matrices are derived for the axial and bending vibration of rods. Closed-form exact expressions are derived by using the Karhunen-Loève expansion. Numerical examples are given to illustrate the unified spectral approach.
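As a brief reminder of the discretization step referred to above (standard Karhunen-Loève notation, ours rather than the paper's), a random field a(x, θ) with mean a_0(x) and covariance C(x_1, x_2) is expanded as

    a(x,\theta) = a_0(x) + \sum_{j=1}^{\infty} \sqrt{\lambda_j}\, \xi_j(\theta)\, \varphi_j(x),

where the pairs (\lambda_j, \varphi_j) solve the Fredholm eigenvalue problem

    \int_{D} C(x_1, x_2)\, \varphi_j(x_1)\, \mathrm{d}x_1 = \lambda_j\, \varphi_j(x_2),

and the \xi_j are uncorrelated random variables. For the exponential autocorrelation C(x_1, x_2) = \sigma^2 e^{-|x_1 - x_2|/b} this eigenvalue problem has a known analytical solution, which is what makes the closed-form element matrices mentioned above tractable.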
Abstract:
Hydrogenated amorphous silicon (a-Si:H) thin films have been deposited from silane using a novel photo-enhanced decomposition technique. The system comprises a hydrogen discharge lamp contained within the reaction vessel; this unified approach allows high energy photon excitation of the silane molecules without absorption by window materials or the need for mercury sensitisation. The film growth rates (exceeding 4 Angstrom/s) and material properties obtained are comparable to those of films produced by plasma-enhanced CVD techniques. The reduction of energetic charged particles in the film growth region should enable the fabrication of cleaner semiconductor/insulator interfaces in thin-film transistors.
Abstract:
Overlay networks have been used to add and enhance functionality for end-users without requiring modifications to the Internet's core mechanisms. They support a variety of popular applications, including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor-selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks: overlay users may act selfishly and deviate from the default wiring protocols, exploiting knowledge they have about the network to select neighbors that improve the performance they receive from the overlay.

This thesis goes against the conventional assumption that overlay users conform to a specific protocol. Its contributions are threefold: it provides a systematic evaluation of the design space of selfish neighbor-selection strategies in real overlays, evaluates the performance of overlay networks whose users select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively.

The thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays dominated by selfish users, the resulting stable wirings are optimized to such an extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies.

To capitalize on the performance advantages of optimal neighbor-selection strategies and the emergent global wirings that result, the thesis presents EGOIST, an SNS-inspired overlay routing system for network creation and maintenance. An extensive measurement study on the deployed prototype shows that EGOIST's neighbor-selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead.

The thesis also studies selfish neighbor-selection strategies for swarming applications. The main focus is on n-way broadcast applications in which each of n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. The results demonstrate that the performance of the proposed swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to its performance on top of existing overlays.

In the context of service provisioning, the thesis examines distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, it develops and evaluates a distributed protocol that migrates servers based on end-user demand and only local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
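To make the best-response idea concrete, here is a minimal sketch (hypothetical names, and exhaustive search rather than the k-median machinery developed in the thesis) of a selfish node picking its k outgoing neighbors so as to minimize the total cost of reaching every other node through the overlay:

    import itertools

    def best_response_wiring(node, dist, k):
        """Pick k neighbors for `node` minimizing the summed cost to all
        destinations, where dist[u][v] is an asymmetric cost from u to v
        (for a chosen neighbor w, dist[w][v] stands for w's cost of
        reaching v through the rest of the overlay). Exhaustive search:
        for illustration on tiny examples only."""
        others = [v for v in dist if v != node]
        best_set, best_cost = None, float("inf")
        for subset in itertools.combinations(others, k):
            cost = sum(min(dist[node][w] + (0 if v == w else dist[w][v])
                           for w in subset)
                       for v in others)
            if cost < best_cost:
                best_set, best_cost = set(subset), cost
        return best_set, best_cost

In the thesis itself this best response is computed as a k-median problem on asymmetric distances, which scales far better than the brute-force enumeration above.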
Abstract:
In this paper, we provide a unified approach to solving preemptive scheduling problems with uniform parallel machines and controllable processing times. We demonstrate that a single criterion problem of minimizing total compression cost subject to the constraint that all due dates should be met can be formulated in terms of maximizing a linear function over a generalized polymatroid. This justifies applicability of the greedy approach and allows us to develop fast algorithms for solving the problem with arbitrary release and due dates as well as its special case with zero release dates and a common due date. For the bicriteria counterpart of the latter problem we develop an efficient algorithm that constructs the trade-off curve for minimizing the compression cost and the makespan.
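In outline (our notation; the precise capacity function is developed in the paper), the reformulation referred to above has the shape of a linear program over a generalized polymatroid:

    \max \sum_{j \in N} w_j x_j
    \quad \text{subject to} \quad
    \sum_{j \in A} x_j \le f(A) \;\; \forall A \subseteq N,
    \qquad l_j \le x_j \le u_j,

with f submodular. The greedy approach the authors invoke is then the classical one: for nonnegative weights, consider the jobs in nonincreasing order of w_j and raise each x_j as far as the remaining constraints permit, which is optimal for linear objectives over (generalized) polymatroids.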
Abstract:
In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation, by first reviewing associated aims and constraints, and evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters, respectively, characterizing the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are employed for comparison with existing source formulations in terms of meeting the spectral and temporal requirements on the outward propagating wave.
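As a rough sketch of the hard/soft distinction discussed above (a bare 1D scheme with arbitrary parameters of our own choosing, not the paper's pulsating-sphere model, and with the source-mechanics filter of the cascade omitted), the difference is whether the source sample overwrites or is added to the grid node:

    import numpy as np

    def fdtd_1d(n_steps=400, n_x=200, src_pos=50, rcv_pos=150, hard=False):
        """Bare 1D wave-equation FDTD driven by a Gaussian pulse.
        Soft source: the sample is added to the pressure node, so the node
        stays transparent to incoming waves. Hard source: the node is
        overwritten, which also makes it a reflecting boundary."""
        lam2 = 1.0                      # (c*dt/dx)^2, Courant number 1
        p_new = np.zeros(n_x)
        p_now = np.zeros(n_x)
        p_old = np.zeros(n_x)
        received = []
        for n in range(n_steps):
            s = np.exp(-0.5 * ((n - 40) / 10.0) ** 2)     # driving pulse
            p_new[1:-1] = (2 * p_now[1:-1] - p_old[1:-1]
                           + lam2 * (p_now[2:] - 2 * p_now[1:-1] + p_now[:-2]))
            if hard:
                p_new[src_pos] = s       # hard source: impose the value
            else:
                p_new[src_pos] += s      # soft source: inject the value
            received.append(p_new[rcv_pos])
            p_old, p_now, p_new = p_now, p_new, p_old
        return np.array(received)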
Abstract:
In this paper, we present a unified approach to the energy-efficient, variation-tolerant design of the Discrete Wavelet Transform (DWT) in the context of image processing applications. Notably, most image processing applications do not require exactly correct numerical outputs. We exploit this feature and propose a design methodology for the DWT that exposes energy-quality tradeoffs at each level of the design hierarchy, from the algorithm level down to the architecture and circuit levels, by taking advantage of the limited perceptual ability of the Human Visual System. A unique feature of this design methodology is that it guarantees robustness under process variability and facilitates aggressive voltage over-scaling. Simulation results show significant energy savings (74%-83%) compared to a conventional design, with only minor degradation in output image quality, while averting catastrophic failures under process variations. © 2010 IEEE.
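For orientation, one level of a Haar-type DWT via the lifting scheme is sketched below (illustrative reference code only; the paper's contribution concerns the hardware design, not this computation). In an approximate-computing setting, the detail (high-pass) path is the natural place to tolerate errors from voltage over-scaling, since its contribution is least visible to the human visual system.

    import numpy as np

    def haar_dwt_level(x):
        """One lifting step of an (unnormalized) Haar DWT.
        Returns the approximation and detail subbands of an
        even-length 1-D signal."""
        x = np.asarray(x, dtype=float)
        even, odd = x[0::2], x[1::2]
        detail = odd - even              # predict step
        approx = even + detail / 2.0     # update step (preserves the mean)
        return approx, detail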
Abstract:
While the existence of an ‘emotional turn’ within the social sciences is now widely acknowledged, some areas have garnered less specific attention than others. Perhaps the most significant absence within this literature is an explicit exploration of the relationship between emotions and relations of power and domination. This article will attempt such an endeavour. In doing so, I will draw on some key work from within the sociology of emotions, such as Barbalet, Collins, Kemper and Turner, and from the power literature within social theory more generally, including Dahl, Elias, Foucault, Giddens, Gramsci and Lukes. The main thrust of the argument is that power and emotion are conceptual twins in need of a serious theoretical reunion, and that emotions have played a largely unacknowledged, ‘under-labouring’ role within most theories of power. The need for a more unified approach to these two concepts is highlighted.
RadiaLE: A framework for designing and assessing link quality estimators in wireless sensor networks
Abstract:
Stringent cost and energy constraints impose the use of low-cost and low-power radio transceivers in large-scale wireless sensor networks (WSNs). This fact, together with the harsh characteristics of the physical environment, requires a rigorous WSN design. Mechanisms for WSN deployment and topology control, MAC and routing, and resource and mobility management greatly depend on reliable link quality estimators (LQEs). This paper describes the RadiaLE framework, which enables the experimental assessment, design and optimization of LQEs. RadiaLE comprises (i) the hardware components of the WSN testbed and (ii) a software tool for setting up and controlling experiments, automating the gathering of link measurements through packet-statistics collection, and analyzing the collected data to evaluate LQEs. We also propose a methodology that allows (i) setting up different types of links and different types of traffic, (ii) collecting rich link measurements, and (iii) validating LQEs using a holistic and unified approach. To demonstrate the validity and usefulness of RadiaLE, we present two case studies: the characterization of low-power links and a comparison of six representative LQEs. We also extend the second study to evaluate the accuracy of the TOSSIM 2 channel model.
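For concreteness, two of the simplest estimators such a framework can compare are the packet reception ratio (PRR) and its smoothed variant WMEWMA; a minimal sketch of our own (not RadiaLE code) follows:

    def prr(received, sent):
        """Packet reception ratio over one measurement window."""
        return received / sent if sent else 0.0

    def wmewma(prev_estimate, received, sent, alpha=0.6):
        """Window Mean EWMA: smooth the per-window PRR, giving weight
        alpha to the history and (1 - alpha) to the new window."""
        return alpha * prev_estimate + (1 - alpha) * prr(received, sent)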
Abstract:
Modeling user experience in Human-Computer Interaction is an important challenge for the design and development of intelligent adaptive systems. In this context, particular attention is paid to the user's emotional reactions, since they have a decisive influence on cognitive abilities such as perception and decision making. Emotion modeling is especially relevant for Emotionally Intelligent Tutoring Systems (EITS). These systems seek to identify the learner's emotions during learning sessions and to optimize the interaction experience through various intervention strategies. This thesis aims to improve the emotion-modeling methods and the emotional strategies currently used by EITS to act on the learner's emotions. More precisely, our first objective was to propose a new method for detecting the learner's emotional state, using different sources of information that allow emotions to be measured precisely while taking into account the individual variables that can affect how emotions are expressed. To this end, we developed a multimodal approach combining several physiological measures (brain activity, galvanic skin response and heart rate) with individual variables to detect an emotion very frequently observed during learning sessions, namely uncertainty. We first identified the key physiological indicators associated with this state, as well as the individual characteristics that contribute to its manifestation. We then built predictive models that automatically detect this state from the analyzed variables by training machine-learning algorithms. Our second objective was to propose a unified approach for simultaneously recognizing a combination of several emotions and explicitly assessing their impact on the learner's interaction experience. For this purpose, we developed a hierarchical, probabilistic and dynamic framework that tracks the learner's emotional changes over time and automatically infers the general tendency characterizing the interaction experience, namely immersion, blockage or disengagement. Immersion corresponds to an optimal experience: a state in which the learner is fully concentrated on and involved in the learning activity. Blockage corresponds to a non-optimal interaction tendency in which the learner has difficulty concentrating. Finally, disengagement corresponds to an extremely unfavorable state in which the learner is no longer involved in the learning activity at all. The proposed framework integrates three modalities of diagnostic variables for assessing the learner's experience, namely physiological variables, behavioral variables and performance measures, combined with predictive variables that represent the current context of the interaction and the learner's personal characteristics. A study was conducted to validate our approach through an experimental protocol designed to deliberately elicit the three targeted tendencies during learners' interaction with different learning environments.

Finally, our third objective was to propose new strategies for positively influencing the learner's emotional state without interrupting the dynamics of the learning session. To this end, we introduced the concept of implicit emotional strategies: a new approach for acting subtly on the learner's emotions in order to improve the learning experience. These strategies rely on subliminal perception, and more precisely on a technique known as affective priming, which unconsciously solicits the learner's emotions by projecting primes carrying particular affective connotations. We implemented an implicit emotional strategy using a specific form of affective priming, namely evaluative conditioning, intended to unconsciously enhance self-esteem. An experimental study was conducted to evaluate the impact of this strategy on learners' emotional reactions and performance.
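A minimal sketch of the kind of supervised detection pipeline described in the first objective (placeholder data, and scikit-learn used purely for illustration; neither the thesis code nor its feature set):

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row per learner/time window; columns would hold EEG, skin
    # conductance and heart-rate features plus individual traits.
    # y: 1 if uncertainty was reported for that window. Random here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))
    y = rng.integers(0, 2, size=120)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(model, X, y, cv=5).mean())   # cross-validated accuracy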
Abstract:
Evolutionary computation, and genetic algorithms in particular, are increasingly used by organizations to solve management and decision-making problems (Apoteker & Barthelemy, 2000). The literature on the subject is growing and several state-of-the-art reviews have been published. Despite this, there is no work that explicitly and systematically evaluates the use of genetic algorithms in problems specific to international business (examples include international logistics, international trade, international marketing, international finance and international strategy). The purpose of this degree project is therefore to survey the current state of applications of genetic algorithms in international business.
Abstract:
We examine the empirical validity of Schelling's models of racial residential segregation in the case of Chicago. Most of the empirical literature has focused exclusively on the single-neighborhood model, also known as the tipping-point model, and neglected a multi-neighborhood or unified approach. The multi-neighborhood approach introduces spatial interaction across neighborhoods; in particular, we look at interaction across neighborhoods sharing a border. An initial exploration of the data indicates that spatial contiguity may be relevant for properly analysing the so-called tipping of predominantly non-Hispanic white neighborhoods to predominantly minority neighborhoods within a decade. We introduce an econometric model that combines a threshold-effects approach to estimating the tipping point with a spatial autoregressive model. The estimation results dispute the existence of a tipping point, that is, a discontinuous change in the growth rate of the non-Hispanic white population caused by a small increase in the minority share of the neighborhood. In addition, we find that the racial distance between a neighborhood and its surrounding neighborhoods has an important effect on the dynamics of racial segregation in Chicago.
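Schematically (our notation, not necessarily the authors' exact specification), the combined model has a threshold term for a candidate tipping point m* and a spatial lag over contiguous neighborhoods:

    y_i = \alpha + \beta\, \mathbf{1}\{m_i > m^*\} + X_i \gamma + \rho \sum_j w_{ij} y_j + \varepsilon_i,

where y_i is the decadal change in the non-Hispanic white population of neighborhood i, m_i its initial minority share, W a border-contiguity weight matrix and \rho the spatial autoregressive parameter. A significant \beta would indicate a discontinuity at m*, which is what the estimates reported above fail to support.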
Abstract:
This paper presents an enhanced hypothesis verification strategy for 3D object recognition. A new learning methodology is presented which integrates the traditionally distinct object-centred and appearance-based representations in computer vision, giving improved hypothesis verification under iconic matching. The "appearance" of a 3D object is learnt using an eigenspace representation obtained as it is tracked through a scene. The feature representation implicitly models the background and the observed objects, enabling the segmentation of the objects from the background. The method is shown to enhance model-based tracking, particularly in the presence of clutter and occlusion, and to provide a basis for identification. The unified approach is discussed in the context of the traffic surveillance domain. The approach is demonstrated on real-world image sequences and compared to previous (edge-based) iconic evaluation techniques.
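A rough sketch of the eigenspace ("appearance") learning step referred to above, using a plain PCA over patches gathered while the object is tracked (our simplification, not the authors' implementation):

    import numpy as np

    def build_eigenspace(patches, n_components=10):
        """Learn an eigenspace from vectorized image patches collected
        while an object is tracked. Returns the mean patch and the
        leading principal components of the patch set."""
        X = np.asarray(patches, dtype=float)       # (n_patches, n_pixels)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:n_components]

    def project(patch, mean, basis):
        """Project a new patch into the learnt eigenspace; hypothesis
        verification can then score reconstruction error or compare
        coefficients against the learnt model."""
        return basis @ (np.asarray(patch, dtype=float) - mean)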