878 results for Observational techniques and algorithms
Abstract:
This article contributes to understanding the conditions of social-ecological change by focusing on the agency of individuals in the pathways to institutionalization. Drawing on the case of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), it addresses institutional entrepreneurship in an emerging environmental science-policy institution (ESPI) at a global scale. Drawing on ethnographic observations, semistructured interviews, and document analysis, we propose a detailed chronology of the genesis of the IPBES before focusing on the final phase of the negotiations toward the creation of the institution. We analyze the techniques and skills deployed by the chairman during the conference to handle the tensions at play, both to prevent participants from deserting the negotiation arena and to prevent a lack of inclusiveness from discrediting the future institution. We stress that creating a new global environmental institution requires the situated exercise of an art of “having everybody on board” through techniques of inclusiveness that we characterize. Our results emphasize the major challenge of handling the fragmentation and plasticity of the groups of interest involved in the institutionalization process, thus adding to the theory of transformative agency of institutional entrepreneurs. Although inclusiveness might remain partly unattainable, such techniques of inclusiveness appear to be a major condition of the legitimacy and success of the institutionalization of a new global ESPI. Our results also add to the literature on boundary making within ESPIs by emphasizing the multiplicity and plasticity of the groups actually at stake.
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive, and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly, the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr, and Foursquare provide a rich and powerful platform to mine users’ interests, preferences, and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking, and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks, and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible, and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities, and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
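As an illustration of the multi-dimensional collaborative recommendation idea, the sketch below factorizes a toy (user × location × time-slot) interaction tensor with a plain CP model fitted by gradient descent; the data, rank, and hyperparameters are assumptions for illustration, not the dissertation's actual Rover implementation.

```python
import numpy as np

# Toy (user x location x time-slot) visit-count tensor; purely illustrative data.
rng = np.random.default_rng(0)
T = rng.poisson(2.0, size=(5, 6, 4)).astype(float)

rank = 3                                              # number of latent factors (assumed)
U = rng.normal(scale=0.1, size=(T.shape[0], rank))    # user factors
V = rng.normal(scale=0.1, size=(T.shape[1], rank))    # location factors
W = rng.normal(scale=0.1, size=(T.shape[2], rank))    # time-slot factors

lr, reg = 0.01, 0.01
for _ in range(200):                                  # gradient descent on the CP reconstruction error
    for i in range(T.shape[0]):
        for j in range(T.shape[1]):
            for k in range(T.shape[2]):
                err = T[i, j, k] - np.sum(U[i] * V[j] * W[k])
                U[i] += lr * (err * V[j] * W[k] - reg * U[i])
                V[j] += lr * (err * U[i] * W[k] - reg * V[j])
                W[k] += lr * (err * U[i] * V[j] - reg * W[k])

# Predicted affinity of user 2 for location 4 in time slot 1 (a recommendation score).
print(np.sum(U[2] * V[4] * W[1]))
```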
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
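To make the dynamic programming step concrete, here is a minimal sketch of computing choice probabilities in a logit-form dynamic discrete choice (recursive-logit-style) route choice model on a tiny acyclic toy network; the network and link utilities are assumptions for illustration, not the thesis's data or exact formulation.

```python
import numpy as np

# Assumed toy network: nodes 0..3, destination 3.
# utilities[k][a] = deterministic utility of moving from node k to node a (e.g. minus travel time).
utilities = {0: {1: -1.0, 2: -1.5}, 1: {3: -2.0, 2: -0.5}, 2: {3: -1.0}, 3: {}}

# Backward induction (dynamic programming) on the acyclic network:
# V(destination) = 0 and V(k) = log sum_a exp(u(k,a) + V(a))  -- the logsumexp Bellman equation.
order = [3, 2, 1, 0]                      # reverse topological order
V = {3: 0.0}
for k in order[1:]:
    vals = [u + V[a] for a, u in utilities[k].items()]
    V[k] = np.log(np.sum(np.exp(vals)))

# Link choice probabilities follow the logit form: P(a|k) = exp(u(k,a) + V(a) - V(k)).
P = {k: {a: np.exp(u + V[a] - V[k]) for a, u in utilities[k].items()}
     for k in utilities if utilities[k]}
print(V[0], P[0])                         # expected value at the origin and its link probabilities
```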
Abstract:
The photochemistry of the pesticides triadimenol and triadimefon was studied on cellulose and beta-cyclodextrin (beta-CD) under controlled and natural conditions, using diffuse reflectance techniques and chromatographic analysis. The photochemistry of triadimenol occurs from the chlorophenoxyl moiety, while the photodegradation of triadimefon also involves the carbonyl group. The formation of the 4-chlorophenoxyl radical is one of the major reaction pathways for both pesticides and leads to 4-chlorophenol. Triadimenol also undergoes photooxidation and dechlorination, leading to triadimefon and dechlorinated triadimenol, respectively. The other main reaction process of triadimefon involves alpha-cleavage from the carbonyl group, leading to decarbonylated compounds. Triadimenol undergoes photodegradation at 254 nm but was found to be stable at 313 nm, while triadimefon degrades under both conditions. Both pesticides undergo photochemical decomposition under solar radiation, with the initial degradation rate per unit area of triadimefon being one order of magnitude higher than that observed for triadimenol on both supports. The degradation rates of the pesticides were somewhat lower in beta-CD than on cellulose. The photoproduct distribution of triadimenol and triadimefon is similar for the different irradiation conditions, indicating an intramolecular energy transfer from the chlorophenoxyl moiety to the carbonyl group in the latter pesticide.
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to dealing with two-stage and multistage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modeled by random vectors with stage-specific finite supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to better follow the progress of the algorithm. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques may exhibit premature convergence to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review the existing techniques and propose the idea of replacing the quadratic term with a linear one. Although we have yet to test this method, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
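To fix ideas, here is a minimal sketch of the progressive hedging iteration on a toy two-stage problem whose scenario subproblems have a closed-form solution; the data and the fixed penalty parameter rho are assumptions for illustration, not the adaptive strategy proposed in the thesis.

```python
import numpy as np

# Assumed toy data: scenario s has objective f_s(x) = 0.5 * a_s * (x - c_s)^2 with probability p_s.
probs = np.array([0.3, 0.5, 0.2])
a = np.array([1.0, 2.0, 4.0])
c = np.array([10.0, 4.0, 7.0])
rho = 1.0                       # fixed penalty parameter (the tuning issue discussed above)

x = c.copy()                    # iteration 0: each scenario solved separately (argmin of f_s)
x_bar = probs @ x               # implementable (nonanticipative) solution
w = rho * (x - x_bar)           # initial multipliers

for _ in range(200):
    # Scenario subproblems: argmin_x f_s(x) + w_s*x + (rho/2)*(x - x_bar)^2, closed form here.
    x = (a * c - w + rho * x_bar) / (a + rho)
    x_bar = probs @ x           # aggregate scenario solutions
    w = w + rho * (x - x_bar)   # multiplier update

# For this convex toy problem x_bar approaches the overall minimizer (6.0 with the data above).
print(x_bar, x)
```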
Abstract:
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, through the operating AC power system. To achieve the findings of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demand. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI), and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process. This was aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. A fault analysis and a protection scheme and coordination, in addition to ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network, were presented. A study was also conducted on how changing the distribution architecture and distributing storage assets across the various zones of the network affect the system’s dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls, and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed, Energy Systems Research Laboratory.
Abstract:
In Portugal, Veterinary Pathology is developing rapidly, and in recent years we have witnessed the emergence of private laboratories and the restructuring of universities, polytechnics, and public laboratories. The Portuguese Society of Animal Pathology, through its actions and its associates, has been keeping the discussion going among its peers in order to standardize the criteria of description, classification, and evaluation of the cases that are the subject of our daily work. One of the latest challenges is associated with the use of routine histochemical techniques and immunohistochemistry, in an effort to establish standardized panels for tumour diagnosis, which could eventually reduce the cost of each analysis. For this purpose a simple survey was built, in which all collaborators answered questions about the markers used for carcinoma, sarcoma, and round cell tumour diagnosis, as well as general questions related to the subject. We obtained twenty-one answers to the questions, from public and private laboratories. In general, in most cases immunohistochemical and histochemical methods are used for diagnosis. Wide-spectrum cytokeratins are universally used to confirm carcinoma, and vimentin for sarcoma. The CD3 marker is used by all laboratories to identify T lymphocytes. For the diagnosis of B-cell lymphoma, the marker used is not consensual. In each laboratory there are different markers for more specific situations, and only two labs perform PCR techniques for diagnosis. These data will be presented to promote extended discussion, namely to reach a consensus when different markers are used.
Abstract:
The historical challenge of environmental impact assessment (EIA) has been to predict project-based impacts accurately. Both EIA legislation and the practice of EIA have evolved over the last three decades in Canada, and the development of the discipline and science of environmental assessment has improved how we apply environmental assessment to complex projects. The practice of environmental assessment integrates the social and natural sciences and relies on an eclectic knowledge base from a wide range of sources. EIA methods and tools provide a means to structure and integrate knowledge in order to evaluate and predict environmental impacts.
This chapter provides a brief overview of how impacts are identified and predicted. How do we determine what aspects of the natural and social environment will be affected when a mine is excavated? How does the practitioner determine the range of potential impacts, assess whether they are significant, and predict the consequences? There are no standard answers to these questions, but there are established methods to provide a foundation for scoping and predicting the potential impacts of a project.
Of course, the community and publics play an important role in this process, and this will be discussed in subsequent chapters. In the first part of this chapter, we deal with impact identification, which involves applying scoping to critical issues and determining impact significance, baseline ecosystem evaluation techniques, and how to communicate environmental impacts. In the second part of the chapter, we discuss the prediction of impacts in relation to the complexity of the environment, ecological risk assessment, and modelling.
Abstract:
The construction industry has adopted information technology in its processes in the form of computer-aided design and drafting, construction documentation, and maintenance. The amount of data generated within the construction industry has become increasingly overwhelming. Data mining is a sophisticated data search capability that uses classification algorithms to discover patterns and correlations within a large volume of data. This paper presents the selection and application of data mining techniques to building maintenance data. The results of applying such techniques, and the potential benefits of using them to identify useful patterns of knowledge and correlations that support decision making for improving the management of the building life cycle, are presented and discussed.
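As a toy illustration of the kind of classification-based data mining described above, the sketch below trains a decision tree on hypothetical building-maintenance records; the column names, data, and choice of scikit-learn are assumptions for illustration, not the paper's actual dataset or toolchain.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical building-maintenance records; fields and values are illustrative only.
records = pd.DataFrame({
    "building_age": [5, 22, 13, 40, 8, 31, 17, 26],
    "component":    ["hvac", "roof", "hvac", "plumbing", "roof", "hvac", "plumbing", "roof"],
    "last_cost":    [1200, 5400, 800, 2300, 4100, 950, 1700, 6200],
    "failed_again": [0, 1, 0, 1, 1, 0, 0, 1],   # target: repeat failure within a year
})

X = pd.get_dummies(records.drop(columns="failed_again"))   # one-hot encode categorical fields
y = records["failed_again"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A shallow decision tree keeps the discovered rules (patterns) easy to inspect.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```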
Abstract:
As a part of vital infrastructure and transportation networks, bridge structures must function safely at all times. However, due to heavier and faster-moving vehicular loads and changes in function, such as Busway accommodation, many bridges now operate at overloads beyond their design capacity. Additionally, the huge renovation and replacement costs are often difficult for infrastructure owners to undertake. Structural health monitoring (SHM) is used to assess the condition of and foresee probable failures in designated bridges. Recently proposed SHM systems incorporate Vibration-Based Damage Detection (VBDD) techniques, statistical methods, and signal processing techniques, and have been regarded as efficient and economical ways to address the problem. Recent developments in damage detection and condition assessment techniques based on VBDD and statistical methods are reviewed. The VBDD methods based on changes in natural frequencies, curvature/strain modes, modal strain energy (MSE), dynamic flexibility, and artificial neural networks (ANN) before and after damage, as well as other signal processing methods such as wavelet techniques and empirical mode decomposition (EMD) / Hilbert spectrum methods, are discussed here.
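As a minimal sketch of the simplest VBDD idea mentioned above (a shift in a natural frequency before and after damage), the example below compares the dominant spectral peak of a baseline and a current acceleration record; the synthetic signals, sampling rate, and interpretation of the shift are assumed for illustration only.

```python
import numpy as np

fs = 200.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)

def dominant_frequency(signal, fs):
    """Return the frequency of the largest spectral peak (ignoring the DC bin)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

# Synthetic acceleration records: a damage-induced stiffness loss lowers the natural frequency.
baseline = np.sin(2 * np.pi * 3.00 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
current  = np.sin(2 * np.pi * 2.85 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

f0, f1 = dominant_frequency(baseline, fs), dominant_frequency(current, fs)
print(f"frequency shift: {100 * (f0 - f1) / f0:.1f}%")   # a sustained drop flags possible damage
```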
Abstract:
Research is often characterised as the search for new ideas and understanding. The language of this view privileges the cognitive and intellectual aspects of discovery. However, in the research process theoretical claims are usually evaluated in practice and, indeed, the observations and experiences of practical circumstances often lead to new research questions. This feedback loop between speculation and experimentation is fundamental to research in many disciplines, and is also appropriate for research in the creative arts. In this chapter we examine how our creative desire for artistic expressivity results in an interplay between actions and ideas that directs the development of techniques and approaches for our audio/visual live-coding activities.
Abstract:
This paper improves implementation techniques of Elliptic Curve Cryptography. We introduce new formulae and algorithms for the group law on Jacobi quartic, Jacobi intersection, Edwards, and Hessian curves. The proposed formulae and algorithms can save time in suitable point representations. To support our claims, a cost comparison is made with classic scalar multiplication algorithms using previous and current operation counts. Most notably, the best speeds are obtained from Jacobi quartic curves, which provide the fastest timings for most scalar multiplication strategies benefiting from the proposed 12M + 5S + 1D point doubling and 7M + 3S + 1D point addition algorithms. Furthermore, the new addition algorithm provides an efficient way to protect against side channel attacks which are based on simple power analysis (SPA).
Keywords: efficient elliptic curve arithmetic, unified addition, side channel attack.
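For context, a unified addition law uses the same formula for adding distinct points and for doubling, which removes the add/double distinction that simple power analysis exploits. The sketch below shows the textbook unified affine addition law on an Edwards curve over a small prime field; the toy parameters are illustrative and these are not the Jacobi-quartic formulas proposed in the paper.

```python
# Unified affine addition on an Edwards curve x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
# Toy parameters for illustration only (requires Python 3.8+ for pow(x, -1, p)).
p, d = 13, 6                     # d is a non-square mod 13, so the law is complete
P = (2, 5)                       # a point on this toy curve (checked below)

def on_curve(pt):
    x, y = pt
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0

def edwards_add(p1, p2):
    """Unified addition: the same formula handles P+Q and P+P (helps resist SPA)."""
    x1, y1 = p1
    x2, y2 = p2
    t = (d * x1 * x2 * y1 * y2) % p
    x3 = (x1 * y2 + y1 * x2) * pow((1 + t) % p, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow((1 - t) % p, -1, p) % p
    return (x3, y3)

assert on_curve(P)
double = edwards_add(P, P)       # doubling goes through the very same routine
assert on_curve(double)
print(double)                    # (11, 5) on this toy curve
```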