826 results for Effects-Based Approach to Operations
Abstract:
We describe an adaptive, mid-level approach to the wireless device power management problem. Our approach is based on reinforcement learning, a machine learning framework for autonomous agents. We describe how our framework can be applied to the power management problem in both infrastructure and ad hoc wireless networks. From this thesis we conclude that mid-level power management policies can outperform low-level policies and are more convenient to implement than high-level policies. We also conclude that power management policies need to adapt to the user and network, and that a mid-level power management framework based on reinforcement learning fulfills these requirements.
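The abstract gives no implementation details, but the reinforcement-learning formulation it mentions can be illustrated with tabular Q-learning on a toy power-management problem. The states, actions and reward weights below are invented for this sketch; the thesis's actual framework observes richer network statistics and controls real radio power modes.

```python
import random

# Illustrative state/action spaces (assumptions, not the thesis's model).
STATES = ["idle", "light", "busy"]
ACTIONS = ["sleep", "low_power", "active"]

def reward(state, action):
    # Trade off energy cost against a latency penalty for sleeping while
    # there is traffic (the weights are invented for this sketch).
    energy = {"sleep": 0.0, "low_power": 0.5, "active": 1.0}[action]
    traffic = {"idle": 0.0, "light": 0.5, "busy": 1.0}[state]
    penalty = 2.0 * traffic if action == "sleep" else 0.0
    return -(energy + penalty)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(10):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = rng.choice(STATES)  # traffic evolves independently here
            target = reward(s, a) + gamma * max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy power-management policy learned for each traffic level:
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

Under these made-up rewards the learned policy sleeps when the network is idle and switches to a low-power mode otherwise, which is the kind of adaptive mid-level behavior the thesis argues for.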
Abstract:
Manufacturing has evolved to become a critical element of the competitive skill set of defense aerospace firms. Given the changes in the acquisition environment and culture, traditional “thrown over the wall” means of developing and manufacturing products are insufficient. Also, manufacturing systems are complex systems that need to be carefully designed in a holistic manner, and there are shortcomings with the available tools and methods to assist in the design of these systems. This paper outlines the generation and validation of a framework to guide this manufacturing system design process.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. E.g., combinations of statistical information, such as Bayesian updating, the combination of likelihoods, and robust M-estimation functions, are simple additions (perturbations) in A2(Pprior). Weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
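For reference, the standard Aitchison constructions on the simplex that are being generalized here are the centered log-ratio transform and the associated inner product. The last line is one plausible reading of the A2(P) inner product sketched in the abstract, not necessarily the paper's exact definition.

```latex
% Centered log-ratio transform and Aitchison inner product on the
% simplex S^D (standard compositional data analysis definitions):
\operatorname{clr}(x) = \left(\ln\frac{x_1}{g(x)},\,\dots,\,\ln\frac{x_D}{g(x)}\right),
\qquad g(x) = \Bigl(\prod_{i=1}^{D} x_i\Bigr)^{1/D},
\qquad \langle x, y\rangle_A = \sum_{i=1}^{D}\operatorname{clr}(x)_i\,\operatorname{clr}(y)_i .

% One plausible form of the generalization to A^2(P) for densities f, g:
\langle f, g\rangle_{A^2(P)} = \operatorname{Cov}_P\!\left(\ln f,\ \ln g\right)
```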
Abstract:
A common problem in video surveys in very shallow waters is the presence of strong light fluctuations due to sunlight refraction. Refracted sunlight casts fast-moving patterns which can significantly degrade the quality of the acquired data. Motivated by the growing need to improve the quality of shallow-water imagery, we propose a method to remove sunlight patterns in video sequences. The method exploits the fact that video sequences allow several observations of the same area of the sea floor over time. It is based on computing the image difference between a given reference frame and the temporal median of a registered set of neighboring images. A key observation is that this difference will have two components with separable spectral content: one is related to the illumination field (lower spatial frequencies) and the other to the registration error (higher frequencies). The illumination field, recovered by lowpass filtering, is used to correct the reference image. In addition to removing the sun-flickering patterns, an important advantage of the approach is its ability to preserve the sharpness of the corrected image, even in the presence of registration inaccuracies. The effectiveness of the method is illustrated on image sets acquired under strong camera motion and containing non-rigid benthic structures. The results testify to the good performance and generality of the approach.
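The pipeline described above (temporal median, image difference, lowpass recovery of the illumination field, correction) can be sketched in a few lines, assuming grayscale, already-registered frames in a NumPy array. The box filter below is a stand-in for whatever lowpass filter the authors actually used.

```python
import numpy as np

def box_lowpass(img, k=7):
    # Simple separable box blur used here as the lowpass filter.
    kernel = np.ones(k) / k
    pad = k // 2
    out = np.pad(img, pad, mode="edge")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)
    return out

def deflicker(frames, ref_idx=0):
    # frames: (T, H, W) stack of registered grayscale frames.
    ref = frames[ref_idx].astype(float)
    median = np.median(frames.astype(float), axis=0)
    diff = ref - median               # flicker (low freq) + registration error (high freq)
    illumination = box_lowpass(diff)  # keep only the low-frequency flicker component
    return ref - illumination         # corrected reference frame
```

On a synthetic sequence with a smooth flicker pattern added to a fixed seabed texture, the corrected frame is much closer to the texture than the raw reference frame, while the high-frequency texture itself is untouched.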
Abstract:
Hypermedia systems based on the Web for open distance education are becoming increasingly popular as tools for user-driven access to learning information. Adaptive hypermedia is a new research direction within the area of user-adaptive systems that aims to increase their functionality by making them personalized [Eklu 96]. This paper sketches a general agent architecture intended to provide navigational adaptability and user-friendly processes which would guide and accompany the student during his/her learning on the PLAN-G hypermedia system (New Generation Telematics Platform to Support Open and Distance Learning), with the aid of computer networks and specifically WWW technology [Marz 98-1] [Marz 98-2]. The actual PLAN-G prototype is successfully used with some informatics courses (the current version has no agents yet). The proposed multi-agent system contains two different types of adaptive autonomous software agents: Personal Digital Agents (Interface), to interact directly with the student when necessary; and Information Agents (Intermediaries), to filter and discover information to learn and to adapt the navigation space to a specific student.
Abstract:
Uncertainties that are not considered in the analytical model of the plant always dramatically decrease the performance of the fault detection task in practice. To cope better with this prevalent problem, in this paper we develop a methodology using Modal Interval Analysis which takes those uncertainties in the plant model into account. A fault detection method is developed based on this model which is quite robust to uncertainty and produces no false alarms. As soon as a fault is detected, an ANFIS model is trained online to capture the major behavior of the fault that occurred, which can then be used for fault accommodation. The simulation results clearly demonstrate the capability of the proposed method to accomplish both tasks appropriately.
Abstract:
The use of electronic documents is constantly growing, and the need to implement an ad-hoc eCertificate which manages access to private information is not merely desirable but necessary. This paper presents a protocol for the management of electronic identities (eIDs), meant as a substitute for paper-based IDs, in a mobile environment with a user-centric approach. Mobile devices have been chosen because they provide mobility, personal use and considerable computational capability. The inherent user-centricity also allows the user to personally manage the ID information and to display only what is required. The chosen path to develop the protocol is to migrate the existing eCert technologies implemented by the Learning Societies Laboratory in Southampton. By comparing this protocol with the analysis of the eID problem domain, a new solution has been derived which is compatible with both systems without loss of features.
Abstract:
Gender stereotypes are sets of characteristics that people believe to be typically true of a man or woman. We report an agent-based model (ABM) that simulates how stereotypes disseminate in a group through associative mechanisms. The model consists of agents that carry one of several different versions of a stereotype, which share part of their conceptual content. When an agent acts according to his/her stereotype, and that stereotype is shared by an observer, the latter’s stereotype strengthens. Conversely, if the agent does not act according to his/her stereotype, the observer’s stereotype weakens. In successive interactions, agents develop preferences, such that there is a higher probability of interaction with agents that confirm their stereotypes. Depending on the proportion of shared conceptual content in the stereotype’s different versions, three dynamics emerge: all stereotypes in the population strengthen, all weaken, or a bifurcation occurs, i.e., some strengthen and some weaken. Additionally, we discuss the use of agent-based modeling to study social phenomena and the practical consequences that the model’s results might have on stereotype research and on stereotypes’ effects on a community.
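The interaction dynamics described above can be reduced to a minimal sketch by collapsing each agent's stereotype version to a single strength value in [0, 1]. All parameter values (population size, step count, increment sizes) are illustrative assumptions, and the shared-conceptual-content mechanism that produces the bifurcation is omitted.

```python
import random

def simulate(n=30, steps=5000, delta=0.02, init=0.8, seed=1):
    # strength[i]: how strongly agent i holds the stereotype, in [0, 1].
    strength = [init] * n
    # pref[i][j]: agent i's preference for interacting with agent j.
    pref = [[1.0] * n for _ in range(n)]
    rng = random.Random(seed)
    for _ in range(steps):
        obs = rng.randrange(n)
        weights = pref[obs][:]
        weights[obs] = 0.0                      # no self-interaction
        actor = rng.choices(range(n), weights)[0]
        # The actor behaves stereotypically with probability equal to
        # his/her own stereotype strength.
        if rng.random() < strength[actor]:
            strength[obs] = min(1.0, strength[obs] + delta)
            pref[obs][actor] += 0.5             # prefer confirming partners
        else:
            strength[obs] = max(0.0, strength[obs] - delta)
    return strength
```

Even this stripped-down version reproduces the two monotone regimes: a population starting with mostly confirming behavior converges toward universally strong stereotypes, and one starting with mostly disconfirming behavior converges toward weak ones.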
Abstract:
The objective of this monograph is to examine the transformation of NATO's security doctrine in the post-Cold War era and its effects on the intervention in the Republic of Macedonia. The disintegration of the Soviet bloc brought about a change in the definition of the threats to the survival of the member countries of the Atlantic Alliance. From the 1990s onward, conflicts of an interethnic nature came to form part of the risks endangering the security of the Allies and the stability of the Euro-Atlantic area. For this reason, NATO intervened in those states where interethnic armed confrontations prevailed, as for example in Macedonia, where the Atlantic Alliance carried out crisis-management operations to counter the threat. The phenomenon studied in this research is analyzed through the lenses of Subaltern Realism and Collective Security Theory.
Abstract:
An implicitly parallel method for integral-block driven restricted active space self-consistent field (RASSCF) algorithms is presented. The approach is based on a model space representation of the RAS active orbitals with an efficient expansion of the model subspaces. The applicability of the method is demonstrated with a RASSCF investigation of the first two excited states of indole.
Abstract:
Industrial Organization: A Contract-Based Approach (aka IOCB) offers an extensive and up-to-date panorama of Industrial Organization. It is aimed at advanced undergraduates, graduates, academics and practitioners with an interest in the field. The analysis of market interactions, business strategies and public policy is performed using the standard framework of game theory and the recent advances of contract theory and information economics.
Abstract:
The main objective pursued in this thesis is the development and systematization of a methodology for addressing management problems in the dynamic operation of Urban Wastewater Systems. The proposed methodology suggests operational strategies that can improve the overall performance of the system under certain problematic situations through a model-based approach. It has three main steps. The first step includes the characterization and modeling of the case study, the definition of scenarios, the evaluation criteria, and the operational settings that can be manipulated to improve the system’s performance. In the second step, Monte Carlo simulations are launched to evaluate how the system performs over a wide range of combinations of operational settings, and a global sensitivity analysis is conducted to rank the most influential operational settings. Finally, the third step consists of a screening methodology applying a multi-criteria analysis to select the best combinations of operational settings.
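The second step can be sketched as follows, with a hypothetical scalar performance function standing in for a full urban wastewater system simulator, and a simple correlation measure standing in for a full global sensitivity analysis. All weights and ranges are invented for illustration.

```python
import random

def performance(settings):
    # Hypothetical plant response (e.g. an overflow-volume criterion);
    # the weights are invented so that setting 0 dominates.
    a, b, c = settings
    return 5.0 * a + 1.0 * b + 0.2 * c + random.gauss(0.0, 0.1)

def monte_carlo(n=2000, seed=0):
    # Sample combinations of three normalized operational settings
    # and score each combination with the performance model.
    random.seed(seed)
    samples = [[random.uniform(0.0, 1.0) for _ in range(3)] for _ in range(n)]
    scores = [performance(s) for s in samples]
    return samples, scores

def rank_settings(samples, scores):
    # Crude global sensitivity measure: absolute Pearson correlation of
    # each setting with the criterion, used to rank influence.
    n, my = len(scores), sum(scores) / len(scores)
    corr = []
    for j in range(3):
        xs = [s[j] for s in samples]
        mx = sum(xs) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, scores))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in scores)
        corr.append(abs(cov) / (vx * vy) ** 0.5)
    return sorted(range(3), key=lambda j: -corr[j])
```

Running `rank_settings` on the Monte Carlo sample recovers the intended influence ordering, which is the kind of ranking the thesis then feeds into the multi-criteria screening step.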
Abstract:
Zooplankton community structure (composition, diversity, dynamics and trophic relationships) of Mediterranean marshes has been analysed by means of a size-based approach. In temporary basins the shape of the biomass-size spectra is related to the hydrological cycle. Linear-shaped spectra are more frequent in flooding situations, when nutrient input causes population growth of small-sized organisms, more than compensating for the effect of competitive interactions. During confinement conditions the scarcity of food decreases zooplankton growth and increases intra- and interspecific interactions between zooplankton organisms, which favour the largest sizes, thus leading to the appearance of curved-shape spectra. Temporary and permanent basins have a similar taxonomic composition, but the latter have higher species diversity, a more simplified temporal pattern and a size distribution dominated mainly by smaller sizes. In permanent basins zooplankton growth is conditioned not only by the availability of resources but also by the variable predation of planktivorous fish, so that the temporal variability of the spectra may also be a result of temporal differences in fish predation. Size diversity seems to be a better indicator of the degree of community structure than species diversity. The tendency of size diversity to increase during succession makes it useful to discriminate between different succession stages, a fact that is not achieved by analysing species diversity alone, since the latter is low both under large and frequent disturbances and under small and rare ones. Differences in amino acid composition found among stages of copepod species indicate a gradual change in diet during the life cycle of these copepods, which provides evidence of food niche partitioning during ontogeny, whereas Daphnia species show a relatively constant amino acid composition. There is a relationship between the degree of trophic niche overlap among stages of the different species and nutrient concentration.
Copepods, which have low trophic niche overlap among stages, are dominant in food-limited environments, probably because trophic niche partitioning during development allows them to reduce intraspecific competition between adults, juveniles and nauplii. Daphnia species are only dominant in water bodies or periods with high productivity, probably due to the high trophic niche overlap between juveniles and adults. These findings suggest that, in addition to the effects of interspecific competition, predation and abiotic factors, intraspecific competition might also play an important role in structuring zooplankton assemblages.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is still a fiction; that is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling the stereopsis of humans by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently being investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many considerations have to be taken into account; as an example, we have to consider points without correspondence, due to a surface occlusion or simply to a projection outside the camera's field of view.
The interest of the thesis is focused on structured light, which has been considered one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a light pattern projected onto the scene and its image captured by a camera sensor. The deformations between the pattern projected onto the scene and the one captured by the camera permit obtaining three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which leads us to use computationally hard algorithms in order to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on the codification of the light projected onto the scene so that it can be used as a tool to obtain a unique match: as each token of light is imaged by the camera, we have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey on coded structured light, are presented and discussed. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern which solves the correspondence problem uniquely and robustly. Uniquely, as each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, and the more complicated measurement of moving objects.
The technique can be used in both cases, as the pattern is coded in a single projection shot; it can therefore be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) image enhancement, filtering and processing; and c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
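The stereo-vision principle described above, recovering a 3D point from its two projections given calibrated camera models, can be sketched with standard linear (DLT) triangulation. The projection matrices in the usage part are illustrative (identity intrinsics, a unit baseline), not the thesis's calibration results.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each image point (u, v) gives two linear
    # constraints on the homogeneous 3D point; solve by SVD.
    # P1, P2: 3x4 camera projection matrices; x1, x2: corresponding points.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]              # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]     # inhomogeneous 3D coordinates

# Usage with two illustrative calibrated cameras: identity intrinsics,
# second camera translated along the x axis (unit baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]      # projection in the first image
x2 = (P2 @ h)[:2] / (P2 @ h)[2]      # projection in the second image
X_rec = triangulate(P1, P2, x1, x2)  # recovers X_true up to numerical error
```

The same routine applies to coded structured light by treating the projector as the second "camera": the decoded token label supplies the correspondence that stereo matching would otherwise have to search for.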