933 results for Optimal fusion performance
Abstract:
We consider a cooperative relaying network in which a source communicates with a group of users in the presence of one eavesdropper. We assume that there are no direct source-user links, so the users receive only the signal retransmitted by the relay, whereas the eavesdropper receives both the original and the retransmitted signals. Under these assumptions, we exploit a user selection technique to enhance the secrecy performance. We first find the optimal power allocation strategy when the source has full channel state information (CSI) of all links. We then evaluate the security level through i) the ergodic secrecy rate and ii) the secrecy outage probability when only statistical knowledge of the CSI is available.
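For reference, the two secrecy metrics named above are commonly defined as follows (a standard formulation, not taken from the paper; γ_U and γ_E denote the instantaneous SNRs at the selected user and at the eavesdropper, and R_s is a target secrecy rate):

```latex
C_s = \left[\log_2(1+\gamma_U) - \log_2(1+\gamma_E)\right]^{+}, \qquad
\bar{C}_s = \mathbb{E}\!\left[C_s\right], \qquad
P_{\mathrm{out}}(R_s) = \Pr\!\left(C_s < R_s\right),
```

where [x]^+ = max(x, 0); in this system the eavesdropper's SNR would combine the direct and relayed observations, whose exact form depends on the paper's model.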
Abstract:
We investigate the achievable sum rate and energy efficiency of zero-forcing precoded downlink massive multiple-input multiple-output systems in Ricean fading channels. A simple and accurate approximation of the average sum rate is presented, which is valid for a system with arbitrary-rank channel means. Based on this expression, the optimal power allocation strategy maximizing the average sum rate is derived. Moreover, considering a general power consumption model, the energy efficiency of the system with rank-1 channel means is characterized. Specifically, the impact of key system parameters, such as the number of users N, the number of base station (BS) antennas M, the Ricean factor K, and the signal-to-noise ratio (SNR) ρ, is studied, and closed-form expressions for the optimal ρ and M maximizing the energy efficiency are derived. Our findings show that the optimal power allocation scheme follows the water-filling principle and can substantially enhance the average sum rate in the presence of a strong line-of-sight effect in the low-SNR regime. In addition, we demonstrate that the Ricean factor K has a significant impact on the optimal values of M, N, and ρ.
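As an illustration of the water-filling principle mentioned in the findings (a generic sketch under simplified assumptions, not the paper's closed-form allocation for the ZF-precoded Ricean channel; gains, total_power, and noise are hypothetical inputs):

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Allocate total_power across channels with the water-filling rule.

    gains: effective channel gains g_k; power_k = max(mu - noise/g_k, 0),
    where the water level mu is chosen so the powers sum to total_power.
    """
    inv = noise / np.asarray(gains, dtype=float)
    order = np.argsort(inv)                   # strongest channels first
    inv_sorted = inv[order]
    powers = np.zeros_like(inv_sorted)
    for k in range(len(inv_sorted), 0, -1):
        mu = (total_power + inv_sorted[:k].sum()) / k   # candidate water level
        if mu > inv_sorted[k - 1]:            # all k channels stay above water
            powers[:k] = mu - inv_sorted[:k]
            break
    out = np.zeros_like(powers)
    out[order] = powers
    return out

# Example: 4 users, unit noise, total power budget of 10
print(water_filling([2.0, 0.5, 1.0, 0.1], total_power=10.0))
```

Strong channels sit further above the water level and receive more power, while very weak channels receive none, which is consistent with the gain from optimal allocation being largest in the low-SNR, strong line-of-sight regime.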
Abstract:
The 2015 FRVT gender classification (GC) report evidences the problems that current approaches face in situations with large variations in pose, illumination, background, and facial expression. The report suggests that both commercial and research solutions struggle to reach an accuracy above 90% on The Images of Groups dataset, a proven scenario exhibiting unrestricted, in-the-wild conditions. In this paper, we focus on this challenging dataset and step forward in GC performance by drawing on: 1) recent literature results that combine multiple local descriptors, and 2) the psychophysical evidence of the greater importance of the ocular and mouth areas for solving this task...
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
There are many ways to protect a bond portfolio against interest rate movements that could adversely affect its market value. One of them is to sell bond futures contracts so that changes in the market value of the portfolio are offset by gains (or losses) on the futures market. The success of such an operation depends on the estimation of the hedge ratio, since it determines how many futures contracts must be sold to protect the portfolio. The objective of this study is to determine, among five methods of estimating the hedge ratio (one naive and four theoretical), the one that minimizes the variance of the return of the portfolio being hedged while sacrificing as little return as possible. To do so, we used nine portfolios of Government of Canada bonds with very different characteristics (coupon, maturity), which we hedged using the Government of Canada bond futures contract traded on the Bourse de Montréal. The analysis of the results led us to conclude that the naive method yields better results than the theoretical methods when the portfolio to be hedged has characteristics similar to the instrument used as the hedge. In all other cases (where the portfolio to be hedged has characteristics very different from the futures contract used as the hedge), the performance of the naive method is rather poor, but no other method is consistently superior to the others.
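For illustration (the thesis does not list its five estimators here), one widely used theoretical estimator is the minimum-variance hedge ratio obtained from historical price changes; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def min_variance_hedge_ratio(spot_prices, futures_prices):
    """Estimate h* = Cov(dS, dF) / Var(dF) from historical price changes.

    The number of contracts to sell is then roughly
    h* * (portfolio value / value of one futures contract).
    """
    d_spot = np.diff(np.asarray(spot_prices, dtype=float))
    d_fut = np.diff(np.asarray(futures_prices, dtype=float))
    return np.cov(d_spot, d_fut)[0, 1] / np.var(d_fut, ddof=1)
```

The naive method typically corresponds to a hedge ratio of one (matching face values), which is consistent with it performing well only when the portfolio closely resembles the instrument used as the hedge.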
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer as a subroutine executing a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions.
In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features and the other tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that in real-life applications questions are often not independent; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, which allows users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal task assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of the workers and the difficulty of the tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than a click-away question. We therefore address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of the crowd, and the costs of obtaining labels: a higher budget, a more trustworthy crowd, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, so it can be combined with generic trust evaluation algorithms.
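As a simple point of reference for the fusion step (a common trust-weighted baseline, not the probabilistic models or the allocation algorithm proposed in the thesis; worker accuracies and labels are hypothetical):

```python
import math

def trust_weighted_vote(answers, trust):
    """Fuse binary worker answers with log-odds weights.

    answers: dict question -> list of (worker_id, label in {0, 1})
    trust:   dict worker_id -> estimated probability the worker is correct
    Each vote is weighted by log(p / (1 - p)), the weight that makes
    weighted majority voting optimal for independent workers with known
    accuracies; adversarial workers (p < 0.5) get negative weight.
    """
    fused = {}
    for question, votes in answers.items():
        score = 0.0
        for worker, label in votes:
            p = min(max(trust[worker], 1e-6), 1 - 1e-6)  # avoid log(0)
            weight = math.log(p / (1 - p))
            score += weight if label == 1 else -weight
        fused[question] = 1 if score >= 0 else 0
    return fused
```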
Abstract:
When designing a new passenger ship or naval vessel, or modifying an existing design, how do we ensure that the proposed design is safe from an evacuation point of view? In the wake of major maritime disasters such as the Herald of Free Enterprise and the Estonia, and in light of the growth in the number of high-density, high-speed ferries and large-capacity cruise ships, issues concerned with the evacuation of passengers and crew at sea are receiving renewed interest. In the maritime industry, ship evacuation models are now recognised by IMO through the publication of the Interim Guidelines for Evacuation Analysis of New and Existing Passenger Ships including Ro-Ro. This approach offers the promise of quickly and efficiently bringing evacuation considerations into the design phase, while the ship is "on the drawing board", as well as of reviewing and optimising the evacuation provision of the existing fleet. Other applications of this technology include the optimisation of operating procedures for civil and naval vessels, such as determining the optimal location of a feature such as a casino, organising major passenger movement events such as boarding/disembarkation or restaurant/theatre changes, and determining lean manning requirements and the location and number of damage control parties. This paper describes the development of the maritimeEXODUS evacuation model, which is fully compliant with IMO requirements, and briefly presents an example application to a large passenger ferry.
Abstract:
A key driver of Australian sweetpotato productivity improvements and consumer demand has been industry adoption of disease-free planting material systems. On a farm isolated from the main Australian sweetpotato areas, virus-free germplasm is multiplied annually, with the subsequent 'pathogen-tested' (PT) sweetpotato roots shipped to commercial Australian sweetpotato growers. They in turn plant their PT roots into specially designated plant beds, commencing in late winter. From these beds, they cut sprouts as the basis for their commercial fields. Along with other intensive agronomic practices, this system enables Australian producers to achieve the world's highest commercial yields (per hectare) of premium sweetpotatoes. Their industry organisation, ASPG (Australian Sweetpotato Growers Inc.), has identified productivity of mother plant beds as a key driver of crop performance. Growers and scientists are currently collaborating to investigate issues such as catastrophic plant bed losses; optimisation of irrigation and nutrient addition; rapidity and uniformity of initial plant bed harvests; optimal plant bed harvest techniques; virus re-infection of plant beds; and practical longevity of plant beds. A survey of 50 sweetpotato growers in Queensland and New South Wales identified substantial diversity in current plant bed systems, apparently influenced by growing district, scale of operation, time of planting, and machinery/labour availability. Growers identified key areas for plant bed research as: optimising the size and grading specifications of PT roots supplied for the plant beds; changes in sprout density, vigour and performance through sequential cuttings of the plant bed; the optimal height above ground level to cut sprouts to maximise commercial crop and plant bed performance; and the use of structures and soil amendments in plant bed systems. Our ongoing multi-disciplinary research program integrates detailed agronomic experiments, grower adaptive learning sites, and product quality and consumer research, to enhance industry capacity for inspired innovation and commercial, sustainable practice change.
Abstract:
A decision-maker, when faced with a limited and fixed budget to collect data in support of a multiple attribute selection decision, must decide how many samples to observe from each alternative and attribute. This allocation decision is of particular importance when the information gained leads to uncertain estimates of the attribute values, as with sample data collected from observations such as measurements, experimental evaluations, or simulation runs. For example, when the U.S. Department of Homeland Security must decide upon a radiation detection system to acquire, a number of performance attributes are of interest and must be measured in order to characterize each of the considered systems. We identified and evaluated several approaches to incorporate the uncertainty in the attribute value estimates into a normative model for a multiple attribute selection decision. Assuming an additive multiple attribute value model, we demonstrated the idea of propagating the attribute value uncertainty and describing the decision values for each alternative as probability distributions. These distributions were used to select an alternative. With the goal of maximizing the probability of correct selection, we developed and evaluated, under several different sets of assumptions, procedures to allocate the fixed experimental budget across the multiple attributes and alternatives. Through a series of simulation studies, we compared the performance of these allocation procedures to the simple, but common, allocation procedure that distributes the sample budget equally across the alternatives and attributes. We found that the allocation procedures developed based on the inclusion of decision-maker knowledge, such as knowledge of the decision model, outperformed those that neglected such information. Beginning with general knowledge of the attribute values provided by Bayesian prior distributions, and updating this knowledge with each observed sample, the sequential allocation procedure performed particularly well. These observations demonstrate that managing projects focused on a selection decision so that the decision modeling and the experimental planning are done jointly, rather than in isolation, can improve the overall selection results.
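A minimal sketch of the uncertainty-propagation idea for an additive value model (hypothetical weights, means, and standard errors; the dissertation's priors, allocation procedures, and selection rules are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 alternatives, 2 attributes, additive value model.
weights = np.array([0.6, 0.4])                  # attribute weights (sum to 1)
# Sample-based estimates of each (already normalized) attribute value:
# mean and standard error per alternative and attribute.
means = np.array([[0.70, 0.50],
                  [0.65, 0.80],
                  [0.75, 0.40]])
std_errs = np.array([[0.05, 0.10],
                     [0.08, 0.05],
                     [0.03, 0.12]])

# Propagate the uncertainty: draw attribute values, form additive scores,
# and estimate each alternative's probability of being the best choice.
draws = rng.normal(means, std_errs, size=(10_000, *means.shape))
scores = draws @ weights                         # shape (10_000, 3)
winners = scores.argmax(axis=1)
prob_best = np.bincount(winners, minlength=3) / len(winners)
print("estimated probability each alternative is selected:", prob_best)
```

An allocation procedure would then decide which alternative/attribute cells deserve additional samples to raise the probability of correct selection fastest.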
Abstract:
The aim of this thesis is to review and augment the theory and methods of optimal experimental design. In Chapter 1 the scene is set by considering the possible aims of an experimenter prior to an experiment, the statistical methods one might use to achieve those aims, and how experimental design might aid this procedure. It is indicated that, given a criterion for design, a priori optimal design will only be possible in certain instances and that, otherwise, some form of sequential procedure would seem to be indicated. In Chapter 2 an exact experimental design problem is formulated mathematically and compared with its continuous analogue. Motivation is provided for the solution of this continuous problem, and the remainder of the chapter concerns this problem. A necessary and sufficient condition for optimality of a design measure is given. Problems which might arise in testing this condition are discussed, in particular with respect to possible non-differentiability of the criterion function at the design being tested. Several examples are given of optimal designs which may be found analytically and which illustrate the points discussed earlier in the chapter. In Chapter 3 numerical methods of solution of the continuous optimal design problem are reviewed. A new algorithm is presented, with illustrations of how it should be used in practice. It is shown that, for reasonably large sample sizes, continuously optimal designs may be approximated well by an exact design. In situations where this is not satisfactory, algorithms for improvement of this design are reviewed. Chapter 4 consists of a discussion of sequentially designed experiments, with regard to both the philosophies underlying, and the application of the methods of, statistical inference. In Chapter 5 we constructively criticise previous suggestions for fully sequential design procedures. Alternative suggestions are made, along with conjectures as to how these might improve performance. Chapter 6 presents a simulation study, the aim of which is to investigate the conjectures of Chapter 5. The results of this study provide empirical support for these conjectures. In Chapter 7 examples are analysed. These suggest aids to sequential experimentation by means of reduction of the dimension of the design space and the possibility of experimenting semi-sequentially. Further examples are considered which stress the importance of the use of prior information in situations of this type. Finally, we consider the design of experiments when semi-sequential experimentation is mandatory because of the necessity of taking batches of observations at the same time. In Chapter 8 we look at some of the assumptions which have been made and indicate what may go wrong when these assumptions no longer hold.
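As a concrete instance of the kind of necessary and sufficient condition referred to in Chapter 2 (the thesis treats general criteria; the D-optimality case is shown here purely for illustration), the Kiefer-Wolfowitz general equivalence theorem states that a design measure ξ* is D-optimal for a p-parameter linear model with regression functions f(x) and information matrix M(ξ) if and only if

```latex
d(x, \xi^{*}) \;=\; f(x)^{\top} M(\xi^{*})^{-1} f(x) \;\le\; p \qquad \text{for all } x \in \mathcal{X},
```

with equality holding at the support points of ξ*.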
Abstract:
This document is the Online Supplement to ‘Myopic Allocation Policy with Asymptotically Optimal Sampling Rate,’ to be published in the IEEE Transactions on Automatic Control in 2017.
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold Pa exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability Pe for such a gate must fall below the accuracy threshold: Pe < Pa. Estimates of Pa vary widely, though Pa ∼ 10⁻⁴ has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, for both ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall 1 to 4 orders of magnitude below the target threshold of 10⁻⁴. After applying neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ϵ ∼ 10⁻⁶ (10⁻⁵) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each pair prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
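As a small numerical illustration of what an error probability of order 10⁻⁶ means for the prepared state (assuming ϵ is one minus the fidelity with the target state, which may differ from the thesis' exact definition; the noisy amplitude below is hypothetical):

```python
import numpy as np

# Target Bell state |beta_01> = (|01> + |10>)/sqrt(2), written in the
# computational basis {|00>, |01>, |10>, |11>}.
beta_01 = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)

def preparation_error(prepared_state):
    """Return 1 - |<beta_01|psi>|^2 for a prepared pure state psi."""
    psi = np.asarray(prepared_state, dtype=complex)
    psi = psi / np.linalg.norm(psi)            # normalize the input state
    return 1.0 - abs(np.vdot(beta_01, psi)) ** 2

# A slightly imperfect preparation with a small spurious |00> component:
noisy = np.array([0.002, 1.0, 1.0, 0.0])
print(f"error probability ~ {preparation_error(noisy):.2e}")   # ~2e-06
```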
Abstract:
Libraries, since their inception 4000 years ago, have been in a process of constant change. Although change was slow for centuries, in recent decades academic libraries have been continuously striving to adapt their services to the ever-changing needs of students and academic staff. In addition, the e-content revolution, technological advances, and ever-shrinking budgets have obliged libraries to allocate their limited resources efficiently among collection and services. Unfortunately, this resource allocation is a complex process due to the diversity of data sources and formats that must be analyzed prior to decision-making, as well as the lack of efficient integration methods. The main purpose of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. To this end, a combination of several methodologies and structured approaches is conducted. Firstly, a holistic structure and the toolset required to holistically assess academic libraries are proposed to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspective of users and internal stakeholders. The first quadrant corresponds to the internal perspective of the library system, that is, analyzing library performance and the costs incurred and resources consumed by library services. The second quadrant evaluates the external perspective of the library system: users' perception of service quality is judged in this quadrant. The third quadrant analyses the external perspective of the library collection, that is, evaluating the impact of the current library collection on its users. Finally, the fourth quadrant evaluates the internal perspective of the library collection: the usage patterns followed to manipulate the library collection are analyzed. With a complete framework for data collection in place, these data, coming from multiple sources and therefore in different formats, need to be integrated and stored in an adequate scheme for decision support. Secondly, a data warehousing approach is designed and implemented to integrate, process, and store the holistically collected data. Ultimately, the strategic data stored in the data warehouse are analyzed and used for different purposes, including the following: 1) Data visualization and reporting is proposed to allow library managers to publish library indicators in a simple and quick manner using online reporting tools. 2) Sophisticated data analysis is recommended through the use of data mining tools; three data mining techniques are examined in this research study: regression, clustering, and classification. These techniques have been applied to the case study in the following manner: predicting future investment in library development; finding clusters of users that share common interests and similar profiles but belong to different faculties; and predicting library factors that affect student academic performance by analyzing possible correlations between library usage and academic performance. 3) As input for optimization models, early experiences of developing an optimal resource allocation model to distribute resources among the different processes of a library system are documented in this study.
Specifically, the problem of allocating funds for the digital collection among the divisions of an academic library is addressed. An optimization model for the problem is defined with the objective of maximizing the usage of the digital collection across all library divisions, subject to a single collection budget. By proposing this holistic approach, the research study contributes to knowledge by providing an integrated solution to assist library managers in making economic decisions based on an “as realistic as possible” perspective of the library situation.
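A toy version of such an allocation model, with hypothetical usage-per-unit-spend coefficients and per-division caps (the study's actual objective, constraints, and data are not reproduced here), could be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: expected usage gained per monetary unit spent on the
# digital collection in each of four library divisions, with per-division caps.
usage_per_unit = np.array([3.2, 2.5, 4.1, 1.8])
max_spend = np.array([40_000, 30_000, 25_000, 20_000])
budget = 80_000

# linprog minimizes, so negate the objective to maximize total usage.
res = linprog(
    c=-usage_per_unit,
    A_ub=np.ones((1, 4)),                 # total spending <= budget
    b_ub=[budget],
    bounds=list(zip(np.zeros(4), max_spend)),
    method="highs",
)
print("allocation per division:", res.x)
print("expected total usage:", -res.fun)
```

The solver simply fills the divisions with the highest usage return per unit of spend until the budget or their caps bind, which is the intuition behind a usage-maximizing collection budget.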
Abstract:
This paper presents the results of a research study aimed at identifying optimal performance standards of Brazilian public and philanthropic hospitals. To carry out the analysis, a model based on Data Envelopment Analysis (DEA) was developed. We collected financial data from the hospitals' financial statements available on the internet, as well as operational data from the Information Technology Department of the Brazilian Public Health Care System – SUS (DATASUS). Data from 18 hospitals from 2007 to 2011 were analyzed. Our DEA model used both operational and financial indicators (variables). To develop this model, two indicators were considered inputs: the value (in Brazilian Reais) of Fixed Assets, and Planned Capacity. The following indicators were considered outputs: Net Margin, Return on Assets, and Institutional Mortality Rate. Under the proposed model, five hospitals showed optimal performance and four hospitals were considered inefficient over the analyzed period. Analysis of the weights indicated the most relevant variables for determining efficiency and the values of the scale variables, which is an important tool to aid decision-making by hospital managers. Finally, the scale variables determined the returns to scale, indicating that 14 hospitals operate under diseconomies of scale. Based on this set of proposed variables, this may indicate inefficiency in the resource management of the Brazilian public health-care system.
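For reference, a standard input-oriented, constant-returns-to-scale DEA (CCR) formulation is shown below; the paper does not specify its exact DEA variant or orientation, so this is illustrative only. For hospital o among n hospitals, with inputs x_{io} (e.g., fixed assets, planned capacity) and outputs y_{ro}, the efficiency score solves

```latex
\theta_o^{*} \;=\; \min_{\theta,\ \lambda \ge 0} \ \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} \ \ \forall i, \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \ \ \forall r,
```

with θ*_o = 1 indicating a hospital on the efficient frontier; a variable-returns-to-scale (BCC) variant adds the constraint Σ_j λ_j = 1, which is what allows returns to scale, and hence scale diseconomies, to be assessed.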
Abstract:
Activated carbon was prepared from date pits via chemical activation with H3PO4. The effects of activating agent concentration and activation temperature on the yield and surface area were studied. The optimal activated carbon was prepared at 450 °C using 55 % H3PO4. The prepared activated carbon was characterized by Fourier transform infrared spectroscopy, scanning electron microscopy, thermogravimetric-differential thermal analysis, and Brunauer-Emmett-Teller (BET) surface area analysis. The prepared date pit-based activated carbon (DAC) was used for the removal of bromate (BrO3⁻). The concentration of BrO3⁻ was determined by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The experimental equilibrium data for BrO3⁻ adsorption onto DAC were well fitted by the Langmuir isotherm model, with a maximum monolayer adsorption capacity of 25.64 mg g⁻¹. The adsorption kinetics of BrO3⁻ were very well represented by the pseudo-first-order equation. The analytical application of DAC to the analysis of real water samples was studied, with very promising results.
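In their usual forms (standard notation, not taken from the paper), the two models named above are

```latex
q_e = \frac{q_m K_L C_e}{1 + K_L C_e}, \qquad \ln\!\left(q_e - q_t\right) = \ln q_e - k_1 t,
```

where q_e and q_t (mg g⁻¹) are the amounts adsorbed at equilibrium and at time t, C_e (mg L⁻¹) is the equilibrium adsorbate concentration, q_m is the maximum monolayer capacity (25.64 mg g⁻¹ reported here), K_L is the Langmuir constant, and k_1 is the pseudo-first-order rate constant.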