908 results for Static-order-trade-off
Abstract:
Iris-based identity verification is highly reliable, but it can also be subject to attacks. Pupil dilation or constriction stimulated by the application of drugs is an example of a sample-presentation security attack that can lead to higher false rejection rates. Suspects on a watch list can potentially circumvent an iris-based system using such methods. This paper investigates a new approach that uses multiple parts of the iris (instances) and multiple iris samples in a sequential decision fusion framework to yield robust performance. Results are presented and compared with the standard full-iris approach for a number of iris degradations. An advantage of the proposed fusion scheme is that the trade-off between detection errors can be controlled by setting parameters such as the number of instances and the number of samples used in the system; the system can then be operated to match security threat levels. It is shown that, for optimal values of these parameters, the fused system also has a lower total error rate.
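The trade-off controlled by the number of instances (n) and samples (m) can be sketched with a toy error-rate model. This is an illustrative assumption, not the paper's actual fusion rule: it assumes per-attempt errors are independent, that a user passes an instance if any of m samples matches (OR), and is accepted only if all n instances pass (AND).

```python
def fused_error_rates(frr, far, n, m):
    """Toy model: OR-fusion over m samples per instance, AND-fusion over n instances.

    frr, far -- per-attempt false reject / false accept rates (assumed independent).
    Returns the fused (FRR, FAR)."""
    frr_sample = frr ** m                 # rejected only if all m samples fail
    far_sample = 1 - (1 - far) ** m       # accepted if any of m samples falsely matches
    fused_frr = 1 - (1 - frr_sample) ** n # rejected if any of n instances rejects
    fused_far = far_sample ** n           # accepted only if all n instances falsely accept
    return fused_frr, fused_far

# Tuning n and m moves both error rates, e.g.:
fr, fa = fused_error_rates(frr=0.05, far=0.01, n=3, m=2)
```

Under these assumptions, both fused rates fall below the per-attempt base rates, and raising n (or lowering m) trades false accepts against false rejects, which matches the controllability claim in the abstract.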
Abstract:
Since the first oil crisis in 1974, economic reasons have placed energy saving among the top priorities in most industrialised countries. In the decades that followed, another, equally strong driver for energy saving emerged: climate change caused by anthropogenic emissions, a large fraction of which result from energy generation. Intrinsically linked to energy consumption and its related emissions is another problem: indoor air quality. City dwellers in industrialised nations spend over 90% of their time indoors, and exposure to indoor pollutants contributes to ~2.6% of the global burden of disease and nearly 2 million premature deaths per year. Changing climate conditions, together with human expectations of comfortable thermal conditions, elevate building energy requirements for heating, cooling, lighting and the use of other electrical equipment. We believe that these changes elicit a need to understand the nexus between energy consumption and its consequent impact on indoor air quality in urban buildings. In our opinion, the key questions are how energy consumption is distributed between different building services, and how the resulting pollution affects indoor air quality. The energy-pollution nexus has clearly been identified in qualitative terms; however, the quantification of such a nexus, to derive emissions or concentrations per unit of energy consumption, is still weak and inconclusive and requires forward thinking. Of course, various aspects of energy consumption and indoor air quality have been studied in detail separately, but in-depth, integrated studies of the energy-pollution nexus are hard to come by. We argue that such studies could be instrumental in providing sustainable solutions that maintain the trade-off between the energy efficiency of buildings and acceptable levels of air pollution for healthy living.
Abstract:
Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, best performance with this architecture is obtained for a certain combination of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved with the 'best combination performance' rule. As the search complexity of this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control and to other systems, such as identity verification based on multiple fingerprints or multiple handwriting samples.
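The complexity argument above can be illustrated with a sketch contrasting exhaustive 'best combination performance' search against a stage-wise greedy pick. The per-classifier error score used here is a hypothetical stand-in; the paper's actual SER measure accounts for the sequential architecture and is defined there.

```python
import itertools

def exhaustive_best(errors, k):
    """Try every k-subset of classifiers; cost grows combinatorially with the pool size."""
    return min(itertools.combinations(range(len(errors)), k),
               key=lambda combo: sum(errors[i] for i in combo))

def greedy_select(errors, k):
    """At each stage, add the remaining classifier with the lowest error score (linear cost)."""
    remaining = list(range(len(errors)))
    chosen = []
    for _ in range(k):
        best = min(remaining, key=lambda i: errors[i])
        remaining.remove(best)
        chosen.append(best)
    return tuple(sorted(chosen))

# Hypothetical per-classifier error scores for a pool of four instances:
pool_errors = [0.3, 0.1, 0.2, 0.05]
```

When the scores combine additively and independently, the greedy pick coincides with the exhaustive optimum; SER aims at this kind of near-optimal stage-wise selection without the exponential search.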
Abstract:
Objectives: To investigate the frequency of the ACTN3 R577X polymorphism in elite endurance triathletes, and whether ACTN3 R577X is significantly associated with performance time. Design: Cross-sectional study. Methods: Saliva samples, questionnaires, and performance times were collected for 196 elite endurance athletes who participated in the 2008 Kona Ironman championship triathlon. Athletes were of predominantly North American, European, and Australian origin. A one-way analysis of variance was conducted to compare performance times between genotype groups. Multiple linear regression analysis was performed to model the effect of questionnaire variables and genotype on performance time. Genotype and allele frequencies were compared to results from different populations using the chi-square test. Results: Performance time did not significantly differ between genotype groups; age, sex, and continent of origin were significant predictors of finishing time (age and sex: p < 5 × 10⁻⁶; continent: p = 0.003), though genotype was not. Genotype and allele frequencies obtained (RR 26.5%, RX 50.0%, XX 23.5%, R 51.5%, X 48.5%) were not significantly different from those of Australian, Spanish, and Italian endurance athletes (p > 0.05), but were significantly different from those of Kenyan, Ethiopian, and Finnish endurance athletes (p < 0.01). Conclusions: Genotype and allele frequencies agreed with those reported for endurance athletes of similar ethnic origin, supporting previous findings of an association between the 577X allele and endurance. However, analysis of performance time suggests that ACTN3 alone does not influence endurance performance, or that it may have a complex effect on endurance performance due to a speed/endurance trade-off.
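The chi-square comparison of genotype frequencies can be sketched with the Pearson statistic. The observed counts below are derived from the reported percentages of 196 athletes (RR 26.5%, RX 50.0%, XX 23.5%); the reference-population frequencies are purely hypothetical, standing in for a population with a different genotype distribution.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over genotype categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Observed genotype counts (~26.5% / 50.0% / 23.5% of 196 athletes):
observed = [52, 98, 46]

# Hypothetical reference population with frequencies 45% / 45% / 10%:
ref_freqs = [0.45, 0.45, 0.10]
expected = [f * sum(observed) for f in ref_freqs]

CRITICAL_05_DF2 = 5.991  # chi-square critical value, alpha = 0.05, df = 2
stat = chi_square_stat(observed, expected)
differs = stat > CRITICAL_05_DF2  # True -> distributions differ significantly
```

With two degrees of freedom (three genotype categories), a statistic above 5.991 rejects equality of distributions at the 5% level, mirroring the significant/non-significant splits reported in the abstract.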
Abstract:
We examine the cost and nutrient use efficiency of farms and determine the cost of moving farms to nutrient-efficient operation using Data Envelopment Analysis (DEA) with a dataset of 96 rice farms in Gangwon province of South Korea from 2003 to 2007. Our findings show that improvements in technical efficiency would result in both lower production costs and better environmental performance. It is, however, not costless for farms to move from their current operation to the environmentally efficient operation. On average, this movement would increase production costs by 119% but benefit the water system through an approximately 69% reduction in eutrophying power (EP). The average estimated cost per kg of aggregate nutrient (EP) reduction is approximately 1,200 won. For technically efficient farms, there is a trade-off between cost and environmental efficiency. We also find that environmental performance varies across farms and regions. We suggest that agri-environmental policies should be (re)designed to improve both the cost and environmental performance of rice farms.
Abstract:
The literature concerning firm boundaries has focussed extensively on the rationale for different boundary choices and the economic efficiencies that such choices can deliver. There is also an acknowledged position that a firm's boundary choices may affect its ability to maintain and even build new capabilities, though such choices may not be optimal from an economic efficiency perspective. It is in this context that we investigate how firms make this potential trade-off in their boundary choices and how these choices are implemented across a wide range of activities. Using qualitative data from three public-sector construction-oriented organizations, we observe that neither pure make nor pure buy decisions assisted significantly in capability building. Dual modes, where firms make and buy the same product or service simultaneously, provided firms with some opportunities to manage this paradox, but the most successful decisions seemed to involve intermediate governance modes such as alliances. We also observed that the boundary choice was just one dimension of the capability-building process, and that firms pursuing the same boundary choices often had quite divergent outcomes depending on their boundary management and the ability of knowledge to move across firm boundaries.
Abstract:
The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the sizes of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank-sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
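The precision/recall trade-off mentioned above amounts to sweeping a decision threshold over the model's fault-proneness scores. The sketch below is a generic illustration with made-up scores and labels, not the paper's classifier or data.

```python
def precision_recall(scores, labels, threshold):
    """Compute (precision, recall) for predicting 'fault-prone' when score >= threshold.

    scores -- model fault-proneness scores per module (hypothetical values).
    labels -- 1 if the module was actually faulty, else 0."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Lowering the threshold inspects more modules: recall rises at some cost in precision.
scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 0, 1, 0]
```

A team with ample inspection budget would pick a low threshold (high recall, catch most faults); a constrained team would pick a higher one (high precision, fewer wasted inspections), which is exactly the tuning the rank-sum representation is claimed to make convenient.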
Abstract:
Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not achieve this, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed, and the effect of the number of parts and samples on the resultant detection error rates is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates are obtained than with the non-fused base system.
Abstract:
The design activities of the development of the SCRAMSPACE I scramjet-powered free-flight experiment are described in this paper. The objectives of this flight are first described, together with the definition of the primary, secondary and tertiary experiments. The scramjet configuration studied is then discussed, together with the rocket motor system selected for this flight. The different flight sequences are then explained, highlighting the SCRAMSPACE I free-flyer separation and re-orientation procedures. A design trade-off study is then described, considering vehicle stability, packaging, thermo-structural analysis and trajectory, and discussing the alignment of the predicted performance with the mission's scientific requirements. The global system architecture and instrumentation of the vehicle are then explained. The conclusions of this design phase are that a vehicle design has been produced which is able to meet the mission's scientific goals, and that procurement and construction of the vehicle are ongoing.
Abstract:
This series of research vignettes is aimed at sharing current and interesting research findings from our team of international Entrepreneurship researchers. In this vignette, Dr Martin Bliemel and his research team consider a trade-off entrepreneurs face when managing their network: should they form stronger relationships to acquire key resources, or should they reach out to more potential partners to access new resources?
Abstract:
Most existing motorway traffic safety studies using disaggregate traffic flow data aim at developing models for identifying real-time traffic risks by comparing pre-crash and non-crash conditions. A serious shortcoming of those studies is that non-crash conditions are arbitrarily selected and hence not representative: the selected non-crash data might not be comparable with the pre-crash data, and the non-crash/pre-crash ratio is arbitrarily decided and neglects the abundance of non-crash over pre-crash conditions. Here, we present a methodology for developing a real-time MotorwaY Traffic Risk Identification Model (MyTRIM) using individual vehicle data, meteorological data, and crash data. Non-crash data are clustered into groups called traffic regimes. Thereafter, pre-crash data are classified into regimes to match with the relevant non-crash data. Of the eight traffic regimes obtained, four highly risky regimes were identified, and three regime-based Risk Identification Models (RIM) with sufficient pre-crash data were developed. MyTRIM memorizes the latest risk evolution identified by RIM to predict near-future risks. Traffic practitioners can decide MyTRIM's memory size based on the trade-off between detection and false alarm rates: decreasing the memory size from 5 to 1 increases the detection rate from 65.0% to 100.0% and the false alarm rate from 0.21% to 3.68%. Moreover, critical factors in differentiating pre-crash and non-crash conditions are recognized and usable for developing preventive measures. MyTRIM can be used by practitioners in real time as an independent tool for online decision making, or integrated with existing traffic management systems.
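The memory-size trade-off can be sketched with a toy alarm filter. This is an illustrative assumption about the mechanism, not MyTRIM's actual memory: here an alarm is raised only when risk has been flagged by RIM for the whole of the last `memory` time steps, so a larger memory suppresses transient (often false) alarms at the cost of missed detections, matching the direction of the reported numbers.

```python
from collections import deque

def alarms(risk_flags, memory):
    """Raise an alarm (1) only when the last `memory` RIM outputs all flagged risk.

    risk_flags -- sequence of 0/1 RIM risk indications per time step."""
    window = deque(maxlen=memory)
    out = []
    for flag in risk_flags:
        window.append(flag)
        out.append(int(len(window) == memory and all(window)))
    return out

# memory = 1 fires on every flagged step (max detection, max false alarms);
# memory = 2 requires risk to persist before alarming.
flags = [1, 1, 1, 0, 1]
```

Shrinking the memory from 5 to 1 in this toy model strictly increases the number of alarms raised, i.e. detection and false alarm rates rise together, which is the trade-off practitioners tune in the abstract.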
Abstract:
Cancer is a disease of signal transduction in which dysregulation of the network of intracellular and extracellular signaling cascades is sufficient to thwart the cell's finely tuned biochemical control mechanisms. A keen interest in the mathematical modeling of cell signaling networks and the regulation of signal transduction has emerged in recent years, and has produced a glimmer of insight into the sophisticated feedback control and network regulation operating within cells. In this review, we present an overview of published theoretical studies on the control aspects of signal transduction, emphasizing the role and importance of mechanisms such as ‘ultrasensitivity’ and feedback loops. We emphasize that these exquisite and often subtle control strategies represent the key to orchestrating ‘simple’ signaling behaviors within the complex intracellular network, while regulating the trade-off between sensitivity and robustness to internal and external perturbations. Through a consideration of these apparent paradoxes, we explore how the basic homeostasis of the intracellular signaling network, in the face of carcinogenesis, can lead to neoplastic progression rather than cell death. A simple mathematical model is presented, furnishing a vivid illustration of how ‘control-oriented’ models of the deranged signaling networks in cancer cells may yield improved treatment strategies, including patient-tailored combination therapies, with the potential for reduced toxicity and more robust and potent antitumor activity.
Abstract:
The Common Scrambling Algorithm Stream Cipher (CSA-SC) is a shift-register-based stream cipher designed to encrypt digital video broadcasts. CSA-SC produces a pseudo-random binary sequence that is used to mask the contents of the transmission. In this paper, we analyse the initialisation process of the CSA-SC keystream generator and demonstrate weaknesses which lead to state convergence, slid pairs and shifted keystreams. As a result, the cipher may be vulnerable to distinguishing attacks, time-memory-data trade-off attacks or slide attacks.
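The notion of shifted keystreams can be illustrated with a toy Fibonacci LFSR, not CSA-SC itself: if initialisation can put the generator into a state it would otherwise reach k clocks later, the two keystreams are shifts of one another, which is the structural property a slide attack exploits.

```python
def lfsr_bits(state, taps, nbits, n):
    """Clock an nbits-wide Fibonacci LFSR n times.

    taps  -- bit positions XORed to form the feedback bit.
    Returns (output bits, final state) so a state can be advanced without output."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return out, state

# Toy 4-bit register with taps at bits 0 and 3 (illustrative parameters only):
ks, _ = lfsr_bits(0b1001, (0, 3), 4, 20)
```

Starting the same register from a state advanced by 3 clocks reproduces the original keystream shifted by 3 bits, the toy analogue of the slid pairs found in the CSA-SC initialisation.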
Abstract:
Potential conflicts exist between biodiversity conservation and climate-change mitigation as trade-offs in multiple-use land management. This study aims to evaluate public preferences for biodiversity conservation and climate-change mitigation policy, taking into account respondents' uncertainty about their choices. We conducted a choice experiment using land-use scenarios in the rural Kushiro watershed in northern Japan. The results showed that the public strongly wishes to avoid the extinction of endangered species, in preference to climate-change mitigation in the form of carbon sequestration through increasing the area of managed forest. Knowledge of the site and respondents' awareness of the personal benefits associated with supporting and regulating services had a positive effect on their preference for conservation plans. Thus, decision-makers should be careful about how they provide ecological information for informed choices concerning ecosystem service trade-offs. Suggesting targets with explicit indicators will affect public preferences, as well as the willingness of the public to pay for such measures. Furthermore, the elicited-choice-probabilities approach is useful for revealing the distribution of relative preferences over incomplete scenarios, thus verifying the effectiveness of the indicators introduced in the experiment.
Abstract:
The work presented in this report aims to implement a cost-effective offline mission path planner for aerial inspection of large linear infrastructure. Like most real-world optimisation problems, mission path planning involves a number of objectives which ideally should be minimised simultaneously. Understandably, the objectives of a practical optimisation problem conflict with each other, and the minimisation of one of them necessarily implies the impossibility of minimising the others. This leads to the need to find a set of optimal solutions for the problem; once such a set of available options is produced, the mission planning problem is reduced to a decision-making problem for the mission specialists, who will choose the solution that best fits the requirements of the mission. The goal of this work is therefore to develop a multi-objective optimisation tool able to provide the mission specialists with a set of optimal solutions for the inspection task, amongst which the final trajectory will be chosen, given the environment data, the mission requirements and the definition of the objectives to minimise. All the possible optimal solutions of a multi-objective optimisation problem are said to form the Pareto-optimal front of the problem. For any of the Pareto-optimal solutions, it is impossible to improve one objective without worsening at least one other. Amongst a set of Pareto-optimal solutions, no solution is absolutely better than another, and the final choice must be a trade-off among the objectives of the problem. Multi-Objective Evolutionary Algorithms (MOEAs) are recognised as a convenient method for exploring the Pareto-optimal front of multi-objective optimisation problems. Their efficiency is due to their population-based, parallel architecture, which allows several optimal solutions to be found in a single run.
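The Pareto-optimality definition above (no objective can be improved without worsening another) translates directly into a dominance test; the candidate points below are made-up objective vectors for two minimisation objectives.

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every (minimised) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (objective1, objective2) values for four candidate trajectories:
candidates = [(1, 5), (2, 2), (5, 1), (4, 4)]
front = pareto_front(candidates)
```

Here (4, 4) is dominated by (2, 2) and drops out, while the three remaining trajectories are mutually non-dominated: the mission specialists' final choice among them is the trade-off the report describes.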