25 results for Weighted by Sum Assured
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
The least-mean-fourth (LMF) algorithm is known for its fast convergence and low steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers and weighted by a fixed mixed-power parameter. Unfortunately, its performance depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. The convergence, transient, and steady-state behaviour of the proposed algorithm are analysed and verified through simulations, and an enhancement in performance is obtained in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
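To make the class of updates concrete, here is a minimal Python sketch of a mixed-power-normalised LMF filter for system identification. The normalisation p(k) = λ‖x(k)‖² + (1−λ)e²(k) + ε and the smoothing rule that makes λ time-varying are illustrative assumptions; the paper derives its own recursion for the mixing parameter.

```python
import numpy as np

def nlmf_identify(x, d, n_taps, mu=0.01, lam0=0.5, eps=1e-8):
    """Adaptive filtering with a mixed-power-normalised LMF-type update.

    Hedged sketch: the weight update w += mu * e^3 * x_vec / p uses the
    assumed normalisation p = lam*||x||^2 + (1-lam)*e^2 + eps, and lam
    is adapted by a simple (assumed) smoothing rule rather than the
    paper's actual time-varying recursion.
    """
    w = np.zeros(n_taps)
    lam = lam0
    errors = np.empty(len(x) - n_taps)
    for k in range(n_taps, len(x)):
        x_vec = x[k - n_taps:k][::-1]            # regressor, most recent sample first
        e = d[k] - w @ x_vec                     # a priori estimation error
        xx = x_vec @ x_vec
        p = lam * xx + (1 - lam) * e**2 + eps    # mixed signal/error power
        w += mu * e**3 * x_vec / p               # fourth-moment (LMF) update
        lam = 0.99 * lam + 0.01 * xx / (xx + e**2 + eps)  # assumed heuristic
        errors[k - n_taps] = e
    return w, errors
```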
Abstract:
We report on the characterization of the specular reflection of 50 fs laser pulses in the intensity range 10^17-10^21 W cm^-2, obliquely incident with p-polarization onto solid-density plasmas. These measurements show that the absorbed energy fraction remains approximately constant and that second harmonic generation (SHG) achieves efficiencies of 22 ± 8% for intensities approaching 10^21 W cm^-2. A simple model based on the relativistic oscillating mirror concept reproduces the observed intensity scaling, indicating that this is the dominant process under these conditions. This method may prove superior to SHG by sum-frequency mixing in crystals, as it is free from dispersion and retains high spatial coherence at high intensity.
Abstract:
We assessed ten trophodynamic indicators of ecosystem status for their sensitivity and specificity to fishing management using a size-resolved multispecies fish community model. The responses of indicators to fishing depended on effort and on the size selectivity (sigmoid or Gaussian) of fishing mortality. The highest specificity against sigmoid (trawl-like) size selection was shown by inverse fishing pressure and the large fish indicator, whereas for Gaussian size selection the large species indicator was most specific. Biomass, mean trophic level of the community and of the catch, and fishing-in-balance had the lowest specificity against both size selectivities. Length-based indicators weighted by biomass, rather than abundance, were more sensitive and specific to fishing pressure. Most indicators showed a greater response to sigmoid than to Gaussian size selection. Indicators were generally more sensitive at low levels of effort because of the nonlinear sensitivity of trophic cascades to fishing mortality. No single indicator emerged as superior in all respects, so, given available data, multiple complementary indicators are recommended for community monitoring in the ecosystem approach to fisheries management.
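As an illustration of the biomass- versus abundance-weighting distinction, here is a minimal Python sketch of a community mean-length indicator (names and inputs are hypothetical; the study's indicator suite is broader):

```python
import numpy as np

def mean_length(lengths, biomass, abundance, weight_by="biomass"):
    """Community mean length, weighted by biomass or abundance per size class."""
    lengths = np.asarray(lengths, dtype=float)
    w = np.asarray(biomass if weight_by == "biomass" else abundance, dtype=float)
    return float(np.sum(lengths * w) / np.sum(w))
```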
Abstract:
Background:
Men and clinicians need reliable population-based information when making decisions about the investigation and treatment of prostate cancer. In the absence of clearly preferred treatments, differences in outcomes become more important.
Aim:
To investigate rates of adverse physical effects among prostate cancer survivors 2-15 years post-diagnosis, by treatment received, and to estimate the population burden.
Methods:
A cross-sectional postal survey was sent to 6,559 survivors (all ages) diagnosed with primary, invasive prostate cancer (ICD10-C61), identified via cancer registries in Northern Ireland and the Republic of Ireland. Questions covered symptoms at diagnosis, treatments received, and adverse physical effects (impotence, urinary incontinence, bowel problems, breast changes, libido loss, hot flashes, fatigue) experienced 'ever' and 'currently', i.e. at questionnaire completion. Physical effect levels were weighted by age, country and time since diagnosis for all prostate cancer survivors. Bonferroni corrections were applied to account for multiple comparisons.
Results:
The adjusted response rate was 54% (n=3,348). 75% reported at least one current physical effect (90% ever), with 29% reporting at least three; these varied by treatment. Current impotence was reported by 76% post-prostatectomy and by 64% post-external beam radiotherapy with hormone therapy, with an average of 57% across all survivors. Urinary incontinence (overall current level: 16%) was highest post-prostatectomy (current 28%, ever 70%). 42% of brachytherapy patients reported no current adverse physical effects; however, 43% reported current impotence and 8% current incontinence. Current hot flashes (41%), breast changes (18%) and fatigue (28%) were reported more commonly by patients on hormone therapy.
Conclusions:
This study provides evidence that adverse physical effects following prostate cancer represent a significant public health burden: an estimated 1.6% of men over 45 are prostate cancer survivors with a current adverse physical effect. This information should facilitate investigation and treatment decision-making and the follow-up care of patients.
Abstract:
OBJECTIVE: To document prostate cancer patients' reported 'ever experienced' and 'current' prevalence of disease-specific physical symptoms, stratified by primary treatment received.
PATIENTS: 3,348 prostate cancer survivors 2-15 years post-diagnosis.
METHODS: Cross-sectional postal survey of 6,559 survivors diagnosed 2-15 years previously with primary, invasive PCa (ICD10-C61), identified via national population-based cancer registries in Northern Ireland and the Republic of Ireland. Questions covered symptoms at diagnosis, primary treatments, and physical symptoms (impotence, urinary incontinence, bowel problems, breast changes, loss of libido, hot flashes, fatigue) experienced 'ever' and at questionnaire completion ('current'). Symptom proportions were weighted by age, country and time since diagnosis. Bonferroni corrections were applied for multiple comparisons.
RESULTS: The adjusted response rate was 54%; 75% reported at least one 'current' physical symptom ('ever': 90%), with 29% reporting at least three. Prevalence varied by treatment: overall, 57% reported current impotence; this was highest following radical prostatectomy (RP; 76%), followed by external beam radiotherapy with concurrent hormone therapy (HT; 64%). Urinary incontinence (overall 'current': 16%) was highest following RP ('current' 28%, 'ever' 70%). While 42% of brachytherapy patients reported no 'current' symptoms, 43% reported 'current' impotence and 8% 'current' incontinence. 'Current' hot flashes (41%), breast changes (18%) and fatigue (28%) were reported more often by patients on HT.
CONCLUSION: Symptoms following prostate cancer are common, often multiple, persist long-term and vary by treatment. They represent a significant health burden: an estimated 1.6% of men over 45 are prostate cancer survivors currently experiencing an adverse physical symptom. Recognition and treatment of physical symptoms should be prioritised in patient follow-up. This information should help men and clinicians when deciding about treatment, as differences in survival between radical treatments are minimal.
Abstract:
A ranking method assigns to every weighted directed graph a (weak) ordering of the nodes. In this paper we axiomatize the ranking method that ranks the nodes according to their outflow, using four independent axioms. Besides the well-known axioms of anonymity and positive responsiveness, we introduce outflow monotonicity, meaning that in a pairwise comparison between two nodes, a node does not do worse if its own outflow does not decrease and the other node's outflow does not increase, and order preservation, meaning that if the pairwise ranking between two nodes is the same in two weighted digraphs, then it is also their pairwise ranking in the 'sum' weighted digraph. The outflow ranking method generalizes the ranking by outdegree for directed graphs, and therefore also the ranking by Copeland score for tournaments.
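The axiomatization is the paper's contribution; the ranking method itself is straightforward. A minimal Python sketch, taking a weighted digraph as (tail, head, weight) triples and returning the weak ordering as indifference classes, best first:

```python
from collections import defaultdict

def outflow_ranking(arcs):
    """Rank nodes by outflow: the sum of weights on outgoing arcs."""
    outflow = defaultdict(float)
    nodes = set()
    for tail, head, w in arcs:
        nodes.update((tail, head))
        outflow[tail] += w
    ranked = sorted(nodes, key=lambda n: -outflow[n])
    ordering, current, cur_val = [], set(), None
    for n in ranked:
        if current and outflow[n] != cur_val:   # new indifference class
            ordering.append(current)
            current = set()
        current.add(n)
        cur_val = outflow[n]
    if current:
        ordering.append(current)
    return ordering

# Example: node 'a' has outflow 3, 'b' has 2, 'c' has 0.
# outflow_ranking([("a", "b", 3), ("b", "c", 2)]) -> [{'a'}, {'b'}, {'c'}]
```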
Abstract:
Results are presented for e+ scattering by H- in the impact energy range 0 ≤ E_0 ≤ 10 eV. These include integrated cross sections for Ps formation in the 1s, 2s, and 2p states, as well as in an aggregate of states with n ≥ 3, and for direct ionization. Differential cross sections for Ps formation in the 1s, 2s, and 2p states are also exhibited. The calculations are based on a coupled-pseudostate approach employing 19 Ps pseudostates centered on the e+. It is found that Ps formation in the 2p state dominates that in the 1s or 2s states below 8 eV, that formation in states with n ≥ 3 exceeds the sum of the n=1 and n=2 cross sections above 2.5 eV, and that direct ionization outstrips total Ps formation above 6.3 eV. The threshold law (E_0 → 0) for exothermic Ps formation, which includes the cases Ps(1s), Ps(2s), and Ps(2p), is shown to be 1/E_0.
Abstract:
Structural and thermodynamic properties of spherical particles carrying classical spins are investigated by Monte Carlo simulations. The potential energy is the sum of short-range, purely repulsive pair contributions and spin-spin interactions. The latter are of the dipole-dipole form, with, however, a crucial change of sign. At low density and high temperature the system is a homogeneous fluid of weakly interacting particles with short-range spin correlations. With decreasing temperature, particles condense into an equilibrium population of free-floating vesicles. The comparison with the electrostatic case, which gives rise to predominantly one-dimensional aggregates under similar conditions, is discussed. In both cases condensation is a continuous transformation, provided the isotropic part of the interatomic potential is purely repulsive. At low temperature the model allows us to investigate thermal and mechanical properties of membranes. At intermediate temperatures it provides a simple model for investigating equilibrium polymerization in a system giving rise to predominantly two-dimensional aggregates.
Abstract:
We consider three-sided coalition formation problems when each agent is concerned about his local status as measured by his relative rank position within the group of his own type and about his global status as measured by the weighted sum of the average rankings of the other types of groups. We show that a core stable coalition structure always exists, provided that the corresponding weights are balanced and each agent perceives the two types of status as being substitutable.
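As a hedged formalization of these two status notions (the notation is assumed for illustration, not taken from the paper): for an agent $i$ of type $t$ in coalition $C$, local status is his rank $r_i(C)$ among the type-$t$ members of $C$, while global status can be written as

$$g_i(C) = \sum_{s \neq t} w_s \, \bar{r}_s(C),$$

where $\bar{r}_s(C)$ is the average ranking of the type-$s$ members of $C$ and the weights $w_s$ are those required to be balanced in the existence result.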
Abstract:
Nitrogen dioxide (NO2) is known to act as an environmental trigger for many respiratory illnesses. As a pollutant it is difficult to map accurately, as concentrations can vary greatly over small distances. In this study three geostatistical techniques for producing maps of NO2 concentrations in the United Kingdom (UK) were compared. The primary data source for each technique was NO2 point data, generated from background automatic monitoring and background diffusion tubes, which are analysed by different laboratories on behalf of local councils and authorities in the UK. The techniques used were simple kriging (SK), ordinary kriging (OK) and simple kriging with a locally varying mean (SKlm). SK and OK make use of the primary variable only. SKlm differs in that it utilises additional data to inform prediction, and hence potentially reduces uncertainty. The secondary data source was oxides of nitrogen (NOx) data derived from dispersion-modelling outputs at 1 km x 1 km resolution for the UK. These data were used to define the locally varying mean in SKlm, using two regression approaches: (i) global regression (GR) and (ii) geographically weighted regression (GWR). Based upon summary statistics and cross-validation prediction errors, SKlm using GWR-derived local means produced the most accurate predictions. Using GWR to inform SKlm was therefore beneficial in this study.
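For orientation, here is a minimal ordinary kriging sketch in Python using the pykrige library on synthetic stand-in data; the study's preferred SKlm variant additionally replaces the constant mean with a local mean predicted by GWR from the NOx data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
# Synthetic monitoring-site coordinates and NO2 values (stand-ins only).
x = rng.uniform(0, 100, 50)
y = rng.uniform(0, 100, 50)
no2 = 20 + 0.1 * x + rng.normal(0, 2, 50)

ok = OrdinaryKriging(x, y, no2, variogram_model="spherical")
gridx = np.arange(0.0, 100.0, 5.0)
gridy = np.arange(0.0, 100.0, 5.0)
z_pred, z_var = ok.execute("grid", gridx, gridy)  # predictions and kriging variance
```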
Abstract:
Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic that is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency-tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using a lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
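As a rough illustration of the lexicographic selection step only (not the paper's operator, which is defined over possibilistic logic with integrity constraints), here is a Python sketch in which formulas are Boolean predicates over models and models are compared on per-stratum satisfaction counts, highest weight first:

```python
from itertools import product

def lex_best_models(bases, atoms, constraint=lambda m: True):
    """Select models maximal in the lexicographic ordering.

    `bases` is a list of knowledge bases, each a list of
    (predicate, weight) pairs; a model is a dict atom -> bool.
    """
    weights = sorted({w for base in bases for _, w in base}, reverse=True)

    def profile(model):
        # Number of satisfied formulas per weight stratum, strongest first.
        return tuple(
            sum(1 for base in bases for f, w in base if w == lvl and f(model))
            for lvl in weights
        )

    models = [dict(zip(atoms, vals))
              for vals in product([True, False], repeat=len(atoms))]
    feasible = [m for m in models if constraint(m)]
    best = max(profile(m) for m in feasible)   # tuples compare lexicographically
    return [m for m in feasible if profile(m) == best]
```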
Abstract:
There is considerable interest in creating embedded speech recognition hardware using the weighted finite state transducer (WFST) technique, but there are performance and memory usage challenges. Two system optimization techniques are presented to address this: one improves token propagation by removing the WFST epsilon input arcs; the other is a one-pass, adaptive pruning algorithm that gives a dramatic reduction in the number of active nodes to be computed. Memory and bandwidth results are given for a 5,000-word vocabulary, showing better practical performance than the conventional WFST approach; this is then exploited in the adaptive pruning algorithm, which reduces the active nodes from 30,000 to 4,000 with only a 2 percent sacrifice in speech recognition accuracy. These optimizations lead to a simpler design with deterministic performance.
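As an illustration of the pruning idea, here is a minimal Python sketch of beam pruning over active decoding tokens; the paper's one-pass algorithm adapts the pruning threshold on the fly, whereas this sketch uses a fixed beam plus a hard cap (names and policy are assumptions):

```python
def prune_tokens(active, beam, max_active):
    """Prune active tokens in token-passing WFST decoding.

    `active` maps WFST state -> accumulated path cost (lower is better).
    Drop tokens whose cost exceeds the best cost plus `beam`; if too
    many survive, keep only the `max_active` cheapest.
    """
    best = min(active.values())
    survivors = {s: c for s, c in active.items() if c <= best + beam}
    if len(survivors) > max_active:
        cheapest = sorted(survivors.items(), key=lambda item: item[1])[:max_active]
        survivors = dict(cheapest)
    return survivors

# Example: with beam=5 and max_active=2, state 7 (cost 12.0) is dropped.
# prune_tokens({3: 4.0, 5: 6.5, 7: 12.0}, beam=5, max_active=2) -> {3: 4.0, 5: 6.5}
```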
Abstract:
The operation of supply chains (SCs) has for many years been focused on efficiency, leanness and responsiveness. This has resulted in reduced slack in operations, compressed cycle times, increased productivity and minimised inventory levels along the SC. Combined with tight tolerance settings for the realisation of logistics and production processes, this has led to SC performances that are frequently not robust. SCs are becoming increasingly vulnerable to disturbances, which can decrease the competitive power of the entire chain in the market. Moreover, in the case of food SCs non-robust performances may ultimately result in empty shelves in grocery stores and supermarkets.
The overall objective of this research is to contribute to Supply Chain Management (SCM) theory by developing a structured approach to assess SC vulnerability, so that robust performances of food SCs can be assured. We also aim to help companies in the food industry to evaluate their current state of vulnerability, and to improve their performance robustness through a better understanding of vulnerability issues. The following research questions (RQs) stem from these objectives:
RQ1: What are the main research challenges related to (food) SC robustness?
RQ2: What are the main elements that have to be considered in the design of robust SCs and what are the relationships between these elements?
RQ3: What is the relationship between the contextual factors of food SCs and the use of disturbance management principles?
RQ4: How can the impact of disturbances in (food) SC processes on the robustness of (food) SC performances be systematically assessed?
To answer these RQs we used different methodologies, both qualitative and quantitative. For each question, we conducted a literature survey to identify gaps in existing research and to define the state of the art of knowledge on the related topics. For the second and third RQs, we conducted both exploration and testing on selected case studies. Finally, to obtain more detailed answers to the fourth question, we used simulation modelling and scenario analysis for vulnerability assessment.
Main findings are summarised as follows.
Based on an extensive literature review, we answered RQ1. The main research challenges were related to the need to define SC robustness more precisely, to identify and classify disturbances and their causes in the context of the specific characteristics of SCs and to make a systematic overview of (re)design strategies that may improve SC robustness. Also, we found that it is useful to be able to discriminate between varying degrees of SC vulnerability and to find a measure that quantifies the extent to which a company or SC shows robust performances when exposed to disturbances.
To address RQ2, we define SC robustness as the degree to which a SC shows an acceptable performance in (each of) its Key Performance Indicators (KPIs) during and after an unexpected event that caused a disturbance in one or more logistics processes. Based on the SCM literature we identified the main elements needed to achieve robust performances and structured them into a conceptual framework for the design of robust SCs. We then explained the logic of the framework and elaborated on each of its main elements: the SC scenario, SC disturbances, SC performance, sources of food SC vulnerability, and redesign principles and strategies.
Based on three case studies, we answered RQ3. Our major findings show that the contextual factors have a consistent relationship to Disturbance Management Principles (DMPs). The product and SC environment characteristics are contextual factors that are hard to change and these characteristics initiate the use of specific DMPs as well as constrain the use of potential response actions. The process and the SC network characteristics are contextual factors that are easier to change, and they are affected by the use of the DMPs. We also found a notable relationship between the type of DMP likely to be used and the particular combination of contextual factors present in the observed SC.
To address RQ4, we presented a new method for vulnerability assessments, the VULA method. The VULA method helps to identify how much a company is underperforming on a specific Key Performance Indicator (KPI) in the case of a disturbance, how often this would happen and how long it would last. It ultimately informs the decision maker about whether process redesign is needed and what kind of redesign strategies should be used in order to increase the SC’s robustness. The VULA method is demonstrated in the context of a meat SC using discrete-event simulation. The case findings show that performance robustness can be assessed for any KPI using the VULA method.
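As a rough illustration of such an assessment, here is a Python sketch that extracts underperformance episodes from a simulated KPI time series; the threshold, names, and statistics are illustrative, while the thesis defines the actual VULA metrics over discrete-event simulation output.

```python
import numpy as np

def underperformance_episodes(kpi, threshold):
    """Frequency, duration, and depth of KPI drops below a threshold.

    `kpi` is a time series of KPI values from a simulation run; an
    episode is a maximal run of consecutive periods below `threshold`.
    """
    kpi = np.asarray(kpi, dtype=float)
    below = kpi < threshold
    episodes, start = [], None
    for t, flag in enumerate(below):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            episodes.append((start, t))
            start = None
    if start is not None:
        episodes.append((start, len(below)))
    return {
        "frequency": len(episodes),                              # how often
        "durations": [end - beg for beg, end in episodes],       # how long
        "max_shortfalls": [threshold - kpi[beg:end].min()        # how much
                           for beg, end in episodes],
    }
```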
To sum up the project, all findings were incorporated within an integrated framework for designing robust SCs. The integrated framework consists of the following steps: 1) Description of the SC scenario and identification of its specific contextual factors; 2) Identification of disturbances that may affect KPIs; 3) Definition of the relevant KPIs and identification of the main disturbances through assessment of the SC performance robustness (i.e. application of the VULA method); 4) Identification of the sources of vulnerability that may (strongly) affect the robustness of performances and eventually increase the vulnerability of the SC; 5) Identification of appropriate preventive or disturbance-impact-reducing redesign strategies; 6) Alteration of SC scenario elements as required by the selected redesign strategies, repeating the VULA method for the KPIs defined in Step 3.
Contributions of this research are as follows. First, we have identified emerging research areas: SC robustness and its counterpart, vulnerability. Second, we have developed a definition of SC robustness, operationalized it, and identified and structured the relevant elements for the design of robust SCs in the form of a research framework. With this research framework, we contribute to a better understanding of the concepts of vulnerability and robustness and related issues in food SCs. Third, we identified the relationship between contextual factors of food SCs and the specific DMPs used to maintain robust SC performances: characteristics of the product and the SC environment influence the selection and use of DMPs; processes and SC networks are influenced by DMPs. Fourth, we developed specific metrics for vulnerability assessment, which serve as the basis of the VULA method. The VULA method investigates different measures of the variability of both the duration of impacts from disturbances and the fluctuations in their magnitude.
With this project, we also hope to have delivered practical insights into food SC vulnerability. First, the integrated framework for the design of robust SCs can be used to guide food companies in successful disturbance management. Second, empirical findings from case studies lead to the identification of changeable characteristics of SCs that can serve as a basis for assessing where to focus efforts to manage disturbances. Third, the VULA method can help top management to get more reliable information about the “health” of the company.
The two most important research opportunities are: First, there is a need to extend and validate our findings related to the research framework and contextual factors through further case studies related to other types of (food) products and other types of SCs. Second, there is a need to further develop and test the VULA method, e.g.: to use other indicators and statistical measures for disturbance detection and SC improvement; to define the most appropriate KPI to represent the robustness of a complete SC. We hope this thesis invites other researchers to pick up these challenges and help us further improve the robustness of (food) SCs.