974 results for Mismatched uncertainties
Abstract:
Chimaerism was assessed in five recipients following sex-mismatched allogeneic bone marrow transplantation. Techniques included karyotyping of bone marrow cells, dot blot DNA analysis of blood and bone marrow suspensions, and in vitro amplification of DNA by the polymerase chain reaction (PCR) using blood and bone marrow suspensions and stored bone marrow slides. Karyotypic analysis suggested complete chimaerism in four patients, while in one patient mixed chimaerism was detected. However, PCR detected mixed chimaerism in a second patient as well, and this was confirmed by dot blot analysis on all tissues examined. PCR is a sensitive tool for the investigation of chimaerism following bone marrow transplantation. Since this technique does not require radioactivity, it is an attractive method for use in a clinical laboratory. It represents a further development in the use of DNA methodologies in the assessment of haematological disease.
Abstract:
Nanotechnology has relevance to applications in all areas of agri-food, including agriculture, aquaculture, production, processing, packaging, safety and nutrition. The scientific literature indicates uncertainties about the food safety aspects of using nanomaterials, owing to potential health risks. To date, the agri-food industry's awareness of and attitude towards nanotechnology have not been addressed. We surveyed the awareness and attitudes of agri-food organisations on the island of Ireland (IoI) with regard to nanotechnology. A total of 14 agri-food stakeholders were interviewed and 88 agri-food stakeholders responded to an online questionnaire. The findings indicate that current awareness of nanotechnology applications in the agri-food sector on the IoI is low, and that respondents are neither positive nor negative towards agri-food applications of nanotechnology. Safer food, reduced waste and increased product shelf life were considered to be the most important benefits to the agri-food industry. Knowledge of practical examples of agri-food applications is limited; however, opportunities were identified in precision farming techniques, innovative packaging, functional ingredients and food nutrition, processing equipment, and safety testing. Perceived impediments to nanotechnology adoption were potential unknown human health and environmental impacts, consumer acceptance and media framing. The need for a risk assessment framework, research into long-term health and environmental effects, and better engagement between scientists, government bodies, the agri-food industry and the public were identified as important.
Abstract:
This study proposes an approach to optimally allocate multiple types of flexible AC transmission system (FACTS) devices in market-based power systems with wind generation. The main objective is to maximise profit by minimising device investment cost and the system's operating cost, considering both normal conditions and possible contingencies. The proposed method accurately evaluates the long-term costs and benefits of FACTS device (FD) installation by solving a large-scale optimisation problem. The objective implies maximising social welfare as well as minimising the compensation paid for generation re-scheduling and load shedding. Numerous technical operating constraints and uncertainties are included in the problem formulation. The overall problem is solved using particle swarm optimisation to obtain the optimal FD allocation as the main problem, with optimal power flow as the sub-problem. The effectiveness of the proposed approach is demonstrated on modified IEEE 14-bus and IEEE 118-bus test systems.
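As a rough illustration of the bi-level structure just described, the sketch below runs a basic particle swarm optimisation over candidate device placements, with a placeholder cost function standing in for the optimal power flow sub-problem; all names, bounds and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def evaluate_placement(placement):
    """Stand-in for the optimal power flow sub-problem: returns the
    long-term operating cost of a candidate FACTS placement. A real
    implementation would solve an OPF for normal and contingency
    states and add the device investment cost."""
    return np.sum((placement - 0.3) ** 2)   # smooth surrogate only

def pso_allocate(n_buses=14, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimisation over continuous placement
    variables in [0, 1] (e.g. normalised device ratings per bus)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, (n_particles, n_buses))   # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_f = np.array([evaluate_placement(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()            # global best
    for _ in range(iters):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0.0, 1.0)
        f = np.array([evaluate_placement(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best_placement, best_cost = pso_allocate()
```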
Abstract:
We investigate the multiplicity properties of 408 B-type stars observed in the 30 Doradus region of the Large Magellanic Cloud with multi-epoch spectroscopy from the VLT-FLAMES Tarantula Survey (VFTS). We use a cross-correlation method to estimate relative radial velocities from the helium and metal absorption lines for each of our targets. Objects with significant radial-velocity variations (and with an amplitude larger than 16 km/s) are classified as spectroscopic binaries. We find an observed spectroscopic binary fraction (defined by periods of < 10^3.5 d and mass ratios > 0.1) for the B-type stars of f_B(obs) = 0.25 ± 0.02, which appears constant across the field of view, except for the two older clusters (Hodge 301 and SL 639). These two clusters have significantly lower binary fractions of 0.08 ± 0.08 and 0.10 ± 0.09, respectively. Using synthetic populations and a model of our observed epochs and their potential biases, we constrain the intrinsic multiplicity properties of the dwarf and giant (i.e. relatively unevolved) B-type stars in 30 Dor. We obtain a present-day binary fraction f_B(true) = 0.58 ± 0.11, with a flat period distribution. Within the uncertainties, the multiplicity properties of the B-type stars agree with those for the O stars in 30 Dor from the VFTS.
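A minimal sketch of the kind of binary-detection criterion described above: a target is flagged as a spectroscopic binary when some pair of epochs shows a radial-velocity difference that is both larger than the 16 km/s amplitude threshold and statistically significant. The significance test and its 4-sigma level are illustrative assumptions, not necessarily the exact VFTS criterion.

```python
import numpy as np
from itertools import combinations

def is_spectroscopic_binary(rv, rv_err, threshold=16.0, n_sigma=4.0):
    """Flag a target as a spectroscopic binary if some pair of epochs
    shows a radial-velocity difference (km/s) that is both larger than
    the amplitude threshold and statistically significant."""
    for i, j in combinations(range(len(rv)), 2):
        delta = abs(rv[i] - rv[j])
        sigma = np.hypot(rv_err[i], rv_err[j])   # combined epoch error
        if delta > threshold and delta > n_sigma * sigma:
            return True
    return False

# Example: three epochs of relative radial velocities for one target
rv = np.array([250.0, 271.0, 249.0])
rv_err = np.array([2.0, 2.5, 2.0])
print(is_spectroscopic_binary(rv, rv_err))   # True
```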
Abstract:
BACKGROUND: Diabetic retinopathy is an important cause of visual loss. Laser photocoagulation preserves vision in diabetic retinopathy but is currently used at the stage of proliferative diabetic retinopathy (PDR).
OBJECTIVES: The primary aim was to assess the clinical effectiveness and cost-effectiveness of pan-retinal photocoagulation (PRP) given at the non-proliferative stage of diabetic retinopathy (NPDR) compared with waiting until the high-risk PDR (HR-PDR) stage was reached. There have been recent advances in laser photocoagulation techniques, and in the use of laser treatments combined with anti-vascular endothelial growth factor (VEGF) drugs or injected steroids. Our secondary questions were: (1) If PRP were to be used in NPDR, which form of laser treatment should be used? and (2) Is adjuvant therapy with intravitreal drugs clinically effective and cost-effective in PRP?
ELIGIBILITY CRITERIA: Randomised controlled trials (RCTs) for efficacy, but other study designs were also used.
REVIEW METHODS: Systematic review and economic modelling.
RESULTS: The Early Treatment Diabetic Retinopathy Study (ETDRS), published in 1991, was the only trial designed to determine the best time to initiate PRP. It randomised one eye of 3711 patients with mild-to-severe NPDR or early PDR to early photocoagulation, and the other to deferral of PRP until HR-PDR developed. The risk of severe visual loss after 5 years for eyes assigned to PRP for NPDR or early PDR, compared with deferral of PRP, was reduced by 23% (relative risk 0.77, 99% confidence interval 0.56 to 1.06). However, the ETDRS did not provide results separately for NPDR and early PDR. In economic modelling, the base case found that early PRP could be more effective and less costly than deferred PRP. Sensitivity analyses gave similar results, with early PRP continuing to dominate or having a low incremental cost-effectiveness ratio. However, there are substantial uncertainties. For our secondary aims we found 12 trials of lasers in diabetic retinopathy, with 982 patients in total and individual trial sizes ranging from 40 to 150. Most were in PDR, but five included some patients with severe NPDR. Three compared multi-spot pattern lasers against the argon laser. RCTs comparing laser applied in a lighter manner (less-intense burns) with conventional methods (more intense burns) reported little difference in efficacy but fewer adverse effects. One RCT suggested that selective laser treatment targeting only ischaemic areas was effective. Observational studies showed that the most important adverse effect of PRP was macular oedema (MO), which can cause visual impairment, usually temporary. Ten trials of laser and anti-VEGF or steroid drug combinations were consistent in reporting a reduction in the risk of PRP-induced MO.
LIMITATION: The current evidence is insufficient to recommend PRP for severe NPDR.
CONCLUSIONS: There is, as yet, no convincing evidence that modern laser systems are more effective than the argon laser used in the ETDRS, but they appear to have fewer adverse effects. We recommend a trial of PRP for severe NPDR and early PDR compared with deferring PRP until the HR-PDR stage. The trial would use modern laser technologies and investigate the value of adjuvant prophylactic anti-VEGF or steroid drugs.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42013005408.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
Abstract:
Perfect information is seldom available to man or machine because of the uncertainties inherent in real-world problems. Uncertainties in geographic information systems (GIS) stem from either vague/ambiguous or imprecise/inaccurate/incomplete information, and it is necessary for GIS to develop tools and techniques to manage these uncertainties. There is widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by a lack of consistency and uniformity in how uncertainty is handled. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.
Abstract:
In the reinsurance market, the risks natural catastrophes pose to portfolios of properties must be quantified, so that they can be priced, and insurance offered. The analysis of such risks at a portfolio level requires a simulation of up to 800 000 trials with an average of 1000 catastrophic events per trial. This is sufficient to capture risk for a global multi-peril reinsurance portfolio covering a range of perils including earthquake, hurricane, tornado, hail, severe thunderstorm, wind storm, storm surge and riverine flooding, and wildfire. Such simulations are both computation and data intensive, making the application of high-performance computing techniques desirable.
In this paper, we explore the design and implementation of portfolio risk analysis on both multi-core and many-core computing platforms. Given a portfolio of property catastrophe insurance treaties, key risk measures, such as probable maximum loss, are computed by taking both primary and secondary uncertainties into account. Primary uncertainty is associated with whether or not an event occurs in a simulated year, while secondary uncertainty captures the uncertainty in the level of loss due to the use of simplified physical models and limitations in the available data. A combination of fast lookup structures, multi-threading and careful hand-tuning of numerical operations is required to achieve good performance. Experimental results are reported for multi-core processors and for systems using NVIDIA graphics processing units (GPUs) and Intel Xeon Phi many-core accelerators.
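The sketch below illustrates the simulation structure just described, at a much smaller scale than the 800 000 trials used in practice: each trial draws a year of event occurrences (primary uncertainty), each occurring event draws its loss level from a distribution around a mean (secondary uncertainty), and the probable maximum loss is read off as a high quantile of the per-trial losses. All distributions and parameters are illustrative assumptions.

```python
import numpy as np

def simulate_year_losses(n_trials=10_000, n_events=1_000,
                         occur_prob=0.05, seed=0):
    """Monte Carlo year-loss table. Primary uncertainty: which events
    occur in a simulated year. Secondary uncertainty: the loss level
    of each occurring event (lognormal around its mean here)."""
    rng = np.random.default_rng(seed)
    mean_loss = rng.lognormal(mean=12.0, sigma=1.0, size=n_events)
    trial_loss = np.zeros(n_trials)
    for t in range(n_trials):
        occurs = rng.random(n_events) < occur_prob        # primary
        trial_loss[t] = rng.lognormal(                    # secondary
            np.log(mean_loss[occurs]), 0.5).sum()
    return trial_loss

losses = simulate_year_losses()
pml_250yr = np.quantile(losses, 1.0 - 1.0 / 250.0)   # 250-year PML
```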
Abstract:
The UK’s transportation network is supported by critical geotechnical assets (cuttings, embankments and dams) that require sustainable, cost-effective management while maintaining an appropriate service level to meet social, economic, and environmental needs. Recent effects of extreme weather on these geotechnical assets have highlighted their vulnerability to climate variations. We have assessed the potential of surface wave data to portray the climate-related variations in the mechanical properties of a clay-filled railway embankment. Seismic data were acquired bimonthly from July 2013 to November 2014 along the crest of a heritage railway embankment in southwest England. For each acquisition, the collected data were first processed to obtain a set of Rayleigh-wave dispersion and attenuation curves, referenced to the same spatial locations. These data were then analyzed to identify coherent trends in their spatial and temporal variability. The significance of the observed temporal variations was also verified against the experimental data uncertainties. Finally, the surface wave dispersion data sets were inverted to reconstruct a time-lapse model of S-wave velocity (V_S) for the embankment structure, using a least-squares laterally constrained inversion scheme. A key point of the inversion process was the estimation of a suitable initial model and the selection of adequate levels of spatial regularization. The initial model and the strength of spatial smoothing were then kept constant throughout the processing of all available data sets to ensure a homogeneous procedure and comparability among the obtained V_S sections. A continuous and coherent temporal pattern of surface wave data, and consequently of the reconstructed V_S models, was identified. This pattern is related to the seasonal distribution of precipitation and the soil water content measured on site.
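For orientation, the laterally constrained least-squares inversion mentioned above can be written in generic form (a sketch of the standard formulation; the paper's exact parameterisation and weighting may differ):

$$
\min_{\mathbf{m}} \; \left\lVert \mathbf{W}_d \left( \mathbf{d} - G(\mathbf{m}) \right) \right\rVert_2^2 \;+\; \lambda \left\lVert \mathbf{R}\,\mathbf{m} \right\rVert_2^2 ,
$$

where m stacks the local 1-D V_S models at all positions along the embankment crest, G(m) is the forward dispersion operator, d holds the measured dispersion data, W_d weights the misfit by the data uncertainties, R differences laterally adjacent model parameters, and λ sets the strength of the spatial smoothing that is kept fixed across the time-lapse data sets.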
Abstract:
In this paper, a recursive filter algorithm is developed to deal with the state estimation problem for power systems with quantized nonlinear measurements. The measurements from both the remote terminal units and the phasor measurement unit are subject to quantization described by a logarithmic quantizer. Attention is focused on the design of a recursive filter such that, in the simultaneous presence of nonlinear measurements and quantization effects, an upper bound on the estimation error covariance is guaranteed and subsequently minimized. Instead of using the traditional approximation methods in nonlinear estimation, which simply ignore the linearization errors, we treat both the linearization and quantization errors as norm-bounded uncertainties in the algorithm development so as to improve the performance of the estimator. For the power system with these introduced uncertainties, a filter is designed in the framework of robust recursive estimation, and the developed filter algorithm is tested on an IEEE benchmark power system to demonstrate its effectiveness.
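For reference, a logarithmic quantizer of the kind referred to above is conventionally defined by a geometric set of levels, and its error satisfies a sector bound, which is what allows it to be recast as a norm-bounded uncertainty (standard formulation sketched below; the paper's notation may differ):

$$
\mathcal{U} = \left\{ \pm u_i : u_i = \rho^{\,i} u_0,\; i = 0, \pm 1, \pm 2, \ldots \right\} \cup \{0\}, \qquad 0 < \rho < 1,
$$

$$
q(v) - v = \Delta(v)\,v, \qquad \lvert \Delta(v) \rvert \le \delta = \frac{1-\rho}{1+\rho},
$$

so the quantized measurement can be written as (1 + Δ) times the true value with |Δ| ≤ δ, i.e. exactly the norm-bounded multiplicative uncertainty that a robust recursive filter can absorb.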
Abstract:
In this study we calculate the uncertainties in electron-impact atomic data for direct ionization and recombination and investigate the role of these uncertainties in spectral diagnostics. We outline a systematic approach to assigning meaningful uncertainties that vary with electron temperature. Once these uncertainty parameters have been evaluated, we can calculate the uncertainties on key diagnostics through a Monte Carlo routine, using the Astrophysical Plasma Emission Code (APEC) [Smith et al. 2001]. We incorporate these uncertainties into well-known temperature diagnostics, such as the Lyman alpha versus resonance line ratio and the G ratio. We compare these calculations with the study by [Testa et al. 2004], where significant discrepancies in the two diagnostic ratios were observed. We conclude that while the atomic physics uncertainties play a noticeable role in the discrepancies observed by Testa et al., they do not explain all of them. This indicates that another physical process is occurring in the system that is not being taken into account. This work is supported in part by the National Science Foundation REU and Department of Defense ASSURE programs under NSF Grant no. 1262851 and by the Smithsonian Institution.
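The sketch below shows the general shape of such a Monte Carlo propagation: rate coefficients are perturbed within their assigned fractional uncertainties and the diagnostic ratio is recomputed for each draw. The stand-in line_ratio function and the lognormal perturbation model are illustrative assumptions; the actual calculation would evaluate the APEC spectral model.

```python
import numpy as np

def line_ratio(rates):
    """Stand-in for the spectral model: maps a vector of atomic rate
    coefficients to a diagnostic line ratio (e.g. the G ratio). The
    real calculation would evaluate the APEC emission model."""
    return rates[0] / (rates[1] + rates[2])   # illustrative only

def monte_carlo_ratio(mean_rates, frac_sigma, n=10_000, seed=0):
    """Propagate fractional uncertainties on the rate coefficients
    (in general temperature dependent) through to the diagnostic."""
    rng = np.random.default_rng(seed)
    draws = mean_rates * rng.lognormal(
        mean=0.0, sigma=frac_sigma, size=(n, len(mean_rates)))
    return np.array([line_ratio(r) for r in draws])

ratios = monte_carlo_ratio(np.array([1.0, 0.4, 0.2]),
                           np.array([0.10, 0.15, 0.15]))
print(ratios.mean(), ratios.std())   # central value and uncertainty
```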
Abstract:
Electron-impact excitation collision strengths for transitions between all singly excited levels up to the n = 4 shell of helium-like argon and the n = 4 and 5 shells of helium-like iron have been calculated using a radiation-damped R-matrix approach. The theoretical collision strengths have been examined and matched to their infinite-energy limit values to allow the preparation of Maxwell-averaged effective collision strengths. These are conservatively considered to be accurate to within 20% at all temperatures, 3 × 10^5 - 3 × 10^8 K for Ar^16+ and 10^6 - 10^9 K for Fe^24+. They have been compared with the results of previous studies, where possible, and we find broad accord. The corresponding rate coefficients are required for the calculation of derived, collisional-radiative, effective emission coefficients for helium-like lines, for diagnostic application to fusion and astrophysical plasmas. The uncertainties in the fundamental collision data have been used to provide a critical assessment of the expected resultant uncertainties in such derived data, including redistributive and cascade collisional-radiative effects. The consequential uncertainties in the parts of the effective emission coefficients driven by excitation from the ground levels for the key w, x, y and z lines vary between 5% and 10%. Our results remove an uncertainty in the reaction rates of a key class of atomic processes governing the spectral emission of helium-like ions in plasmas.
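For orientation, the Maxwell-averaged effective collision strength referred to above follows the standard definition:

$$
\Upsilon_{ij}(T_e) = \int_0^{\infty} \Omega_{ij}(E_j)\, \exp\!\left(-\frac{E_j}{k T_e}\right) \mathrm{d}\!\left(\frac{E_j}{k T_e}\right),
$$

where Ω_ij is the collision strength for the transition i → j, E_j is the energy of the scattered electron, and T_e is the electron temperature; the infinite-energy limit points constrain the high-temperature tail of Υ_ij.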
A comparison of theoretical Mg VI emission line strengths with active-region observations from SERTS
Abstract:
R-matrix calculations of electron impact excitation rates in N-like Mg VI are used to derive theoretical electron-density-sensitive emission line ratios involving 2s^2 2p^3 - 2s 2p^4 transitions in the 269-403 Å wavelength range. A comparison of these with observations of a solar active region, obtained during the 1989 flight of the Solar EUV Rocket Telescope and Spectrograph (SERTS), reveals good agreement between theory and observation for the 2s^2 2p^3 4S - 2s 2p^4 4P transitions at 399.28, 400.67, and 403.30 Å, and the 2s^2 2p^3 2P - 2s 2p^4 2D lines at 387.77 and 387.97 Å. However, intensities for the other lines attributed to Mg VI in this spectrum by various authors do not match the present theoretical predictions. We argue that these discrepancies are not due to errors in the adopted atomic data, as previously suggested, but rather to observational uncertainties or mis-identifications. Some of the features previously identified as Mg VI lines in the SERTS spectrum, such as 291.36 and 293.15 Å, are judged to be noise, while others (including 349.16 Å) appear to be blended.
Abstract:
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. It comprises an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement and at low cost.
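A minimal sketch of the classification stage described above, assuming a 3-dimensional feature vector derived from the segmented vessel map; the placeholder features, labels and SVM settings are illustrative assumptions, not the QUARTZ pipeline itself.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical 3-D features per image (e.g. vessel area fraction,
# segment count, mean segment width from the segmented vessel map);
# labels: 1 = inadequate image, 0 = adequate.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 3))                        # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)      # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)

# Sensitivity and specificity for detecting inadequate images
pred = clf.predict(X_te)
sens = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
spec = ((pred == 0) & (y_te == 0)).sum() / (y_te == 0).sum()
print(sens, spec)
```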
Abstract:
Compartmental systems are frequently used to model diverse processes in many areas, such as biomedicine, ecology and pharmacokinetics, among others. In most practical applications, namely those concerning the administration of drugs to patients undergoing surgery, for example, the presence of uncertainties in the system parameters or in the system state is very common. Over recent years, the analysis of compartmental systems has been extensively developed in the literature. However, the sensitivity of the stability of these systems in the presence of uncertainties has received much less attention. In this thesis, we consider a state-feedback control law with positivity constraints and analyse its robustness when applied to linear time-invariant compartmental systems with parameter uncertainties. Moreover, for linear time-invariant systems with unknown initial state, we combine this control law with a state observer, and the robustness of the resulting control law is also analysed. The control of neuromuscular blockade by means of the continuous infusion of a muscle relaxant can be modelled as a three-compartment system and has been studied by several research groups. In this thesis, our results are applied to this control problem and strategies are provided to improve the results obtained.
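As orientation, a generic linear time-invariant three-compartment model under a positivity-constrained state feedback law can be written as follows (a sketch; the thesis's precise model and control law are not reproduced here):

$$
\dot{x}(t) = A\,x(t) + b\,u(t), \qquad u(t) = \max\{0,\; K\,x(t) + \bar{u}\},
$$

where x(t) ∈ R^3 collects the drug amounts in the three compartments, A is a compartmental (Metzler, with nonpositive column sums) matrix, b is the input vector, and the clipping at zero enforces the physical requirement that the infusion rate u(t) be nonnegative; robustness then asks how the closed loop behaves when the entries of A are uncertain.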
Abstract:
This thesis investigates the decision-making process in supply chain management using a real options analysis framework. Specifically, we study topics such as the optimal inventory level for protection against demand uncertainty, the timing for implementing flexible capacity in markets with product-mix complexity, the timing for reinforcing the labour factor to meet market service requirements, and the choice between integration and outsourcing in an uncertain environment. Discrete- and continuous-time methodologies were used to identify the optimal value and timing of the options to be adopted when demand is stochastic. In addition, the effects of market requirements, such as product-offering complexity and service level, were considered. Demand is represented using different stochastic processes, and the impact of unexpected jumps is also explored, reinforcing the generalisation of the models to different business conditions. The applicability of the models we present allows the diversification and enrichment of the literature on the real options approach in the context of supply chains. Flexible inventory levels and flexible capacities are characteristic of supply chains and can be used as a response to market uncertainty. This thesis consists of essays that support the application of the models: an introductory chapter (designated essay I), six further essays discussing the use of flexibility measures in supply chains under uncertainty, and a final essay extending the concept of flexibility to the evaluation of business plans. The second essay concerns the value of single-stage inventory as a flexibility measure, subject to the growing constraint of asset-holding costs; we introduce a new classification of items to support an indicator designated overstock. In the third and fourth essays we extend the exploration of the overstock concept, promoting the interaction and balancing between the various stages of a supply chain as a way to improve overall performance. To support the practical application of these approaches, we adapt the third essay to performance management, to support the setting of coordinated and aligned targets, and we adapt the fourth essay to supply chain coordination, as an aid to the integrated and sequential planning of inventory levels. In the fifth essay we analyse the production factor "technology", in direct relation to a company's product offering, exploring the concept of investment as a flexibility measure in the demand-volume and product-range components. Essay six is devoted to the analysis of the production factor "labour", exploring the conditions for increasing the number of shifts from an economic perspective and determining the critical point for decision making under uncertainty. In essay seven we explore the concept of internalising operations, distinguishing our analysis from others by defining the critical moment that supports decision making in dynamic environments; we complement the analysis by introducing temporal disturbance factors, namely the preparation stage required before an eventual change of strategy.
Finally, in the last essay, we extend the analysis of flexibility under uncertainty to the concept of business plans. Specifically, we explore the influence of the number of decision points on the flexibility of a plan as a response to growing market uncertainty, using the mechanism of sequential budget management as an example to support our model. Growing demand uncertainty has forced supply chains to become more agile and flexible, limiting the use of many traditional management-support techniques because of their inability to incorporate the effects of uncertainty. Flexibility is clearly a competitive advantage for companies and should therefore be quantified. With the models presented, and based on the analysed results, we aim to demonstrate the usefulness of considering uncertainty in management instruments, using numerical examples to support the application of the models, which clearly brings the developments presented here closer to business practice.
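As a worked illustration of the kind of stochastic demand process with unexpected jumps mentioned above (a sketch assuming a standard jump-diffusion; the thesis's exact specifications may differ), demand D_t can be modelled as

$$
\mathrm{d}D_t = \mu D_t\,\mathrm{d}t + \sigma D_t\,\mathrm{d}W_t + D_{t^-}\,(J - 1)\,\mathrm{d}N_t,
$$

where W_t is a standard Brownian motion, N_t is a Poisson process with intensity λ capturing the unexpected jumps, and J is the random jump multiplier; real options on inventory, capacity or outsourcing are then valued as optimal stopping or switching problems over this process.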