Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Manufacturers apply upsampling to better fit the acquired images to the display screen, and resizing is applied whenever there is a need to increase, or decrease, the total number of pixels. This paper aims to compare the “hqnx” and “nxSaI” magnification algorithms with two interpolation algorithms, “nearest neighbor” and “bicubic interpolation”, in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to visual inspection of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not particularly noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images again presenting the best results. The hqnx-resized images presented better quality than the nxSaI and nearest neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its increasingly wide range of applications supports its adoption as an efficient algorithm for multiple image types.
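As an illustration of two of the interpolation methods compared in this abstract, the sketch below enlarges an image 4x with nearest neighbor and bicubic resampling using Pillow. The file names are placeholders and this is not the processing pipeline used in the study.

```python
from PIL import Image

# Enlarge an image 4x with two of the studied interpolation methods.
# "study_image.png" is a placeholder file name.
img = Image.open("study_image.png")
new_size = (img.width * 4, img.height * 4)

nearest = img.resize(new_size, resample=Image.NEAREST)   # blocky, copies nearest pixel values
bicubic = img.resize(new_size, resample=Image.BICUBIC)   # smooth, weighted 4x4 neighborhood

nearest.save("study_image_4x_nearest.png")
bicubic.save("study_image_4x_bicubic.png")
```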
Abstract:
Introduction: Paper and thin layer chromatography methods are frequently used in classical Nuclear Medicine for the determination of the radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the preparation. There are several methods for RCP measurement, based on equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After development of the solvent front, the strips were allowed to dry and then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed into an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was performed using a test of hypotheses for the difference between means in independent samples. Results: The gamma camera based method demonstrated higher operator-dependency (especially concerning the drawing of the ROIs), and the measurements obtained with the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: The methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and least operator-dependent.
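The RCP formula quoted above reduces to a simple ratio of counts (or activities) measured in the two strip segments. The sketch below is a minimal illustration of that arithmetic; the function and variable names, and the example count values, are hypothetical and not taken from the study.

```python
def radiochemical_purity(counts_complex, counts_impurities):
    """Percent RCP from background-corrected counts of the two strip segments.

    counts_complex:    counts (or activity) in the segment containing the 99mTc-complex
    counts_impurities: counts in the segment(s) containing 99mTcO4- and other
                       labeled impurities
    """
    total = counts_complex + counts_impurities
    if total <= 0:
        raise ValueError("total counts must be positive")
    return 100.0 * counts_complex / total

# Hypothetical example for a 99mTc-tetrofosmin strip cut at Rf 0.8
print(radiochemical_purity(counts_complex=45210, counts_impurities=1890))  # ~96%
```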
Abstract:
Introduction: Although relative uptake values are not the most important result of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies attenuation correction is essential for a reliable quantification result. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an additional anterior projection, which is not acquired by a large number of Nuclear Medicine departments. In this study, this method and the attenuation factors proposed by Tonnesen were correlated with the absence of any attenuation correction procedure. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods applied. The mean acquisition time (time post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r = 0.73 +/- 0.11), and the mean difference in the uptake values between the methods was 4 +/- 3. The correlation was higher for younger patients. The two attenuation correction methods correlated better with each other than with the “no attenuation correction” approach (r = 0.82 +/- 0.8), and the mean difference in uptake values was 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences observed in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is required, an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen are a practical and easy-to-implement alternative, namely when the anterior projection needed for the geometric mean methodology is not acquired.
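For reference, the geometric mean method mentioned above combines anterior and posterior ROI counts for each kidney before computing the relative uptake. The sketch below is a minimal illustration under the assumption of background-corrected ROI counts; it is not the implementation used in the study, and the example numbers are hypothetical.

```python
import math

def relative_uptake_geometric_mean(left_ant, left_post, right_ant, right_post):
    """Relative kidney uptake (%) from the geometric mean of background-corrected
    anterior and posterior ROI counts for each kidney."""
    gm_left = math.sqrt(left_ant * left_post)
    gm_right = math.sqrt(right_ant * right_post)
    total = gm_left + gm_right
    return 100.0 * gm_left / total, 100.0 * gm_right / total

# Hypothetical ROI counts (left ant/post, right ant/post)
print(relative_uptake_geometric_mean(12500, 14800, 11900, 13600))
```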
Abstract:
This paper addresses sensor network applications that need to obtain an accurate image of physical phenomena with a high sampling rate in both time and space. We present a fast and scalable approach for obtaining an approximate representation of all sensor readings at a high sampling rate, so that critical events in a physical environment can be detected and reacted to quickly. The approach improves on previous work in that, after a startup phase, it can operate with a very small sampling period.
Abstract:
Consider a wireless sensor network (WSN) in which a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; in many cases, however, it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at a given instant are often interesting because they indicate abnormal behavior; for example, a very high maximum temperature may indicate that a fire has broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property that its time complexity does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of the sensor readings matter.
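The abstract does not detail the algorithm, but one classical way to obtain a time complexity that depends only on the network diameter D and the size of the value domain is a binary search over the value domain, where each probe is flooded through the network. The sketch below simulates that idea centrally; it is purely illustrative and is not claimed to be the authors' algorithm.

```python
def min_by_domain_search(readings, lo, hi):
    """Find the minimum of integer sensor readings in [lo, hi] by binary search
    over the value domain. In a multihop WSN each 'any node below threshold?'
    probe would be a network flood taking O(D) time, so the total time is
    O(D * log(hi - lo)), independent of the number of nodes."""
    while lo < hi:
        mid = (lo + hi) // 2
        # In a real WSN this predicate is evaluated by flooding a query and
        # OR-ing the one-bit answers back toward the sink.
        if any(r <= mid for r in readings):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(min_by_domain_search([23, 21, 47, 35], lo=0, hi=127))  # -> 21
```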
Abstract:
Though the formal mathematical idea of introducing noninteger-order derivatives can be traced back to the 17th century, to a 1695 letter in which L'Hospital asked Leibniz what the meaning of D^n y would be if n = 1/2 [1], it was only properly outlined in the 19th century [2, 3, 4]. Due to the lack of a clear physical interpretation, its first applications in physics appeared only later, in the 20th century, in connection with visco-elastic phenomena [5, 6]. The topic later attracted quite general attention [7, 8, 9] and found new applications in materials science [10], the analysis of earthquake signals [11], the control of robots [12], the description of diffusion [13], etc.
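For concreteness, one standard way to give meaning to the half-order derivative that L'Hospital asked about is the Riemann-Liouville definition; the formula below is a textbook statement added for illustration and is not necessarily the formulation used in this work.

```latex
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
\int_{0}^{t} (t-\tau)^{\,n-\alpha-1} f(\tau)\, d\tau,
\qquad n-1 < \alpha \le n .
```

For example, with $\alpha = 1/2$ this gives $D^{1/2} t = 2\sqrt{t/\pi}$.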
Abstract:
A robot’s drive has to exert driving forces that keep its arm and end effector at the proper position, velocity and acceleration, and simultaneously has to compensate for the contact forces arising between the tool and the workpiece, depending on the needs of the actual technological operation. Balancing the effects of a priori unknown external disturbance forces and the inaccuracies of the available dynamic model of the robot is also important. Technological tasks that simultaneously require well-prescribed end-effector trajectories and contact forces are challenging control problems that can be tackled in various ways.
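One common way to combine trajectory tracking with compensation of measured contact forces is a model-based feedforward term plus PD feedback plus force cancellation. The single-joint sketch below is a minimal illustration under that assumption; the gains, the mass estimate and the function name are illustrative and not taken from the paper.

```python
def joint_drive_force(q, qd, q_ref, qd_ref, qdd_ref, f_contact,
                      m_est=1.0, kp=100.0, kd=20.0):
    """Driving force for a single joint: inverse-dynamics feedforward plus PD
    trajectory correction plus compensation of the measured contact force."""
    feedforward = m_est * qdd_ref                    # model-based acceleration term
    feedback = kp * (q_ref - q) + kd * (qd_ref - qd) # corrects tracking errors
    return feedforward + feedback + f_contact        # cancel the external load

# Hypothetical instantaneous values
print(joint_drive_force(q=0.48, qd=0.9, q_ref=0.5, qd_ref=1.0, qdd_ref=0.2,
                        f_contact=3.5))
```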
Abstract:
The Janssen-Cilag proposal for a risk-sharing agreement regarding bortezomib received a welcome signal from NICE. The Office of Fair Trading report included risk-sharing agreements as an available tool for the National Health Service. Nonetheless, recent discussions have somewhat neglected the economic fundamentals underlying risk-sharing agreements. We argue here that risk-sharing agreements, although attractive due to the principle of paying by results, also entail risks. Too many patients may be put under treatment even with a low probability of success. Prices are likely to be adjusted upward in anticipation of future risk-sharing agreements between the pharmaceutical company and the third-party payer. One available instrument is a verification cost per patient treated, which allows the first-best allocation of patients to the new treatment to be obtained under the risk-sharing agreement. Overall, the welfare effects of risk-sharing agreements are ambiguous, and care must be taken with their use.
Abstract:
This paper describes the implementation of a distributed model predictive approach to automatic generation control. Performance results are discussed by comparing classical techniques (based on integral control) with model predictive control solutions (centralized and distributed) for different operational scenarios with two interconnected networks. These scenarios include variable load levels (ranging from a small to a large ratio of unbalanced generated power to power consumption) and variable distances between the interconnected networks. For the two networks the paper also examines the impact of load variation in an island context (each network isolated from the other).
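As a point of reference for the classical technique mentioned above, a single-area automatic generation control loop with integral control of the area control error can be sketched as follows. The model, gains and time constants are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal single-area AGC simulation with integral control of the
# area control error (ACE). All parameters are illustrative (per-unit).
H, D = 5.0, 1.0          # inertia constant, load damping
R, Ki = 0.05, 0.3        # governor droop, integral gain
Tg = 0.5                 # simplified governor/turbine time constant (s)
dt, steps = 0.01, 5000

df, dpm, dpc = 0.0, 0.0, 0.0   # frequency deviation, mechanical power, controller state
dpl = 0.1                      # 0.1 p.u. step load increase
for _ in range(steps):
    ace = df                                      # single area: ACE reduces to freq. deviation
    dpc += -Ki * ace * dt                         # integral controller
    dpm += dt / Tg * (dpc - df / R - dpm)         # governor/turbine dynamics
    df += dt / (2 * H) * (dpm - dpl - D * df)     # swing equation

print(f"frequency deviation after {steps * dt:.0f} s: {df:.4f} p.u.")  # driven back toward 0
```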
Abstract:
This paper proposes and reports the development of an open source solution for the integrated management of Infrastructure as a Service (IaaS) cloud computing resources, through the use of a common API taxonomy, to incorporate open source and proprietary platforms. This research included two surveys, one on open source IaaS platforms (OpenNebula, OpenStack and CloudStack) and a proprietary platform (Parallels Automation for Cloud Infrastructure - PACI), and another on IaaS abstraction solutions (jClouds, Libcloud and Deltacloud), followed by a thorough comparison to determine the best approach. The adopted implementation reuses the Apache Deltacloud open source abstraction framework, which relies on software driver modules to interface with different IaaS platforms, and involved the development of a new Deltacloud driver for PACI. The resulting interoperable solution successfully incorporates OpenNebula, OpenStack (reusing pre-existing drivers) and PACI (using the newly developed Deltacloud PACI driver) nodes and provides a Web dashboard and a Representational State Transfer (REST) interface library. The results of the data payload and response time tests performed are presented and discussed. The conclusions show that open source abstraction tools like Deltacloud allow the modular and integrated management of IaaS platforms (open source and proprietary), introduce relevant time overheads and negligible data overheads and, as a result, can be adopted by Small and Medium-sized Enterprise (SME) cloud providers to circumvent the vendor lock-in problem whenever service response time is not critical.
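To illustrate the kind of access such an abstraction layer provides, the sketch below lists instances through a Deltacloud-style REST endpoint. The URL, port, credentials, driver name and the assumed JSON response structure are illustrative assumptions, not details taken from the paper or from the developed PACI driver.

```python
import requests

# Minimal sketch: list instances via a Deltacloud-style REST API.
DELTACLOUD_URL = "http://localhost:3001/api"   # assumed local deltacloudd endpoint

resp = requests.get(
    f"{DELTACLOUD_URL}/instances",
    auth=("user", "password"),                 # back-end platform credentials (placeholder)
    headers={
        "Accept": "application/json",
        "X-Deltacloud-Driver": "openstack",    # select the back-end driver per request
    },
    timeout=30,
)
resp.raise_for_status()
for inst in resp.json().get("instances", []):  # response layout assumed for illustration
    print(inst.get("id"), inst.get("state"))
```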
Abstract:
As e-learning gradually evolved, many specialized and disparate systems appeared to fulfil the needs of teachers and students, such as repositories of learning objects, authoring tools, intelligent tutors and automatic evaluators. This heterogeneity raises interoperability issues, giving the standardization of content an important role in e-learning. This article presents a survey of current e-learning content aggregation standards, focusing on their internal organization and packaging. This study is part of an effort to choose the most suitable specifications and standards for an e-learning framework called Ensemble, defined as a conceptual tool to organize a network of e-learning systems and services for domains with complex evaluation.
Abstract:
In this paper we address the problem of computing multiple roots of a system of nonlinear equations through the global optimization of an appropriate merit function. The search for a global minimizer of the merit function is carried out by a metaheuristic, known as harmony search, which does not require any derivative information. The multiple roots of the system are determined sequentially over several iterations of a single run, where the merit function is modified by penalty terms that create repulsion areas around previously computed minimizers. A repulsion algorithm based on a multiplicative-type penalty function is proposed. Preliminary numerical experiments with a benchmark set of problems show the effectiveness of the proposed method.
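The abstract does not give the exact penalty form, but the idea of a merit function with multiplicative repulsion around previously found roots can be sketched as follows. The Gaussian-bump repulsion factor and its constants are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def repulsive_merit(x, system, found_roots, beta=10.0, rho=0.25):
    """Sum-of-squares merit function multiplied by repulsion factors that
    inflate its value near previously computed roots, steering a stochastic
    search (e.g. harmony search) toward new roots."""
    x = np.asarray(x, dtype=float)
    base = sum(f(x) ** 2 for f in system)          # zero exactly at a root
    factor = 1.0
    for r in found_roots:
        d = np.linalg.norm(x - np.asarray(r))
        factor *= 1.0 + beta * np.exp(-(d / rho) ** 2)   # bump around each known root
    return base * factor

# Example system: x^2 + y^2 - 1 = 0 and x - y = 0, with one root already found.
system = [lambda v: v[0] ** 2 + v[1] ** 2 - 1.0, lambda v: v[0] - v[1]]
found = [np.array([0.7071, 0.7071])]
print(repulsive_merit([0.6, 0.6], system, found))    # inflated: near the known root
print(repulsive_merit([-0.6, -0.6], system, found))  # untouched: near the other root
```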
Abstract:
Electricity markets are complex environments with very particular characteristics. A critical issue concerns the constant changes they are subject to. This is a result of the restructuring of electricity markets, carried out to increase competitiveness but with significant implications for the complexity and unpredictability of those markets. The constant growth in market unpredictability has amplified the need for market participants to foresee market behavior. The need to understand the market mechanisms and how the interaction of the involved players affects market outcomes has contributed to the growing use of simulation tools. Multi-agent based software is particularly well suited to analyzing dynamic and adaptive systems with complex interactions among their constituents, such as electricity markets. This paper presents the Multi-Agent System for Competitive Electricity Markets (MASCEM), a simulator based on multi-agent technology that provides a realistic platform to simulate electricity markets, the numerous negotiation opportunities and the participating entities.
Abstract:
Nowadays, Short Sea Shipping (SSS) is an essential part of the European multi-modal transport system, representing approximately thirty-seven per cent of intra-Community transactions in tonne-kilometres (tkm). Since 2001, the European Shortsea Network (ESN), in partnership with the Shortsea Promotion Centres (SPC) of each Member State of the European Union (EU), has managed to make significant progress in the promotion and development of this mode of transport. This paper aims to assess and analyse the SSS of containerised goods in Portugal and its articulation with other EU routes and with other transport modes. The current SSS infrastructure, how the sector is organized, and the future perspectives for the sector are also analysed for the case of Portugal. The analyses are based on a survey carried out among logistics operators, navigation agents, freight forwarders, and the leading import and export manufacturers in Portugal.