959 results for efficient algorithms
Abstract:
BACKGROUND: Several guidelines recommend computed tomography scans for populations at high risk of lung cancer. The number of individuals evaluated for peripheral pulmonary lesions (PPL) will probably increase, and with it the number of non-surgical biopsies. Associating a guidance method with a target confirmation technique has been shown to achieve the highest diagnostic yield, but the utility of bronchoscopy with radial probe endobronchial ultrasound using fluoroscopy as guidance without a guide sheath has not been reported. METHODS: We conducted a retrospective analysis of bronchoscopy with radial probe endobronchial ultrasound using fluoroscopy procedures for the investigation of PPL, performed by experienced bronchoscopists with no specific previous training in this particular technique. Operator learning curves and radiological predictors were assessed for all consecutive patients examined during the first year of application of the technique. RESULTS: Fifty-one PPL were investigated. Diagnostic yield and visualization yield were 72.5% and 82.3%, respectively. The diagnostic yield was 64.0% for PPL ≤20 mm and 80.8% for PPL >20 mm. No false-positive results were recorded. The learning curve across all diagnostic tools showed a diagnostic yield of 72.7% for the first sub-group of patients, 81.8% for the second, 72.7% for the third, and 81.8% for the last. CONCLUSION: Bronchoscopy with radial probe endobronchial ultrasound using fluoroscopy as guidance is safe and simple to perform, even without specific prior training, and the diagnostic yield is high for PPL both >20 mm and ≤20 mm. Based on these findings, this method could be introduced as a first-line procedure for the investigation of PPL, particularly in centers with limited resources.
Abstract:
Anthropomorphic model observers are mathematical algorithms applied to images with the ultimate goal of predicting human signal detection and classification accuracy across varieties of backgrounds, image acquisitions and display conditions. A limitation of current channelized model observers is their inability to handle irregularly shaped signals, which are common in clinical images, without a high number of directional channels. Here, we derive a new linear model observer based on convolution channels, which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening with an eye filter (NPWE) observer. In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from lesions down to microcalcifications, in 4-AFC breast tomosynthesis detection tasks, with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. We could not match all the human performances with a model using a single internal noise component across all signal shape, size and contrast conditions. This suggests either that the internal noise varies across signals or that the model cannot entirely capture the human detection strategy. However, the FCO model offers an efficient way to approximate human observer performance for non-symmetric signals.
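For readers unfamiliar with channelized observers, the following minimal sketch illustrates the channelized Hotelling observer that the FCO extends; the channel matrix, array shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cho_test_statistics(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer (CHO) sketch.

    signal_imgs, noise_imgs: arrays of shape (n_images, n_pixels)
    channels: channel matrix of shape (n_pixels, n_channels), e.g. Gabor or
              Laguerre-Gauss channels (illustrative choice, not the paper's).
    Returns the Hotelling template in channel space and per-image decision values.
    """
    # Channelize: project each image onto the channels
    v_sig = signal_imgs @ channels            # (n_sig, n_channels)
    v_noise = noise_imgs @ channels           # (n_noise, n_channels)

    # Mean channel-response difference and averaged intra-class covariance
    delta_v = v_sig.mean(axis=0) - v_noise.mean(axis=0)
    S = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noise, rowvar=False))

    # Hotelling template in channel space
    w = np.linalg.solve(S, delta_v)

    # Decision variables; additive internal noise could be injected here to
    # degrade performance toward human levels, as described in the abstract
    return w, v_sig @ w, v_noise @ w

def detectability_index(t_sig, t_noise):
    """Observer SNR (d') from decision-variable samples."""
    return (t_sig.mean() - t_noise.mean()) / np.sqrt(
        0.5 * (t_sig.var(ddof=1) + t_noise.var(ddof=1)))
```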
Abstract:
Worldwide, about half the adult population is considered overweight, as defined by a body mass index (BMI, body weight divided by height squared) in excess of 25 kg.m-2. Of these individuals, half are clinically obese (with a BMI in excess of 30), and these numbers are still increasing, notably in developing countries such as those of the Middle East region. Obesity is a disorder characterised by an increased mass of adipose tissue (excessive fat accumulation) resulting from a systemic imbalance between food intake and energy expenditure. Although factors such as family history, sedentary lifestyle, urbanisation, income and family diet patterns determine obesity prevalence, the main underlying causes are poor knowledge about food choice and lack of physical activity. Current obesity treatments include dietary restriction, pharmacological interventions and, ultimately, bariatric surgery. The beneficial effects of physical activity on weight loss, through increased energy expenditure and appetite modulation, are also firmly established. Another viable option to induce a negative energy balance is to incorporate hypoxia per se, or combine it with exercise, in an individual's daily schedule. This article presents recent evidence suggesting that combining hypoxic exposure and exercise training might provide a cost-effective strategy for reducing body weight and improving cardio-metabolic health in obese individuals. The efficacy of this approach is further reinforced by epidemiological studies using large-scale databases, which show a negative relationship between altitude of habitation and obesity. In the United States, for instance, obesity prevalence is inversely associated with altitude of residence and urbanisation, after adjusting for temperature, diet, physical activity, smoking and demographic factors.
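A quick worked example of the BMI thresholds quoted above (the weight and height values are purely illustrative):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def classify(b):
    if b >= 30:
        return "obese"
    if b >= 25:
        return "overweight"
    return "not overweight"

# Illustrative values only
b = bmi(85, 1.75)       # ~27.8 kg/m^2
print(b, classify(b))   # 27.75... overweight
```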
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and, more recently, non-local means [4]. Although TV energies are quite attractive because of their edge-preserving ability, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), whereas existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
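The abstract does not reproduce the algorithm itself; as a rough illustration of the kind of accelerated first-order scheme that reaches the O(1/n²) rate, here is a FISTA-style sketch applied to a smoothed TV denoising energy. The forward (super-resolution) operator is omitted, and the smoothing, parameters and names are assumptions for the sketch, not the authors' method.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, 0] += px[:, 0]
    d[:, 1:-1] += px[:, 1:-1] - px[:, :-2]
    d[:, -1] -= px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] -= py[-2, :]
    return d

def tv_denoise_accelerated(f, lam=0.1, eps=0.01, n_iter=300):
    """Nesterov/FISTA-accelerated gradient descent on the smoothed energy
    0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps^2)),
    which attains the O(1/n^2) rate of accelerated first-order methods."""
    step = 1.0 / (1.0 + 8.0 * lam / eps)   # 1/L for this smoothed energy
    x = f.copy()
    y = f.copy()
    t = 1.0
    for _ in range(n_iter):
        gx, gy = grad(y)
        phi = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        g = (y - f) - lam * div(gx / phi, gy / phi)
        x_new = y - step * g
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Illustrative use on a noisy synthetic slice
noisy = np.clip(np.random.default_rng(0).normal(0.5, 0.1, (64, 64)), 0, 1)
recon = tv_denoise_accelerated(noisy)
```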
Abstract:
In diffusion MRI, traditional tractography algorithms do not recover truly quantitative tractograms and the structural connectivity has to be estimated indirectly by counting the number of fiber tracts or averaging scalar maps along them. Recently, global and efficient methods have emerged to estimate more quantitative tractograms by combining tractography with local models for the diffusion signal, like the Convex Optimization Modeling for Microstructure Informed Tractography (COMMIT) framework. In this abstract, we show the importance of using both (i) proper multi-compartment diffusion models and (ii) adequate multi-shell acquisitions, in order to evaluate the accuracy and the biological plausibility of the tractograms.
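As a toy illustration of the convex fitting step that frameworks such as COMMIT rely on, the sketch below solves a non-negative least-squares problem for contribution weights; the forward matrix, dimensions and noise level are made up for illustration rather than taken from a real acquisition.

```python
import numpy as np
from scipy.optimize import nnls

# Toy dimensions (illustrative only): n_meas dMRI measurements,
# n_atoms candidate contributions (streamlines x compartments)
rng = np.random.default_rng(0)
n_meas, n_atoms = 200, 50

# A: forward model mapping each candidate contribution to the signal.
# In COMMIT this is built from multi-compartment models evaluated on a
# multi-shell acquisition scheme; here it is random for illustration.
A = np.abs(rng.normal(size=(n_meas, n_atoms)))
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, 5, replace=False)] = rng.uniform(0.5, 1.0, 5)
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

# Non-negative least squares: the convex fit for the contribution weights
x_hat, residual = nnls(A, y)
print("recovered support:", np.nonzero(x_hat > 1e-3)[0])
```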
Abstract:
X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
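One common way to implement the internal noise methods mentioned above is to add zero-mean Gaussian noise to the model observer's decision variables until its detectability drops to the human level; the sketch below shows this calibration in a generic form, with all values illustrative rather than taken from the study.

```python
import numpy as np

def dprime(t_sig, t_noise):
    """Detectability index from signal-present/absent decision variables."""
    return (t_sig.mean() - t_noise.mean()) / np.sqrt(
        0.5 * (t_sig.var(ddof=1) + t_noise.var(ddof=1)))

def match_internal_noise(t_sig, t_noise, d_human,
                         factors=np.linspace(0, 5, 101)):
    """Add zero-mean Gaussian internal noise (scaled to the external noise
    standard deviation) to the decision variables until the model's d'
    is closest to the target human d'. Purely illustrative procedure."""
    rng = np.random.default_rng(0)
    sigma_ext = t_noise.std(ddof=1)
    best = None
    for a in factors:
        ts = t_sig + rng.normal(0, a * sigma_ext, t_sig.shape)
        tn = t_noise + rng.normal(0, a * sigma_ext, t_noise.shape)
        d = dprime(ts, tn)
        if best is None or abs(d - d_human) < abs(best[1] - d_human):
            best = (a, d)
    return best  # (internal noise factor, achieved d')

# Illustrative decision variables (would come from the msCHO in practice)
rng = np.random.default_rng(1)
t_sig = rng.normal(2.0, 1.0, 500)
t_noise = rng.normal(0.0, 1.0, 500)
print(match_internal_noise(t_sig, t_noise, d_human=1.2))
```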
Abstract:
We examine the ubiquitous interaction between an information screen and its viewers' mobile devices, highlight the communication vulnerabilities, suggest mitigation strategies, and finally implement these strategies to secure the communication. The screen transparently infers the information preferences of viewers within its vicinity from their mobile devices over Bluetooth. Backend processing then retrieves up-to-date versions of the preferred information from content providers. Retrieved content, such as sporting news, weather forecasts, advertisements, stock markets and aviation schedules, is systematically displayed on the screen. To maximise users' benefit, experience and acceptance, the service is provided with no user interaction at the screen while upholding preference privacy and viewer anonymity. Given the personal nature of mobile devices, the privacy of their contents, the confidentiality of preferences, and the vulnerabilities introduced by the screen, the service's security is fortified. Fortification relies predominantly on efficient cryptographic algorithms inspired by elliptic-curve cryptosystems, together with access control and anonymity mechanisms. These mechanisms are demonstrated to attain the set objectives with reasonable performance.
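As an illustration of the elliptic-curve style key agreement such a service could build on, here is a minimal ECDH sketch using the Python cryptography package; the curve choice, key-derivation parameters and the pairing of screen and device keys are assumptions, not the system's actual protocol.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side (screen and mobile device) generates an ephemeral EC key pair
screen_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())

# Each side derives the same shared secret from its own private key and the
# peer's public key (public keys would be exchanged over Bluetooth)
screen_shared = screen_key.exchange(ec.ECDH(), device_key.public_key())
device_shared = device_key.exchange(ec.ECDH(), screen_key.public_key())
assert screen_shared == device_shared

# Derive a symmetric session key for encrypting preference exchanges
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"screen-session").derive(screen_shared)
```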
Abstract:
This thesis deals with a hardware-accelerated Java virtual machine, named REALJava. The REALJava virtual machine is targeted at resource-constrained embedded systems. The goal is to attain increased computational performance with reduced power consumption. While these objectives are often seen as trade-offs, in this context both can be attained simultaneously by using dedicated hardware. The target level of the computational performance of the REALJava virtual machine is initially set to be as fast as the currently available full custom ASIC Java processors. As a secondary goal, all of the components of the virtual machine are designed so that the resulting system can be scaled to support multiple co-processor cores. The virtual machine is designed using the hardware/software co-design paradigm. The partitioning between the two domains is flexible, allowing customizations to the resulting system; for instance, floating-point support can be omitted from the hardware in order to decrease the size of the co-processor core. The communication between the hardware and the software domains is encapsulated into modules. This allows the REALJava virtual machine to be easily integrated into any system, simply by redesigning the communication modules. Besides the virtual machine and the related co-processor architecture, several performance-enhancing techniques are presented. These include techniques related to instruction folding, stack handling, method invocation, constant loading and control in the time domain. The REALJava virtual machine is prototyped using three different FPGA platforms. The original pipeline structure is modified to suit the FPGA environment. The performance of the resulting Java virtual machine is evaluated against existing Java solutions in the embedded systems field. The results show that the goals are attained, both in terms of computational performance and power consumption. The computational performance in particular is evaluated thoroughly, and the results show that REALJava is more than twice as fast as the fastest full custom ASIC Java processor. In addition to standard Java virtual machine benchmarks, several new Java applications are designed to both verify the results and broaden the spectrum of the tests.
Abstract:
Network virtualisation is considerably gaining attention as a solution to ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multi-agent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
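As a toy, single-agent illustration of the kind of evaluative-feedback learning described above, the sketch below uses tabular Q-learning to choose how much capacity a substrate node reserves; the state, action and reward definitions are invented for illustration and do not reproduce the paper's multi-agent algorithm.

```python
import numpy as np

# Toy single-agent sketch: a substrate node learns how much capacity to
# reserve for a hosted virtual node. States, actions and rewards are illustrative.
rng = np.random.default_rng(0)
n_load_levels = 10          # discretised observed virtual-node load
n_actions = 10              # discretised amount of capacity to reserve
Q = np.zeros((n_load_levels, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(load, reserved):
    # Penalise under-provisioning (dropped packets, extra delay) more heavily
    # than over-provisioning (wasted substrate resources); weights are illustrative.
    shortfall = max(load - reserved, 0)
    waste = max(reserved - load, 0)
    return -(2.0 * shortfall + waste)

for episode in range(5000):
    load = rng.integers(n_load_levels)
    # Epsilon-greedy action selection
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(Q[load].argmax())
    r = reward(load, action)
    next_load = rng.integers(n_load_levels)   # load evolves randomly in this toy
    # Q-learning update with evaluative feedback
    Q[load, action] += alpha * (r + gamma * Q[next_load].max() - Q[load, action])

# Greedy policy: reserved capacity chosen for each observed load level
print(Q.argmax(axis=1))
```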
Abstract:
This dissertation analyses the growing pool of copyrighted works that are offered to the public under Creative Commons licensing. The study consists of an analysis of the novel licensing system, the licensors, and the changes to the "all rights reserved" paradigm of copyright law. Copyright law reserves all rights to the creator until seventy years have passed since her demise. Many claim that this endangers communal interests. Quite often the creators are willing to release some rights. This, however, is very difficult to do and requires the help of specialized lawyers. The study finds that the innovative Creative Commons licensing scheme is well suited to low-value, high-volume licensing. It helps to reduce transaction costs on several levels. However, CC licensing is not a "silver bullet". Privacy, moral rights, the problems of license interpretation, and license compatibility with other open licenses and collecting societies remain unsolved. The study consists of seven chapters. The first chapter introduces the research topic and research questions. The second and third chapters inspect the technical, economic and legal aspects of the Creative Commons licensing scheme. The fourth and fifth chapters examine the incentives of the licensors who use open licenses and describe certain open business models. The sixth chapter studies the role of collecting societies and whether the two institutions, Creative Commons and collecting societies, can coexist. The final chapter summarizes the findings. The dissertation contributes to the existing literature in several ways. There is a wide range of prior research on open source licensing; however, there is an urgent need for an extensive study of Creative Commons licensing and its actual and potential impact on the creative ecosystem.
Abstract:
In mathematical modeling, the estimation of model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which introduces uncertainty into the model estimates. In Bayesian statistics, all unknown quantities are represented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into the posterior distribution. Mathematical models are typically nonlinear; producing statistics for them requires efficient sampling algorithms. In this thesis, the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced. Different ways to specify prior distributions are also presented. The main issue is the estimation of the measurement error and how to obtain prior knowledge for the variance or covariance. Variance and covariance sampling is combined with the algorithms above. Example hyperprior models are applied to the estimation of model parameters and error in a case with outliers.
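A minimal random-walk Metropolis-Hastings sketch of the kind discussed in the thesis is shown below; the target posterior, data and proposal scale are illustrative assumptions, not the thesis examples.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=10000, prop_std=0.5):
    """Random-walk Metropolis-Hastings sketch.
    log_post: function returning the log posterior (log prior + log likelihood)
    theta0:   starting parameter vector
    """
    rng = np.random.default_rng(0)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + rng.normal(0, prop_std, theta.shape)
        logp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

# Illustrative target: Gaussian likelihood with a Gaussian prior on the mean
data = np.array([1.2, 0.8, 1.1, 0.9, 1.3])
def log_post(theta):
    mu = theta[0]
    log_prior = -0.5 * mu ** 2                     # N(0, 1) prior
    log_lik = -0.5 * np.sum((data - mu) ** 2)      # unit-variance likelihood
    return log_prior + log_lik

samples = metropolis_hastings(log_post, [0.0])
print(samples[1000:].mean(axis=0))   # posterior mean estimate after burn-in
```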