987 results for "Synthesis framework"


Relevance: 70.00%

Abstract:

A porous, high-surface-area TiO2 with anatase or rutile crystalline domains is advantageous for high-efficiency photonic devices. Here, we report a new route to the synthesis of mesoporous titania with fully anatase crystalline domains. This route involves the preparation of anatase nanocrystalline seed suspensions as the titania precursor and a block copolymer surfactant, Pluronic P123, as the template for the hydrothermal self-assembly process. A large-pore (7-8 nm) mesoporous titania with a high surface area of 106-150 m²/g after calcination at 400 °C for 4 h in air is achieved. Increasing the hydrothermal temperature decreases the surface area and creates larger pores. Characteristics of the seed precursors as well as the resultant mesoporous titania powder were studied using XRD, N₂ adsorption/desorption analysis, and TEM. We believe these materials will be especially useful for photoelectrochemical solar cell and photocatalysis applications.

Relevance: 60.00%

Abstract:

There is wide agreement that identity is a multidisciplinary concept. The authors consider this an opportunity to develop a framework for assessing identity. In a marketing context, the literature reveals two approaches to identity: one focuses on corporate identity and the other on branding. The aim of this paper is to integrate these two approaches into a synthesis framework for assessing brand identity. Based on the identity literature, the authors identify nine components related to brand identity. These components are described in this paper, along with their relation to brand identity. The authors hope that this synthesis approach contributes to a better understanding of brand identity and encourages refinement of the framework in future work.

Relevance: 60.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow scanning at reduced doses while maintaining image quality at an acceptable level. There is therefore a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that implement the aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality: image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs. Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., a computerized detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses, or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
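
One common way to turn such paired accuracy measurements into a dose reduction figure is to interpolate each algorithm's accuracy-vs-dose curve and find the iterative-reconstruction dose that matches FBP's accuracy at the reference dose. The Python sketch below illustrates the idea with entirely hypothetical data (it is not the dissertation's analysis code, and it assumes accuracy increases monotonically with dose, as np.interp requires):

```python
import numpy as np

def dose_reduction(doses, acc_fbp, acc_admire):
    """Fraction of dose saved by the iterative algorithm at equal accuracy.

    doses      : increasing dose levels (e.g., CTDIvol in mGy)
    acc_fbp    : detection accuracy for FBP at each dose
    acc_admire : detection accuracy for ADMIRE at each dose (monotonic)
    """
    target = acc_fbp[-1]                        # FBP accuracy at full dose
    matched = np.interp(target, acc_admire, doses)  # ADMIRE dose matching it
    return 1.0 - matched / doses[-1]

# Hypothetical curves; a 56% figure would mean matched dose = 0.44 * full dose.
doses = np.array([2.0, 4.0, 8.0, 16.0])
print(dose_reduction(doses,
                     np.array([0.70, 0.78, 0.85, 0.90]),
                     np.array([0.80, 0.88, 0.93, 0.96])))
```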

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models produce image quality metrics that best correlate with human detection performance. The models ranged from simple metrics of image quality, such as the contrast-to-noise ratio (CNR), to more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.
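
As a rough, hypothetical illustration of how these figures of merit differ (this is not the dissertation's code; the signal and noise parameters are made up), the following Python sketch computes a spatial-domain non-prewhitening (NPW) matched filter detectability index from a known signal template and an ensemble of noise-only ROIs:

```python
import numpy as np

def npw_dprime(template, noise_rois):
    """Spatial-domain NPW matched filter index:
    d'^2 = (s^T s)^2 / var(s^T n), with var(s^T n) estimated from
    noise-only realizations."""
    s = template.ravel()
    responses = noise_rois.reshape(len(noise_rois), -1) @ s
    return (s @ s) / np.sqrt(responses.var(ddof=1))

# Toy example: a ~20 HU Gaussian lesion template in 10 HU white noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 33)
xx, yy = np.meshgrid(x, x)
template = 20.0 * np.exp(-(xx**2 + yy**2) / 0.1)
noise_rois = rng.normal(0.0, 10.0, size=(200, 33, 33))
print("NPW d':", npw_dprime(template, noise_rois))
```

Unlike CNR, the NPW response weights noise by how strongly it projects onto the expected signal, which is one plausible reason such models track human performance across reconstruction algorithms better than CNR does.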

The uniform-background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, such phantoms may not be fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that for FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
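
A minimal sketch of the image-subtraction idea (assuming two registered, equal-dose repeat scans of the same phantom; the sqrt(2) factor follows from the variance of a difference of two independent, equal-variance images):

```python
import numpy as np

def quantum_noise(repeat_a, repeat_b):
    """Estimate quantum noise magnitude from two repeat scans of one phantom.

    Subtracting registered repeats cancels the deterministic phantom
    structure (uniform or textured), leaving only noise; for independent,
    equal-dose repeats, the std of the difference is sqrt(2) times the
    per-image noise.
    """
    diff = repeat_a.astype(float) - repeat_b.astype(float)
    return diff.std() / np.sqrt(2.0)
```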

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
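
For reference, the conventional ensemble NPS estimator for square ROIs looks roughly like the sketch below (the dissertation's contribution was extending the estimate to irregularly shaped ROIs, which this simple version does not attempt):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size_mm):
    """Ensemble estimate of the 2D noise power spectrum from square ROIs.

    noise_rois    : (n_rois, N, N) stack of noise-only ROIs, e.g. obtained
                    by subtracting repeat scans
    pixel_size_mm : in-plane pixel spacing
    NPS(u, v) = (dx * dy / (Nx * Ny)) * <|FFT2(roi - mean(roi))|^2>
    """
    n_rois, ny, nx = noise_rois.shape
    detrended = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return (pixel_size_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)
```

The 1/(Nx*Ny) factor assumes numpy's unnormalized FFT convention.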

To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, via a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in the uniform phantom than in the textured phantoms.
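
For flavor, a heavily simplified generator in the spirit of the Clustered Lumpy Background model is sketched below. All parameters are illustrative; the published model also randomizes lump orientation and elongation, and the study additionally fit the parameters to patient liver texture with a genetic algorithm, none of which this sketch attempts:

```python
import numpy as np

def clustered_lumpy_background(size=256, n_clusters=40, mean_lumps=10,
                               cluster_spread=8.0, lump_scale=6.0,
                               amplitude=10.0, seed=None):
    """Simplified Clustered Lumpy Background: exponentially decaying
    'lumps' scattered around randomly placed cluster centers."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for _ in range(n_clusters):
        cx, cy = rng.uniform(0, size, 2)          # cluster center
        for _ in range(rng.poisson(mean_lumps)):  # lumps in this cluster
            lx = cx + rng.normal(0.0, cluster_spread)
            ly = cy + rng.normal(0.0, cluster_spread)
            r = np.hypot(xx - lx, yy - ly)
            img += amplitude * np.exp(-r / lump_scale)
    return img
```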

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
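
A toy illustration of the voxelize-and-insert step is given below. The names and the lesion shape are hypothetical: a sphere with a sigmoid edge stands in for the richer analytic models of size, shape, contrast, and edge profile, and insertion is done in the image domain for simplicity (the clinical-trial study described later inserted lesions into raw projection data):

```python
import numpy as np

def voxelize_lesion(shape, center_mm, radius_mm, contrast_hu,
                    edge_width_mm, voxel_mm):
    """Voxelize a simple analytic lesion model: a sphere of given contrast
    whose edge falls off smoothly over edge_width_mm (sigmoid profile)."""
    axes = [np.arange(n) * voxel_mm for n in shape]
    grids = np.meshgrid(*axes, indexing="ij")
    r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, center_mm)))
    return contrast_hu / (1.0 + np.exp((r - radius_mm) / edge_width_mm))

def make_hybrid(patient_volume_hu, lesion_hu):
    """Additively insert the voxelized lesion to form a 'hybrid' image,
    so the ground-truth morphology and location are known exactly."""
    return patient_volume_hu + lesion_hu

# Example: a -15 HU, 8 mm radius lesion centered in a 64^3, 1 mm volume.
lesion = voxelize_lesion((64, 64, 64), (32.0, 32.0, 32.0),
                         radius_mm=8.0, contrast_hu=-15.0,
                         edge_width_mm=1.0, voxel_mm=1.0)
```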

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affect the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used here. That database contained images of the same patients at two dose levels (50% and 100%) along with three reconstruction algorithms from a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected 5, 3, and 4 of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging, specifically as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans, and the projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-free images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.

In conclusion, this dissertation provides the scientific community with a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 40.00%

Abstract:

In this work, nanoporous nickel oxide was synthesized using an anionic-surfactant assembly method. Structural characterization shows that this nickel oxide possesses a partly ordered mesoporous structure with nanocrystalline pore walls. The formation mechanism of the wormlike nanoporous structure is ascribed to the quasi-reverse micelle system formed by the ternary SDS (sodium dodecyl sulfate)/urea/water phases. Cyclic voltammetry shows that these nickel oxide samples possess both good capacitive behavior, due to their unique nanoporous structure, and very high specific capacitance, due to their electrochemically active high surface area.

Relevance: 40.00%

Abstract:

Background - This review provides a worked example of ‘best fit’ framework synthesis using the Theoretical Domains Framework (TDF) of health psychology theories as an a priori framework in the synthesis of qualitative evidence. Framework synthesis works best with ‘policy urgent’ questions. Objective - The review question selected was: what are patients’ experiences of prevention programmes for cardiovascular disease (CVD) and diabetes? The significance of these conditions is clear: CVD claims more deaths worldwide than any other disease; diabetes is a risk factor for CVD and a leading cause of death. Method - A systematic review and framework synthesis were conducted. This novel method for synthesizing qualitative evidence aims to make health psychology theory accessible to implementation science and to advance the application of qualitative research findings in evidence-based healthcare. Results - Findings from 14 original studies were coded deductively into the TDF, and subsequently an inductive thematic analysis was conducted. The synthesized findings produced six themes relating to: knowledge, beliefs, cues to (in)action, social influences, role and identity, and context. A conceptual model was generated illustrating the combinations of factors that produce cues to (in)action; it demonstrated interrelationships between individual (beliefs and knowledge) and societal (social influences, role and identity, context) factors. Conclusion - Several intervention points were highlighted where factors could be manipulated to produce favourable cues to action. However, a lack of transparency in the behavioural components of published interventions needs to be corrected, and further evaluations of acceptability in relation to patient experience are required. Further work is needed to test the comprehensiveness of the TDF as an a priori framework for ‘policy urgent’ questions using ‘best fit’ framework synthesis.

Relevance: 40.00%

Abstract:

Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation into the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app’s Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of the synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions. Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that it is sufficient to allow SymDroid to execute a range of apps.
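
A schematic sketch of the adaptive concretization loop follows (hypothetical interfaces, not Pasket's actual implementation): each trial fixes the most influential unknowns to random concrete values, leaving a much smaller symbolic sub-problem for a SAT/SMT-backed synthesizer, and the trials race in parallel.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def adaptive_concretization(solve, substitute, domains, trials=64, workers=8):
    """Schematic adaptive concretization (hypothetical callables).

    domains    : {unknown_name: concrete_candidates} for the unknowns judged
                 most influential on search-space size
    substitute : returns the synthesis problem with those unknowns fixed
    solve      : SAT/SMT-backed synthesizer; returns a solution or None
    """
    def one_trial(seed):
        rng = random.Random(seed)
        pick = {name: rng.choice(values) for name, values in domains.items()}
        return solve(substitute(pick))   # smaller, partially concrete problem

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(one_trial, range(trials)):
            if result is not None:       # first satisfiable sub-problem wins
                return result
    return None  # every sampled concretization was unsatisfiable
```

Concretizing everything degenerates to pure stochastic (explicit) search, and concretizing nothing to pure symbolic search; the published algorithm adapts the degree of concretization online, which this sketch omits.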

Relevance: 30.00%

Abstract:

Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles; however, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification, comprising three main aspects: (1) Artificial Gene Network (AGN) generation through theoretical models of complex networks, which are used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data, and the results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks were assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation in the average degree k, its network recovery rate decreasing as k increased. Signal size was important for the accuracy of network identification, although the method presented very good results even with small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, behaving similarly when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying key properties of the evaluated method, and it can be extended to other inference methods.
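
A compressed, numpy-only toy version of the validation loop may help make the three steps concrete. Every modeling choice here (Boolean majority dynamics, a mutual-information edge score, the thresholding rule) is a stand-in for the paper's models, not a reproduction of them:

```python
import numpy as np

def validate_inference(n_genes=50, k_avg=2.0, t_steps=40, seed=0):
    """Toy AGN validation loop: generate -> simulate -> infer -> score."""
    rng = np.random.default_rng(seed)

    # (1) Artificial gene network: directed Erdos-Renyi, truth[i, j] = i -> j.
    truth = rng.random((n_genes, n_genes)) < k_avg / (n_genes - 1)
    np.fill_diagonal(truth, False)

    # (2) Temporal expression: a gene turns on iff a majority of its
    # regulators were on at the previous time step.
    x = np.zeros((t_steps, n_genes), dtype=bool)
    x[0] = rng.random(n_genes) < 0.5
    n_reg = truth.sum(axis=0)
    for t in range(1, t_steps):
        votes = x[t - 1].astype(int) @ truth.astype(int)
        x[t] = np.where(n_reg > 0, 2 * votes > n_reg, x[t - 1])

    # (3) Identification: score candidate edge i -> j by mutual information
    # between gene i at time t-1 and gene j at time t.
    def mi(a, b):
        m, eps = 0.0, 1e-12
        for va in (False, True):
            for vb in (False, True):
                pj = np.mean((a == va) & (b == vb)) + eps
                m += pj * np.log(pj / ((np.mean(a == va) + eps) *
                                       (np.mean(b == vb) + eps)))
        return m

    scores = np.array([[mi(x[:-1, i], x[1:, j]) for j in range(n_genes)]
                       for i in range(n_genes)])
    np.fill_diagonal(scores, -np.inf)
    pred = scores > np.quantile(scores[np.isfinite(scores)],
                                1 - truth.mean())

    # (4) Validation: compare the inferred edges to the ground truth.
    tp = (pred & truth).sum()
    return tp / max(pred.sum(), 1), tp / max(truth.sum(), 1)  # prec., recall

print(validate_inference())
```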

Relevance: 30.00%

Abstract:

A novel conotoxin belonging to the 'four-loop' structural class has been isolated from the venom of the piscivorous cone snail Conus tulipa. It was identified using a chemically directed strategy based largely on mass spectrometric techniques. The new toxin, conotoxin TVIIA, consists of 30 amino-acid residues and contains three disulfide bonds. The amino-acid sequence was determined by Edman analysis as SCSGRDSRCOOVCCMGLMCSRGKCVSIYGE, where O = 4-trans-L-hydroxyproline. Two under-hydroxylated analogues, [Pro10]TVIIA and [Pro10,11]TVIIA, were also identified in the venom of C. tulipa. The sequences of TVIIA and [Pro10]TVIIA were further verified by chemical synthesis and coelution studies with native material. Conotoxin TVIIA has the six-cysteine/four-loop structural framework common to many peptides from Conus venoms, including the omega-, delta- and kappa-conotoxins. However, TVIIA displays little sequence homology with these well-characterized pharmacological classes of peptides; instead, it displays striking sequence homology with conotoxin GS, a peptide from Conus geographus that blocks skeletal muscle sodium channels. These new toxins and GS share several biochemical features and represent a distinct subgroup of the four-loop conotoxins.

Relevance: 30.00%

Abstract:

There is a widely held paradigm that mangroves are critical for sustaining production in coastal fisheries through their role as important nursery areas for fisheries species. This paradigm frequently forms the basis for important management decisions on habitat conservation and restoration of mangroves and other coastal wetlands. This paper reviews the current status of the paradigm and synthesises the information on the processes underlying these potential links. In the past, the paradigm has been supported by studies identifying correlations between the areal and linear extent of mangroves and fisheries catch. This paper goes beyond the correlative approach to develop a new framework on which future evaluations can be based. First, the review identifies which types of marine animals use mangroves, and at what life stages. These species can be categorised as estuarine residents, marine-estuarine species and marine stragglers. The marine-estuarine category includes many commercial species that use mangrove habitats as nurseries. The second stage is to determine why these species use mangroves as nurseries. The three main proposals are that mangroves provide a refuge from predators, high levels of nutrients, and shelter from physical disturbances. Recognition of the important attributes of mangrove nurseries then allows an evaluation of how changes in mangroves will affect the associated fauna. Surprisingly few studies have addressed this question. Consequently, it is difficult to predict how changes in any of these mangrove attributes would affect the faunal communities within them and, ultimately, influence the fisheries associated with them. From the information available, it seems likely that reductions in mangrove habitat complexity would reduce the biodiversity and abundance of the associated fauna, and these changes have the potential to cause cascading effects at higher trophic levels, with possible consequences for fisheries. Finally, there is a discussion of the data currently available on mangrove distribution and fisheries catch, the limitations of these data, and how best to use them to understand mangrove-fisheries links and, ultimately, to optimise habitat and fisheries management. Examples are drawn from two relatively data-rich regions, Moreton Bay (Australia) and Western Peninsular Malaysia, to illustrate the data needs and research requirements for investigating the mangrove-fisheries paradigm. Having reliable and accurate data at appropriate spatial and temporal scales is crucial for mangrove-fisheries investigations, and recommendations are made for improvements to data collection methods that would meet these criteria. This review provides a framework for future investigations of mangrove-fisheries links, grounded in an understanding of the underlying processes and the need for rigorous data collection. Without this information, the understanding of the relationship between mangroves and fisheries will remain limited. Future investigations of mangrove-fisheries links must take this into account in order to have a sound ecological basis and to provide better information and understanding to both fisheries and conservation managers.

Relevance: 30.00%

Abstract:

The present work concerns a new synthesis approach to prepare niobium-based SAPO materials with the AEL structure, and the characterization of the Nb species incorporated within the inorganic matrices. The SAPO-11 materials were synthesized with or without the help of a small amine, methylamine (MA), as co-template, while Nb was added directly during the preparation of the initial gel. Structural, textural and acidic properties of the different supports were evaluated by XRD, TPR, UV-Vis spectroscopy, pyridine adsorption followed by IR spectroscopy, and thermal analyses. Pure and well-crystalline Nb-based SAPO-11 materials were obtained, either with or without MA, using a low Si content of about 0.6 in the initial gel. Increasing the Si content of the gel up to 0.9 led to a marked decrease in sample crystallinity. Niobium was found to be incorporated in the AEL-pore support both as small Nb2O5 oxide particles and as extra-framework cationic species (Nb5+), compensating the negative charges of the matrix and generating new Lewis acid sites.

Relevance: 30.00%

Abstract:

Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), Vienna, Austria, Nov 10-14, 2013