968 results for Multifractal Products, Multifractal Spectrum, Renyi Function, Stationary Diffusion


Relevance:

30.00%

Publisher:

Abstract:

Fluorescent proteins are valuable tools as biochemical markers for studying cellular processes. Red fluorescent proteins (RFPs) are highly desirable for in vivo applications because they absorb and emit light in the red region of the spectrum, where cellular autofluorescence is low. The naturally occurring fluorescent proteins with emission peaks in this region of the spectrum occur in dimeric or tetrameric forms. The development of mutant monomeric variants of RFPs has resulted in several novel FPs known as mFruits. Although oxygen is required for maturation of the chromophore, photobleaching of FPs is known to be oxygen sensitive, and oxygen-free conditions result in improved photostability. Therefore, understanding oxygen diffusion pathways in FPs is important for both the photostability and the maturation of the chromophores. We used molecular dynamics calculations to investigate protein barrel fluctuations in mCherry, which is one of the most useful monomeric mFruit variants, and its GFP homolog citrine. We employed implicit ligand sampling and locally enhanced sampling to determine oxygen pathways from the bulk solvent into the mCherry chromophore in the interior of the protein. The pathway contains several oxygen-hosting pockets, which were identified by the amino acid residues that form them. We calculated the free energy of an oxygen molecule at points along the path. We also investigated an RFP variant known to be significantly less photostable than mCherry and found much easier oxygen access in this variant. We showed that oxygen pathways can be blocked or altered, and barrel fluctuations reduced, by strategic amino acid substitutions. The results provide a better understanding of the mechanism of molecular oxygen access into the fully folded mCherry protein barrel and provide insight into the photobleaching process in these proteins.
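The implicit-ligand-sampling analysis described above reduces, at each point in the protein, to a Boltzmann relation between occupancy and free energy. The sketch below is a minimal illustration of that relation only, not the authors' code; the occupancy probabilities and temperature are invented inputs.

```python
import math

R_KCAL = 1.98720425864083e-3  # gas constant, kcal/(mol*K)

def o2_free_energy(p_site, p_bulk, temp=300.0):
    """Free energy (kcal/mol) of O2 at a pocket relative to bulk solvent,
    from relative occupancy probabilities -- the Boltzmann relation that
    implicit ligand sampling maps across the protein interior."""
    return -R_KCAL * temp * math.log(p_site / p_bulk)

# A pocket visited twice as often as bulk solvent is a favourable O2 host:
dG_pocket = o2_free_energy(0.02, 0.01)
```

Negative values along the path mark oxygen-hosting pockets; barriers between pockets show up as positive values.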

Relevance:

30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
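Mathematical observer models of this kind are commonly summarized by a frequency-domain detectability index. The sketch below shows one standard form, a 1-D non-prewhitening detectability index, with made-up task, transfer-function, and noise-spectrum inputs; it is a generic illustration, not the dissertation's implementation.

```python
import numpy as np

def dprime_npw(task, ttf, nps):
    """Non-prewhitening detectability index d' from a task function W(f),
    system transfer function TTF(f), and noise power spectrum NPS(f),
    all sampled on a common frequency grid."""
    s = np.abs(task) ** 2 * ttf ** 2
    return float(np.sqrt(s.sum() ** 2 / (s * nps).sum()))

f = np.linspace(0.0, 1.0, 128)
task = np.exp(-4.0 * f)            # low-frequency (large, low-contrast) task
ttf = np.exp(-2.0 * f)             # resolution falling with frequency
nps = np.full_like(f, 0.5)         # white noise, for illustration
d_full = dprime_npw(task, ttf, nps)
d_half = dprime_npw(task, ttf, nps / 2)  # halved noise power
```

Halving the NPS (roughly, doubling dose in a quantum-limited system) raises d' by a factor of √2, which is the kind of dose/quality relationship the phantom measurements quantify.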

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
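Of the models compared, the channelized Hotelling observer reduces each image to a few channel responses and forms a Hotelling template in that low-dimensional space. Below is a self-contained sketch on synthetic data; the random channels, noise images, and signal are invented for illustration and are not the study's channels or image data.

```python
import numpy as np

def cho_dprime(present, absent, channels):
    """Channelized Hotelling observer detectability: project flattened
    images (n_images x n_pixels) onto channels (n_pixels x n_channels),
    build the Hotelling template from the average channel covariance,
    and return the detectability index d'."""
    vp = present @ channels                 # channel outputs, signal-present
    va = absent @ channels                  # channel outputs, signal-absent
    dv = vp.mean(axis=0) - va.mean(axis=0)  # mean channel-response difference
    cov = 0.5 * (np.cov(vp.T) + np.cov(va.T))
    template = np.linalg.solve(cov, dv)     # Hotelling template
    return float(np.sqrt(dv @ template))

rng = np.random.default_rng(0)
channels = np.linalg.qr(rng.normal(size=(64, 2)))[0]          # 2 orthonormal channels
absent = rng.normal(size=(400, 64))                           # noise-only images
present = rng.normal(size=(400, 64)) + 0.5 * channels[:, 0]   # weak known signal
d = cho_dprime(present, absent, channels)
```

With the signal placed along the first channel and unit-variance noise, the recovered d' is close to the true value of 0.5.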

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
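The conventional ensemble NPS estimate that the irregular-ROI method generalizes can be sketched for square noise-only ROIs as follows. This is the standard textbook formula, not the novel method developed in this work, and the pixel size is a made-up value.

```python
import numpy as np

def nps_2d(rois, px):
    """Ensemble 2-D noise power spectrum (units mm^2 * HU^2) from a stack
    of square noise-only ROIs; px is the pixel size in mm. Each ROI is
    mean-subtracted, DFT'd, and the squared magnitudes are averaged."""
    rois = np.asarray(rois, dtype=float)
    n, ny, nx = rois.shape
    dfts = np.fft.fft2(rois - rois.mean(axis=(1, 2), keepdims=True))
    return (px * px / (nx * ny)) * (np.abs(dfts) ** 2).mean(axis=0)

# Sanity check on synthetic sigma=10 white noise: integrating the NPS over
# frequency should recover the variance (~100 HU^2).
rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(200, 32, 32))
nps = nps_2d(rois, px=0.5)
variance = nps.sum() / ((32 * 0.5) * (32 * 0.5))  # sum * dfx * dfy
```

The irregular-ROI extension replaces the square ROI with a masked region, which is what makes the method usable in anthropomorphic textured backgrounds.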

To move beyond assessing only noise properties in textured phantoms and towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
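As a toy illustration of the voxelized-lesion idea, the sketch below builds a hypothetical spherical lesion with a smooth sigmoid edge profile and adds it to a uniform background to form a "hybrid" volume. The shape, edge model, and all numbers are invented for illustration and are not the dissertation's lesion models.

```python
import numpy as np

def lesion_model(shape, center, radius, contrast, edge_w=1.0):
    """Voxelized spherical lesion with a sigmoid edge profile.
    contrast is in HU; radius and edge_w are in voxels (all illustrative).
    Returns an array to add onto a background volume."""
    zz, yy, xx = np.indices(shape)
    r = np.sqrt((zz - center[0]) ** 2 + (yy - center[1]) ** 2
                + (xx - center[2]) ** 2)
    # Sigmoid falls from `contrast` inside the radius to ~0 outside it.
    return contrast / (1.0 + np.exp((r - radius) / edge_w))

# Hypothetical -15 HU lesion inserted into a uniform 50 HU background:
background = np.full((32, 32, 32), 50.0)
hybrid = background + lesion_model((32, 32, 32), (16, 16, 16),
                                   radius=6.0, contrast=-15.0)
```

Because the lesion is generated analytically, its size, contrast, edge profile, and location are known exactly, which is what makes hybrid images useful as ground truth for detectability and estimability studies.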

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an economic model of the effects of identity and social norms on consumption patterns. By incorporating qualitative studies in psychology and sociology, I propose a utility function that features two components – economic (functional) and identity elements. This setup is extended to analyze a market comprising a continuum of consumers, whose identity distribution along a spectrum of binary identities is described by a Beta distribution. I also introduce the notion of salience in the context of identity and consumption decisions. The key result of the model suggests that fundamental economic parameters, such as price elasticity and market demand, can be altered by identity elements. In addition, it predicts that firms in perfectly competitive markets may associate their products with certain types of identities, in order to reduce product substitutability and attain price-setting power.
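As a toy numerical illustration of a two-component utility of this kind, the sketch below combines a functional surplus term with an identity term that penalizes distance between the consumer's position on the identity spectrum and the product's identity association. The functional forms, parameter names, and numbers are invented for illustration and are not the paper's specification.

```python
import math

def utility(q, price, consumer_pos, product_pos, salience):
    """Stylized utility with an economic (functional) component and an
    identity component. consumer_pos and product_pos lie on a [0, 1]
    identity spectrum; salience scales how much identity mismatch
    matters in the consumption decision (all illustrative)."""
    functional = math.sqrt(q) - price * q                 # toy functional surplus
    identity = -salience * (consumer_pos - product_pos) ** 2 * q
    return functional + identity

# A consumer whose identity matches the product's association values it more:
u_match = utility(1.0, 0.5, 0.6, 0.6, salience=2.0)
u_mismatch = utility(1.0, 0.5, 0.1, 0.6, salience=2.0)
```

In this toy form, raising salience steepens the penalty for identity-mismatched products, which is one way an identity association can reduce effective substitutability between otherwise identical goods.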

Relevance:

30.00%

Publisher:

Abstract:

Proper balancing of the activities of metabolic pathways to meet the challenge of providing necessary products for biosynthetic and energy demands of the cell is a key requirement for maintaining cell viability and allowing for cell proliferation. Cell metabolism has been found to play a crucial role in numerous cell settings, including in the cells of the immune system, where a successful immune response requires rapid proliferation and successful clearance of dangerous pathogens followed by resolution of the immune response. Additionally, it is now well known that cell metabolism is markedly altered from normal cells in the setting of cancer, where tumor cells rapidly and persistently proliferate. In both settings, alterations to the metabolic profile of the cells play important roles in promoting cell proliferation and survival.

It has long been known that many types of tumor cells and actively proliferating immune cells adopt a metabolic phenotype of aerobic glycolysis, whereby the cell, even under normoxic conditions, imports large amounts of glucose, fluxes it through the glycolytic pathway, and produces lactate. However, the metabolic programs utilized by various immune cell subsets have only recently begun to be explored in detail, and the metabolic features and pathways influencing cell metabolism in tumor cells in vivo have not been studied in detail. The work presented here examines the role of metabolism in regulating the function of an important subset of the immune system, the regulatory T cell (Treg), and the role and regulation of metabolism in the context of malignant T cell acute lymphoblastic leukemia (T-ALL). We show that Treg cells, in order to function properly and suppress auto-inflammatory disease, adopt a metabolic program that is characterized by oxidative metabolism and active suppression of anabolic signaling and metabolic pathways. We found that the transcription factor FoxP3, which is highly expressed in Treg cells, drives this phenotype. Perturbing the metabolic phenotype of Treg cells by enforcing increased glycolysis, or by driving proliferation and anabolic signaling through inflammatory signaling pathways, results in a reduction in the suppressive function of Tregs.

In our studies focused on the metabolism of T-ALL, we observed that while T-ALL cells use and require aerobic glycolysis, the glycolytic metabolism of T-ALL is restrained compared to that of an antigen activated T cell. The metabolism of T-ALL is instead balanced, with mitochondrial metabolism also being increased. We observed that the pro-anabolic growth mTORC1 signaling pathway was limited in primary T-ALL cells as a result of AMPK pathway activity. AMPK pathway signaling was elevated as a result of oncogene induced metabolic stress. AMPK played a key role in the regulation of T-ALL cell metabolism, as genetic deletion of AMPK in an in vivo murine model of T-ALL resulted in increased glycolysis and anabolic metabolism, yet paradoxically increased cell death and increased mouse survival time. AMPK acts to promote mitochondrial oxidative metabolism in T-ALL through the regulation of Complex I activity, and loss of AMPK reduced mitochondrial oxidative metabolism and resulted in increased metabolic stress. Confirming a role for mitochondrial metabolism in T-ALL, we observed that the direct pharmacological inhibition of Complex I also resulted in a rapid loss of T-ALL cell viability in vitro and in vivo. Taken together, this work establishes an important role for AMPK to both balance the metabolic pathways utilized by T-ALL to allow for cell proliferation and to also promote tumor cell viability by controlling metabolic stress.

Overall, this work demonstrates the importance of the proper coupling of metabolic pathway activity with the functional needs of particular types of immune cells. We show that Treg cells, which mainly act to keep immune responses well regulated, adopt a metabolic program in which glycolytic metabolism is actively repressed while oxidative metabolism is promoted. In the setting of malignant T-ALL cells, metabolic activity is surprisingly balanced, with both glycolysis and mitochondrial oxidative metabolism being utilized. In both cases, altering the metabolic balance towards glycolytic metabolism results in negative outcomes for the cell, with decreased Treg functionality and increased metabolic stress in T-ALL. Together, this work has generated a new understanding of how metabolism couples to immune cell function, and may allow for selective targeting of immune cell subsets by the specific targeting of metabolic pathways.

Relevance:

30.00%

Publisher:

Abstract:

Nurse-led home exercise programme improves physical function for people receiving haemodialysis

Relevance:

30.00%

Publisher:

Abstract:

The development of multidrug resistance in Escherichia coli is an important problem in animal and human medicine. In particular, the emergence and spread of resistance determinants to third-generation extended-spectrum cephalosporins (ESCs) among isolates, including cephalosporins critical to human medicine (e.g., ceftriaxone and ceftiofur), is a major public health concern. This thesis had three objectives. The first was to study the dynamics of antimicrobial resistance (AMR) as well as the virulence and AMR gene profiles of E. coli isolated from pigs receiving post-weaning feed supplemented with chlortetracycline and penicillin G and, secondarily, to evaluate the effects of feed additives on these dynamics, taking as a case study a clay mineral, clinoptilolite, given its possible link with the blaCMY-2 gene, which confers resistance to ceftiofur. The second objective was to investigate the mechanisms leading to an increased prevalence of the blaCMY-2 gene in pigs receiving medicated feed that had not been exposed to ceftiofur; here again, we examined the effects of supplementing the feed with a clay mineral on this phenomenon. Finally, our last objective was to study, over time, the genotypes of ceftiofur-resistant clinical E. coli isolates from diseased pigs in Québec, from the time ceftiofur resistance was first reported (1997) until 2012. In the initial study, the prevalence of resistance to 10 antimicrobial agents, including ceftiofur, increased over time in E. coli isolated from weaned piglets. A late increase in the frequency of the blaCMY-2 gene, encoding ceftiofur resistance, and in the presence of the virulence genes iucD and tsh was observed in the isolates.

Feed supplemented with clinoptilolite was associated with a rapid increase, but subsequently a decrease, in the frequency of blaCMY-2 in the isolates. In parallel, a late increase in the frequency of blaCMY-2 and of the virulence genes iucD and tsh was observed in isolates from the control pigs, being significantly higher than in pigs receiving the additive at day 28. Diversity in AMR profiles was observed among blaCMY-2-positive E. coli. Some E. coli clonal lineages became predominant over time. A phylotype A clonal lineage predominated in the supplemented group, whereas phylotype B1 clonal lineages, which often carry the ExPEC-associated virulence gene iucD, predominated in the control group. Incompatibility (Inc) group plasmids I1, A/C, and ColE carrying blaCMY-2 were observed in the transformants. Among the ESC-resistant clinical E. coli strains isolated from diseased pigs in Québec from 1997 to 2012, blaCMY-2 was the most frequently detected β-lactamase gene, followed by blaTEM and blaCTX-M. In addition, clonal analyses showed great genetic diversity. However, E. coli isolates with identical PFGE profiles were found on multiple farms in the same year as well as in different years. Resistance to gentamicin, kanamycin, and chloramphenicol, and the frequency of blaTEM and IncA/C, decreased significantly over the study period, whereas the frequency of IncI1 and of multidrug resistance to seven categories of antimicrobial agents increased significantly over time. The emergence of E. coli isolates positive for blaCTX-M, an extended-spectrum β-lactamase (ESBL), was observed in 2011 and 2012 in distinct clonal lineages and on numerous farms.

Taken together, these results shed light on the dissemination of ceftiofur resistance in E. coli isolated from pigs. Among samples from weaned pigs receiving medicated feed on one farm, for which an increase in ceftiofur resistance was observed, the data reveal that blaCMY-2-positive, ESC-resistant E. coli strains belonged to several different clonal lineages displaying diverse AMR profiles. The blaCMY-2 gene spreads both horizontally and clonally in these E. coli. Adding clinoptilolite to the feed, and the time elapsed after weaning, influence the clonality and the prevalence of the blaCMY-2 gene in E. coli. Over the 16 years of the study, several different clonal lineages were observed among the ceftiofur-resistant E. coli strains isolated from diseased pigs on Québec farms, although no lineage was persistent or predominant throughout the study. The results also suggest that the blaCMY-2 gene spread both horizontally and clonally within farms. Moreover, blaCMY-2 was the major β-lactamase gene in these isolates. From 2011 onward, we report the emergence of the blaCTX-M gene in distinct genetic lineages.

Relevance:

30.00%

Publisher:

Abstract:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
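The power argument can be made concrete with the usual CMOS dynamic-power scaling assumption: power grows as f^α with α ≈ 3 when supply voltage scales with frequency. Under that simplifying assumption (the paper derives its own relation to parallel-system metrics), p processors running at f/p finish the same work in the same time with only p·(1/p)^α of the dynamic energy:

```python
def energy_ratio(p, alpha=3.0):
    """Dynamic energy of p processors at frequency f/p, relative to one
    processor at f, for equal total work and execution time, assuming
    dynamic power scales as f**alpha (alpha ~ 3 when voltage scales
    with frequency). Illustrative model, not the paper's derivation."""
    return p * (1.0 / p) ** alpha

# Four processors at f/4: 1/16 of the single-core dynamic energy.
ratio4 = energy_ratio(4)
```

This is the sense in which parallelizing the cyclostationary detector across slower cores can cut energy even though total computation is unchanged; in practice, static power and imperfect speedup reduce the gain.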

Relevance:

30.00%

Publisher:

Abstract:

In the past, many papers have shown that coating cutting tools often yields decreased wear rates and reduced coefficients of friction. Although different theories have been proposed, covering areas such as hardness theory, diffusion barrier theory, thermal barrier theory, and reduced friction theory, most have not dealt with the question of how and why coating tool substrates with hard materials such as titanium nitride (TiN), titanium carbide (TiC) and aluminium oxide (Al2O3) transforms the performance and life of cutting tools. This project discusses the complex interrelationship that encompasses the thermal barrier function and the relatively low sliding friction coefficient of TiN on an undulating tool surface, and presents the results of an investigation into the cutting characteristics and performance of EDMed surface-modified carbide cutting tool inserts. The tool inserts were coated with TiN by physical vapour deposition (PVD). PVD coating is also known as ion plating, a general term for coating methods in which the film is created by attracting ionized metal vapour (here, titanium) and ionized gas onto a negatively biased substrate surface. PVD was chosen because it is performed at a temperature of no more than 500°C, whereas the chemical vapour deposition (CVD) process is performed at a very high temperature of about 850°C and in two stages of substrate heating; the high temperatures involved in CVD affect the strength of the tool substrates. In this study, comparative cutting tests using TiN-coated control specimens with no EDM surface structures and TiN-coated EDMed tools with a crater-like surface topography were carried out on mild steel grade EN-3. Various cutting speeds were investigated, up to 40% above the tool manufacturer’s recommended speed. Fifteen minutes of cutting were carried out for each insert at the speeds investigated.

Conventional tool inserts normally have a tool life of approximately 15 minutes of cutting. After every five cuts (passes), microscopic pictures of the tool wear profiles were taken in order to monitor the progressive wear on the rake face and on the flank of the insert. The power load was monitored for each cut using an on-board meter on the CNC machine, to establish the amount of power needed for each stage of operation. The spindle drive for the machine is an 11 kW motor. The results confirmed the advantages of cutting at all speeds investigated using EDMed coated inserts, in terms of reduced tool wear and low power loads. Moreover, the surface finish on the workpiece was consistently better for the EDMed inserts. The thesis discusses the relevance of the finite element method in the analysis of metal cutting processes, so that metal machinists can design, manufacture and deliver tools to the market quickly and on time, without resorting to a trial-and-error approach for new products. Improvements in manufacturing technologies require better knowledge of the modelling of metal cutting processes. Technically, the use of computational models has great value in reducing or even eliminating the number of experiments traditionally used for tool design, process selection, machinability evaluation, and chip breakage investigations. In this work, theoretical and experimental investigations of metal machining were given special attention. Finite element analysis (FEA) was given priority in this study to predict tool wear and coating deformations during machining. Particular attention was devoted to the complicated mechanisms usually associated with metal cutting, such as interfacial friction, heat generated due to friction, severe strain in the cutting region, and high strain rates.

It is therefore concluded that a roughened contact surface comprising peaks and valleys coated with a hard material (TiN) provides wear-resisting properties, as the coating becomes entrapped in the valleys and helps reduce friction at the chip-tool interface. The contributions to knowledge:
a. Relates to a wear-resisting surface structure for application in contact surfaces and structures in metal cutting and forming tools, with the ability to give a wear-resisting surface profile.
b. Provides a technique for designing tools with a roughened surface comprising peaks and valleys covered in a conformal coating of a material such as TiN or TiC: a wear-resisting structure whose surface roughness profile is composed of valleys that entrap residual coating material during wear, thereby enabling the entrapped coating material to give improved wear resistance.
c. Provides knowledge for increased tool life through wear resistance, hardness and chemical stability at high temperatures, because of reduced friction at the tool-chip and work-tool interfaces due to the tool coating, which leads to reduced heat generation at the cutting zones.
d. Establishes that undulating surface topographies on cutting tips tend to hold coating materials longer in the valleys, thus giving enhanced protection to the tool; the tool can cut faster by 40% and last 60% longer than conventional tools on the market today.
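The reported trade-off between cutting speed and tool life can be put in context with the classical Taylor tool-life equation, V·T^n = C. The sketch below is illustrative only: the constants C and n are typical textbook-style values for carbide tooling, not values measured or fitted in this work.

```python
def taylor_life(v, C=400.0, n=0.25):
    """Taylor tool-life equation V * T**n = C, solved for life T (minutes)
    at cutting speed v (m/min). C and n here are illustrative values for
    carbide tooling, not this project's data."""
    return (C / v) ** (1.0 / n)

life_recommended = taylor_life(100.0)  # at a baseline cutting speed
life_fast = taylor_life(140.0)         # 40% above the baseline speed
```

Under this model a 40% speed increase shortens tool life sharply, which is why inserts that both cut 40% faster and last longer than conventional tooling represent a substantial practical gain.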

Resumo:

The aim of this study was to investigate whether rinsing the mouth with a carbohydrate solution could improve skill-specific fencing performance and cognitive function following a fatigue-inducing simulated bout of fencing in epee fencers. Eleven healthy, competitive epee fencers (three female; eight male; 33.9 ± 14.7 years; body mass 79 ± 16 kg; height 162 ± 54 cm) volunteered to participate in a single-blind crossover design study. During visit 1, participants completed a 1-minute lunge test and a Stroop test before and after a fatigue-inducing fencing protocol. A 30-second electroencephalography (EEG) recording was taken pre-protocol, during which participants were instructed to stay in a seated stationary position with their eyes closed. Heart rate and ratings of perceived exertion (RPE) were recorded after each fight during the fatiguing protocol. Participants mouth rinsed (10 seconds) either 25 ml of a 6.7% maltodextrin solution (CHO) or 25 ml of water (placebo) between fights and during the EEG recording. Blood lactate and glucose measurements were taken at baseline, pre- and post-protocol. All measurements and tests were repeated during a second visit to the laboratory, separated by a minimum of 5 days, in which participants were given the other solution to mouth rinse. The results showed an increase in heart rate (P < 0.05) and overall RPE (P < 0.001) over time in both trials. There were no differences in blood glucose (F(1,8) = 0.634, P = 0.4, ηp² = 0.07) or blood lactate levels (F(1,8) = 0.123, P = 0.7, ηp² = 0.01) between trials. There was a significant improvement in lunge test accuracy in the CHO trial (F(1,8) = 5.214, P = 0.05, ηp² = 0.40). However, there was no difference in response time to congruent (F(1,8) = 0.326, P = 0.58, ηp² = 0.04) or incongruent (F(1,8) = 0.189, P = 0.68, ηp² = 0.02) stimuli between trials.
In conclusion, mouth rinsing a CHO solution significantly improves the accuracy of skill-specific fencing performance but does not affect cognitive function following a fatigue-inducing fencing protocol in epee fencers.
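The effect sizes above can be recovered directly from each F statistic and its degrees of freedom, since partial eta squared for a one-way repeated-measures effect is ηp² = (F · df1) / (F · df1 + df2). A short check against the reported values:

```python
# Recover partial eta squared from an F statistic and its degrees of
# freedom: eta_p^2 = (F * df1) / (F * df1 + df2).

def partial_eta_squared(F, df1, df2):
    return (F * df1) / (F * df1 + df2)

# These reproduce the effect sizes reported in the abstract:
print(round(partial_eta_squared(5.214, 1, 8), 2))  # lunge accuracy, ~0.40
print(round(partial_eta_squared(0.634, 1, 8), 2))  # blood glucose, 0.07
```

Both computed values match the abstract's reported ηp², which confirms the statistics are internally consistent.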

Resumo:

The presence of amino groups and carbonyls renders milk fortified with ascorbic acid particularly susceptible to the reduction of available lysine and to the formation of Maillard reaction products (MRPs), such as Nε-(carboxyethyl)-L-lysine (CEL), Nε-(carboxymethyl)-L-lysine (CML), Amadori products (APs) and off-flavors. A novel approach was proposed to control the Maillard reaction (MR) in fortified milk: ascorbic acid was encapsulated in a lipid coating and the effects were tested after a lab-scale UHT treatment. Encapsulation promoted a delayed release of ascorbic acid and a reduction in the formation of MRPs. Total lysine increased by up to 45% in milk with encapsulated ascorbic acid, while reductions in CML, CEL and furosine ranged from 10% to 53% compared with control samples. The effects on the formation of amide-AGEs (advanced glycation end products) were also investigated by high-resolution mass spectrometry (HRMS), revealing that several mechanisms coincide with the MR in the presence of ascorbic acid (AA).

Resumo:

The main objective of this work was to develop an application capable of determining the diffusion times and diffusion coefficients of optical clearing agents and water inside a known type of muscle. Other chemical agents, such as medications or metabolic products, can also be studied with the implemented method. Since the diffusion times can be calculated, it is possible to describe the dehydration mechanism that occurs in the muscle. Calculating the diffusion time of an optical clearing agent makes it possible to characterize the refractive index matching mechanism of optical clearing. Using the diffusion times and diffusion coefficients of both water and clearing agents not only characterizes the optical clearing mechanisms, but also provides information about the duration and magnitude of the optical clearing effect. Such information is crucial for planning a clinical intervention that relies on optical clearing. The experimental method and the equations implemented in the developed application are described throughout this document, demonstrating their effectiveness. The application was developed in MATLAB, and the method was customized to better fit the application's needs. This significantly improved processing efficiency and reduced the time needed to obtain results; multiple validations prevent common errors, and extra functionality was added, such as saving application progress and exporting information in different formats. Tests were made using glucose measurements in muscle. Some of the data was also intentionally changed, for testing purposes, in order to obtain different simulations and results from the application. The entire project was validated by comparing the calculated results with those found in the literature, which are also described in this document.
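A minimal sketch of the kind of computation such an application performs (the study's exact equations may differ): estimate the characteristic diffusion time tau by fitting an exponential saturation curve to time-resolved measurements, then convert tau to a diffusion coefficient with the common slab-geometry approximation D = d² / (π² · tau). The sample thickness, tau and time grid below are hypothetical test values:

```python
import math

def saturation(t, tau):
    """Normalized measurement signal as the agent diffuses into the sample."""
    return 1.0 - math.exp(-t / tau)

# Synthetic measurements generated with a known diffusion time.
tau_true = 120.0                                   # seconds (hypothetical)
times = [10.0 * i for i in range(1, 60)]
signal = [saturation(t, tau_true) for t in times]

# Linearize the model: ln(1 - y) = -t / tau, then least squares
# through the origin to estimate tau.
tau_est = sum(t * t for t in times) / (
    -sum(t * math.log(1.0 - y) for t, y in zip(times, signal)))

d = 0.05                                           # sample thickness, cm (hypothetical)
D = d ** 2 / (math.pi ** 2 * tau_est)              # slab approximation
print(f"tau = {tau_est:.1f} s, D = {D:.2e} cm^2/s")
```

With noiseless synthetic data the fit recovers tau exactly; on real measurements a nonlinear least-squares fit (as MATLAB's fitting routines provide) would be used instead of the log-linearization.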

Resumo:

Sediment oxygen demand (SOD) can be a significant oxygen sink in various types of water bodies, particularly slow-moving waters with substantial organic sediment accumulation. In most settings where SOD is a concern, the prevailing hydraulic conditions are such that the impact of sediment resuspension on SOD is not considered. However, in the case of Bubbly Creek in Chicago, Illinois, the prevailing slack water conditions are interrupted by infrequent intervals of very high flow rates associated with pumped combined sewer overflow (CSO) during intense hydrologic events. These events can cause resuspension of the highly organic, nutrient-rich bottom sediments, resulting in precipitous drawdown of dissolved oxygen (DO) in the water column. While many past studies have addressed the dependence of SOD on near-bed velocity and bed shear stress prior to the point of sediment resuspension, limited research has attempted to characterize the complex and dynamic phenomenon of resuspended-sediment oxygen demand. To address this issue, a new in situ experimental apparatus, referred to as the U of I Hydrodynamic SOD Sampler, was designed to achieve a broad range of velocities and associated bed shear stresses. This allowed SOD to be analyzed across the spectrum from no sediment resuspension, associated with low velocity / bed shear stress, through full sediment resuspension, associated with high velocity / bed shear stress. The current study split SOD into two separate components: (1) SOD_NR, the sediment oxygen demand associated with non-resuspension conditions, a surface sink calculated using traditional methods to yield a value in g/m²/day; and (2) SOD_R, the oxygen demand associated with resuspension conditions, a volumetric sink most accurately characterized using non-traditional methods and units that reflect suspension in the water column (mg/L/day).
In the case of resuspension, the suspended sediment concentration was analyzed as a function of bed shear stress, and a formulation was developed to characterize SOD_R as a function of suspended sediment concentration, in a form similar to first-order biochemical oxygen demand (BOD) kinetics with a Monod DO term. The results obtained are intended to be implemented in a numerical model containing hydrodynamic, sediment transport, and water quality components, to yield oxygen demand varying in both space and time for specific flow events. Such implementation will allow evaluation of proposed Bubbly Creek water quality improvement alternatives that take into account the impact of SOD under various flow conditions. Although the findings were based on experiments specific to the conditions in Bubbly Creek, the techniques and formulations developed in this study should be applicable to similar sites.
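The volumetric resuspension term described above can be sketched as a sink that is first order in suspended sediment concentration with a Monod dissolved-oxygen term. The rate constant, half-saturation value, and simple Euler integration below are illustrative assumptions, not the study's calibrated formulation:

```python
# Hedged sketch of a volumetric resuspended-sediment oxygen demand:
# first order in suspended sediment concentration (ssc) with a Monod
# DO term. k and K_do are hypothetical, not calibrated values.

def sod_r_rate(do, ssc, k=0.02, K_do=0.5):
    """Volumetric oxygen demand in mg/L/day.
    do: dissolved oxygen (mg/L); ssc: suspended sediment (mg/L)."""
    return k * ssc * do / (K_do + do)

def simulate_do(do0=8.0, ssc=500.0, days=2.0, dt=0.001):
    """Euler-integrate DO drawdown during a resuspension event."""
    do = do0
    for _ in range(int(days / dt)):
        do = max(do - sod_r_rate(do, ssc) * dt, 0.0)
    return do

print(f"DO after a 2-day resuspension event: {simulate_do():.2f} mg/L")
```

The Monod term makes the sink roughly proportional to DO when DO is scarce and roughly constant when DO is plentiful, which is why resuspension events can draw the water column down steeply before the demand tapers off.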

Resumo:

Master's dissertation, Biomedical Sciences, 28 June 2016, Universidade dos Açores.

Resumo:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices, which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We derive a strict relation between the energy savings and common parallel system metrics. Simulation results show that our strategy can yield very significant energy savings in actual devices.
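The principle behind the strategy can be illustrated with the standard cubic power model: dynamic power P ~ C·V²·f, and with supply voltage scaled roughly linearly with frequency, P ~ f³. Running p cores at f/p preserves total throughput (assuming ideal parallelization), so the energy ratio is p·(f/p)³ / f³ = 1/p². This textbook model is an assumption for illustration, not the paper's exact derivation:

```python
# Back-of-the-envelope model for "p processors at slower frequencies
# consume less power": with dynamic power P ~ f**3 (voltage scaling
# assumed roughly linear in frequency) and ideal parallelization,
# energy at constant execution time scales as 1 / p**2.

def energy_ratio(p, f=1.0):
    """Energy of p cores at f/p relative to one core at f."""
    single = f ** 3                   # one core at full frequency
    parallel = p * (f / p) ** 3       # p cores, each at f/p
    return parallel / single

for p in (1, 2, 4, 8):
    print(f"p = {p}: energy ratio = {energy_ratio(p):.4f}")
```

Under this model, four cores at a quarter of the frequency use only 1/16 of the energy of a single full-speed core, which is the kind of saving that makes parallel cyclostationary detectors attractive for low-power devices.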