Abstract:
We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities, Beijing Nova Programme.
Abstract:
The authors appreciate the kind assistance of Miriam Lerner (ImmunArray Ltd. Company, Rehovot, Israel) with experiments involving the MicroGrid II arrayer. This research was supported by a grant (No. 1349) to EAB from the Israel Science Foundation (ISF) and a grant (No. 24/11) issued to RL by The Sidney E. Frank Foundation, also through the ISF. Additional support was obtained from the establishment of an Israeli Center of Research Excellence (I-CORE Center No. 152/11) managed by the Israel Science Foundation, from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel, from the Weizmann Institute of Science Alternative Energy Research Initiative (AERI), and from the Helmsley Foundation. The authors also appreciate the support of the European Union, Area NMP.2013.1.1-2: Self-assembly of naturally occurring nanosystems (CellulosomePlus, Project No. 604530), and an ERA-IB Consortium (EIB.12.022), acronym FiberFuel. HF and SHD acknowledge support from the Scottish Government Food Land and People programme and from BBSRC grant no. BB/L009951/1. In addition, EAB is grateful for support from the F. Warren Hellman Grant for Alternative Energy Research in Israel, administered by the Israel Strategic Alternative Energy Foundation (I-SAEF). E.A.B. is the incumbent of The Maynard I. and Elaine Wishner Chair of Bio-organic Chemistry.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; this fact, coupled with its popularity, makes CT currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the inherent trade-off between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included simple metrics of image quality, such as the contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
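To illustrate the gap between CNR and model-observer metrics, the sketch below computes a CNR and a non-prewhitening (NPW) matched-filter detectability index for a synthetic Gaussian lesion in white noise. All dimensions, contrasts, and noise levels are invented for illustration, not the dissertation's measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detection task: a Gaussian lesion in uncorrelated (white) noise.
x = np.arange(-16, 16)
X, Y = np.meshgrid(x, x)
signal = 10.0 * np.exp(-(X**2 + Y**2) / (2 * 3.0**2))  # 10 HU peak contrast
noise_sigma = 15.0                                     # noise std (HU)

def cnr(contrast, sigma):
    """Naive contrast-to-noise ratio."""
    return contrast / sigma

def npw_dprime(sig, sigma, n_noise=2000):
    """Non-prewhitening matched filter: the template is the expected signal.
    d' = (t . t) / std(t . n), estimated from sampled noise realizations."""
    t = sig.ravel()
    signal_response = t @ t
    noise_responses = rng.normal(0.0, sigma, (n_noise, t.size)) @ t
    return signal_response / noise_responses.std()
```

For white noise this estimator approaches ||t||/sigma; with correlated, reconstruction-dependent noise the NPW and (prewhitening) Hotelling observers diverge, which is one reason a scalar CNR cannot track human performance across algorithms.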
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
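For a rectangular ROI, the ensemble NPS estimate reduces to a scaled, averaged 2D periodogram of zero-mean noise realizations. A minimal sketch follows; synthetic white noise stands in for the subtraction-derived noise images, and the dissertation's irregular-ROI method is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for 50 repeated-scan noise images (zero-mean white noise here;
# real CT noise is correlated). Pixel pitch is an illustrative value.
n, dx = 64, 0.5                                    # ROI width (px), pitch (mm)
noise_images = rng.normal(0.0, 20.0, (50, n, n))   # 50 realizations, 20 HU std

def nps_2d(noise_stack, pix):
    """Ensemble NPS for a rectangular ROI: scaled, averaged 2D periodogram."""
    reals, ny, nx = noise_stack.shape
    stack = noise_stack - noise_stack.mean(axis=(1, 2), keepdims=True)
    dft = np.fft.fft2(stack)                       # FFT over the last two axes
    return (pix * pix / (ny * nx)) * (np.abs(dft) ** 2).mean(axis=0)

nps = nps_2d(noise_images, dx)
df = 1.0 / (n * dx)              # frequency sampling interval (1/mm)
variance = nps.sum() * df * df   # Parseval check: NPS integrates to the variance
```

The Parseval identity (the NPS integrates to the pixel variance) is a useful sanity check on the normalization when adapting such an estimator.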
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
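A lesion morphology of the kind described (size, contrast, and a smooth edge profile captured in one analytical expression) can be sketched as a radial function. The logistic edge form and every parameter value below are illustrative assumptions, not the dissertation's actual models:

```python
import numpy as np

def lesion_profile(r, contrast=-15.0, radius=5.0, edge_width=1.0):
    """Radially symmetric lesion model: a flat plateau of the given contrast
    (HU) with a smooth logistic edge. Parameters are illustrative only."""
    return contrast / (1.0 + np.exp((r - radius) / edge_width))

# Voxelize on a pixel grid; adding this ROI to a patient image yields a
# "hybrid" image with exactly known lesion morphology and location.
x = np.arange(-16, 16, 0.5)
X, Y = np.meshgrid(x, x)
lesion = lesion_profile(np.sqrt(X**2 + Y**2))
```

Shrinking `edge_width` sharpens the boundary toward a hard-edged disk, while `radius` and `contrast` set size and attenuation, so one equation spans a family of lesion appearances.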
Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patients at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected 5, 3, and 4 of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
This dissertation documents the results of a theoretical and numerical study of time dependent storage of energy by melting a phase change material. The heating is provided along invading lines, which change from single-line invasion to tree-shaped invasion. Chapter 2 identifies the special design feature of distributing energy storage in time-dependent fashion on a territory, when the energy flows by fluid flow from a concentrated source to points (users) distributed equidistantly on the area. The challenge in this chapter is to determine the architecture of distributed energy storage. The chief conclusion is that the finite amount of storage material should be distributed proportionally with the distribution of the flow rate of heating agent arriving on the area. The total time needed by the source stream to ‘invade’ the area is cumulative (the sum of the storage times required at each storage site), and depends on the energy distribution paths and the sequence in which the users are served by the source stream. Chapter 3 shows theoretically that the melting process consists of two phases: “invasion” thermal diffusion along the invading line, which is followed by “consolidation” as heat diffuses perpendicularly to the invading line. This chapter also reports the duration of both phases and the evolution of the melt layer around the invading line during the two-dimensional and three-dimensional invasion. It also shows that the amount of melted material increases in time according to a curve shaped as an S. These theoretical predictions are validated by means of numerical simulations in chapter 4. This chapter also shows that the heat transfer rate density increases (i.e., the S curve becomes steeper) as the complexity and number of degrees of freedom of the structure are increased, in accord with the constructal law. The optimal geometric features of the tree structure are detailed in this chapter. 
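The invasion-consolidation picture can be summarized with standard diffusion scalings (a hedged paraphrase assuming conduction-dominated melting with thermal diffusivity $\alpha$ and an invading line of length $L(t)$; these are textbook scaling arguments, not the chapter's exact results):

```latex
% Melt layer growing by transverse diffusion around the invading line:
\delta(t) \sim \sqrt{\alpha t}
% Melted area (per unit depth) while the line of length L(t) is invading:
A_{\mathrm{melt}}(t) \sim L(t)\,\delta(t)
% Growth is slow at early times, steep while the line is still invading,
% and saturates during consolidation, which yields the S-shaped curve.
```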
Chapter 5 documents a numerical study of time-dependent melting in which the heat transfer is convection dominated, unlike in chapters 3 and 4, where the melting is ruled by pure conduction. In accord with constructal design, the search is for effective heat-flow architectures. The volume-constrained improvement of the designs for heat flow begins with the simplest structure, in which a single line serves as heat source. Next, the heat source is endowed with freedom to change its shape as it grows. The objective of the numerical simulations is to discover the geometric features that lead to the fastest melting process. The results show that the heat transfer rate density increases as the complexity and number of degrees of freedom of the structure are increased. Furthermore, the angles between heat invasion lines have a minor effect on the global performance compared to other degrees of freedom: the number of branching levels, stem length, and branch lengths. The effect of natural convection in the melt zone is also documented.
Abstract:
Background
Postpartum hemorrhage is the most significant contributor to maternal mortality globally, claiming 140,000 lives annually. It is a leading cause of maternal death in South Africa, where the literature indicates that 80 percent of postpartum hemorrhage deaths are avoidable. Ghana, as of 2010, experienced 2,700 maternal deaths annually, primarily because of poor quality of care in health facilities and difficulty accessing services. As per WHO recommendations, uterotonics are integral to treating postpartum hemorrhage as soon as it is diagnosed. In case of persistent bleeding or limited availability of uterotonics, the uterine balloon tamponade (UBT) can be used as a second line of defense. If both these measures fail to counter the bleeding, providers must perform surgical interventions. Literature on the UBT, as one tool in the protocol to address postpartum hemorrhage, has shown success rates ranging from 60 to 100 percent. Despite its potential to lower the number of postpartum hemorrhage deaths, the UBT has not been widely incorporated in South Africa or Ghana. The aim of this study is to describe the barriers involved in integrating the UBT into the South African and Ghanaian health systems to address postpartum hemorrhage.
Methods
The study took place in multiple sites in South Africa (Cape Town, Johannesburg, Durban and Mpumalanga) and in Accra, Ghana. South Africa and Ghana were selected because postpartum hemorrhage contributes greatly to their maternal mortality numbers and there is potential in both countries to lower those rates through greater use of the UBT. A total of 25 participants were interviewed through purposive sampling, snowball sampling and participant referrals, and included various categories of stakeholders integral to the integration process of a medical device. Individual in-depth interviews were used for data collection, with interview questions tailored to each stakeholder category. The focus of the interviews was on the protocol used to counter postpartum hemorrhage, the frequency with which the UBT is used as part of the protocol, and the process of integrating it into the South African and Ghanaian health systems. The data collected were coded using NVivo and analyzed using content analysis.
Results
The barriers to integration of the uterine balloon tamponade to address postpartum hemorrhage in South Africa and Ghana were evident on the political, economic and health delivery levels. The results indicated that the barriers to integration in South Africa included the low recognition of postpartum hemorrhage as a problem, the lack of clarity surrounding the role of the Medicines Control Council as a regulatory body for medical devices, and low awareness of the UBT as an intervention to control postpartum hemorrhage. The barriers in Ghana were the cash constraints experienced by the Ghana Health Services to fund medical devices, a heavy reliance on donors for funding, and the lack of consistent knowledge on processes involving clinical trials for new medical devices in Ghana.
Conclusion
Existing literature on methods to counter postpartum hemorrhage to reduce maternal mortality has focused on and emphasized the efficacy of the UBT. Despite overwhelming evidence supporting the use of the UBT, many health systems across the world, particularly low-income countries, do not have access to the device owing to numerous barriers in integrating the device into obstetric care. This study illustrates the need to focus on incorporating the UBT into health systems for greater availability to health workers and its use as standard of care. Ultimately, this study can be used as a stepping-stone for more research on this subject, providing evidence to influence policymakers to integrate the UBT into their protocols for postpartum hemorrhage response.
Abstract:
Dose reductions, or even the complete cessation of chemotherapy, are often the consequence of a drop in the number of neutrophils, the most abundant white blood cells in the blood. This reduction in the absolute neutrophil count, known as myelosuppression, is precipitated by the non-specific lethal effects of anti-cancer drugs, which, alongside their therapeutic effect, are also toxic to healthy cells. To mitigate this myelosuppressive impact, patients are administered recombinant human granulocyte colony-stimulating factor (rhG-CSF), an exogenous form of G-CSF, the hormone responsible for stimulating neutrophil production and release into the bloodstream. Although the benefits of prophylactic G-CSF treatment during chemotherapy are well established, administration protocols remain poorly defined and are frequently determined ad libitum by clinicians. With the aim of improving therapeutic dosing and rationalizing the use of rhG-CSF during chemotherapy, we developed a physiological model of granulopoiesis that incorporates current state-of-the-art knowledge of neutrophil production from hematopoietic stem cells in the bone marrow. Into this physiological model we integrated pharmacokinetic/pharmacodynamic (PK/PD) models of two drugs: PM00104 (Zalypsis®), an anti-cancer drug, and rhG-CSF (filgrastim).
Drawing on fundamental physiological principles, we estimated all parameters without resorting to data fitting, which allowed us to predict clinical data from 172 patients undergoing the CHOP14 protocol (6 chemotherapy cycles with a 14-day period, with rhG-CSF administered from day 4 to day 13 post-chemotherapy). Using this physio-PK/PD model, we showed that the number of rhG-CSF administrations could be reduced from ten (current practice) to four or even three, provided that the start of prophylactic rhG-CSF treatment is delayed. To address the clinical applicability of our modeling approach, we investigated the impact of the PK variability present in a patient population on the model's predictions by integrating population PK (Pop-PK) models of the two drugs. Considering cohorts of 500 in silico patients for each of five plausible variability scenarios, and using three clinical markers (the time to neutrophil nadir, the nadir value, and the area under the concentration-effect curve), we established that there was no significant difference in model predictions between the typical patient and the population. This demonstrates the robustness of the approach we developed, which is akin to a quantitative systems pharmacology (QSP) approach. Motivated by the use of rhG-CSF in the treatment of other diseases, such as periodic pathologies like cyclic neutropenia, we then extended the study of the model to the context of dynamical diseases. Having shown that the cytokine-feedback paradigm is not valid for exogenous administration of G-CSF mimetics, we developed a novel physiological PK/PD model comprising both free and bound G-CSF concentrations.
This new PK model also required changes to the PD model, since it allowed us to track the concentrations of G-CSF bound to neutrophils. We showed that the underlying assumption of equilibrium between the free and bound concentrations, according to the law of mass action, no longer holds for G-CSF at endogenous concentrations, and would in fact lead to an overestimation of the drug's renal clearance. In doing so, we were able to reproduce clinical data obtained under various conditions (exogenous G-CSF administration, PM00104 administration, CHOP14). We also provided a mechanistic explanation of the physiological response to the two drugs. Finally, to highlight the integrative pharmacology approach adopted in this thesis, we demonstrated its value for elucidating and reconstructing complex living systems, drawing a parallel with other scientific disciplines, such as paleontology and forensics, where a similar approach has amply proven itself. We also discussed the potential of quantitative systems pharmacology applied to drug development and translational medicine, using the physio-PK/PD model we developed.
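The free/bound kinetics at the heart of the revised PK model can be illustrated with a minimal mass-action sketch: a ligand produced at a constant rate binds reversibly to a finite receptor pool and is cleared both renally (free form) and by internalization (bound form). All parameter values are invented for illustration; this is not the thesis's fitted model:

```python
# Minimal free/bound ligand kinetics (G-CSF-like), forward-Euler integration.
# Parameter values are illustrative assumptions, not the thesis's constants.
def simulate(t_end=100.0, dt=0.01):
    g_free, g_bound = 0.0, 0.0
    r_total = 1.0                          # total receptor pool (arbitrary units)
    prod, k_renal, k_int = 0.1, 0.05, 0.2  # production, renal, internalization
    k_on, k_off = 2.0, 0.1                 # mass-action binding rates
    for _ in range(int(t_end / dt)):
        net_binding = k_on * g_free * (r_total - g_bound) - k_off * g_bound
        g_free += dt * (prod - k_renal * g_free - net_binding)
        g_bound += dt * (net_binding - k_int * g_bound)
    return g_free, g_bound

g_free_ss, g_bound_ss = simulate()   # approaches steady state for t_end >> 1/k_renal
```

Because the bound pool saturates, forcing an instantaneous free/bound equilibrium would misattribute the receptor-mediated (internalization) elimination to renal clearance at low concentrations, which is the kind of overestimation the thesis describes.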
Abstract:
Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two recent challenges have emerged as modern software systems start to treat logging as a core aspect of their software. In particular, 1) infrastructural challenges have emerged due to the plethora of logging libraries available today and 2) processing challenges have emerged due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve (commonly this is done by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects. We find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For processing challenges, we explore the different factors which can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, Cloudstack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once.
We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
Abstract:
Veterinary medicines (VMs) from the agricultural industry can enter the environment in a number of ways. This includes direct exposure through aquaculture, accidental spillage and disposal, and indirect entry by leaching from manure or runoff after treatment. Many compounds used in animal treatments have ecotoxic properties that may have chronic or sometimes lethal effects when they come into contact with non-target organisms. VMs enter the environment in mixtures, potentially having additive effects. Traditional ecotoxicology tests are used to determine the lethal and sometimes reproductive effects on freshwater and terrestrial organisms. However, organisms used in ecotoxicology tests can be unrepresentative of the populations that are likely to be exposed to the compound in the environment. Most often the tests address single compound toxicity, but mixture effects may be significant and should be included in ecotoxicology testing. This work investigates the use, measured environmental concentrations (MECs) and potential impact of sea lice treatments on salmon farms in Scotland. Alternative methods for ecotoxicology testing, including mixture toxicity and the use of in silico techniques to predict the chronic impact of VMs on different species of aquatic organisms, were also investigated. The Scottish Environmental Protection Agency (SEPA) provided information on the use of five sea lice treatments from 2008-2011 on Scottish salmon farms. This information was combined with the recently available data on sediment MECs for the years 2009-2012, provided by SEPA, using ArcGIS 10.1. In-depth analysis of these data showed that, of a total of 55 sites, 30 had a MEC higher than the maximum allowable concentration (MAC) set by SEPA for emamectin benzoate, and 7 sites had a MEC higher than the MAC for teflubenzuron.
A number of sites that were up to 16 km away from the nearest salmon farm reported as using either emamectin benzoate or teflubenzuron measured positive for the two treatments. There was no relationship between current direction and the distribution of the sea lice treatments, nor was there any evidence for alternative sources of the compounds, e.g. land treatments. The sites that had MECs higher than the MAC could pose a risk to non-target organisms and disrupt the species dynamics of the area. There was evidence that some marine protected sites might be at risk of exposure to these compounds. To complement this work, the acute mixture toxicity of the 5 sea lice treatments, plus one major metabolite, 3-phenoxybenzoic acid (3PBA), was measured using an assay based on the bioluminescent bacterium Aliivibrio fischeri. When exposed to the 5 sea lice treatments and 3PBA, A. fischeri showed a response to 3PBA, emamectin benzoate and azamethiphos, as well as combinations of the three. In order to establish any additive effect of the sea lice treatments, the efficacy of two mixture prediction equations, concentration addition (CA) and independent action (IA), was tested using the results from single compound dose response curves. In this instance IA was the more effective prediction method, with a linear regression confidence interval of 82.6% compared with 22.6% for CA. In silico molecular docking was carried out to predict the chronic effects of 15 VMs (including the five used for sea lice control). Molecular docking has been proposed as an alternative screening method for the chronic effects of large animal treatments on non-target organisms. Oestrogen receptor alpha (ERα) of 7 non-target bony fish and the African clawed frog Xenopus laevis were modelled using SwissModel.
These models were then ‘docked’ to oestradiol, the synthetic oestrogen ethinylestradiol, two known xenoestrogens, dichlorodiphenyltrichloroethane (DDT) and bisphenol A (BPA), the anti-oestrogen breast cancer treatment tamoxifen, and the 15 VMs using AutoDock 4. Based on the results of this work, four VMs were identified as possible xenoestrogens or anti-oestrogens: cypermethrin, deltamethrin, fenbendazole and teflubenzuron. Further investigation of these four VMs using in vitro assays is suggested as future work. A modified recombinant yeast oestrogen screen (YES) was attempted using the cDNA of the ERα of the zebrafish Danio rerio and the rainbow trout Oncorhynchus mykiss. Due to time constraints and difficulties with the cloning protocols, this work could not be completed. Such in vitro assays would allow further investigation of the oestrogenic potential of the highlighted VMs. In conclusion, VMs used as sea lice treatments, such as teflubenzuron and emamectin benzoate, may be more persistent, and spread more widely in the environment, than previously thought. Mixtures of sea lice treatments have been found to persist together in the environment, and the effects of these mixtures on the bacterium A. fischeri can be predicted using the IA equation. Finally, molecular docking may be a suitable tool to predict chronic endocrine-disrupting effects and to identify varying degrees of impact on the ERα of nine species of aquatic organisms.
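The two mixture prediction models compared above have standard textbook forms: concentration addition sums the toxic units of the components, while independent action multiplies their probabilities of non-response. A minimal sketch in Python (the function names and example numbers are illustrative, not taken from this work):

```python
def ca_ec50(fractions, ec50s):
    """Concentration addition: EC50 of a mixture whose components are
    present at relative fractions p_i (summing to 1), from the single-
    compound EC50 values: EC50_mix = 1 / sum(p_i / EC50_i)."""
    return 1.0 / sum(p / e for p, e in zip(fractions, ec50s))

def ia_effect(effects):
    """Independent action: combined fractional effect from individual
    effects E_i in [0, 1]: E_mix = 1 - prod(1 - E_i)."""
    prod = 1.0
    for e in effects:
        prod *= 1.0 - e
    return 1.0 - prod

# hypothetical two-compound mixture, equal fractions
mix_ec50 = ca_ec50([0.5, 0.5], [2.0, 4.0])  # -> 2.666... (same units as the EC50s)
combined = ia_effect([0.2, 0.5])            # -> 0.6
```

Comparing such predictions against the measured mixture dose-response curve is what distinguishes CA from IA in practice, as done for the A. fischeri assay above.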
Abstract:
The importance of ion channels in the hallmarks of many cancers is increasingly recognised. This article reviews current knowledge of the expression of members of the voltage-gated calcium channel family (CaV) in cancer at the gene and protein level and discusses their potential functional roles. The ten members of the CaV channel family are classified according to the expression of their pore-forming α-subunit; moreover, co-expression of the accessory α2δ, β and γ subunits confers a spectrum of biophysical characteristics, including the voltage dependence of activation and inactivation, current amplitude and activation/inactivation kinetics. CaV channels have traditionally been studied in excitable cells, including neurones, smooth muscle, skeletal muscle and cardiac cells, and drugs targeting the channels are used in the treatment of hypertension and epilepsy. There is emerging evidence that several CaV channels are differentially expressed in cancer cells compared to their normal counterparts. Interestingly, a number of CaV channels also have non-canonical functions and are involved in the transcriptional regulation of the expression of other proteins, including potassium channels. Pharmacological studies show that the canonical function of CaV channels contributes to the fundamental biology of proliferation, cell-cycle progression and apoptosis. This raises the intriguing possibility that calcium channel blockers, approved for the treatment of other conditions, could be repurposed to treat particular cancers. Further research will reveal the full extent of both the canonical and non-canonical functions of CaV channels in cancer, and whether calcium channel blockers are beneficial in cancer treatment.
Abstract:
The Family Model – a transgenerational approach to mental health in families. This workshop will provide an overview of The Family Model (TFM) and its use in promoting and facilitating a transgenerational family focus in mental health services over the past 10-15 years. Each of the speakers will address a different perspective, including service user/consumer, clinical practice, education and training, research and policy. Adrian Falkov (chair) will provide an overview of TFM to set the scene, together with a ‘policy to practice’ perspective based on the use of TFM in Australia.

The Family Model: a personal (consumer/patient) perspective. Heide Lloyd | United Kingdom. Heide will describe her experiences as a child, adult, parent and grandparent, using TFM as the structure around which to ‘weave’ her story. She will demonstrate how TFM has helped her to understand the impact of symptoms on herself and her family, and how she has used it in her management of symptoms and recovery.

The Family Model: an education and training perspective. Marie Diggins | United Kingdom; Dr Bente Weimand | Norway. This combined UK and Norwegian presentation will cover the historical background to TFM and its use in eLearning (the Social Care Institute for Excellence) and a number of other UK initiatives, together with a description of the postgraduate masters course at Oslo and Akershus University College that uses TFM.

The Family Model: a research perspective. Dr Anne Grant | Northern Ireland. Anne Grant will describe how she used TFM as the theoretical framework for her PhD on family-focused (nursing) practice in Ireland.

The Family Model: a service systems perspective. Mary Donaghy | Northern Ireland; Dr Adrian Falkov | Australia. Mary Donaghy will discuss how TFM has been used to support and facilitate a cross-service ‘whole of system’ change programme in Belfast (Northern Ireland) to achieve improved family-focused practice. She will demonstrate its utility in achieving a broader approach to service design, delivery and evaluation.
Abstract:
Cybercriminals ramp up their efforts with sophisticated techniques while defenders only gradually update their typical security measures. Attackers often have a long-term interest in their targets. However, due to a number of factors such as scale, architecture and nonproductive traffic, such attempts are difficult to detect using typical intrusion detection techniques. Cyber early warning systems (CEWS) aim to flag such attempts in their nascent stages using preliminary indicators. The design and implementation of such systems involves numerous research challenges, such as a generic set of indicators, intelligence gathering, uncertainty reasoning and information fusion. This paper discusses these challenges and presents the reader with compelling motivation. A carefully deployed empirical analysis using a real-world attack scenario and a real network traffic capture is also presented.
Abstract:
This work studies the uplink of a cellular network with zero-forcing (ZF) receivers under imperfect channel state information at the base station (BS). More specifically, apart from pilot contamination, we investigate the effect of the time variation of the channel due to the users' movement relative to the BS. Our contributions include analytical expressions for the sum-rate with a finite number of BS antennas, as well as the asymptotic limits with infinite power and an infinite number of BS antennas, respectively. The numerical results provide interesting insights into how user mobility degrades the system performance, extending previous results in the literature.
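For context, a zero-forcing receiver inverts the channel matrix via its pseudo-inverse, x̂ = (HᴴH)⁻¹Hᴴy, which nulls inter-user interference at the cost of noise amplification. A minimal numerical sketch (the dimensions and symbols are made up for illustration and do not reproduce the paper's analysis):

```python
import numpy as np

def zf_detect(H, y):
    """Zero-forcing detection: x_hat = (H^H H)^{-1} H^H y,
    computed via the Moore-Penrose pseudo-inverse of H."""
    return np.linalg.pinv(H) @ y

# K single-antenna users transmitting to a BS with M antennas (M >= K)
rng = np.random.default_rng(0)
M, K = 8, 2
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j])  # transmitted symbols (QPSK-like, illustrative)
y = H @ x                        # noiseless received vector
x_hat = zf_detect(H, y)          # recovers x exactly in the noiseless case
```

With noise added to y, the same detector yields x plus amplified noise, which is where the sum-rate analysis in the paper comes in.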
Abstract:
Large-scale multiple-input multiple-output (MIMO) communication systems can bring substantial improvements in spectral and/or energy efficiency, due to the excess degrees of freedom and huge array gain. However, large-scale MIMO is expected to be deployed with lower-cost radio frequency (RF) components, which are particularly prone to hardware impairments. Unfortunately, compensation schemes cannot remove the impact of hardware impairments completely, so a certain amount of residual impairment always remains. In this paper, we investigate the impact of residual transmit RF impairments (RTRI) on the spectral and energy efficiency of training-based point-to-point large-scale MIMO systems, and seek to determine the optimal training length and number of antennas that maximize the energy efficiency. We derive deterministic equivalents of the signal-to-interference-and-noise ratio (SINR) with zero-forcing (ZF) receivers, as well as the corresponding spectral and energy efficiency, which are shown to be accurate even for small numbers of antennas. Through an iterative sequential optimization, we find that the optimal training length of systems with RTRI can be smaller than that of systems with ideal hardware in the moderate SNR regime, and larger in the high SNR regime. Moreover, we observe that RTRI can significantly decrease the optimal number of transmit and receive antennas.
Abstract:
Densification is key to greater throughput in cellular networks. The full potential of coordinated multipoint (CoMP) can be realized by massive multiple-input multiple-output (MIMO) systems, where each base station (BS) has very many antennas. However, the improved throughput comes at the price of more infrastructure: hardware cost and circuit power consumption scale linearly/affinely with the number of antennas. In this paper, we show that circuit-aware system design can make the circuit power increase with only the square root of the number of antennas. To this end, we derive achievable user rates for a system model with hardware imperfections and show how the level of imperfection can be gradually increased while maintaining high throughput. The connection between this scaling law and the circuit power consumption is established for different circuits at the BS.
Abstract:
Much of the bridge stock on major transport links in North America and Europe was constructed in the 1950s and 1960s and has since deteriorated or is carrying loads far in excess of the original design loads. Structural health monitoring (SHM) systems can provide valuable information on bridge capacity, but the application of such systems is currently limited by access and bridge type. This paper investigates the use of computer vision systems for SHM. A series of field tests was carried out to assess the accuracy of displacement measurements obtained by contactless methods. A video image of each test was processed using a modified version of the optical flow tracking method to track displacement. These results were validated against an established measurement method using linear variable differential transformers (LVDTs). The displacements calculated by the algorithm compared well with the validation measurements, agreeing to within 2% of the LVDT values; a number of post-processing methods were then applied in an attempt to reduce this error.
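The underlying idea of vision-based displacement measurement can be illustrated with a toy example: pick a reference patch in the first frame, find where it reappears in a later frame, and read off the pixel shift. The sketch below uses simple sum-of-squared-differences template matching on synthetic frames; it is not the modified optical-flow algorithm used in the paper, and all names and values are illustrative:

```python
import numpy as np

def track_displacement(ref, frame, patch, search=10):
    """Locate a patch (row, col, height, width) taken from the reference
    frame inside a later frame by minimising the sum of squared
    differences over integer pixel shifts in [-search, search].
    The patch must stay inside the frame for every candidate shift."""
    r0, c0, h, w = patch
    template = ref[r0:r0 + h, c0:c0 + w]
    best, best_shift = np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = frame[r0 + dr:r0 + dr + h, c0 + dc:c0 + dc + w]
            ssd = np.sum((cand - template) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dr, dc)
    return best_shift  # (row, col) displacement in pixels

# synthetic frames: a bright target that moves 3 px down and 2 px right
ref = np.zeros((64, 64))
ref[20:30, 20:30] = 1.0
frame = np.zeros((64, 64))
frame[23:33, 22:32] = 1.0
shift = track_displacement(ref, frame, patch=(18, 18, 14, 14))  # -> (3, 2)
```

A real system adds sub-pixel interpolation and a pixel-to-millimetre calibration, which is where comparison against the LVDT ground truth becomes meaningful.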