954 results for Chapter 7 Bankruptcy


Relevance: 80.00%

Abstract:

The Introduction gives a brief résumé of the biologically important aspects of 5-aminoimidazole-4-carboxamide (1) and explores, in depth, the synthetic routes to this imidazole. All documented reactions of 5-aminoimidazole-4-carboxamide are reviewed in detail, with particular emphasis on the preparation and subsequent coupling reactions of 5-diazoimidazole-4-carboxamide (6). A series of thirteen novel 5-amino-2-arylazoimidazole-4-carboxamide derivatives (117-129) was prepared by the coupling of aryldiazonium salts with 5-aminoimidazole-4-carboxamide. Chemical modification of these azo-dyes resulted in the preparation of eight previously unknown acyl derivatives (136-143). Interaction of 5-amino-2-arylazoimidazole-4-carboxamides with ethyl formate in sodium ethoxide effected pyrimidine ring closure to the novel 8-arylazohypoxanthines (144 and 145). Several reductive techniques were employed in an effort to obtain the elusive 2,5-diaminoimidazole-4-carboxamide (71), a candidate chemotherapeutic agent, from the arylazoimidazoles. No success can be reported, although 5-amino-2-(3-aminoindazol-2-yl)imidazole-4-carboxamide (151) was isolated, due to partial reduction and intramolecular cyclisation of 5-amino-2-(2-cyanophenylazo)imidazole-4-carboxamide (122). Further possible synthetic approaches to the diaminoimidazole are discussed in Chapter 4. An interesting degradation of a known unstable nitrohydrazone is described in Chapter 5. This resulted in formation of 1,1-bis(pyrazol-3-ylazo)-1-nitroethane (164) instead of the expected cyclisation to a bicyclic tetrazine N-oxide. An improved preparation of 5-diazoimidazole-4-carboxamide has been achieved, and the diazo-azole formed cycloadducts with isocyanates to yield the hitherto unknown imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-ones. Eleven derivatives (167-177) of this new ring system were prepared and characterised. Chemical and spectroscopic investigation showed this ring system to be unstable under certain conditions, and a comparative study of stability within the group has been made. "Retro-cycloaddition" under protic and photolytic conditions was an unexpected property of the 6-substituted imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-ones. Selected examples of the imidazotetrazinone ring system were tested for antitumour activity. The results of biological evaluation are given in Chapter 7 and have culminated in a patent application by the collaborating body, May and Baker Ltd. One compound, 3-carbamoyl-6-(2-chloroethyl)imidazo[5,1-d][1,2,3,5]tetrazin-7(6H)-one (175), shows striking antitumour activity in rodent test systems.

Relevance: 80.00%

Abstract:

This thesis is concerned with Organisational Problem Solving. The work reflects the complexities of organisational problem situations and the eclectic approach that has been necessary to gain an understanding of the processes involved. The thesis is structured into three main parts. Part I describes the author's understanding of problems and suitable approaches. Chapter 2 identifies the Transcendental Realist (TR) view of science (Harre 1970, Bhaskar 1975) as the best general framework for identifying suitable approaches to complex organisational problems. Chapter 3 discusses the relationship between Checkland's methodology (1972) and TR. The need to generate iconic (explanatory) models of the problem situation is identified, and the ability of viable system modelling to supplement the modelling stage of the methodology is explored in Chapter 4. Chapter 5 builds further on the methodology to produce an original iconic model of the methodological process. The model characterises the mechanisms of organisational problem situations as well as desirable procedural steps. The Weltanschauungen (W's) or "world views" of key actors are recognised as central to the mechanisms involved. Part II describes the experience which prompted the theoretical investigation. Chapter 6 describes the first year of the project. The success of this stage is attributed to the predominance of a single W. Chapter 7 describes the changes in the organisation which made the remaining phase of the project difficult. These difficulties are attributed to a failure to recognise the importance of differing W's. Part III revisits the theoretical and organisational issues. Chapter 8 identifies a range of techniques embodying W's which are compatible with the framework of Part I and which might usefully supplement it. Chapter 9 characterises possible W's in the sponsoring organisation. Throughout the work, an attempt is made to reflect the process as well as the product of the author's learning.

Relevance: 80.00%

Abstract:

This investigation aimed to pinpoint the elements of motor timing control that are responsible for the increased variability commonly found in children with developmental dyslexia on paced or unpaced motor timing tasks (Chapter 3). Such temporal processing abilities are thought to be important for developing the appropriate phonological representations required for the development of literacy skills. Similar temporal processing difficulties arise in other developmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD). Motor timing behaviour in developmental populations was examined in the context of models of typical human timing behaviour, in particular the Wing-Kristofferson model, allowing estimation of the contribution of different timing control systems, namely timekeeper and implementation systems (Chapter 2 and Methods Chapters 4 and 5). Research examining timing in populations with dyslexia and ADHD has been inconsistent in the application of stimulus parameters, and so the first investigation compared motor timing behaviour across different stimulus conditions (Chapter 6). The results question the suitability of visual timing tasks, which produced greater performance variability than auditory or bimodal tasks. Following an examination of the validity of the Wing-Kristofferson model (Chapter 7), the model was applied to time series data from an auditory timing task completed by children with reading difficulties and matched control groups (Chapter 8). Expected group differences in timing performance were not found; however, associations between performance and measures of literacy and attention were present. Results also indicated that measures of attention and literacy dissociated in their relationships with components of timing, with literacy ability being correlated with timekeeper variance and attentional control with implementation variance. It is proposed that these timing deficits associated with reading difficulties are attributable to central timekeeping processes, and so the contribution of error correction to timing performance was also investigated (Chapter 9). Children with lower scores on measures of literacy and attention were found to have a slower or failed correction response to phase errors in timing behaviour. Results from the series of studies suggest that the motor timing difficulty in poor reading children may stem from failures in the judgement of synchrony due to greater tolerance of uncertainty in the temporal processing system.
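For readers unfamiliar with the Wing-Kristofferson model referred to above, the standard variance decomposition it rests on (a textbook formulation, not wording taken from this thesis) is sketched below:

```latex
% Wing-Kristofferson two-process timing model (standard formulation)
% I_j : j-th inter-response interval, C_j : central timekeeper interval,
% M_j : peripheral motor implementation delay (all mutually independent)
I_j = C_j + M_{j+1} - M_j
\operatorname{Var}(I_j) = \sigma_C^2 + 2\sigma_M^2        % total interval variance
\operatorname{Cov}(I_j, I_{j+1}) = -\sigma_M^2            % lag-one autocovariance
% The negative lag-one autocovariance estimates the implementation (motor)
% variance; the timekeeper variance is what remains of the total variance.
```

In this decomposition, the timekeeper and implementation variances are the two components that the abstract associates with literacy ability and attentional control, respectively.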

Relevance: 80.00%

Abstract:

Research in the present thesis is focused on the norms, strategies, and approaches which translators employ when translating humour in Children's Literature from English into Greek. It is based on process-oriented descriptive translation studies, since the focus is on investigating the process of translation. Viewing translation as a cognitive process and a problem-solving activity, this thesis employs Think-Aloud Protocols (TAPs) in order to investigate translators' minds. As it is not possible to directly observe the human mind at work, an attempt is made to ask the translators themselves to reveal their mental processes in real time by verbalising their thoughts while carrying out a translation task involving humour. In this study, thirty participants at three different levels of expertise in translation competence, i.e. ten beginner, ten competent, and ten expert translators, were requested to translate two humorous extracts from the fictional diary novel The Secret Diary of Adrian Mole, Aged 13 ¾ by Sue Townsend (1982) from English into Greek. As they translated, they were asked to verbalise their thoughts and explain them, whenever possible, so that their strategies and approaches could be detected and, subsequently, the norms that govern these strategies and approaches could be revealed. The thesis consists of four parts: the introduction, the literature review, the study, and the conclusion, and is developed in eleven chapters. The introduction contextualises the study within translation studies (TS) and presents its rationale, research questions, aims, and significance. Chapters 1 to 7 present an extensive and inclusive literature review identifying the principal axioms that guide and inform the study. In these seven chapters the following areas are critically introduced: Children's Literature (Chapter 1), Children's Literature Translation (Chapter 2), Norms in Children's Literature (Chapter 3), Strategies in Children's Literature (Chapter 4), Humour in Children's Literature Translation (Chapter 5), Development of Translation Competence (Chapter 6), and Translation Process Research (Chapter 7). In Chapters 8-11 the fieldwork is described in detail. The pilot and the main study are described with reference to the environments and setting, the participants, the researcher-observer, the data and their analysis, and the limitations of the study. The findings of the study are presented and analysed in Chapter 9. Three models are then suggested for systematising translators' norms, strategies, and approaches, thus filling the existing gap in the field. Pedagogical norms (e.g. appropriateness/correctness, familiarity, simplicity, comprehensibility, and toning down), literary norms (e.g. sound of language and fluency), and source-text norms (e.g. equivalence) were revealed to be the most prominent general and specific norms governing the translators' strategies and approaches in the process of translating humour in ChL. The data also revealed that monitoring and communication strategies (e.g. additions, omissions, and exoticism) were the prevalent strategies employed by translators. In Chapter 10 the main findings and potential secondary benefits (beneficial outcomes) are discussed on the basis of the research questions and aims of the study, and implications of the study are tackled in Chapter 11. In the conclusion, suggestions for future directions are given and final remarks noted.

Relevance: 80.00%

Abstract:

The work described in this thesis revolves around the 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes, which are a series of [n]cyclophanes containing a severely bent, board-shaped polynuclear aromatic hydrocarbon (PAH). The thesis is divided into seven chapters. The first chapter contains an overview of the seminal work on [n]cyclophanes of the first two members of the “capped rylene” series of PAHs: benzene and pyrene. Three different general strategies for the synthesis of [n]cyclophanes are discussed, and this leads into a discussion of some selected syntheses of [n]paracyclophanes and [n](2,7)pyrenophanes. The chemical, structural, spectroscopic and photophysical properties of these benzene- and pyrene-derived cyclophanes are discussed with emphasis on the changes that occur with changes in the structure of the aromatic system. Chapter 1 concludes with a brief introduction to [n]cyclophanes of the fourth member of the capped rylene series of PAHs: teropyrene. The focus of the work described in Chapter 2 is the synthesis of 1,1,n,n-tetramethyl[n](2,11)teropyrenophane (n = 6 and 7) using a double-McMurry strategy. While the synthesis of 1,1,7,7-tetramethyl[7](2,11)teropyrenophane was successful, the synthesis of the lower homologue 1,1,6,6-tetramethyl[6](2,11)teropyrenophane was not. The conformational behaviour of [n.2]pyrenophanes was also studied by 1H NMR spectroscopy, and this provided a conformation-based rationale for the failure of the synthesis of 1,1,6,6-tetramethyl[6](2,11)teropyrenophane. Chapter 3 contains details of the synthesis of 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7-9) using a Wurtz / McMurry strategy, which proved to be more general than the double-McMurry strategy. The three teropyrenophanes were obtained in ca. 10 milligram quantities. Trends in the spectroscopic properties that accompany changes in the structure of the teropyrene system are discussed. A violation of Kasha’s rule was observed when the teropyrenophanes were irradiated at 260 nm. The work described in the fourth chapter concentrates on the development of gram-scale syntheses of 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7–10) using the Wurtz / McMurry strategy. Several major modifications to the original synthetic pathway had to be made to enable the first several steps to be performed comfortably on tens of grams of material. Solubility problems severely limited the amount of material that could be produced at a late stage of the synthetic pathways leading to the even-numbered members of the series (n = 8, 10). Ultimately, only 1,1,9,9-tetramethyl[9](2,11)teropyrenophane was synthesized on a multi-gram scale. In the final step in the synthesis, a valence isomerization / dehydrogenation (VID) reaction, the teropyrenophane was observed to become unstable under the conditions of its formation at n = 8. The synthesis of 1,1,10,10-tetramethyl[10](2,11)teropyrenophane was achieved for the first time, but only on a few hundred milligram scale. In Chapter 5, the results of an investigation of the electrophilic aromatic bromination of the 1,1,n,n-tetramethyl[n](2,11)teropyrenophanes (n = 7–10) are presented. As it is the most abundant cyclophane, most of the work was performed on 1,1,9,9-tetramethyl[9](2,11)teropyrenophane. Reaction of this compound with varying amounts of bromine revealed that bromination occurs most rapidly at the symmetry-related 4, 9, 13 and 18 positions (teropyrene numbering) and that the 4,9,13,18-tetrabromide could be formed exclusively. Subsequent bromination occurs selectively at the symmetry-related 6, 7, 15 and 16 positions (teropyrene numbering), but considerably more slowly. Only mixtures of penta-, hexa-, hepta- and octabromides could be formed. Bromination reactions of the higher and lower homologues (n = 7, 8 and 10) revealed that the reactivity of the teropyrene system increased with the degree of bend. Crystal structures of some tetra-, hexa-, hepta- and octa-brominated products were obtained. The goal of the work described in Chapter 6 is to use 1,1,9,9-tetramethyl[9](2,11)teropyrenophane as a starting material for the synthesis of warped nanographenophanes. A bromination / Suzuki-Miyaura coupling / cyclodehydrogenation sequence was unsuccessful, as was a C–H arylation / cyclodehydrogenation approach. Itami’s recently-developed K-region-selective annulative π-extension (APEX) reaction proved to be successful, affording a giant [n]cyclophane with a C84 PAH. Attempted bay-region Diels-Alder reactions and some cursory host-guest chemistry of teropyrenophanes are also discussed. In Chapter 7 a synthetic approach toward a planar model compound, 2,11-di-t-butylteropyrene, is described. The synthesis could not be completed owing to solubility problems at the end of the synthetic pathway.

Relevance: 80.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
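As background for the tensor-factorization language above, a minimal sketch of the standard latent-class (PARAFAC) factorization of a multivariate categorical probability mass function is given below; this is the usual textbook form, not necessarily the exact notation of Chapter 2:

```latex
% Latent-class / PARAFAC factorization of a p-variate categorical pmf
% y_j takes values in {1, ..., d_j}; h indexes k latent classes
P(y_1 = c_1, \dots, y_p = c_p)
  = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0, \;\; \sum_{h} \nu_h = 1, \;\; \sum_{c} \lambda^{(j)}_{h c} = 1
% The probability tensor therefore has nonnegative rank at most k; Tucker-type
% decompositions replace the single shared index h with a core array over
% variable-specific latent indices, which is the flexibility the collapsed
% Tucker class exploits.
```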

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
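To make the idea of an approximating transition kernel concrete, here is a small, self-contained Python sketch of a random-walk Metropolis sampler in which the exact log-likelihood can be replaced by a random-subset (subsampled) estimate. It is only an illustration of the general class of kernel approximations mentioned above, not the framework developed in Chapter 6, and the model (Gaussian mean with a flat prior) is chosen purely for brevity:

```python
import numpy as np

def log_lik(theta, x):
    """Exact Gaussian log-likelihood (unit variance, unknown mean theta)."""
    return -0.5 * np.sum((x - theta) ** 2)

def approx_log_lik(theta, x, m, rng):
    """Approximate log-likelihood from a random subset of m observations,
    rescaled by n/m so it is an unbiased estimate on the full-data scale."""
    idx = rng.choice(len(x), size=m, replace=False)
    return (len(x) / m) * -0.5 * np.sum((x[idx] - theta) ** 2)

def mh_chain(x, n_iter=5000, step=0.05, m=None, seed=0):
    """Random-walk Metropolis for theta under a flat prior.

    If m is None the exact kernel is used; otherwise each acceptance ratio
    uses the subsampled (approximate) log-likelihood, giving an approximating
    Markov chain whose invariant law is perturbed away from the posterior."""
    rng = np.random.default_rng(seed)
    theta, draws = 0.0, np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()
        if m is None:
            log_ratio = log_lik(prop, x) - log_lik(theta, x)
        else:
            log_ratio = approx_log_lik(prop, x, m, rng) - approx_log_lik(theta, x, m, rng)
        if rng.random() < np.exp(min(0.0, log_ratio)):
            theta = prop
        draws[t] = theta
    return draws

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.normal(loc=2.0, scale=1.0, size=10_000)
    exact = mh_chain(data)           # exact kernel
    approx = mh_chain(data, m=500)   # approximating kernel (random subsets)
    print(f"posterior mean, exact kernel:  {exact[1000:].mean():.3f}")
    print(f"posterior mean, approx kernel: {approx[1000:].mean():.3f}")
```

The subsample introduces error in the transition kernel and hence in the invariant distribution; the framework described above is concerned with how much of that error can be tolerated for a given loss function and computational budget.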

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
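For context on the samplers just described, the standard Polya-Gamma data augmentation scheme for Bayesian logistic regression (in the usual Polson-Scott-Windle form, not any thesis-specific variant) alternates the following two conditional draws:

```latex
% Polya-Gamma Gibbs sampler for y_i ~ Bernoulli(logit^{-1}(x_i' beta)),
% with Gaussian prior beta ~ N(b, B); Omega = diag(omega_1, ..., omega_n),
% kappa_i = y_i - 1/2.
\omega_i \mid \beta \;\sim\; \mathrm{PG}\!\left(1,\; x_i^{\top}\beta\right), \qquad i = 1, \dots, n
\beta \mid \omega, y \;\sim\; \mathcal{N}\!\left(m_\omega,\, V_\omega\right),
\quad
V_\omega = \left(X^{\top}\Omega X + B^{-1}\right)^{-1},
\quad
m_\omega = V_\omega\left(X^{\top}\kappa + B^{-1} b\right)
% With rare events (few successes among many trials), the chapter's result is
% that chains of this type mix slowly, with a vanishing spectral gap.
```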

Relevance: 80.00%

Abstract:

As the world population continues to grow past seven billion people and global challenges continue to persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein the climate of urban areas is warmer than that of their surrounding rural environs. The UHI has become increasingly important within the study of earth sciences given the increased focus on climate change and as the majority of humans now live in urban areas.

In Chapter 2 a novel contribution to the historical context of sustainability is argued. Sustainability as a concept characterizing the relationship between humans and nature emerged in the mid to late 20th century as a response to findings used to also characterize the Anthropocene. Emerging from the human-nature relationships that came before it, evidence is provided that suggests sustainability was enabled by technology and a reorientation of world-view, and is unique in its global boundary, systematic approach and ambition for both well-being and the continued availability of resources and Earth system function. Sustainability is further an ambition that has wide appeal, making it one of the first normative concepts of the Anthropocene.

Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academe. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While clear from the literature, survey data further suggests that the dichotomy does not appear to be as apparent in the minds of practitioners.

In Chapter 4, the UHI is first addressed at the synoptic, mesoscale level. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium-sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and the secular trend over the last sixty years evaluated. Under isolated ideal conditions the findings indicate a UHI of 5.3 ± 0.97 °C to be present in the study area, the magnitude of which is growing over time.

Although urban heat islands (UHI) are well studied, there remain no panaceas for local-scale mitigation and adaptation methods; therefore, continued attention to characterization of the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local-scale analysis of the canopy layer and surface UHI in a medium-sized city in North Carolina, USA is conducted using multiple methods including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensity depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive dynamics in the study region.

Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban form characterization methods. Methods evaluated include local climate zones (LCZ), National Land Cover Database (NLCD) classes and urban cluster analysis (UCA) to determine their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated to the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.

Chapter 7 provides a summary of findings and offers a brief analysis of their implications for both the scientific discourse generally and the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings to practice and more fully evaluate adoption. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations like human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of the theoretical and actual practice. Within the UHI context, these include development of an integrated urban form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.

Relevance: 80.00%

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to quantify patient-specific radiation dose prospectively and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.

With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.
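The abstract does not reproduce the convolution formulation itself; purely as a schematic of the general idea (all notation here is hypothetical, not taken from the thesis), estimating organ dose from a tube current modulation profile might be written as:

```latex
% Schematic only (hypothetical notation): mA(z) = tube current profile along
% the scan axis, k(z) = longitudinal dose-spread kernel, c_o(z) = organ- and
% size-specific dose coefficient from the phantom library.
D_{\mathrm{organ}} \;\approx\; \int c_{o}(z)\,\bigl(mA * k\bigr)(z)\; \mathrm{d}z
% i.e. the radiation field is approximated by convolving the modulation
% profile with a dose-spread kernel, then weighted over the organ's extent.
```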

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, and so the patient’s major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
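As a toy illustration of the image-based noise addition idea in step (1) above (not the validated software developed in the thesis), one can add zero-mean Gaussian noise whose standard deviation is chosen, via the approximate inverse-square-root relation between CT quantum noise and dose, so that the result emulates a lower-dose acquisition; a clinical tool must additionally model noise texture and spatial correlation:

```python
import numpy as np

def simulate_reduced_dose(image, sigma_full, dose_fraction, seed=None):
    """Toy reduced-dose CT simulation by noise addition.

    Assumes quantum noise std scales as 1/sqrt(dose), so a full-dose image
    with noise sigma_full needs extra noise of std
    sigma_full * sqrt(1/dose_fraction - 1) to emulate the reduced dose.
    Ignores noise texture, spatial correlation, and electronic noise."""
    if not 0 < dose_fraction <= 1:
        raise ValueError("dose_fraction must be in (0, 1]")
    rng = np.random.default_rng(seed)
    sigma_add = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, sigma_add, size=image.shape)

if __name__ == "__main__":
    # Emulate a 50% dose scan from a full-dose slice with 10 HU quantum noise.
    full_dose = np.zeros((64, 64))   # stand-in for a clinical CT slice (in HU)
    half_dose = simulate_reduced_dose(full_dose, sigma_full=10.0,
                                      dose_fraction=0.5, seed=0)
    print(f"added noise std: {half_dose.std():.2f} HU")  # ~10 HU expected
```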

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance: 80.00%

Abstract:

The study of III-nitride materials (InN, GaN and AlN) gained huge research momentum after breakthroughs in the production of light-emitting diodes (LEDs) and laser diodes (LDs) over the past two decades. Last year, the Nobel Prize in Physics was awarded jointly to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for inventing a new energy-efficient and environmentally friendly light source: the blue light-emitting diode (LED) from III-nitride semiconductors in the early 1990s. Nowadays, III-nitride materials not only play an increasingly important role in lighting technology, but have also become prospective candidates in other areas, for example the high-frequency (RF) high electron mobility transistor (HEMT) and photovoltaics. These devices require the growth of high-quality III-nitride films, which can be prepared using metal organic vapour phase epitaxy (MOVPE). The main aim of my thesis is to study and develop the growth of III-nitride films, including AlN, u-AlGaN, Si-doped AlGaN, and InAlN, serving as sample wafers for the fabrication of ultraviolet (UV) LEDs, in order to replace the conventional bulky, expensive and environmentally harmful mercury lamp as a new UV light source. For application to UV LEDs, reducing the threading dislocation density (TDD) in AlN epilayers on sapphire substrates is a key parameter for achieving high-efficiency AlGaN-based UV emitters. In Chapter 4, after careful and systematic optimisation of a working set of conditions, the screw- and edge-type dislocation densities in the AlN were reduced to around 2.2×10^8 cm^-2 and 1.3×10^9 cm^-2, respectively, using an optimized three-step process, as estimated by TEM. An atomically smooth surface with an RMS roughness of around 0.3 nm was achieved over a 5×5 µm^2 AFM scan. Furthermore, a one-dimensional model of step motion has been proposed to describe the surface morphology evolution, especially the step-bunching feature found under non-optimal conditions. In Chapter 5, control of alloy composition and the maintenance of compositional uniformity across a growing epilayer surface were demonstrated for the development of u-AlGaN epilayers. Optimized conditions (i.e. a high growth temperature of 1245 °C) produced a uniform and smooth film with a low RMS roughness of around 2 nm in a 20×20 µm^2 AFM scan. The dopant that is most commonly used to obtain n-type conductivity in AlxGa1-xN is Si. However, the incorporation of Si has been found to increase strain relaxation and promote unintentional incorporation of other impurities (O and C) during Si-doped AlGaN growth. In Chapter 6, reducing edge-type TDs is observed to be an effective approach to improve the electrical and optical properties of Si-doped AlGaN epilayers. In addition, maximum electron concentrations of 1.3×10^19 cm^-3 and 6.4×10^18 cm^-3 were achieved in Si-doped Al0.48Ga0.52N and Al0.6Ga0.4N epilayers, as measured using the Hall effect. Finally, in Chapter 7, studies on the growth of InAlN/AlGaN multiple quantum well (MQW) structures were performed, and it was found that exposing the InAlN QW to a higher temperature during the ramp to the growth temperature of the AlGaN barrier (around 1100 °C) leads to significant indium (In) desorption. To overcome this issue, a quasi-two-temperature (Q2T) technique was applied to protect the InAlN QW. After optimization, intense UV emission from the MQWs was observed in the spectral range from 320 to 350 nm, as measured by room-temperature photoluminescence.

Relevance: 80.00%

Abstract:

Aquaculture is a fast-growing industry contributing to global food security, and sustainable aquaculture may reduce pressures on capture fisheries. The overall objective of this thesis was to look at the immunostimulatory effects of different aspects of aquaculture on the host response of the edible sea urchin, Paracentrotus lividus, whose roe is a prized delicacy in many Asian and Mediterranean countries. Chapter 1 discusses the importance of understanding the biology, ecology, and physiology of P. lividus, as well as the current status of the culture of this organism for mass production, and introduces the thesis objectives for the following chapters. As the research commenced, the difficulties of identifying individuals for repeat sampling became clear; therefore, Chapter 2 was a tagging experiment that indicated PIT tagging was a successful way of identifying individual sea urchins over time with a high tag retention rate. However, it was also found that repeat sampling via syringe to measure the host response of an individual caused stress which masked results, and thus animals were sampled and sacrificed going forward. Additionally, from personal observations and discussion with peers, it was suggested to look at the effect that diet has on sea urchin immune function and the parameters I measured, which led to Chapter 3. In this chapter, both Laminaria digitata and Mytilus edulis were shown to influence the measured immune parameters of differential cell counts, nitric oxide production, and lysozyme activity. Therefore, trials commencing after Trial 5 in Chapter 4 were modified to include starvation in order to remove any effect of diet. Another important aspect of culturing any organism is the study of its immune function and its response to several immunostimulatory agents (Chapter 4). Zymosan A was shown to be an effective immunostimulatory agent in P. lividus. Further work on handled/stored animals (Chapter 5) showed that Zymosan A reduced the levels of some measured immune parameters relative to the control, which may reduce the amount of stress in the animals. In Chapter 6, animals were infected with Vibrio anguillarum and, although V. anguillarum impacted immune parameters of P. lividus, it did not cause mortality as predicted. Lastly, throughout this thesis work, it was noted that the immune parameters measured produced different values at different times of the year (Chapter 7); therefore, using collated baseline (control) data, results were compiled to observe seasonal effects. It was determined that both seasonality and sourcing sites influenced immune parameter measurements taken at different times throughout the year. In conclusion, this thesis work fits into the framework of development of aquaculture practices that affect immune function of the host and future research focusing on the edible sea urchin, P. lividus.

Relevance: 80.00%

Abstract:

The purpose of this study is to examine the effects of agglomeration economies on the productivity of manufacturing local units in Ireland. Four types of agglomeration economies are considered in this study. These are internal economies of scale, localization economies, related variety and urbanization economies. This study makes a number of contributions to the literature. Firstly, this is the first study to conduct an investigation of the effects of agglomeration economies on the productivity of manufacturing local units operating in Ireland. Secondly, this study distinguishes between indigenous and foreign-owned local units, which is important given the dual nature of the Irish economy (Krugman, 1997). Thirdly, in addition to considering the effects of agglomeration economies, this study examines the impact of spurious agglomeration on the productivity of foreign-owned local units. Using data from the Census of Industrial Local Units and a series of IV GMM estimators to control for endogeneity, the results of the analysis conducted in Chapter 6 reveal that there are differences in the effects of agglomeration economies on the productivity of indigenous and foreign-owned local units. In Chapter 7 the Census of Industrial Local Units is supplemented by additional data sources and more in-depth measures are generated to capture the features of each of the external agglomeration economies considered in this analysis. There is some evidence to suggest that the availability of local inputs has a negative and significant impact on productivity. The NACE-based measures of related variety reveal that the availability of local inputs and knowledge spillovers for related sectors have a negative and significant impact on productivity. There is clear evidence to suggest that urbanization economies are important for increasing the productivity of indigenous local units. The findings reveal that a 1% increase in population density in the NUTS 3 region leads to an increase in the productivity of indigenous local units of approximately 0.07% to 0.08%. The results also reveal that there is a significant difference in the effects of agglomeration economies on the productivity of low-tech and medium/high-tech indigenous local units. The more in-depth measures of agglomeration economies used in Chapter 7 are also used in Chapter 8. A series of IV GMM regressions is estimated in order to identify the impact of agglomeration economies and spurious agglomeration on the productivity of foreign-owned local units operating in Ireland. There is some evidence to suggest that the availability of a pool of skilled labour has a positive and significant impact on the productivity of foreign-owned local units. There is also evidence to suggest that localization knowledge spillovers have a negative impact on the productivity of foreign-owned local units. There is strong evidence to suggest that the availability of local inputs has a negative impact on productivity. The negative impact is not confined to the NACE 4-digit sector but also extends into related sectors as determined by Porter’s (2003) cluster classification. The cluster-based skills measure of related variety has a positive and significant impact on the productivity of foreign-owned local units. Similar to Chapter 7, there is clear evidence to suggest that urbanization economies are important for increasing the productivity of foreign-owned local units.
Both the summary measure and each of the more in-depth measures of agglomeration economies have a positive and significant impact on productivity. Spurious agglomeration has a positive and significant impact on the productivity of foreign-owned local units. The results indicate that the more foreign-owned local units of the same nationality there are in the country, the greater the level of productivity of the local unit. From a policy perspective, urbanization economies are clearly important for increasing the productivity of both indigenous and foreign-owned local units. Furthermore, the availability of a pool of skilled labour appears to be important for increasing the productivity of foreign-owned local units. Another policy implication that arises from these results relates to the differences observed between indigenous local units and foreign-owned local units, and also between low-tech and medium/high-tech indigenous local units. These findings indicate that ‘one-size-fits-all’ policies are not appropriate for increasing the productivity of local units operating in Ireland. Policies should be tailored to the needs of either indigenous or foreign-owned local units and also to specific sectors. This positive finding for own-country spurious agglomeration is important from a policy perspective and is one that IDA Ireland should take on board.
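To make the reported population-density elasticity concrete, it corresponds, in a generic log-log productivity specification (a schematic form for illustration, not necessarily the exact estimating equation of Chapters 7 and 8), to a coefficient of roughly 0.07-0.08 on log population density:

```latex
% Schematic productivity equation with agglomeration terms (illustrative notation)
\ln y_{i} \;=\; \alpha
  \;+\; \beta \,\ln(\mathrm{popdens}_{r(i)})   % urbanization economies
  \;+\; \gamma' \mathbf{A}_{i}                 % localization, related variety, scale
  \;+\; \delta' \mathbf{X}_{i} \;+\; \varepsilon_{i}
% With \beta \approx 0.07--0.08, a 1% increase in NUTS 3 population density is
% associated with roughly a 0.07--0.08% increase in the productivity of
% indigenous local units, as reported above.
```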

Relevance: 80.00%

Abstract:

Several landforms found in the fold-and-thrust belt area of the Central Precordillera, Pre-Andes of Argentina, which have often been associated with tectonic stresses, are in fact related to non-tectonic processes or superficial gravitational structures. These second-order structures, interpreted as gravitational collapse structures, have developed on the western flank of the sierras de La Dehesa and Talacasto. They include rock slides, rock falls, wrinkle folds, slip sheets and flaps, among others, which together constitute a monoclinal fold dipping between 30º and 60º to the west. The gravity collapse structures are parallel to the regional strike of the Sierra de la Dehesa and are emplaced in Ordovician limestones and dolomites. Their westward dip; the presence of bedding planes, fractures and joints; and the lithology (limestone interbedded with incompetent argillaceous banks) would have favored their occurrence. Movement of the detached structures has been controlled by lithological characteristics, as well as by bedding and joints. Detachment and initial transport of the gravity collapse structures and rock slides on the western flank of the Sierra de la Dehesa were tightly controlled by three structural elements: 1) sliding surfaces developed on parallel bedded strata dipping >30° in the slope direction; 2) joint sets constituting lateral and transverse traction cracks which release extensional stresses; and 3) discontinuities fragmenting the sliding surfaces. Other factors, which can be characterized as local (lithology, structure and topography) or regional (high seismic activity and possibly wetter conditions during the postglacial period), were decisive in favoring the steady loss of the western mountainside in the easternmost foothills of the Central Precordillera.

Relevance: 80.00%

Abstract:

In the book ’Quadratic algebras’ by Polishchuk and Positselski [23], algebras with a small number of generators (n = 2, 3) are considered. For some numbers r of relations, the possible Hilbert series are listed, and those appearing as series of Koszul algebras are specified. The first case where it was not possible to do this, namely the case of three generators n = 3 and six relations r = 6, is formulated as an open problem. We give here a complete answer to this question: for quadratic algebras with dim A_1 = dim A_2 = 3, we list all possible Hilbert series and determine which of them can come from Koszul algebras and which cannot. As a consequence of this classification, we found an algebra which serves as a counterexample to another problem from the same book [23] (Chapter 7, Sec. 1, Conjecture 2), saying that a Koszul algebra of finite global homological dimension d has dim A_1 ≥ d. Namely, the 3-generated algebra A given by the relations xx + yx = xz = zy = 0 is Koszul, and its Koszul dual algebra A^! has a Hilbert series of degree 4: H_{A^!}(t) = 1 + 3t + 3t^2 + 2t^3 + t^4; hence A has global homological dimension 4.
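Two standard facts about Koszul duality (general background, not wording from the abstract) explain why a degree-4 Hilbert series for A^! yields global homological dimension 4:

```latex
% For a Koszul algebra A with quadratic dual A^!:
H_A(t)\, H_{A^{!}}(-t) \;=\; 1,
\qquad
\operatorname{Ext}_A^{\,i}(k,k) \;\cong\; (A^{!}_i)^{*}\ \text{(concentrated on the diagonal)},
\qquad
\operatorname{gl.dim} A \;=\; \max\{\, i : A^{!}_i \neq 0 \,\}.
% Hence H_{A^!}(t) = 1 + 3t + 3t^2 + 2t^3 + t^4 forces gl.dim A = 4,
% while dim A_1 = 3.
```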

Relevance: 80.00%

Abstract:

Hybrid simulation is a technique that combines experimental and numerical testing and has been used for the last decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including but not limited to error minimisation techniques, phase lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method and the different hybrid simulation schemes, and the corresponding time integration algorithms, that are more commonly used in this field. The scope of this thesis is presented in more detail in Chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted in order to fulfil the necessary requirements in terms of speed. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies that have been used in order to accomplish the objectives mentioned above, listing the different cases of study and detailing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments that are shown later in this thesis. The results obtained are presented in Chapters 5 and 6, with the first of them focusing on purely numerical simulations while the second is oriented towards a more practical application, including experimental real-time hybrid simulation tests with large numerical models. Following the discussion of the developments in this thesis, the hardware and software requirements that have to be met in order to apply the methods described in this document are listed in Chapter 7. The last chapter, Chapter 8, focuses on conclusions and achievements extracted from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed to be used in conjunction with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.

Relevance: 80.00%

Abstract:

In 2015, there were reportedly more than 5,000 internationally trained physicians in Quebec, of whom nearly 2,500 work as physicians and possibly as many have taken other professional paths, temporarily or permanently. Highly qualified migrants are known to face multiple barriers in the labour market, particularly members of regulated professions. The case of physicians is exemplary given its complexity and the multiplicity of actors involved in the professional recognition process. With the main objective of documenting the professional integration trajectories of international medical graduates (DIM) and their experiences in the Quebec labour market, this thesis seeks to understand what may distinguish employment integration trajectories within a single professional group. By examining, in particular, the integration strategies and the resources mobilized, we seek to better understand the paths of DIMs who requalify and practise in Quebec and of those who redirect themselves toward other sectors of activity. The methodological approach is qualitative (fieldwork 2009 to 2012), with the core of the analyses based on 31 professional life stories of DIMs who migrated to Quebec mainly in the 2000s. Secondary data include 22 non-directive interviews with key actors from institutional, community, and associative settings, as well as with DIMs who had very recently immigrated or who planned to immigrate. These are supplemented by occasional ethnographic observation, such as of associative activities. The thesis takes the form of a thesis by articles. The guiding thread is the exploration of the interface between policies, practices, and individuals at the heart of professional integration trajectories. The three articles (Chapters 4 to 6) adopt complementary focal points with the same objective: exploring the complexity of professional integration trajectories and the dialectic between the micro-, meso-, and macro-social levels. These refer respectively to individuals' agency and their constraints on action; social relations, institutions, and organizational practices; and, more broadly, socio-political structures. The results of this thesis highlight complementary and dynamically interacting aspects of professional integration: 1) the macro-social and political dimension; 2) institutional dimensions and social relations; 3) professional identity. Following the introduction, the problem statement (Chapter 1), and the methodology (Chapter 2), Chapter 3 presents the types of integration trajectories of DIMs and their heterogeneity, and highlights their professional life stories. Chapter 4 raises the paradox between the immigration-attraction policies deployed by the Canadian and Quebec governments and the regulatory mechanisms operating in the labour market. Chapter 5 explores the strategies and resources mobilized by DIMs and highlights the positive effect of symbolic resources. Institutional support resources, although fundamental in the professional recognition process, are not subjectively considered a central element. Rather, it is informal resources that play this significant supporting role, in particular DIM peers. Chapter 6 adopts a micro-social perspective and explores the dynamic and relational character of professional identity and, above all, the power of the conditions of belonging, which compel professional flexibility and sometimes withdrawal from the profession or from the country. Chapter 7 discusses, at the theoretical level, the value of combining analytical scales and of disciplinary openness in order to highlight the tensions and blind spots concerning the mobility of health professionals and their professional integration. This thesis explores the complex interrelationship between economic, social, and symbolic resources in a context of fragmented institutional resources and corporatism.