Abstract:
The central objective of this work was to generate weakly coordinating cations of unprecedented molecular size that provide an inherently stable hydrophobic shell around a central charge. It was hypothesized that divergent dendritic growth by means of thermal [4+2] Diels-Alder cycloaddition might represent a feasible synthetic route to circumvent steric constraints and enable a drastic increase in cation size.

This initial proposition could be verified: applying the divergent dendrimer synthesis to an ethynyl-functionalized tetraphenylphosphonium derivative afforded, for the first time, monodisperse cations with precisely defined nanoscopic dimensions. Furthermore, the versatile nature of the applied cascade reactions enabled a thoroughly flexible design and structural tuning of the desired target cations. Specific surface functionalization as well as the implementation of triazolyl moieties within the dendrimer scaffold could be addressed by variation of the employed building-block units (see chapter 3).

Owing to the steric screening provided by their large, hydrophobic and shape-persistent polyphenylene shells, rigidly dendronized cations proved more weakly coordinating than their non-dendronized analogues. This hypothesis was confirmed experimentally by means of dielectric spectroscopy (see chapter 4). For a series of dendronized borate salts, it was demonstrated that the degree of ion dissociation increased with the size of the cations. With the very large phosphonium cations developed in this work, the charge carriers could be separated almost beyond the Bjerrum length in solvents of low polarity, which was reflected in near-quantitative ion dissociation even at room temperature. In addition to affecting the electrolyte behavior in solution, the steric enlargement of the ions could be visualized by several crystal structure analyses, giving insight into lattice packing in the presence of extraordinarily large cations.

An essential theme of this work was the application of benzylphosphonium salts in the classical Wittig reaction, where dendronization served as a synthetic means to introduce an exceptionally large polyphenylene substituent at the α-position. The influence of this unprecedentedly bulky group on the Wittig stereochemistry was investigated by NMR analysis of the resulting alkenes. Based on the data obtained, an explanation for the origin of the observed selectivity was proposed that is consistent with the currently accepted [2+2] cycloaddition mechanism. Furthermore, a reliable synthesis protocol for unsymmetrically substituted polyphenylene alkenes and stilbenes was established through the design of custom-built polyphenylene precursors (see chapter 5).

Finally, fundamental experiments on functionalizing a polymer chain with sterically shielded ionic groups in either pendant or in-chain positions are outlined. Inherently hydrophobic polysalts can thus be formed, allowing future research to investigate their physical properties with regard to counter-ion condensation and charge-carrier mobility.

In summary, this work demonstrates how the principles of dendrimer chemistry can be applied to modify and specifically tailor the properties of salts. The numerous dendrimer ions synthesized here represent a versatile interface between classic organic and inorganic electrolytes and defined macromolecular structures on the nanometer scale.
Furthermore, the particular value of polyphenylene dendrimers in terms of their broad applicability was illustrated. In an interdisciplinary manner, this work addressed questions ranging from the structural modification of ions and its influence on electrolyte behavior to the stereochemical control of organic syntheses via polyphenylene phosphonium salts.
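For orientation, the Bjerrum length referred to above is the separation at which the Coulomb interaction between two elementary charges equals the thermal energy; in standard notation,

$$\lambda_\mathrm{B} = \frac{e^2}{4\pi\varepsilon_0\varepsilon_\mathrm{r} k_\mathrm{B} T},$$

so it grows as the relative permittivity of the solvent decreases, which is why near-quantitative ion dissociation in low-polarity media calls for such bulky, weakly coordinating ions.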
Abstract:
Addressing current limitations of state-of-the-art instrumentation in aerosol research, the aim of this work was to explore and assess the applicability of a novel soft ionization technique, namely flowing atmospheric-pressure afterglow (FAPA), for the mass spectrometric analysis of airborne particulate organic matter. Like other soft ionization methods, the FAPA technique was developed in the last decade during the advent of ambient desorption/ionization mass spectrometry (ADI–MS). Based on a helium glow discharge plasma at atmospheric pressure, excited helium species and primary reagent ions are generated that exit the discharge region through a capillary electrode, forming the so-called afterglow region where desorption and ionization of the analytes occur. Fragmentation of the analytes during ionization is commonly reported to occur only to a minimal extent, predominantly resulting in the formation of quasimolecular ions, i.e. [M+H]+ and [M–H]– in the positive and the negative ion mode, respectively. This facilitates the identification and detection of signals and their corresponding compounds in the acquired mass spectra. The first part of this study focuses on the application, characterization and assessment of FAPA–MS in the offline mode, i.e. desorption and ionization of the analytes from surfaces. Experiments in both positive and negative ion mode revealed ionization patterns for a variety of compound classes comprising alkanes, alcohols, aldehydes, ketones, carboxylic acids, organic peroxides, and alkaloids. Besides the frequently emphasized quasimolecular ions, a broad range of signals for adducts and losses was found. Additionally, the capabilities and limitations of the technique were studied in three proof-of-principle applications. In general, the method proved best suited for polar analytes with high volatilities and low molecular weights, ideally containing nitrogen and/or oxygen functionalities. For compounds with low vapor pressures, long carbon chains and/or high molecular weights, however, desorption and ionization are in direct competition with oxidation of the analytes, leading to the formation of adducts and oxidation products that impede a clear signal assignment in the acquired mass spectra. Nonetheless, FAPA–MS proved capable of detecting and identifying common limonene oxidation products in secondary organic aerosol (SOA) particles on a filter sample and is therefore considered a suitable method for offline analysis of organic aerosol (OA) particles. In the second and subsequent parts, FAPA–MS was applied online, i.e. for real-time analysis of OA particles suspended in air; accordingly, the acronym AeroFAPA–MS (Aerosol FAPA–MS) is used to refer to this method. After optimization and characterization, the method was used to measure a range of model compounds and to evaluate typical ionization patterns in the positive and the negative ion mode. In addition, results from laboratory studies as well as from a field campaign in Central Europe (F–BEACh 2014) are presented and discussed. During the F–BEACh campaign, AeroFAPA–MS was used in combination with complementary MS techniques, yielding a comprehensive characterization of the sampled OA particles. For example, several common SOA marker compounds were identified in real time by MSn experiments, indicating that photochemically aged SOA particles were present during the campaign period.
Moreover, AeroFAPA–MS was capable of detecting highly oxidized sulfur-containing compounds in the particle phase, representing the first real-time measurements of this compound class. Further comparisons with data from other aerosol and gas-phase measurements suggest that both particulate sulfate and highly oxidized peroxy radicals in the gas phase might play a role in the formation of these species. In addition to applying AeroFAPA–MS to the analysis of aerosol particles, the desorption of particles in the afterglow region was investigated in order to gain a more detailed understanding of the method. Whereas in the previous measurements aerosol particles were pre-evaporated prior to AeroFAPA–MS analysis, no external heat source was applied in this part. Particle size distribution measurements before and after the AeroFAPA source revealed that only an interfacial layer of OA particles is desorbed and, thus, chemically characterized. From these measurements, desorption radii of 2.5–36.6 nm were found for particles with initial diameters of 112 nm at discharge currents of 15–55 mA. In addition, the method was applied to the analysis of laboratory-generated core-shell particles in a proof-of-principle study. As expected, compounds residing in the shell of the particles were predominantly desorbed and ionized at increasing probing depths, suggesting that AeroFAPA–MS might represent a promising technique for depth profiling of OA particles in future studies.
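To put the reported desorption radii in perspective, a quick back-of-the-envelope calculation (an illustration only, assuming the desorption radius corresponds to the thickness of the outer shell removed from a spherical 112 nm particle) shows the fraction of particle volume probed at the two extremes of the discharge current range:

```python
# Illustrative estimate, not taken from the thesis: volume fraction of a
# spherical particle that is removed when an outer shell of thickness t is
# desorbed.
def desorbed_volume_fraction(diameter_nm: float, shell_nm: float) -> float:
    radius = diameter_nm / 2.0
    core = max(radius - shell_nm, 0.0)
    return 1.0 - (core / radius) ** 3

for shell_nm in (2.5, 36.6):  # reported desorption radii at 15 mA and 55 mA
    frac = desorbed_volume_fraction(112.0, shell_nm)
    print(f"shell of {shell_nm:>4.1f} nm -> about {frac:.0%} of the particle volume")
```

Under these assumptions, the probed fraction ranges from roughly 13% of the particle volume at the lowest discharge current to over 95% at the highest, consistent with the depth-profiling interpretation.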
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands to gain particularly from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and access to the relevant methodological work in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this we mean that there is a need for an empirical, statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have differed from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analyzing one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
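As a rough illustration of what analyzing a stemma hypothesis against a variation model can involve (a deliberately simplified sketch, not the authors' implementation, and ignoring hypothetical unsampled ancestors), one basic check asks whether the witnesses sharing a variant reading form a connected part of the stemma, i.e. whether the reading could have arisen once and been inherited:

```python
# Simplified sketch: a reading is counted as consistent with a stemma
# hypothesis if the witnesses sharing it induce a connected subgraph of the
# stemma (the reading could stem from a single point of origin).
import networkx as nx

def reading_is_consistent(stemma: nx.Graph, witnesses: set[str]) -> bool:
    if not witnesses:
        return False
    return nx.is_connected(stemma.subgraph(witnesses))

# Toy stemma: archetype A with two branches (C, and B with copies D and E).
stemma = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("B", "E")])
print(reading_is_consistent(stemma, {"B", "D", "E"}))  # True: a single branch
print(reading_is_consistent(stemma, {"C", "D"}))       # False: split across branches
```

A real analysis has to work over all collated variant locations of a tradition and account for coincident variation and contamination, but this captures the kind of per-reading question such a model is designed to answer.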
Abstract:
BACKGROUND Prophylactic measures are key components of dairy herd mastitis control programs, but some are only relevant in specific housing systems. To assess the association between management practices and mastitis incidence, data collected in 2011 by a survey among 979 randomly selected Swiss dairy farms, together with information from the regular test day recordings of 680 of these farms, were analyzed. RESULTS The median incidence of farmer-reported clinical mastitis (ICM) was 11.6 (mean 14.7) cases per 100 cows per year. The median annual proportion of milk samples with a composite somatic cell count (PSCC) above 200,000 cells/ml was 16.1 (mean 17.3) %. A multivariable negative binomial regression model was fitted for each of the mastitis indicators, separately for farms with tie-stall and free-stall housing systems, to study the effect of management practices other than the housing system on ICM and PSCC events (above 200,000 cells/ml). The results differed substantially by housing system and outcome. In tie-stall systems, clinical mastitis incidence was mainly affected by region (mountainous production zone; incidence rate ratio (IRR) = 0.73), the dairy herd replacement system (IRR = 1.27) and the farmer's age (IRR = 0.81). The proportion of high SCC was mainly associated with dry cow udder controls (IRR = 0.67), clean bedding material at calving (IRR = 1.72), the use of total merit values to select bulls (IRR = 1.57) and body condition scoring (IRR = 0.74). In free-stall systems, clinical mastitis incidence was mainly associated with stall climate/temperature (IRR = 1.65), comfort mats as resting surface (IRR = 0.75) and the absence of a feed analysis (IRR = 1.18). The proportion of high SCC was only associated with hand and arm cleaning after calving (IRR = 0.81) and the use of beef production values to select bulls (IRR = 0.66). CONCLUSIONS There were substantial differences in the risk factors identified in the four models. Some of the factors were in agreement with the reported literature while others were not. This highlights the multifactorial nature of the disease and the differences in the risks for the two mastitis manifestations. Attempting to understand these multifactorial associations for mastitis within larger management groups continues to play an important role in mastitis control programs.
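For readers less familiar with the modelling approach, the sketch below (simulated data and a single hypothetical predictor, not the study's dataset) shows how such a negative binomial regression is typically fitted and how an incidence rate ratio is read off as the exponentiated coefficient:

```python
# Illustrative sketch with simulated data (not the study's data): negative
# binomial regression of clinical mastitis counts with herd size as exposure;
# IRRs are obtained as exp(coefficient).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_farms = 300
farms = pd.DataFrame({
    "cases": rng.poisson(4, n_farms),             # mastitis cases per farm-year
    "cows": rng.integers(15, 80, n_farms),        # herd size, used as exposure
    "mountain_zone": rng.integers(0, 2, n_farms), # hypothetical binary predictor
})

model = smf.glm(
    "cases ~ mountain_zone",
    data=farms,
    family=sm.families.NegativeBinomial(),
    offset=np.log(farms["cows"]),                 # models cases per cow-year
).fit()

print(np.exp(model.params))  # incidence rate ratios; IRR < 1 reads as protective
```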
Abstract:
Family preservation service agencies in the State of Kansas have undergone major changes since the implementation of a managed care model of service delivery in 1996. This qualitative study examines the successes and barriers experienced by agency directors in the utilization of a managed care system. Outcome/performance measures utilized by the State of Kansas are reviewed, and factors contributing to the successes and limitations of the program are discussed. Included in these reviews is an analysis and presentation of the literature and research that has been used to support the current program structure. Recommendations for further evolution of practice are proposed.
Abstract:
Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was originally proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are formulated from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblock sizes through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely their improved performance in the intermediate loss region.
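As a small illustration of the kind of model used for the decoding probability (parameter values are purely illustrative, not the paper's fitted values), a generalized logistic (Richards) curve maps the fraction of received symbols to a probability of successful decoding:

```python
# Illustrative sketch: generalized logistic (Richards) curve as a model of
# decoding probability versus the fraction of received output symbols.
# The parameters below are made up for demonstration, not fitted values.
import numpy as np

def generalized_logistic(x, lower=0.0, upper=1.0, growth=25.0, midpoint=0.85, nu=1.0):
    """Rises smoothly from `lower` to `upper`; `growth` sets the steepness,
    `midpoint` the location of the transition, and `nu` its asymmetry."""
    return lower + (upper - lower) / (1.0 + np.exp(-growth * (x - midpoint))) ** (1.0 / nu)

received_fraction = np.linspace(0.0, 1.2, 7)
print(np.round(generalized_logistic(received_fraction), 3))
```

A closed-form model of this kind is what the subsequent joint source and channel rate allocation formulation builds on.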
Abstract:
Joshua Van Oven
Abstract:
This study aims to examine the international value distribution structure among major East Asian economies and the US. Mainstream trade theory explains the gains from trade; the global value chain (GVC) approach, however, emphasises the uneven distribution of the benefits of globalization among trading partners. The present study is mainly based on this view, examining which economies gain the most and which the least from the East Asian production networks. Two key industries, electronics and automobiles, are our principal focus. The input-output method is employed to trace the creation and flows of value-added within the region. A striking finding is that some ASEAN economies see their shares of value-added increasingly reduced, with that value captured by developed countries, particularly Japan. Policy implications are discussed in the final section.
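For readers unfamiliar with the accounting behind such results, the sketch below shows the standard input-output calculation used to trace value-added to the economies that capture it (a toy three-economy example with made-up coefficients, not the study's data):

```python
# Minimal sketch of input-output value-added accounting (toy numbers).
import numpy as np

A = np.array([[0.10, 0.05, 0.02],   # technical (input) coefficient matrix
              [0.08, 0.15, 0.10],
              [0.03, 0.07, 0.12]])
f = np.array([100.0, 80.0, 60.0])   # final demand met by each economy
v = 1.0 - A.sum(axis=0)             # value-added coefficients per unit of output

L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
x = L @ f                           # gross output required to satisfy final demand
value_added = v * x                 # value-added captured by each economy
print(value_added, value_added / value_added.sum())
```

The ratio of each economy's value-added to the total indicates its share of the value generated by a given vector of final demand, which is the kind of share the study compares across the region.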
Abstract:
The combination of minimum time control and multiphase converters is a favorable option for dc-dc converters in applications where output voltage variation is required, such as RF amplifiers and dynamic voltage scaling in microprocessors, owing to the advantage of fast dynamic response. In this paper, an improved minimum time control approach for multiphase buck converters, based on the charge balance technique and aiming at fast output voltage transitions, is presented. Compared with the traditional method, the proposed control takes into account the phase delay and the current ripple in each phase. By investigating the behavior of the multiphase converter during the voltage transition, it resolves the problem of current imbalance after the transient, which can otherwise lead to a long settling time of the output voltage. The restriction of this control is that the output voltages the converter can provide are tied to the number of phases, because only the duty cycles at which the multiphase converter achieves total ripple cancellation are used in this approach. The model of the proposed control is introduced, and the design constraints of the buck converter's filter for this control are discussed. To prove the concept, a four-phase buck converter is implemented, and experimental results that validate the proposed control method are presented. The application of this control to RF envelope tracking is also presented in this paper.
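To make the restriction concrete, the following sketch (illustrative values; the 12 V input is an assumption, not the paper's design) lists the duty cycles at which an N-phase interleaved buck converter achieves total ripple cancellation, and hence the output voltages such a scheme can target:

```python
# Illustrative sketch: in an N-phase interleaved buck converter the total
# inductor current ripple cancels at duty cycles that are multiples of 1/N,
# so a control restricted to those operating points can only target the
# corresponding output voltages Vout = D * Vin.
def ripple_cancellation_targets(n_phases: int, v_in: float) -> list[tuple[float, float]]:
    return [(k / n_phases, v_in * k / n_phases) for k in range(1, n_phases)]

for duty, v_out in ripple_cancellation_targets(n_phases=4, v_in=12.0):
    print(f"D = {duty:.2f} -> Vout = {v_out:.1f} V")
```

For the four-phase prototype mentioned in the abstract, that would mean three non-trivial target voltages, which is why the number of phases constrains the achievable output voltages.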
Abstract:
There is a growing call for inventories that evaluate geographic patterns in the diversity of plant genetic resources maintained on farm and in species' natural populations, in order to enhance their use and conservation. Such evaluations are relevant for useful tropical and subtropical tree species, as many of these species are still undomesticated or in incipient stages of domestication, and local populations can offer yet-unknown traits of high value for further domestication. For many outcrossing species, such as most trees, inbreeding depression can be an issue, and genetic diversity is important to sustain local production. Diversity is also crucial for species to adapt to environmental changes. This paper explores the possibilities of incorporating molecular marker data into Geographic Information Systems (GIS) to allow visualization and a better understanding of spatial patterns of genetic diversity as a key input to optimizing the conservation and use of plant genetic resources, based on a case study of cherimoya (Annona cherimola Mill.), a Neotropical fruit tree species. We present spatial analyses to (1) improve the understanding of the spatial distribution of genetic diversity of cherimoya natural stands and cultivated trees in Ecuador, Bolivia and Peru based on microsatellite molecular markers (SSRs); and (2) formulate optimal conservation strategies by revealing priority areas for in situ conservation and identifying existing diversity gaps in ex situ collections. We found high levels of allelic richness, locally common alleles and expected heterozygosity in cherimoya's putative centre of origin, southern Ecuador and northern Peru, whereas levels of diversity in southern Peru and especially in Bolivia were significantly lower. The application of GIS to a large microsatellite dataset allows a more detailed prioritization of areas for in situ conservation and targeted collection across the Andean distribution range of cherimoya than previous studies could achieve, i.e. at the province and department level in Ecuador and Peru, respectively.
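As a pointer to how one of the mapped diversity measures is computed, the sketch below (toy allele frequencies, not the study's SSR data) calculates expected heterozygosity per locus and averages it across loci:

```python
# Illustrative sketch: expected heterozygosity (Nei's gene diversity),
# He = 1 - sum(p_i^2) per locus, averaged over loci. Frequencies are made up.
import numpy as np

def expected_heterozygosity(allele_freqs_per_locus) -> float:
    per_locus = [1.0 - float(np.sum(np.square(p))) for p in allele_freqs_per_locus]
    return float(np.mean(per_locus))

loci = [
    np.array([0.50, 0.30, 0.20]),        # locus 1 allele frequencies
    np.array([0.70, 0.20, 0.05, 0.05]),  # locus 2 allele frequencies
]
print(round(expected_heterozygosity(loci), 3))
```

Values like this, computed per sampled population or grid cell, are what the GIS layer maps in order to reveal the diversity hotspots and gaps described above.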
Abstract:
In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production-line level. Consequently, real-time energy consumption data from manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop-floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished in four main phases (i.e., studies). In the first study, relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data in order to improve energy efficiency. Considering the variable intraday energy prices, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs for single-machine production scheduling. This model makes decisions at the machine level to determine the launch times for job processing, idle times, when the machine must be shut down, "turning on" times, and "turning off" times. It thereby enables the operations manager to implement the least expensive production schedule during a production shift. In the third study, the research provides a methodology to help managers implement the IoT at the production system level; it includes an analysis of the current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy key performance indicators (KPIs) have been used to evaluate energy efficiency. In the fourth study, the goal is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete-manufacturing factories to determine the energy consumed, the CO2 emitted, and the cost of the energy consumed at the operation, product, and production-order levels, while considering the different energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of the thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the maturity level of the factory, its targets, its production type, etc. The thesis also shows that significant cost reductions can be achieved simply by avoiding the daily high-price periods. Furthermore, the thesis identifies the level at which energy consumption is monitored (i.e., the machine level), the interval time, and the level of energy data analysis as important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when there are high levels of standardization in production processes and their data) is essential to enable factories to determine the amount and cost of energy consumed, as well as the CO2 emitted, while producing a product or component, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
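As an illustration of the multi-level roll-up described in the fourth study (the field names, prices, and emission factor below are hypothetical, not the thesis' data model), energy readings recorded per operation can be aggregated to the product and production-order levels while applying time-varying prices:

```python
# Minimal sketch of multi-level energy accounting: roll per-operation energy
# readings up to product and order level, applying an intraday price and an
# assumed grid emission factor. All figures are made up for illustration.
import pandas as pd

readings = pd.DataFrame({
    "order": ["O1", "O1", "O1", "O1"],
    "product": ["P1", "P1", "P2", "P2"],
    "operation": ["milling", "drilling", "milling", "assembly"],
    "kwh": [1.2, 0.4, 1.1, 0.3],
    "price_eur_per_kwh": [0.30, 0.12, 0.30, 0.12],  # intraday price at run time
})
readings["cost_eur"] = readings["kwh"] * readings["price_eur_per_kwh"]
readings["co2_kg"] = readings["kwh"] * 0.25          # assumed emission factor

for level in ("operation", "product", "order"):
    print(readings.groupby(level)[["kwh", "cost_eur", "co2_kg"]].sum(), "\n")
```

The same structure extends naturally to several energy sources by adding a source column with source-specific prices and emission factors.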
Abstract:
Mode of access: Internet.