879 results for Conflict-based method
Abstract:
Research has shown that disease-specific health-related quality of life (HRQoL) instruments are more responsive than generic instruments to particular disease conditions. However, only a few studies have used disease-specific instruments to measure HRQoL in hemophilia. The goal of this project was to develop a disease-specific utility instrument that measures patient preferences for various hemophilia health states. The visual analog scale (VAS), a ranking method, and the standard gamble (SG), a choice-based method incorporating risk, were used to measure patient preferences. Study participants (n = 128) were recruited from the UT/Gulf States Hemophilia and Thrombophilia Center and stratified by age: 0–18 years and 19 years and older.

Test–retest reliability was demonstrated for both the VAS and SG instruments: overall within-subject correlation coefficients were 0.91 and 0.79, respectively. Results showed statistically significant differences in responses between pediatric and adult participants when using the SG (p = .045), but no significant differences between these groups when using the VAS (p = .636). When responses to the VAS and SG instruments were compared, statistically significant differences were observed in both the pediatric (p < .0001) and adult (p < .0001) groups. Data from this study also demonstrated that persons with hemophilia of varying disease severity, as well as those who were HIV infected, were able to evaluate a range of hemophilia health states. This has important implications for the study of quality of life in hemophilia and for the development of disease-specific HRQoL instruments.

The utility measures obtained from this study can be applied in economic evaluations that analyze the cost–utility of alternative hemophilia treatments. Results derived from the SG indicate that age can influence patients' preferences regarding their state of health, which may have implications when treatment options are considered on the basis of the mean age of the population in question. Although both instruments independently demonstrated reliability and validity, the results indicate that the two measures may not be interchangeable.
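As a rough illustration of how the two elicitation techniques described above map onto 0–1 utilities, the sketch below converts a standard-gamble indifference probability and a VAS rating into utility values. It is not taken from the study; the function names and the linear VAS rescaling are assumptions.

```python
def sg_utility(indifference_p: float) -> float:
    """Standard gamble: the utility of a health state equals the probability p
    of full health at which the respondent is indifferent between living in the
    state for certain and a gamble giving full health with probability p and
    death with probability 1 - p."""
    if not 0.0 <= indifference_p <= 1.0:
        raise ValueError("indifference probability must lie in [0, 1]")
    return indifference_p

def vas_utility(rating: float, worst: float = 0.0, best: float = 100.0) -> float:
    """Visual analog scale: linearly rescale a mark on a 0-100 line so that the
    anchors (worst and best imaginable states) map to 0 and 1."""
    return (rating - worst) / (best - worst)

# Example: a respondent indifferent at p = 0.85, and a VAS mark at 70 on a 0-100 line.
print(sg_utility(0.85))   # 0.85
print(vas_utility(70.0))  # 0.7
```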
Abstract:
Hypertension (HT) is mediated by the interaction of many genetic and environmental factors. Previous genome-wide linkage analyses have found many loci that show linkage to HT or blood pressure (BP) regulation, but the results have been generally inconsistent, and gene-by-environment interaction is among the possible explanations for these inconsistencies between studies. Here we investigate the influence of gene-by-smoking (GxS) interaction on HT and BP in European American (EA), African American (AA) and Mexican American (MA) families from the GENOA study. A variance-component-based method was used to perform genome-wide linkage analysis of systolic blood pressure (SBP), diastolic blood pressure (DBP) and HT status, as well as bivariate analysis of SBP and DBP, for smokers, non-smokers and the combined group. The most significant results were found for SBP in MA. The strongest signal was on chromosome 17q24 (LOD = 4.2), which increased to LOD = 4.7 in the bivariate analysis, but there was no evidence of GxS interaction at this locus (p = 0.48). Two signals were identified in only one group, on chromosome 15q26.2 (LOD = 3.37) in non-smokers and on chromosome 7q21.11 (LOD = 1.4) in smokers, both of which had strong evidence of GxS interaction (p = 0.00039 and 0.009, respectively). There were also two other signals, one on chromosome 20q12 (LOD = 2.45) in smokers, which became much stronger in the combined sample (LOD = 3.53), and one on chromosome 6p22.2 (LOD = 2.06) in non-smokers; neither of these peaks had strong evidence of GxS interaction (p = 0.08 and 0.06, respectively). A fine-mapping association study was performed using 200 SNPs in 30 genes located under the linkage signals on chromosomes 15 and 17. Under the chromosome 15 peak, the association analysis identified 6 SNPs accounting for a 7 mmHg increase in SBP in MA non-smokers; for the chromosome 17 peak, it identified 3 SNPs accounting for a 6 mmHg increase in SBP in MA. However, none of these SNPs remained significant after correction for multiple testing, and accounting for them in the linkage analysis produced only very small reductions in the linkage signals.

The linkage analysis of BP traits taking smoking status into account produced very interesting signals for SBP in the MA population. The fine-mapping association analysis gave some insight into the contribution of individual SNPs to two of the identified signals, but since these SNPs did not remain significant after multiple-testing correction and did not explain the linkage peaks, more work is needed to confirm these exploratory results and to identify the causal variants underlying these linkage peaks.
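The variance-component linkage analysis described above cannot be reproduced in a few lines, but the underlying idea of a gene-by-smoking interaction can be illustrated with an ordinary regression that includes a genotype × smoking product term. The sketch below is a simplified stand-in under that assumption, with invented variable names and simulated data, not the study's method or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Hypothetical data: additive genotype coding (0/1/2 minor alleles) and smoking status.
genotype = rng.integers(0, 3, size=n)
smoker = rng.integers(0, 2, size=n)
# Simulate SBP with a genotype effect that is present only in smokers.
sbp = 120 + 2.0 * genotype * smoker + rng.normal(0, 10, size=n)
df = pd.DataFrame({"sbp": sbp, "genotype": genotype, "smoker": smoker})

# 'genotype * smoker' expands to both main effects plus the genotype:smoker product;
# the p-value of that product term is a simple test of gene-by-smoking interaction.
fit = smf.ols("sbp ~ genotype * smoker", data=df).fit()
print(fit.pvalues["genotype:smoker"])
```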
Abstract:
The hierarchical linear growth model (HLGM) is a flexible and powerful analytic method that has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Researchers who use HLGM are mostly interested in the treatment effect on individual trajectories, which is captured by the cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction in HLGM only indicates whether there is a significant group difference in the average rate of change, rate of acceleration or a higher-order polynomial effect; it conveys no information about the magnitude of the difference between the group trajectories at a specific time point. Reporting and interpreting effect sizes has therefore received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for comparing group trajectories, together with their confidence intervals, because appropriate standard functions for estimating effect sizes for the model-implied difference between group trajectories in HLGM are lacking, and the popular statistical software packages do not compute them automatically.

The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time point, and we also suggest robust effect sizes to reduce the bias of the estimates. We then apply the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compare three methods of constructing confidence intervals around d and du, and recommend the best one for applied use. Finally, we construct 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicate that, even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Effect sizes between group trajectories in an HLGM analysis therefore provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compare three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect-size estimates with respect to the population parameter. We recommend the noncentral-t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
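The dissertation's exact estimators are not stated in the abstract, but for a two-level linear growth model with a binary group indicator the model-implied group difference at a given time, and a plausible standardized effect size for it, take the following form. This is a sketch of one common parameterization, not necessarily the functions proposed in the project.

```latex
% Level 1 (within person i, occasion t):
Y_{ti} = \beta_{0i} + \beta_{1i}\,t + e_{ti}
% Level 2 (between persons, G_i = 0/1 group indicator):
\beta_{0i} = \gamma_{00} + \gamma_{01} G_i + u_{0i}, \qquad
\beta_{1i} = \gamma_{10} + \gamma_{11} G_i + u_{1i}

% Model-implied difference between group trajectories at time t:
\Delta(t) = \gamma_{01} + \gamma_{11}\, t

% Standardized effect size at time t, dividing by a pooled outcome SD:
d(t) = \frac{\hat{\gamma}_{01} + \hat{\gamma}_{11}\, t}{\hat{\sigma}_{Y}}
```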
Abstract:
We developed a new FPGA-based method for coincidence detection in positron emission tomography. The method requires few device resources and no specific peripherals in order to resolve coincident digital pulses within a time window of a few nanoseconds. It was validated on a low-end Xilinx Spartan-3E and provided coincidence resolutions below 6 ns. This resolution depends directly on the signal propagation properties of the target device and the maximum available clock frequency, and is therefore expected to improve considerably on higher-end FPGAs.
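The FPGA implementation itself is hardware-specific, but the coincidence criterion it realizes can be sketched in software: two events are paired when their timestamps fall within the coincidence window. The following is an illustrative behavioral model only; the timestamps, window width and variable names are assumptions, not the paper's design.

```python
def find_coincidences(ts_a, ts_b, window_ns=6.0):
    """Return index pairs (i, j) of events in two sorted timestamp lists (ns)
    whose time difference is within the coincidence window."""
    pairs = []
    j = 0
    for i, ta in enumerate(ts_a):
        # Advance j past events in channel B that are too early to match ta.
        while j < len(ts_b) and ts_b[j] < ta - window_ns:
            j += 1
        k = j
        while k < len(ts_b) and ts_b[k] <= ta + window_ns:
            pairs.append((i, k))
            k += 1
    return pairs

# Two detector channels; only the first and third events of channel A coincide.
channel_a = [100.0, 250.0, 400.0]
channel_b = [103.0, 180.0, 402.5]
print(find_coincidences(channel_a, channel_b))  # [(0, 0), (2, 2)]
```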
Abstract:
Slot antenna arrays are well-known systems dating from the 1940s, originally intended for the radar systems of large warships and for terrestrial stations where size and weight were not highly restrictive. Over the years, mainly thanks to significant advances in materials and manufacturing methods, the range of applications of this type of radiating system has grown considerably: new biomedical technologies, collision-avoidance systems in cars, aircraft navigation, short-range high-bit-rate communication links and even satellite-borne systems for television broadcast. Within this family of antennas, two groups stand out as the most widely used: parallel-plate antennas with slots placed in a circular or spiral distribution, and clusters of waveguide-fed linear arrays. Continuing the extensive research carried out during the last decades at the Tokyo Institute of Technology and in the Radiation Group at the Universidad Politécnica de Madrid, this thesis focuses on the latter group, although, as will be seen, it departs substantially from conventional design techniques and methodologies.

Arrays of slots that are straight and parallel to the axis of the feeding rectangular waveguide are without doubt the most widely used models because of their reliability at high frequencies, their ability to handle large amounts of power and their simplicity of design and manufacture. However, they also have disadvantages, such as a narrow return-loss bandwidth and rapid degradation of the radiation pattern with frequency. These are due to the resonant nature of the radiating elements: away from resonance, the overall system becomes mismatched and its performance and radiation pattern deteriorate. In two-dimensional arrays of straight slots, the electric field is polarized in the plane transverse to the radiators, which corresponds to the plane of high side-lobe level. This thesis aims to develop a systematic method for designing arrays of inclined and displaced slots (hereinafter "compound slots"), identified in 1971 as one of the challenges to be overcome in antenna design. The technique used is based on the Method of Moments, Circuit Theory and the Theory of Scattering Matrix Connection. Since it is a circuit-based method, the first part of this dissertation studies the applicability of the basic equivalent networks, their ability to reproduce the physical phenomena of the slot, and the limitations and advantages they present for characterizing the different compound-slot configurations. It examines the differences between the T- and Pi-networks and determines which is the most suitable depending on the type of radiating element.

Once the type of network to be used in the system design has been selected, a progressive cascading algorithm, called the Forward Matching Procedure, has been developed to connect the equivalent networks from the feeding port to the short-circuited termination. This algorithm is independent of the number of elements, the central operating frequency, the inclination angle of the slots and the selected equivalent network (T or Pi). It is based on formulating the array design as a Constraint Satisfaction Problem, which is solved by a backtracking algorithm. As a result, it returns an equivalent circuit of the complete array that is matched at its input port and whose elements consume power according to a given amplitude distribution for the array. In any antenna array, the mutual coupling between elements through the radiated field is one of the main problems the engineer faces, and its effects degrade the overall performance of the system, both in matching and in radiation capability. The use of an equivalent circuit for array design had been discarded by some authors because of the difficulty of characterizing these coupling effects and including them in the design stage. In this doctoral thesis the coupling has also been modeled as an equivalent network, whose elements are ideal transformers and admittances, connected to the set of equivalent networks that represents the array. Comparing the estimated results, in terms of return loss and radiation, with those obtained from popular commercial software such as CST Microwave Studio confirms the validity of the method proposed here: the first systematic design method for compound-slot arrays fed by rectangular waveguide. Since these slots do not operate at resonance, the return-loss bandwidth is much wider than that of straight-slot arrays. For two-dimensional arrays, the inclination angle can be adjusted so that the field is polarized in the planes of low side-lobe level. In addition to full-wave simulations, two prototypes of six and ten elements, centered at 12 GHz, have been designed, built and measured. The measured return loss and radiation patterns show excellent results and agreement with the predictions, certifying the soundness of the novel Method of Moments - Forward Matching Procedure developed throughout this thesis.
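The Forward Matching Procedure itself involves waveguide-specific circuit models, but its core search strategy, formulating the design as a Constraint Satisfaction Problem and solving it by backtracking, can be shown generically. The sketch below is a schematic, general-purpose backtracking CSP solver, not the thesis algorithm; in the antenna problem the variables would be the slot parameters and the constraints the matching and power-distribution conditions.

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Generic backtracking search for a Constraint Satisfaction Problem.

    variables  -- ordered list of variable names
    domains    -- dict mapping each variable to its list of candidate values
    consistent -- function(partial_assignment) -> bool, True if no constraint
                  is violated so far
    Returns a complete assignment (dict) or None if no solution exists.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = variables[len(assignment)]           # next unassigned variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):             # keep only consistent partial designs
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                    # backtrack and try the next candidate
    return None

# Toy example: choose three values summing to exactly 10, pruning partial sums above 10.
vars_ = ["x1", "x2", "x3"]
doms = {v: list(range(1, 8)) for v in vars_}
ok = lambda a: sum(a.values()) <= 10 and (len(a) < 3 or sum(a.values()) == 10)
print(backtracking_search(vars_, doms, ok))    # {'x1': 1, 'x2': 2, 'x3': 7}
```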
Abstract:
The development of reliable clonal propagation technologies is a prerequisite for practising multi-varietal forestry (MVF). Somatic embryogenesis is considered the tissue-culture-based method most suitable for operational breeding of forest trees. Vegetative propagation is very difficult when tissues are taken from mature donors, making clonal propagation of selected trees almost impossible. We have been able to induce somatic embryogenesis in leaves taken from mature oak trees, including cork oak (Quercus suber). This important species of the Mediterranean ecosystem produces cork regularly, which gives it significant economic value. In a previous paper we reported the establishment of a field trial to compare the growth of plants of somatic versus zygotic origin, and of somatic plants from mature trees versus somatic plants from juvenile seedlings. For that purpose, somatic seedlings were regenerated by somatic embryogenesis from five selected cork oak trees and from young plants of their half-sib progenies. They were planted in the field together with acorn-derived plants of the same families. After the first growth period, seedlings of zygotic origin reached twice the height of somatic seedlings, while somatic plants of adult and juvenile origin showed similar growth. Here we provide data on height and diameter increments after two additional growth periods. In the second period, the growth parameters of zygotic seedlings were again significantly higher than those of somatic ones, but there were no significant differences in height increment between zygotic seedlings and somatic plants of mature origin. In the third growth period, the height and diameter increments of somatic seedlings cloned from the selected trees did not differ from those of zygotic seedlings, which were still higher than those of plants obtained from somatic embryos of the sexual progeny. Therefore, somatic seedlings of mature origin do not seem to be affected by a possible ageing effect, and plants from somatic embryos tend to make up for the initial advantage of plants from acorns.
Abstract:
With this thesis, "Development of a Uniform Theory of Diffraction for Scattered and Surface Electromagnetic Field Analysis on a Cylinder", we have initiated a line of investigation whose goal is to answer the following question: what is the surface impedance that describes a perfect electric conductor (PEC) convex structure covered by a material coating? These studies are of current and future interest for predicting the electromagnetic (EM) fields incident on, radiated by or propagating along locally smooth convex parts of metallic structures with a material coating, or lossy metallic surfaces, as required, for example, in aerospace, maritime and automotive applications. Moreover, from a theoretical point of view, the surface impedance characterization of PEC surfaces with or without a material coating is a generalization of independent solutions for both types of problem.

A uniform geometrical theory of diffraction (UTD) is developed in this thesis for analyzing the canonical problem of the EM scattered and surface fields on an electrically large circular cylinder with an impedance boundary condition (IBC) in the high-frequency regime, by means of a surface impedance characterization. The construction of a UTD solution for this canonical problem is crucial for the development of the corresponding UTD solution for the more general case of an arbitrary smooth convex surface, via the principle of localization of high-frequency EM fields. The work of this doctoral thesis has been carried out through a series of milestones that are enumerated below, emphasizing the main contributions it has given rise to.

Initially, an in-depth review of the state of the art of asymptotic methods is presented, with numerous references, so that any reader can follow the history of geometrical optics (GO) and the geometrical theory of diffraction (GTD), which led to the development of the UTD. The UTD is then investigated in depth and the main studies found in the literature are discussed. This chapter places us in a position to state that, as far as we know, nobody has previously attempted a rigorous investigation of the surface impedance characterization of material-coated PEC convex structures via the UTD.

First, a UTD solution is developed for the canonical problem of EM scattering by an electrically large circular cylinder with a uniform IBC illuminated by an obliquely incident high-frequency plane wave. A solution to this canonical problem is first constructed in terms of an exact formulation involving a radially propagating eigenfunction expansion. The latter is then converted, via the Watson transformation, into a circumferentially propagating eigenfunction expansion better suited to large cylinders, which is expressed as an integral that is subsequently evaluated asymptotically, for high frequencies, in a uniform manner. The resulting solution is expressed in the desired UTD ray form. The solution is uniform in the sense that it has the important property of remaining continuous across the transition region on either side of the surface shadow boundary. Outside the shadow-boundary transition region it recovers the purely ray-optical incident and reflected fields on the deep lit side of the shadow boundary and the modal surface-diffracted ray fields on the deep shadow side. Because of the IBC, the scattered field contains a cross-polarized component due to coupling between the TEz and TMz waves (where z is the cylinder axis). This cross-polarization vanishes for normal incidence on the cylinder, and also in the deep lit region for oblique incidence, where the field properly reduces to the GO, or ray-optical, solution. The UTD solution is shown to be very accurate by numerical comparison with an exact reference solution.

Next, an effective IBC is developed for the EM field scattered by a coated PEC circular cylinder illuminated by an obliquely incident plane wave. Two surface impedances are derived in direct relation to the TM and TE surface and creeping wave modes excited on a coated cylinder. The TM and TE surface impedances are coupled at oblique incidence and depend on the geometry of the problem and on the wavenumbers. In addition, a constant surface impedance is derived, although it takes different values depending on whether the observation point lies in the lit or in the shadow region. A UTD solution is then presented for the scattering of an obliquely incident plane wave by an electrically large, smooth, convex coated PEC cylinder, via a generalization of the canonical circular-cylinder problem. The asymptotic solution is uniform because it remains continuous across the transition region, in the vicinity of the shadow boundary, and it recovers the ray-optical solution in the deep lit region and the creeping-wave formulation within the deep shadow region. When a coating is present, a cross-polar field term is excited, which vanishes at normal incidence and in the deep lit region. The limitations of the effective surface impedance formulas are discussed at length, and the UTD solution is compared with reference solutions, showing very good agreement.

Thirdly, an effective surface impedance approach is introduced for determining the surface fields on an electrically large coated metallic circular cylinder. The main differences between a rigorously treated dielectric-coated PEC cylinder and a cylinder with an IBC are discussed. While for the impedance cylinder a single constant, or uniform, surface impedance is considered, for the coated metallic cylinder two surface impedances are derived. These are associated with the TM and TE creeping-wave modes excited on the cylinder and depend on the positions and orientations of the observation point and the source. With this in mind, a UTD-based solution with an IBC is derived for the surface fields, taking the variation of the surface impedance into account. The asymptotic expansion is performed, via the Watson transformation, on the appropriate series representation of the Green's functions, thus avoiding higher-order derivatives of Fock-type integrals and yielding a fast and accurate solution. Numerical examples show very good accuracy for large cylinders when the separation between the observation point and the source is large. This solution, together with the method of moments (MoM), can therefore be applied to the efficient mutual-coupling analysis of large conformal microstrip patch-antenna arrays.

The proposed UTD methods for analyzing the scattered and surface EM fields on a coated PEC cylinder with an effective IBC are a first step toward the generalization of a UTD solution for arbitrarily convex smooth metallic surfaces covered by a material coating and illuminated by an arbitrary EM source.
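For reference, the impedance boundary condition used throughout, and the standard first-order surface impedance of a thin lossless dielectric layer on a planar PEC backing (the textbook short-circuited transmission-line result, quoted here as context and not as the thesis's TM/TE expressions for the coated cylinder), read:

```latex
% Impedance boundary condition on the surface (outward unit normal \hat{n}):
\vec{E}_{\tan} = Z_s\,\hat{n}\times\vec{H}_{\tan}

% First-order surface impedance of a PEC backed by a lossless dielectric layer of
% thickness d, intrinsic impedance \eta_c and wavenumber k_c (planar, normal incidence):
Z_s \approx j\,\eta_c \tan(k_c d)
```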
Abstract:
Background and aims: The high metal bioavailability and poor condition of mine soils yield a low plant biomass, limiting the application of phytoremediation techniques. A greenhouse experiment was performed to evaluate the effects of organic amendments on metal stabilization and the potential of Brassica juncea L. for phytostabilization in mine soils. Methods: Plants were grown in pots filled with soils collected from two mine sites in central Spain, mixed with 0, 30 and 60 t ha⁻¹ of pine bark compost and of horse- and sheep-manure compost. Plant biomass and metal concentrations in roots and shoots were measured. Metal bioavailability was assessed using a rhizosphere-based method (rhizo), which consists of extraction with a mixture of low-molecular-weight organic acids that simulates root exudates. Results: Manure reduced metal concentrations in shoots (10–50 % reduction of Cu and 40–80 % of Zn in comparison with non-amended soils), the bioconcentration factor (10–50 % for Cu and 40–80 % for Zn) and metal bioavailability in soil (40–50 % for Cu and 10–30 % for Zn), owing to its high pH and contribution of organic matter. Manure improved soil fertility and also increased plant biomass (5–20 times in shoots and 3–30 times in roots), which resulted in a greater amount of metal being removed from the soil and accumulated in roots (a 2–7-fold increase for Cu and Zn). Plants grown in the pine bark treatments and in non-amended soils showed limited biomass and high metal concentrations in shoots. Conclusions: The addition of manure could be effective for stabilizing metals and for enhancing the phytostabilization ability of B. juncea in mine soils. In this study, this species proved to be a potential candidate for phytostabilization in combination with manure, in contrast with previous results in which B. juncea had been regarded as a phytoextraction plant.
Abstract:
Effective data summarization methods that use AI techniques can help humans understand large sets of data. In this paper, we describe a knowledge-based method for automatically generating summaries of geospatial and temporal data, i.e., data with geographical and temporal references. The method is useful for summarizing data streams, such as GPS traces and traffic information, that are becoming more prevalent with the increasing use of sensors in computing devices. The method presented here is an initial architecture for our ongoing research in this domain. In this paper we describe the data representations we have designed for our method and our implementations of the components that perform data abstraction and natural language generation. We also discuss evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
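As a toy illustration of the two components mentioned above (data abstraction followed by natural language generation), the sketch below reduces a hypothetical GPS trace to a few aggregate facts and renders them with a simple template. The names, fields and template are invented for illustration and are not the paper's architecture.

```python
from math import hypot

def abstract_trace(points):
    """Data abstraction: collapse a list of (t_seconds, x_m, y_m) samples into
    a small set of summary facts (duration, path length, average speed)."""
    duration = points[-1][0] - points[0][0]
    length = sum(hypot(x2 - x1, y2 - y1)
                 for (_, x1, y1), (_, x2, y2) in zip(points, points[1:]))
    return {"duration_s": duration,
            "length_m": length,
            "avg_speed_ms": length / duration if duration else 0.0}

def generate_text(facts):
    """Natural language generation: verbalize the abstracted facts with a template."""
    return (f"The trip lasted {facts['duration_s']:.0f} s, covered "
            f"{facts['length_m']:.0f} m and averaged {facts['avg_speed_ms']:.1f} m/s.")

trace = [(0, 0.0, 0.0), (60, 300.0, 400.0), (120, 600.0, 800.0)]
print(generate_text(abstract_trace(trace)))
# The trip lasted 120 s, covered 1000 m and averaged 8.3 m/s.
```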
Abstract:
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
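The core relationship exploited by both learning methods, written here with a single conditioning variable for brevity, is the definition of the conditional density as a quotient of MoP approximations of the joint and the marginal. This is a generic statement of the setup, not the paper's specific estimation algorithms:

```latex
\hat{f}(y \mid x) \;=\; \frac{\hat{f}_{\mathrm{MoP}}(x, y)}{\hat{g}_{\mathrm{MoP}}(x)},
\qquad
\hat{f}_{\mathrm{MoP}}(x, y) \;=\; \sum_{i,j} a_{ij}\, x^{i} y^{j}
\quad \text{on each hyper-rectangle of the partition,}
```

where \hat{g}_{MoP} is the MoP approximation of the marginal density of the conditioning variable; the two proposed approaches differ in how this quotient is itself re-approximated as a MoP.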
Abstract:
Transactional systems such as Enterprise Resource Planning (ERP) systems have been implemented widely, while analytical software such as Supply Chain Management (SCM) add-ons has been adopted less by manufacturing companies. Although significant benefits are reported from SCM software implementations, companies are reluctant to invest in such systems. On the one hand this is due to the lack of methods able to detect the benefits of using SCM software, and on the other hand the associated costs are not identified, detailed and quantified sufficiently. Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant IT investment can be avoided. Therefore, the evaluation of these coordination procedures, and in particular of the cost due to iterations, is of high managerial interest, and corresponding methods are comprehensive tools for strategic IT decision making. The purpose of this research is to provide evaluation methods that allow the comparison of different organizational forms and software support levels.

The research begins with a comprehensive introduction dealing with the business environment that industrial networks face and concludes by highlighting the challenges for the supply chain software industry. The central terminology is then addressed, focusing on organization theory, the peculiarities of IT investment and the typology of supply chain management software. The literature review classifies recent supply chain management research on organizational design and its software support; the classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management researchers carry out cost-benefit analyses of IT software implementations. The literature review reveals that the success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. The reviewed literature is mostly centered on the benefits of SCM software implementations; however, ERP-based supply chain coordination is still widespread industrial practice, yet the associated coordination cost has not been addressed by researchers.

The fundamentals of efficient organizational design are explained in as much detail as is required to understand the synthesis of different organizational forms. Several coordination schemes have been generated by varying the following design parameters: organizational structure, coordination mechanisms and software support. The different organizational proposals are evaluated using a heuristic approach and a simulation-based method; in both cases, the principles of organization theory are respected. A lack of performance is due to dependencies between activities that are not managed properly. Within the heuristic method, dependencies are therefore classified and their intensity is measured on the basis of contextual factors. The suitability of each organizational design element for the management of a specific dependency is then determined. Finally, each organizational form is evaluated on the basis of the contribution of its design elements to coordination benefit and to coordination cost. Coordination benefit refers to the improvement in logistic performance; this is the core concept of most supply chain evaluation models. The coordination cost that must be incurred to achieve the benefits, by contrast, is usually not considered in detail. Iterative processes are costly when executed manually, which is the case when SCM software is not implemented and the ERP system is the only available coordination instrument. The heuristic model provides a simplified procedure for the classification of dependencies, the quantification of influence factors and the systematic identification of configurations that call for more or less complex organizational forms and IT support.

Discrete-event simulation is applied in the second evaluation model using the software package Plant Simulation. On the one hand, logistic performance is measured in terms of manufacturing, inventory and transportation costs and penalties for lost sales; on the other hand, coordination cost is explicitly quantified, taking iterative coordination cycles into account. The method is applied to an exemplary supply chain configuration considering various parameter settings. The simulation results confirm that, in most cases, benefit increases when coordination is intensified. However, in some situations in which manual, iterative planning cycles are applied, the additional coordination cost does not always lead to improved logistic performance. These unexpected results cannot be attributed to any particular parameter. The research confirms the great importance of dimensions that have so far been disregarded when evaluating SCM concepts and IT tools. The heuristic method provides a quick, but only approximate, comparison of coordination efficiency for different organizational forms. In contrast, the more complex simulation method delivers detailed results, taking into consideration specific parameter settings of the network context and the organizational design.
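A deliberately simplified sketch of the heuristic evaluation logic described above: each organizational form is scored by the contribution of its design elements to coordination benefit minus their contribution to coordination cost, weighted by dependency intensity. All names, weights and numbers below are invented; the actual model classifies dependencies and derives their intensities from contextual factors.

```python
# Hypothetical dependency intensities (0..1), e.g. derived from contextual factors.
dependencies = {"demand_volatility": 0.8, "capacity_sharing": 0.5}

# For each organizational form: per-dependency (benefit, cost) contributions of its
# design elements (structure, coordination mechanism, IT support), on a common scale.
forms = {
    "ERP_only_iterative_planning": {"demand_volatility": (0.4, 0.5),
                                    "capacity_sharing":  (0.3, 0.2)},
    "ERP_plus_SCM_software":       {"demand_volatility": (0.7, 0.3),
                                    "capacity_sharing":  (0.6, 0.3)},
}

def net_score(contribs, intensities):
    """Net coordination value: intensity-weighted benefit minus cost."""
    return sum(w * (benefit - cost)
               for dep, w in intensities.items()
               for benefit, cost in [contribs[dep]])

for name, contribs in forms.items():
    print(name, round(net_score(contribs, dependencies), 2))
# ERP_only_iterative_planning -0.03
# ERP_plus_SCM_software 0.47
```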
Abstract:
Epidemiological evidence has suggested that some pediatric leukemias may be initiated in utero and, for some pairs of identical twins with concordant leukemia, this possibility has been strongly endorsed by molecular studies of clonality. Direct evidence for a prenatal origin can only be derived by prospective or retrospective detection of leukemia-specific molecular abnormalities in fetal or newborn samples. We report a PCR-based method that has been developed to scrutinize neonatal blood spots (Guthrie cards) for the presence of numerically infrequent leukemic cells at birth in individuals who subsequently developed leukemia. We demonstrate that unique or clonotypic MLL-AF4 genomic fusion sequences are present and detectable in neonatal blood spots from individuals who were diagnosed with acute lymphoblastic leukemia at ages 5 months to 2 years and, therefore, have arisen during fetal hematopoiesis in utero. This result provides unequivocal evidence for a prenatal initiation of acute leukemia in young patients. The method should be applicable to other fusion genes in children with common subtypes of leukemia and will be of value in attempts to unravel the natural history and etiology of this major subtype of pediatric cancer.
Abstract:
Macromolecular transport systems in bacteria currently are classified by function and sequence comparisons into five basic types. In this classification system, type II and type IV secretion systems both possess members of a superfamily of genes for putative NTP hydrolase (NTPase) proteins that are strikingly similar in structure, function, and sequence. These include VirB11, TrbB, TraG, GspE, PilB, PilT, and ComG1. The predicted protein product of tadA, a recently discovered gene required for tenacious adherence of Actinobacillus actinomycetemcomitans, also has significant sequence similarity to members of this superfamily and to several unclassified and uncharacterized gene products of both Archaea and Bacteria. To understand the relationship of tadA and tadA-like genes to those encoding the putative NTPases of type II/IV secretion, we used a phylogenetic approach to obtain a genealogy of 148 NTPase genes and reconstruct a scenario of gene superfamily evolution. In this phylogeny, clear distinctions can be made between type II and type IV families and their constituent subfamilies. In addition, the subgroup containing tadA constitutes a novel and extremely widespread subfamily of the family encompassing all putative NTPases of type IV secretion systems. We report diagnostic amino acid residue positions for each major monophyletic family and subfamily in the phylogenetic tree, and we propose an easy method for precisely classifying and naming putative NTPase genes based on phylogeny. This molecular key-based method can be applied to other gene superfamilies and represents a valuable tool for genome analysis.
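The "molecular key" idea, assigning a sequence to a family or subfamily by checking diagnostic residues at fixed alignment positions, can be sketched as a simple lookup. The positions and residues below are placeholders, not the diagnostic sets reported in the paper.

```python
# Hypothetical diagnostic keys: alignment position (1-based) -> allowed residue(s).
DIAGNOSTIC_KEYS = {
    "TadA-like subfamily": {57: "K", 102: "DE", 160: "G"},
    "VirB11 family":       {57: "R", 102: "E",  160: "A"},
}

def classify(aligned_seq):
    """Return the names of all (sub)families whose diagnostic residues the
    aligned sequence matches at every key position."""
    hits = []
    for family, key in DIAGNOSTIC_KEYS.items():
        if all(len(aligned_seq) >= pos and aligned_seq[pos - 1] in allowed
               for pos, allowed in key.items()):
            hits.append(family)
    return hits or ["unclassified"]

# Usage with a made-up aligned sequence of length 200.
seq = "M" * 56 + "K" + "A" * 44 + "D" + "L" * 57 + "G" + "W" * 40
print(classify(seq))  # ['TadA-like subfamily']
```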
Abstract:
A key step in the regulation of networks that control gene expression is the sequence-specific binding of transcription factors to their DNA recognition sites. A more complete understanding of these DNA–protein interactions will permit a more comprehensive and quantitative mapping of the regulatory pathways within cells, as well as a deeper understanding of the potential functions of individual genes regulated by newly identified DNA-binding sites. Here we describe a DNA microarray-based method to characterize sequence-specific DNA recognition by zinc-finger proteins. A phage display library, prepared by randomizing critical amino acid residues in the second of three fingers of the mouse Zif268 domain, provided a rich source of zinc-finger proteins with variant DNA-binding specificities. Microarrays containing all possible 3-bp binding sites for the variable zinc fingers permitted the quantitation of the binding site preferences of the entire library, pools of zinc fingers corresponding to different rounds of selection from this library, as well as individual Zif268 variants that were isolated from the library by using specific DNA sequences. The results demonstrate the feasibility of using DNA microarrays for genome-wide identification of putative transcription factor-binding sites.
Abstract:
Matrix-assisted laser desorption/ionization (MALDI) time-of-flight mass spectrometry was used to detect and order DNA fragments generated by Sanger dideoxy cycle sequencing. This was accomplished by improving the sensitivity and resolution of the MALDI method using a delayed ion extraction technique (DE-MALDI). The cycle sequencing chemistry was optimized to produce as much as 100 fmol of each specific dideoxy-terminated fragment, generated by extension of a 13-base primer annealed to 40- and 50-base templates. Analysis of the resultant sequencing mixture by DE-MALDI identified the appropriate termination products. The technique provides a new non-gel-based method for sequencing DNA that may ultimately have considerable speed advantages over traditional methodologies.
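Reading a sequence from such a spectrum amounts to sorting the dideoxy-terminated fragment masses and translating each mass difference into the nucleotide residue that was added. The sketch below illustrates this with approximate average residue masses (about 313.2, 289.2, 329.2 and 304.2 Da for A, C, G and T); the masses, tolerance and example ladder are illustrative values, not data from the paper.

```python
# Approximate average masses (Da) added per nucleotide residue in a DNA chain.
RESIDUE_MASS = {"A": 313.2, "C": 289.2, "G": 329.2, "T": 304.2}

def call_sequence(fragment_masses, tolerance=1.0):
    """Sort the fragment masses of a Sanger ladder and convert consecutive mass
    differences into bases; a difference matching no residue yields 'N'."""
    bases = []
    masses = sorted(fragment_masses)
    for lighter, heavier in zip(masses, masses[1:]):
        diff = heavier - lighter
        base = next((b for b, m in RESIDUE_MASS.items() if abs(diff - m) <= tolerance), "N")
        bases.append(base)
    return "".join(bases)

# Hypothetical ladder: primer-extension products each one residue longer than the last.
ladder = [4000.0, 4313.2, 4602.4, 4906.6, 5235.8]
print(call_sequence(ladder))  # ACTG
```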