926 results for complex polymerization method
Abstract:
Genetic anticipation is defined as a decrease in age of onset, or an increase in severity, as a disorder is transmitted through successive generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy, and Fragile X Syndrome, was shown to be caused by the expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g. Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia), and other complex diseases.

Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. We have therefore developed family-based likelihood modeling approaches that model the underlying transmission of the disease gene and the penetrance function, and hence detect anticipation. These methods can be applied in extended families, improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model and captures anticipation through a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring, and is appropriate for modeling the known triplet repeat diseases, in which disease alleles can become more deleterious as they are transmitted across generations.

To evaluate the new methods, we performed extensive simulation studies on data generated under different conditions to assess the effectiveness of the algorithms in detecting genetic anticipation.
Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. Application of the first method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
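The empirical-power idea above can be illustrated with a deliberately simplified simulation. This is not the dissertation's regressive logistic hazard model: onset-age distributions, the paired one-sided t-test, and all parameter values below are assumptions chosen for the sketch.

```python
import numpy as np

def simulate_power(n_fam=50, shift=8.0, n_sims=500, crit=1.677, seed=0):
    """Empirical power of detecting earlier onset in offspring.

    Toy stand-in for the likelihood methods described in the abstract:
    parental onset ages ~ N(50, 10) years, offspring onset shifted earlier
    by `shift` years, tested with a one-sided paired t statistic against
    the critical value t_{0.95, 49} ~ 1.677 (for n_fam = 50).
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        parents = rng.normal(50.0, 10.0, n_fam)
        children = rng.normal(50.0 - shift, 10.0, n_fam)
        d = parents - children                       # positive => anticipation
        t = d.mean() / (d.std(ddof=1) / np.sqrt(n_fam))
        if t > crit:
            hits += 1
    return hits / n_sims                             # fraction of rejections
```

With an 8-year generational shift this toy test rejects in nearly every replicate, while with no shift the rejection rate stays near the nominal 5% level, which is the sense in which the abstract reports "empirical power" against a "type I error critical value".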
Abstract:
In my recent experimental research on wholesale electricity auctions, I discovered that the complex structure of the offers leaves a lot of room for strategic behavior, which in turn leads to anti-competitive and inefficient outcomes in the market. A specific feature of these complex-offer auctions is that sellers submit not only the quantities and the minimum prices at which they are willing to sell, but also start-up fees designed to reimburse the fixed start-up costs of the generation plants. In this paper, using the experimental method, I compare the performance of two complex-offer auctions (COAs) against the performance of a simple-offer auction (SOA), in which sellers have to recover all their generation costs, fixed and variable, through a uniform market-clearing price. I find that the SOA significantly reduces consumer prices and lowers price volatility. It mitigates the anti-competitive effects present in the COAs and achieves allocative efficiency more quickly.
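The uniform market-clearing price of a simple-offer auction can be sketched in a few lines. The data format and function below are hypothetical illustrations; the experimental institutions in the paper (start-up fees, block offers) carry more structure than this.

```python
def clear_uniform_price(offers, demand):
    """Uniform-price clearing for a simple-offer auction (SOA).

    offers: list of (price, quantity) asks; demand: inelastic total quantity.
    Returns (clearing_price, dispatch), where dispatch maps an offer's index
    in the price-sorted order to its dispatched quantity, and the last
    accepted offer sets the uniform price paid to all dispatched sellers.
    """
    dispatch = {}
    remaining = demand
    price = None
    for i, (p, q) in enumerate(sorted(offers, key=lambda o: o[0])):
        if remaining <= 0:
            break
        take = min(q, remaining)
        dispatch[i] = take
        remaining -= take
        price = p                     # marginal (last accepted) offer price
    if remaining > 0:
        raise ValueError("demand exceeds offered supply")
    return price, dispatch
```

For example, with asks (20, 50), (10, 100), (30, 50) and demand 120, the cheap 100 MW is dispatched first and 20 MW of the 20-priced offer clears the market, so every dispatched seller is paid 20.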
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and produces false-positive results. Although this problem can be addressed through approaches such as Bonferroni correction, permutation testing, and false discovery rates, patterns of joint effects from several genes, each with a weak effect, may still go undetected. With the availability of high-throughput genotyping technology, it has become possible to search for multiple scattered SNPs over the whole genome and to model their joint effect on the target variable. An exhaustive search of all SNP subsets is computationally infeasible for the millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset in large data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we took two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method; then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.
In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that an exhaustive search over small subsets, with one SNP, two SNPs, or three-SNP subsets drawn from the best 100 composite 2-SNP pairs, can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always improve the performance of SNP subsets. Although sequential forward floating selection can be applied to avoid the nesting effect of forward selection, it does not always outperform the latter, due to overfitting from exploring more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to traditional LDA in this study.

The best test-probability HMSS for predicting CVD, stroke, CAD, and psoriasis through sIB is 0.59406, 0.641815, 0.645315, and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls reaches 0.708999, 0.863216, 0.639918, and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing disease among cases reaches 0.748644, 0.789916, 0.705701, and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.
A further genome-wide association analysis using chi-square tests shows no significant SNPs at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. In the WTCCC data, only two significant SNPs associated with CAD are detected. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy, complete descriptions of the classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which is currently available for our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
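The HMSS criterion used throughout the abstract is easy to state concretely. The function below is a straightforward reading of "harmonic mean of sensitivity and specificity" from confusion-matrix counts; the example counts are invented.

```python
def hmss(tp, fn, tn, fp):
    """Harmonic mean of sensitivity and specificity (HMSS).

    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP).
    Unlike plain accuracy, HMSS is not dominated by the majority class,
    which is why it suits imbalanced case/control data: a classifier that
    labels everything "control" gets specificity 1 but sensitivity 0,
    and therefore HMSS 0.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    if sens + spec == 0:
        return 0.0
    return 2 * sens * spec / (sens + spec)
```

For instance, a classifier with sensitivity 0.8 and specificity 0.9 scores HMSS = 2(0.72)/1.7 ≈ 0.847, while the degenerate all-control classifier scores 0 no matter how imbalanced the data are.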
Abstract:
The Houston region is home to arguably the largest petrochemical and refining complex anywhere. The effluent of this complex includes many potentially hazardous compounds. Study of some of these compounds has led to the recognition that a number of known and probable carcinogens are present at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found in concentrations that may pose a health risk for residents of Houston.

Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of the literature has been critical of local regulatory agencies' oversight of industrial pollution. A number of citizens in the region have begun to volunteer with air quality advocacy groups in the testing of community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter, and airborne toxics. This study is an evaluation of a technique that has been successfully applied to airborne toxics.

This technique, solid phase microextraction (SPME), has been used to measure airborne volatile organic hydrocarbons at community-level concentrations. It has yielded accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to the measurement of airborne benzene exist in the literature; none have been found for airborne 1,3-butadiene. These compounds were selected to evaluate SPME as a community-deployed technique: to replicate previous applications to benzene, to expand the application to 1,3-butadiene, and because of the salience of these compounds in this community.

This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston. Laboratory background levels precluded recommendation of the technique for benzene.
One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. This study indicates that these variables affect instrument response, which suggests that calibration must be performed under matching temperature and humidity conditions. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
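The calibration issue described above, instrument response depending on an environmental covariate as well as on concentration, can be sketched as an ordinary least-squares fit and its inversion. The linear model form and all coefficients here are illustrative assumptions, not the study's measured calibration.

```python
import numpy as np

def fit_calibration(conc, temp, response):
    """Fit response = b0 + b1*conc + b2*temp by least squares.

    Hypothetical calibration model: instrument response as a linear
    function of 1,3-butadiene concentration and temperature, reflecting
    the finding that temperature shifts the response.
    """
    X = np.column_stack([np.ones_like(conc), conc, temp])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return beta                       # (intercept, conc slope, temp effect)

def invert(beta, response, temp):
    """Estimate concentration from a field response at a known temperature."""
    b0, b1, b2 = beta
    return (response - b0 - b2 * temp) / b1
```

The inversion step is why calibration "within specific conditions of these variables" matters: applying a fit from one temperature regime to responses collected in another biases the recovered concentration.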
Abstract:
The Centers for Disease Control and Prevention estimates that foodborne diseases cause approximately 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States each year. The American public is becoming more health conscious, and there has been an increase in the dietary intake of fresh fruits and vegetables. Affluence and demand for convenience have allowed consumers to opt for pre-processed, packaged fresh fruits and vegetables. These pre-processed foods are considered Ready-to-Eat: they have many of the advantages of fresh produce without the inconvenience of processing at home. After a decline in food-related illnesses between 1996 and 2004, driven by improvements in meat and poultry safety, tainted produce has tilted the numbers back, with the result that none of the Healthy People 2010 targets for food-related illness reduction have been reached. Irradiation has been shown to be effective in eliminating many foodborne pathogens, and its application as a food safety treatment has been widely endorsed by many of the major associations involved with food safety and public health. Despite these endorsements, this technology has seen very little use to date in reducing the disease burden associated with the consumption of these products. A review of the literature available since the passage of the 1996 Food Quality Protection Act was conducted on the barriers to implementing irradiation as a food safety process for fresh fruits and vegetables. The impediments to widespread adoption of irradiation food processing as a food safety measure involve a complex array of legislative, regulatory, industry, and consumer issues. The FDA's approval process limits the expansion of the list of foods approved for irradiation as a food safety process. There is also a lack of irradiation capacity to meet the needs of a geographically dispersed industry.
Abstract:
Public participation is an integral part of Environmental Impact Assessment (EIA) and, as such, has been incorporated into regulatory norms. Assessment of the effectiveness of public participation has remained elusive, however, partly because of the difficulty of identifying appropriate effectiveness criteria. This research uses Q methodology to discover and analyze stakeholders' social perspectives on the effectiveness of EIAs in the Western Cape, South Africa. It considers two case studies (the Main Road and Saldanha Bay EIAs) for contextual participant perspectives on effectiveness based on experience, together with the more general opinion of provincial consent regulator staff at the Department of Environmental Affairs and Development Planning (DEA&DP). Two main themes of investigation are drawn from the South African National Environmental Management Act (NEMA) imperative for effectiveness: first, the participation procedure, and second, the stakeholder capabilities necessary for effective participation. Four theoretical frameworks drawn from planning, politics, and EIA theory are adapted to public participation and used to triangulate the analysis and discussion of the revealed social perspectives: citizen power in deliberation, Habermas' preconditions for the Ideal Speech Situation (ISS), a Foucauldian perspective on knowledge, power, and politics, and a Capabilities Approach to public participation effectiveness. The empirical evidence from this research shows that the capacity and contextual constraints faced by participants justify the legislative imperatives for effective participation set out in the NEMA. The implementation of effective public participation has been shown to be a complex, dynamic, and sometimes nebulous practice. The functional level of participant understanding of the process was found to be significantly wide-ranging, resulting in unequal and dissatisfying stakeholder engagements.
Furthermore, the considerable variance of stakeholder capabilities in the South African social context resulted in inequalities in deliberation. The social perspectives revealed significant differences in participant experience in terms of citizen power in deliberation. The ISS preconditions are highly contested in both the Saldanha Bay EIA case study and the DEA&DP social perspectives. Only one Main Road EIA case study social perspective considered Foucault's notion of governmentality a reality in EIA public participation. The freedom to control one's environment, based on a Capabilities Approach, is a highly contested notion: although agreed with in principle, all of the social perspectives indicate that contextual and capacity realities constrain its realisation. This research has shown that Q method can be applied to EIA public participation in South Africa and, with the appropriate research or monitoring applications, could serve as a useful feedback tool to inform best-practice public participation.
Abstract:
The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former avoids the computer-storage issues associated with the latter. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms: one based on the high-order methods typically used in this context, and another based on the low-order methods in common industrial use. The two codes have been validated against solutions of the BiGlobal eigenvalue problem, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
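The storage advantage of time-stepping can be made concrete with a toy sketch: instead of forming and storing the stability matrix, one iterates on the flow propagator (the map u → u(Δt) supplied by a simulation code) and extracts its dominant eigenvalue. The explicit-Euler "stepper" and the 3×3 operator below are stand-ins, and plain power iteration replaces the Krylov/Arnoldi machinery a real code would use.

```python
import numpy as np

def step(u, A, dt, n_sub=50):
    """Black-box time stepper: advances du/dt = A u by dt.

    Explicit-Euler substeps stand in for a DNS/aerodynamics code; the
    stability analysis below only ever calls this routine, never forms A's
    stability matrix, which is the storage advantage of time-stepping.
    """
    h = dt / n_sub
    for _ in range(n_sub):
        u = u + h * (A @ u)
    return u

def leading_growth_rate(A, dt=0.1, iters=200, seed=1):
    """Power iteration on the propagator u -> u(dt).

    The propagator's dominant eigenvalue mu gives the leading temporal
    growth rate lambda = log(mu) / dt of the linearized system.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        u = step(u, A, dt)
        u /= np.linalg.norm(u)
    mu = np.dot(u, step(u, A, dt))    # Rayleigh quotient for the propagator
    return np.log(mu) / dt
```

For a diagonal test operator with rates (-1, 0.3, -2), the iteration recovers the unstable rate 0.3 to within the stepper's discretization error, without ever storing a matrix exponential.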
Abstract:
Swarm colonies reproduce social habits: working together in a group to reach a predefined goal is a social behaviour that occurs in nature. Linear optimization problems have been approached by different techniques based on natural models. In particular, Particle Swarm Optimization is a meta-heuristic search technique that has proven effective when dealing with complex optimization problems. This paper presents and develops a new method, based on different penalty strategies, to solve complex problems. It focuses on the training process of neural networks, the constraints, and the choice of parameters needed to ensure successful results and to avoid the most common obstacles when searching for optimal solutions.
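A minimal particle swarm with a penalty term shows the mechanics the paper builds on. This sketch uses a single static quadratic penalty with weight `mu`; the paper explores several penalty strategies, and the swarm parameters below are common textbook defaults, not the paper's.

```python
import numpy as np

def pso_penalty(f, g, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, mu=100.0):
    """Particle swarm minimization of f(x) subject to g(x) <= 0.

    Constraint handling: a static quadratic penalty mu * max(0, g(x))^2
    is added to the objective, so infeasible particles are penalized but
    can still explore near the constraint boundary.
    """
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))

    def cost(p):
        return f(p) + mu * max(0.0, g(p)) ** 2

    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest
```

On the toy problem min ‖x‖² subject to x₀ ≥ 1, the penalized optimum sits just inside the infeasible side (x₀ ≈ 0.99 for mu = 100), which is exactly the bias that motivates comparing different penalty strategies.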
Abstract:
The door-closing process can reinforce the impression of a solid, rock-solid car body or of a rather cheap, flimsy vehicle. As there are no real prototypes during the rubber-profile bidding stages, engineers need to carry out non-linear numerical simulations that involve complex phenomena as well as static and dynamic loads for several profile candidates. This paper presents a structured virtual design tool based on FEM, including constitutive laws and incompressibility constraints, that allows the final closing forces to be predicted more realistically and even the sealing overpressure to be estimated as an additional guarantee of noise insulation. Comparisons with results of physical tests are performed.
Abstract:
The use of computational fluid dynamics (CFD) methods to predict the power production of entire wind farms in flat and complex terrain is presented in this paper. Two full 3D Navier–Stokes solvers for incompressible flow are employed, incorporating the k–ε and k–ω turbulence models respectively. The wind turbines (W/Ts) are modelled as momentum absorbers by means of their thrust coefficient, using the actuator disk approach. The W/T thrust is estimated using the wind speed one diameter upstream of the rotor at hub height. An alternative method that employs an induction-factor-based concept is also tested. This method has the advantage of not using the wind speed at a specific distance from the rotor disk, which is a doubtful approximation when a W/T is located in the wake of another and/or the terrain is complex. To account for the underestimation of the near-wake deficit, a correction is introduced into the turbulence model: the turbulence time scale is bounded using the general “realizability” constraint for the turbulent velocities. The method is applied to two wind farms: a five-machine farm in flat terrain and a 43-machine farm in complex terrain. In the flat-terrain case, the combination of the induction-factor method and the turbulence correction provides satisfactory results. In the complex-terrain case, there are some significant discrepancies with the measurements, which are discussed; here the induction-factor method does not provide satisfactory results.
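The induction-factor idea can be written out from 1-D momentum theory, which relates the thrust coefficient to the axial induction factor by Ct = 4a(1 − a) and the disk velocity to the free stream by U_d = U∞(1 − a). The rotor area and the numbers below are illustrative, not the paper's machines.

```python
import math

def axial_induction(ct):
    """Axial induction factor a from 1-D momentum theory: Ct = 4a(1 - a)."""
    if not 0.0 <= ct < 1.0:
        raise ValueError("momentum theory requires 0 <= Ct < 1")
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

def thrust_from_disk_velocity(ct, u_disk, rho=1.225, area=5027.0):
    """Thrust from the velocity *at* the disk, U_d = U_inf (1 - a).

    Rearranged so that the doubtful reference speed one diameter upstream
    is not needed: T = 0.5 rho A Ct (U_d / (1 - a))^2.  The default area
    corresponds roughly to an 80 m rotor (illustrative assumption).
    """
    a = axial_induction(ct)
    u_inf = u_disk / (1.0 - a)
    return 0.5 * rho * area * ct * u_inf ** 2
```

This is why the method is attractive in wakes and complex terrain: the local disk velocity is always available in the CFD solution, whereas "one diameter upstream" may lie inside another machine's wake.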
Abstract:
The main problem of pedestrian dead-reckoning (PDR) using only a body-attached inertial measurement unit is the accumulation of heading errors. The heading provided by magnetometers inside buildings is in general not reliable and is therefore commonly not used. Recently, a new method called heuristic drift elimination (HDE) was proposed that minimises the heading error when navigating in buildings. It assumes that most buildings have their corridors parallel to each other or intersecting at right angles, so that most of the time the person walks along a straight path with a heading constrained to one of four possible directions. In this article we study the performance of HDE-based methods in complex buildings, i.e. with pathways also oriented at 45°, long curved corridors, and wide areas where non-oriented motion is possible. We explain how the performance of the original HDE method deteriorates in complex buildings, and how severe errors can appear in the case of false matches with the building's dominant directions. Although magnetic compassing indoors behaves chaotically, in this article we analyse large data sets in order to study the potential of magnetic compassing for estimating the absolute yaw angle of a walking person. This article also proposes an improved HDE method called Magnetically-aided Improved Heuristic Drift Elimination (MiHDE), implemented over a PDR framework that uses foot-mounted inertial navigation with an extended Kalman filter (EKF). The EKF is fed with the MiHDE-estimated orientation error, gyro bias corrections, and the confidence in those corrections. We experimentally evaluated the performance of the proposed MiHDE-based PDR method, comparing it with the original HDE implementation. Results show that both methods perform very well in ideal orthogonal narrow-corridor buildings, while MiHDE outperforms HDE on non-ideal trajectories (e.g. curved paths) and is robust against potential false dominant-direction matches.
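The core HDE idea, nudging the estimated heading toward a building's dominant directions, can be sketched in a few lines. The snap-toward rule, thresholds, and gain below are illustrative assumptions; the actual HDE and MiHDE formulations additionally involve foot-mounted INS, an EKF, and magnetic confidence weighting.

```python
def hde_correction(heading, dominant_step=45.0, threshold=10.0, gain=0.2):
    """One step of a heuristic-drift-elimination-style heading correction.

    If the current heading (degrees) lies within `threshold` of one of the
    building's dominant directions (multiples of `dominant_step`; 45 covers
    the diagonally oriented pathways discussed in the article), feed back a
    fraction `gain` of the deviation.  Otherwise leave the heading alone,
    which is the guard against the false matches that cause severe errors.
    """
    nearest = round(heading / dominant_step) * dominant_step
    dev = (heading - nearest + 180.0) % 360.0 - 180.0   # wrap to (-180, 180]
    if abs(dev) <= threshold:
        return heading - gain * dev    # pull toward the dominant direction
    return heading                     # no match: no correction applied
```

A heading of 92° is pulled toward the 90° corridor, while 67°, far from any dominant direction, is left untouched; in a curved corridor that refusal to correct is exactly where the original HDE loses its benefit.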
Abstract:
Motivated by the observation of spiral patterns in a wide range of physical, chemical, and biological systems, we present an automated approach for quantitatively characterizing spiral-like elements in complex stripe-like patterns. The approach provides the location of the spiral tip and the size of the spiral arms in terms of their arc length and their winding number. In addition, it yields the number of pattern components (the Betti number of order 1), as well as their size and certain aspects of their shape. We apply the method to spiral defect chaos in thermally driven Rayleigh–Bénard convection and find that the arc length of spirals decreases monotonically with decreasing Prandtl number of the fluid and with increasing heating. By contrast, the winding number of the spirals is non-monotonic in the heating. The distribution function for the number of spirals is significantly narrower than a Poisson distribution. The distribution function for the winding number shows an approximately exponential decay; it depends only weakly on the heating, but strongly on the Prandtl number. Large spirals arise only for larger Prandtl numbers. In this regime the joint distribution of spiral length and winding number exhibits a three-peak structure, indicating the dominance of Archimedean spirals of opposite sign and relatively straight sections. For small Prandtl numbers the distribution function reveals a large number of small compact pattern components.
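The winding number of a spiral arm is the total signed turning of its centerline, measured in turns. The function below computes this for a polyline of vertices; it is a toy stand-in for the paper's image-based spiral extraction, where the arm's centerline would first have to be traced from the pattern.

```python
import math

def winding_number(points):
    """Total signed turning of a polyline, in turns.

    `points` is a list of (x, y) vertices ordered along the spiral arm.
    At each interior vertex the change in chord direction is accumulated
    (wrapped to (-pi, pi] so that noise never injects spurious full turns);
    the sign distinguishes spirals of opposite chirality.
    """
    total = 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = a2 - a1
        d = (d + math.pi) % (2 * math.pi) - math.pi
        total += d
    return total / (2 * math.pi)
```

A polyline sampling one full circle turns through just under one turn (the two endpoints contribute no turning), and traversing it in reverse flips the sign, matching the opposite-sign Archimedean spirals in the joint distribution.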
Abstract:
The design of optical systems, considered an art by some and a science by others, has been practiced for centuries. Imaging optical systems have been evolving since Ancient Egyptian times, as have the associated design techniques.
Nevertheless, the most important developments in design techniques have taken place over the past 50 years, partly due to advances in manufacturing techniques and the development of increasingly powerful computers, which have enabled fast and efficient calculation and analysis of ray tracing through optical systems. This has led optical system design to evolve from designs developed solely from paraxial optics to modern designs created using different multiparametric optimization techniques. The main problem the designer faces is that the different optimization techniques require an initial design as a starting point, which can constrain the possible solutions. In other words, if the starting point is far from the global minimum, the optimal design for the set conditions, the final design may be a local minimum close to the starting point and far from the global minimum. This type of problem has led to the development of global optimization systems that are increasingly less sensitive to the starting point of the optimization process. Even though it is possible to obtain good designs with these techniques, many attempts are needed to reach the desired solution, and the whole process is uncertain, since there is no guarantee that the optimal solution will be found. The Simultaneous Multiple Surfaces (SMS) method, conceived as a tool for calculating anidolic concentrators, has also proved useful for the design of image-forming optical systems, although until now it has only occasionally been used for the design of imaging systems. This thesis aims to present the SMS method as a technique that can be used in general for the design of any optical system, whether with a fixed focal length or afocal with a defined magnification, and as a tool that can be industrialized to help designers tackle the design of complex optical systems with ease. The thesis is divided into five chapters.
Chapter 1 establishes the basics, presenting the fundamental concepts the reader needs, even without extensive knowledge in the field of image-forming optics, in order to understand the steps taken and the results obtained in the following chapters. Chapter 2 addresses the problem of optimizing optical systems. Here the SMS method is presented as an ideal tool for obtaining a starting point for the optimization process; the importance of the starting point for the final solution is demonstrated through an applied example. Additionally, this chapter introduces various techniques for the interpolation and optimization of the surfaces obtained through the application of the SMS method. Even though only the SMS2D method is used in this thesis, we also present a method, based on radial basis functions (RBF), for the interpolation and optimization of the clouds of points obtained through the SMS3D method. Chapter 3 presents the design, manufacturing, and measurement of a catadioptric panoramic lens designed to work in the long-wavelength infrared (LWIR, 8-12 μm) for perimeter surveillance applications. The lens is designed using the SMS method for three input wavefronts and four surfaces. The power of the design method is revealed by the ease with which this complex system is designed, and the images presented show how the prototype perfectly fulfils its purpose. Chapter 4 addresses the problem of designing ultra-compact optical systems. The concept of multichannel systems, optical systems composed of a series of channels that work in parallel, is introduced; such systems are especially suitable for the design of afocal systems. We present design strategies for multichannel systems, both monochromatic and polychromatic, and a telescope with a magnification of six and a half designed using the novel technique presented in this chapter.
Chapter 5 presents a generalization of the SMS method for meridian rays, including the algorithm to be used for the design of any fixed-focal optical system. The so-called phase 1 optimization is inserted into the algorithm so that, by changing the initial conditions of the SMS design, the skew rays behave similarly even though the design is carried out for meridian rays. To test the power of the developed algorithm, a set of designs with different numbers of surfaces is presented. The stability and strength of the algorithm become apparent in the first design of a six-surface system obtained through the SMS method.
Abstract:
Enabling Subject Matter Experts (SMEs) to formulate knowledge without the intervention of Knowledge Engineers (KEs) requires providing SMEs with methods and tools that abstract away the underlying knowledge representation and allow them to focus on modeling activities. Bridging the gap between SME-authored models and their representation is challenging, especially for complex knowledge types like processes, where aspects such as frame management, data, and control flow need to be addressed. In this paper, we describe how SME-authored process models can be given an operational semantics and grounded in a knowledge representation language like F-logic in order to support process-related reasoning. The main results of this work are a formalism for process representation and a mechanism for automatically translating process diagrams into executable code following that formalism. Of all the process models authored by SMEs during evaluation, 82% were well-formed, and all of those executed correctly. Additionally, the two optimizations applied to the code-generation mechanism improved reasoning-time performance by 25% and 30%, respectively, with respect to the base case.
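The diagram-to-executable-code translation can be illustrated at toy scale. The dictionary format and the interpreter below are invented for this sketch; the paper's mechanism targets F-logic and handles frames, data, and control flow rather than this minimal sequential case.

```python
def compile_process(diagram):
    """Translate a toy SME process diagram into an executable function.

    `diagram` maps "start" to the name of the first step, and each step
    name to a pair (action, next_step); the name "end" terminates.  The
    returned function runs the process on an initial state, mirroring in
    spirit the paper's operational semantics for process models.
    """
    def run(state):
        step = diagram["start"]
        while step != "end":
            action, step = diagram[step]
            state = action(state)      # each step transforms the state
        return state
    return run
```

A two-step process ("double the value, then increment it") compiles to a function that maps 3 to 7; a well-formedness check, rejecting diagrams with missing steps or unreachable "end", would sit naturally in front of `compile_process` and corresponds to the 82% well-formed figure reported above.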
Abstract:
Dominance measuring methods are a new approach for dealing with complex decision-making problems involving imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method that handles ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by the ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
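The last step, turning triangular fuzzy numbers into a ranking via their distance to the crisp value 0, can be sketched as follows. The centroid-based distance used here is one common, simple choice made for illustration; the paper's dominance-intensity definition may use a different fuzzy metric, and the example numbers are invented.

```python
def tfn_signed_distance_to_zero(a, b, c):
    """Signed distance from a triangular fuzzy number (a, b, c), with
    a <= b <= c, to the crisp value 0, taken here as the distance of its
    centroid (a + b + c) / 3 (an illustrative choice of fuzzy metric)."""
    return (a + b + c) / 3.0

def rank_alternatives(tfns):
    """Rank alternatives by dominance intensity.

    `tfns` maps each alternative to its aggregated triangular fuzzy
    number; a larger (more positive) signed distance to 0 means the
    alternative dominates more strongly and is ranked higher.
    """
    scores = {alt: tfn_signed_distance_to_zero(*t) for alt, t in tfns.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

For three alternatives with fuzzy dominance numbers centered at 0.3, 0.07, and -0.2, the ranking places the clearly dominant alternative first and the dominated one last.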