987 results for scale selection
Abstract:
This dissertation presents competitive control methodologies for small-scale power systems (SSPS). A SSPS is a collection of sources and loads sharing a common network that can be isolated during terrestrial disturbances. Micro-grids, naval ship electric power systems (NSEPS), aircraft power systems and telecommunication system power systems are typical examples of SSPS. A SSPS lacks a defined slack bus, which complicates the analysis and development of its control systems. In addition, a change in a load or source alters the system parameters in real time. The control system should therefore provide the flexibility required to ensure operation as a single aggregated system. In most cases the sources and loads of a SSPS are equipped with power electronic interfaces, which can be modeled as dynamic controllable quantities. The mathematical formulation of the micro-grid is carried out with the help of game theory, optimal control and the fundamental theory of electrical power systems. The micro-grid can then be viewed as a dynamical multi-objective optimization problem with nonlinear objectives and variables. Detailed analysis of optimal solutions was carried out for startup transient modeling, bus selection modeling and the level of communication within the micro-grid. In each approach a detailed mathematical model is formed to observe the system response. A differential game theoretic approach was used for modeling and optimization of startup transients. The startup transient controller was implemented with open-loop, PI and feedback control methodologies, and a hardware implementation was carried out to validate the theoretical results. The proposed game theoretic controller shows higher performance than the traditional PI controller during startup. In addition, the optimal transient surface is necessary when implementing the feedback controller for the startup transient. The experimental results are in agreement with the theoretical simulations. Bus selection and team communication were modeled with discrete and continuous game theory models. Although players have multiple choices, this controller is capable of choosing the optimum bus, and the team communication structures optimize the players' Nash equilibrium point. All mathematical models are based on the local information of the load or source. As a result, these models are key to developing accurate distributed controllers.
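As a toy illustration of the startup-transient comparison described above (not the dissertation's game-theoretic controller), the sketch below simulates a hypothetical first-order DC-bus model under open-loop and PI control; the model, gains, and 48 V target are assumptions for illustration only.

```python
# A toy sketch only: startup of a hypothetical first-order DC-bus model
# (time constant tau, unity gain) under open-loop and PI control. The
# model, gains, and 48 V target are illustrative assumptions, not the
# dissertation's game-theoretic controller.
import numpy as np

def simulate(controller, v_ref=48.0, tau=0.05, dt=1e-4, t_end=0.5):
    """Forward-Euler integration of dv/dt = (u - v) / tau."""
    n = int(t_end / dt)
    v, trace, state = 0.0, np.empty(n), {"i": 0.0}
    for k in range(n):
        u = controller(v_ref - v, state, dt)
        v += dt * (u - v) / tau
        trace[k] = v
    return trace

def open_loop(err, state, dt):
    return 48.0                         # constant command at the reference

def pi(err, state, dt, kp=2.0, ki=30.0):
    state["i"] += err * dt              # integral of the tracking error
    return kp * err + ki * state["i"]

for name, ctrl in (("open-loop", open_loop), ("PI", pi)):
    trace = simulate(ctrl)
    t95 = np.argmax(trace >= 0.95 * 48.0) * 1e-4
    print(f"{name}: reaches 95% of target at t = {t95:.3f} s")
```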
Abstract:
With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation; rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for phenotype association studies: selection based on measures of linkage disequilibrium, selection based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected for maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected for low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of using a statistical method that controls the type I error rate while maintaining adequate power. We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains the desired overall type I error rate without overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis; for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
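For readers unfamiliar with the two LD measures being compared, the sketch below computes |D′| and r² from two-locus haplotype and allele frequencies; the input frequencies are invented for illustration.

```python
# A minimal sketch of the two pairwise LD measures compared above,
# computed from haplotype and allele frequencies at two biallelic loci.
# The example frequencies are made up for illustration.

def ld_measures(p_ab, p_a, p_b):
    """Return (|D'|, r^2) for haplotype frequency p_ab of alleles A and B
    with marginal allele frequencies p_a and p_b."""
    d = p_ab - p_a * p_b                          # raw disequilibrium D
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max                      # normalized |D'|
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

d_prime, r2 = ld_measures(p_ab=0.42, p_a=0.5, p_b=0.6)
print(f"|D'| = {d_prime:.3f}, r^2 = {r2:.3f}")
# |D'| can be high while r^2 stays small, which is one reason r^2 better
# reflects the power to detect an untyped functional allele via a typed SNP.
```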
Abstract:
To evaluate these factors, the scale of perceived influences on the choice of a specialized plan of studies (IPEP) was built, and in this study its factor structure and reliability were evaluated. The instrument was applied to 115 students, chosen by quota sampling, from a subsidized private secondary school in the city of Chillán, Chile. An exploratory factor analysis identified eight factors in the IPEP scale: Academic projection, Personal development, Maintenance of the social environment, Academic requirements, Satisfaction of others' expectations, Vocational information, Image of the plan and Family pressure. The results present an initial factor structure that is empirically and theoretically adequate, whose factors are reliable and conceptually useful. These factors distinguish psychogenic and sociogenic aspects of the vocational process and allow diagnostic and investigative work to begin on this early vocational choice, which takes place in secondary schools of the Chilean educational system.
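As a sketch of the reliability side of such an evaluation, the following computes Cronbach's alpha on a synthetic 115-respondent, six-item response matrix; the data and item count are assumptions, not the IPEP data.

```python
# A hedged sketch of one common reliability check: Cronbach's alpha for a
# set of scale items. The response matrix below is synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(115, 1))                # shared factor
items = latent + 0.8 * rng.normal(size=(115, 6))  # six correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```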
Abstract:
Until a few years ago, the consumption of melon (Cucumis melo L.) was regional, seasonal and of little commercial interest. Recent commercial changes and worldwide transportation have changed this situation. Melons from three ripeness stages at harvest and seven cold storage periods were analysed by destructive and non-destructive tests. Chemical, physical, mechanical (non-destructive impact, compression, skin puncture and Magness-Taylor) and sensory tests were carried out in order to select the best test to assess quality and to determine the optimal ripeness stage at harvest. Analysis of variance and Principal Component Analysis were performed to study the data. The mechanical properties based on non-destructive impact and compression can be used to monitor evolution during cold storage. They can also be used at harvest to segregate the highest ripeness stage (41 days after anthesis, DAA) from the less ripe stages (34 and 28 DAA). Only melons harvested at 34 and 41 DAA reached a sensory evaluation above 50 on a scale from 0 to 100.
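A minimal sketch of the multivariate side of such an analysis: PCA on standardized mechanical-test variables. The three synthetic features below merely stand in for the impact, compression and quality measurements; none of the values come from the study.

```python
# Illustrative only: PCA over synthetic stand-ins for mechanical-test
# variables, mimicking the analysis style described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
firmness = rng.normal(50, 10, 90)            # latent ripeness proxy
X = np.column_stack([
    firmness + rng.normal(0, 2, 90),         # impact stiffness
    firmness + rng.normal(0, 3, 90),         # compression force
    -0.5 * firmness + rng.normal(0, 2, 90),  # a quality trait (inverse)
])
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratios:", pca.explained_variance_ratio_)
```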
Abstract:
Neighbourhood representation and the scale used to measure the built environment have been treated in many ways, yet the existing literature is anything but clear about which representation of neighbourhood is the most suitable. This paper presents an exhaustive analysis of built environment attributes across three spatial scales. For this purpose multiple data sources are integrated, and a set of 943 observations is analysed. The paper simultaneously analyses the influence of two methodological issues in the study of the relationship between built environment and travel behaviour: (1) the detailed representation of neighbourhood, by testing different spatial scales; and (2) the influence of unobserved individual sensitivity to built environment attributes. The results show that different spatial scales of built environment attributes produce different results; it is therefore important to produce local and regional transport measures according to geographical scale. Additionally, the results show significant sensitivity to built environment attributes depending on place of residence. This effect, called residential sorting, acquires different magnitudes depending on the geographical scale used to measure the built environment attributes. Spatial scales thus pose a risk to the stability of model results, and transportation modellers and planners must take into account both the effects of self-selection and those of spatial scale.
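As a hedged illustration of issue (2), the sketch below fits a mixed model with residence-zone random intercepts to simulated data; the variable names (trips, density, zone) and the random-intercept simplification are assumptions, not this paper's specification.

```python
# Illustrative only: unobserved residence-zone effects as random intercepts
# in a mixed model. Data are simulated; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, zones = 943, 30
df = pd.DataFrame({
    "zone": rng.integers(0, zones, n),
    "density": rng.normal(0, 1, n),      # a built-environment attribute
})
zone_eff = rng.normal(0, 0.5, zones)     # unobserved sorting by zone
df["trips"] = (1.0 - 0.3 * df["density"]
               + zone_eff[df["zone"]] + rng.normal(0, 1, n))

result = smf.mixedlm("trips ~ density", df, groups=df["zone"]).fit()
print(result.summary())
```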
Abstract:
The exhaustion, absolute absence or simply the uncertainty about the size of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprehends public concerns about pollution and greenhouse gas emissions.
Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is strongly limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies. This investigation concludes that the only viable methodology for combustion modeling under such restrictions is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed kind. It also concludes that, depending on the resolution restrictions and geometry of each particular problem, different turbulence simulation methodologies, LES or RANS, are the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available models for low-resolution problems. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On one side, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid over a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other side, a number of turbulent burning velocity correlations are available in the literature. To select the most suitable, they were compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate; these conditions are typical of accidents at nuclear power plants. Therefore, a model is created to account for the instabilities, and concretely for the acoustic-parametric instability. This encloses the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The model developed was then applied to several problems significant for industrial safety, and the results were analysed and compared with the corresponding experimental data. These included simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centred on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage, which is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport sector, as well as nuclear safety in both fission and fusion technology.
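For reference, the closure family named above can be written compactly. The following is a generic sketch, not necessarily the exact equations of the thesis: a Favre-averaged transport equation for the progress variable whose source term is proportional to the turbulent burning velocity S_T, together with the well-known Zimont-type correlation (u′: turbulence intensity; S_L: laminar burning velocity; α_u: thermal diffusivity of the unburnt mixture; l_t: integral length scale; A: model constant).

```latex
% Progress-variable balance with a turbulent-flame-speed closure (generic form)
\frac{\partial(\bar{\rho}\,\tilde{c})}{\partial t}
  + \nabla\cdot\left(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\right)
  = \nabla\cdot\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
  + \rho_u\, S_T\, \left|\nabla\tilde{c}\right|

% Zimont-type correlation for the turbulent burning velocity
S_T = A\, u'^{\,3/4}\, S_L^{1/2}\, \alpha_u^{-1/4}\, l_t^{1/4}
```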
Abstract:
Multiple-complete-digest mapping is a DNA mapping technique based on complete-restriction-digest fingerprints of a set of clones that provides highly redundant coverage of the mapping target. The maps assembled from these fingerprints order both the clones and the restriction fragments. Maps are coordinated across three enzymes in the examples presented. Starting with yeast artificial chromosome contigs from the 7q31.3 and 7p14 regions of the human genome, we have produced cosmid-based maps spanning more than one million base pairs. Each yeast artificial chromosome is first subcloned into cosmids at a redundancy of ×15–30. Complete-digest fragments are electrophoresed on agarose gels, poststained, and imaged on a fluorescent scanner. Aberrant clones that are not representative of the underlying genome are rejected in the map construction process. Almost every restriction fragment is ordered, allowing selection of minimal tiling paths with clone-to-clone overlaps of only a few thousand base pairs. These maps demonstrate the practicality of applying the experimental and software-based steps in multiple-complete-digest mapping to a target of significant size and complexity. We present evidence that the maps are sufficiently accurate to validate both the clones selected for sequencing and the sequence assemblies obtained once these clones have been sequenced by a “shotgun” method.
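To make the fingerprinting idea concrete, here is a toy complete-digest sketch that cuts a hypothetical clone sequence at each enzyme recognition site and reports fragment lengths; real fingerprints are gel-estimated fragment sizes across multiple enzymes, not exact lengths.

```python
# A toy sketch of a complete-digest fingerprint: cut a hypothetical clone
# sequence at every occurrence of an enzyme recognition site and record
# the resulting fragment lengths.
import re

def complete_digest(seq: str, site: str, cut_offset: int) -> list[int]:
    """Return fragment lengths after cutting seq at every site occurrence."""
    cuts = [m.start() + cut_offset for m in re.finditer(site, seq)]
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

clone = "GAATTCAAAGGATCCTTTGAATTCGG" * 3   # made-up sequence
print("EcoRI (G^AATTC):", complete_digest(clone, "GAATTC", 1))
print("BamHI (G^GATCC):", complete_digest(clone, "GGATCC", 1))
```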
Abstract:
Typical behavior of a two-locus genetic system experiencing cyclical selection includes fixation (at one or both loci) or a stable polymorphic cycle with a period equal to that of the environmental changes. By considering the time scale in terms of environmental periods, the latter case could be trivially classified as a polymorphic stable point. Here we report results showing the complex limiting behavior of diploid population trajectories resulting from selection in a cyclically changing environment. We found that simple cyclical selection can produce genetic supercycles composed of many hundreds of environmental periods.
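A single-locus simplification (deliberately simpler than the paper's two-locus diploid system) shows how fitnesses that flip with the environment produce an allele-frequency cycle with the environmental period; the fitness values here are invented for illustration.

```python
# Illustrative single-locus sketch, not the paper's two-locus model:
# diploid viability selection with fitnesses that alternate each generation.
def next_freq(p, w_aa, w_ab, w_bb):
    """One generation of viability selection at a diploid locus."""
    w_bar = p*p*w_aa + 2*p*(1-p)*w_ab + (1-p)*(1-p)*w_bb
    return p * (p*w_aa + (1-p)*w_ab) / w_bar

p = 0.5
season = [(1.2, 1.0, 0.8), (0.8, 1.0, 1.2)]   # fitnesses flip each generation
for gen in range(12):
    p = next_freq(p, *season[gen % 2])
    print(f"gen {gen+1:2d}: p = {p:.4f}")
# p oscillates with the environmental period; the supercycles reported
# above emerge only in the richer two-locus diploid dynamics.
```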
Abstract:
Natural selection is one of the most fundamental processes in biology. However, there is still a controversy over the importance of selection in microevolution of molecular traits. Despite the general lack of data most authors hold the view that selection on molecular characters may be important, but at lower rates than selection on most phenotypic traits. Here we present evidence that natural selection may contribute substantially to molecular variation on a scale of meters only. In populations of the marine snail Littorina saxatilis living on exposed rocky shores, steep microclines in allele frequencies between splash and surf zone groups are present in the enzyme aspartate aminotransferase (allozyme locus Aat; EC 2.6.1.1). We followed one population over 7 years, including a period of strong natural perturbation. The surf zone part of the population dominated by the allele Aat100 was suddenly eliminated by a bloom of a toxin-producing microflagellate. Downshore migration of splash zone snails with predominantly Aat120 alleles resulted in a drastic increase in surf zone frequency of Aat120, from 0.4 to 0.8 over 2 years. Over the next four to six generations, however, the frequency of Aat120 returned to the original value. We estimated the coefficient of selection of Aat120 in the surf zone to be about 0.4. Earlier studies show similar or even sharper Aat clines in other countries. Thus, we conclude that microclinal selection is an important evolutionary force in this system.
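As a back-of-envelope sketch of the reported dynamics, the recursion below applies genic selection of strength s = 0.4 against Aat120, starting from the post-perturbation frequency of 0.8 given in the text; migration is ignored, so this is only indicative.

```python
# Hedged back-of-envelope sketch: genic selection of strength s against
# the Aat120 allele in the surf zone. s = 0.4 and the starting frequency
# of 0.8 follow the text; migration is ignored.
def select(q, s, generations):
    """q' = q(1 - s) / (1 - s q): frequency of the disfavoured allele."""
    for _ in range(generations):
        q = q * (1 - s) / (1 - s * q)
    return q

for g in (1, 2, 3, 4):
    print(f"after {g} generations: q = {select(0.8, 0.4, g):.2f}")
# Selection of this strength pulls q from 0.8 back to ~0.4 in roughly
# 3-4 generations, broadly consistent with the observed four-to-six-
# generation return; migration, ignored here, would stabilize the cline.
```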
Abstract:
In the current Information Age, data production and processing demands are ever increasing. This has motivated the appearance of large-scale distributed information. This phenomenon also applies to Pattern Recognition, so that classic and common algorithms, such as the k-Nearest Neighbour, cannot be used directly. To improve the efficiency of this classifier, Prototype Selection (PS) strategies can be used. Nevertheless, current PS algorithms were not designed to deal with distributed data, and their performance under these conditions is therefore unknown. This work carries out an experimental study on a simulated framework in which PS strategies can be compared under classical conditions as well as those expected in distributed scenarios. Our results report a general behaviour that degrades as conditions approach more realistic scenarios. However, our experiments also show that some methods are able to achieve a performance fairly similar to that of the non-distributed scenario. Thus, although there is a clear need to develop PS methodologies and algorithms specifically for these situations, the methods that showed higher robustness against such conditions may be good candidates from which to start.
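As one concrete example of a PS strategy (Hart's classic Condensed Nearest Neighbour; the paper studies PS methods generally, not necessarily this one), here is a minimal NumPy sketch on synthetic two-class data.

```python
# A sketch of one classic Prototype Selection method: Hart's Condensed
# Nearest Neighbour (CNN), which keeps a subset of instances sufficient
# for 1-NN to classify the training set consistently.
import numpy as np

def cnn_select(X, y, rng=None):
    """Return indices of a condensed prototype set for 1-NN."""
    rng = rng or np.random.default_rng(0)
    order = rng.permutation(len(X))
    store = [order[0]]
    changed = True
    while changed:                        # repeat until no additions
        changed = False
        for i in order:
            d = np.linalg.norm(X[store] - X[i], axis=1)
            if y[store][np.argmin(d)] != y[i]:
                store.append(i)           # misclassified -> keep as prototype
                changed = True
    return np.array(store)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.repeat([0, 1], 200)
keep = cnn_select(X, y)
print(f"kept {len(keep)} of {len(X)} instances as prototypes")
```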
Abstract:
This paper extends previous analyses of the choice between internal and external R&D to consider the costs of internal R&D. The Heckman two-stage estimator is used to estimate the determinants of internal R&D unit cost (i.e. cost per product innovation) allowing for sample selection effects. Theory indicates that R&D unit cost will be influenced by scale issues and by the technological opportunities faced by the firm. Transaction costs encountered in research activities are allowed for and, in addition, consideration is given to issues of market structure which influence the choice of R&D mode without affecting the unit cost of internal or external R&D. The model is tested on data from a sample of over 500 UK manufacturing plants which have engaged in product innovation. The key determinants of R&D mode are the scale of plant and R&D input, and market structure conditions. In terms of the R&D cost equation, scale factors are again important and have a non-linear relationship with R&D unit cost. Specificities in physical and human capital also affect unit cost, but have no clear impact on the choice of R&D mode. There is no evidence of technological opportunity affecting either R&D cost or the internal/external decision.
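For readers unfamiliar with the estimator, a hedged sketch of the Heckman two-step on simulated data follows; the variable names and data-generating process are invented for illustration, not those of the UK plant sample.

```python
# A hedged sketch of the Heckman two-step estimator on simulated data:
# a probit selection equation, then OLS on the selected sample augmented
# with the inverse Mills ratio. Variable names are illustrative only.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=n)                          # selection covariate
x = rng.normal(size=n)                          # outcome covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
selected = (0.5 + 1.0 * z + u) > 0              # e.g. firms doing internal R&D
cost = 1.0 + 2.0 * x + e                        # observed only if selected

# Step 1: probit for selection, then the inverse Mills ratio
Z = sm.add_constant(z)
probit = sm.Probit(selected.astype(int), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS with the Mills ratio as a selection-correction regressor
X2 = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(cost[selected], X2).fit()
print(ols.params)   # roughly [1.0, 2.0, 0.6]: const, slope, rho*sigma
```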
Abstract:
Businesses are seen as the next stage in delivering biodiversity improvements linked to local and UK Biodiversity Action Plans. Global discussion of biodiversity continues to grow, with the Millennium Ecosystem Assessment, updates to the Convention on Biological Diversity and The Economics of Ecosystems and Biodiversity being published during the time of this project. These publications and others detail the importance of biodiversity protection and also the lack of strategies to deliver it at an operational level. Pressure on UK landholding businesses is combined with significant business opportunities associated with biodiversity engagement. However, the measurement and reporting of biodiversity by business is currently limited by the complexity of the term and the lack of suitable procedures for the selection of metrics. Literature reviews identified confusion surrounding biodiversity as a term and limited academic literature regarding business and the choice of biodiversity indicators. The aim of the project was to develop a methodology to enable companies to identify, quantify and monitor biodiversity. Case study interviews were undertaken with 10 collaborating organisations, selected to represent 'best practice' examples and various situations. Information gained through the case studies was combined with that from the existing literature and used to develop a methodology for the selection of biodiversity indicators for company landholdings. The indicator selection methodology was discussed during a second stage of case study interviews with 4 collaborating companies. The information and opinions gained during this research were used to modify the methodology and provide the final biodiversity indicator selection methodology. The methodology was then tested through implementation at a mineral extraction site operated by a multi-national aggregates company. It was found to be a suitable process for implementing global and national systems and conceptual frameworks at the practitioner scale. Further testing of robustness by independent parties is recommended to improve the system.
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model used in an MTO operation, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and mostly lead to heuristic solutions. The first phase of the study is focused on a formal definition of the problem. Mathematical programming techniques are applied to model this problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is applied to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving problems of industrial size with a commercial solver is not practical. The second phase of this study is focused on the development of an effective solution approach to this problem at large scale. The proposed solution approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation as the tool, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most-profit rule performs the best. The shifting bottleneck and the earliest operation finish time are both the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs the best when the demand-to-capacity ratio at the bottleneck machine is high. The proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio. The proposed heuristic is applied to an industrial case to further evaluate its performance. The result shows it can improve total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective solution approach for solving it at industrial scale.
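As a toy sketch of the order-selection piece as a MILP, with PuLP's bundled CBC solver standing in for CPLEX and made-up profits, processing times, and capacity (the dissertation's full model also couples lot sizing and scheduling):

```python
# A toy order-selection MILP: accept a subset of orders to maximize profit
# subject to bottleneck capacity. PuLP/CBC stands in for CPLEX; all numbers
# are illustrative assumptions.
import pulp

profit   = {1: 120, 2: 80, 3: 210, 4: 60}    # profit per accepted order
hours    = {1: 40, 2: 25, 3: 70, 4: 20}      # bottleneck hours per order
capacity = 100                               # bottleneck machine hours

m = pulp.LpProblem("order_selection", pulp.LpMaximize)
accept = pulp.LpVariable.dicts("accept", profit, cat="Binary")
m += pulp.lpSum(profit[o] * accept[o] for o in profit)            # objective
m += pulp.lpSum(hours[o] * accept[o] for o in profit) <= capacity # capacity
m.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [o for o in profit if accept[o].value() == 1]
print("accepted orders:", chosen, "profit:", pulp.value(m.objective))
```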