922 results for "selection model"


Relevance: 30.00%

Abstract:

Motivated by these difficulties, Castillo et al. (2012) offered suggestions on how to build consistent stochastic models that avoid selecting mathematical functions merely for ease of use, replacing them with functions derived from a set of properties the model must satisfy.


This paper describes a knowledge model for a configuration problem in the domain of traffic control. The goal of this model is to help traffic engineers in the dynamic selection of a set of messages to be presented to drivers on variable message signs. This selection is done in a real-time context using data recorded by traffic detectors on motorways. The system follows an advanced knowledge-based solution that implements two abstract problem solving methods according to a model-based approach recently proposed in the knowledge engineering field. Finally, the paper presents a discussion about the advantages and drawbacks found for this problem as a consequence of the applied knowledge modeling approach.
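A toy sketch of the selection step might look as follows; the thresholds, messages, detector fields, and sign names are invented for illustration and do not reflect the actual knowledge model:

```python
# Invented toy version of the message-selection task: map detector
# readings (occupancy, speed) to one message per variable message sign.
# Thresholds and messages are illustrative assumptions, not the
# knowledge-based system described above.

def select_message(occupancy, speed_kmh):
    """Pick the highest-priority message consistent with the readings."""
    if occupancy > 0.35 and speed_kmh < 30:
        return "CONGESTION AHEAD - EXPECT DELAYS"
    if speed_kmh < 60:
        return "SLOW TRAFFIC AHEAD"
    return "TRAFFIC FLOWING NORMALLY"

# detector readings per sign: (occupancy fraction, mean speed in km/h)
signs = {"VMS-1": (0.40, 25.0), "VMS-2": (0.10, 95.0)}
plan = {sign: select_message(*reading) for sign, reading in signs.items()}
```

The real system replaces these hard-coded rules with knowledge-based problem-solving methods, but the input/output contract is the same: detector data in, one message per sign out.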


The purpose of this paper is to highlight the importance of observing the cultural systems present in a territory as a reference for the design of urban infrastructures in new cities and regions undergoing rapid development. If we accept the idea that architecture is an instrument or cultural system developed by man to mediate with the environment, then it is necessary to understand the elemental interaction between man and his environment in order to achieve a satisfactory design. To illustrate this point, we present the case of the Eurasian Mediterranean region, where the architectural culture acts as a cultural system of adaptation to the environment, formed by an ancient process of selection. From simple observation of the architectural types, construction systems, and environmental mechanisms treasured in the Mediterranean historical heritage, we can extract crucial information about this elemental interaction. Mediterranean architectural culture possesses environmental mechanisms responding to the needs of basic habitability, ethnicity, and passive conditioning. These mechanisms can be the basis of innovative design without compromising the diversity and lifestyles of the human groups in the region. The main foundation of our investigation is the identification of the historical heritage of domestic architecture as the holder of the formation process of these mechanisms. The result allows us to affirm that the successful introduction of new urban infrastructures in an area needs a reliable reference, and that reference must be a cultural system that entails in essence the environmental conditioning of human existence. Urban infrastructures must be sustainable, understood, and accepted by the inhabitants. The last condition is all the more important when urban infrastructures are implemented in areas that are developing rapidly or where there is no architectural culture.


Since the memristor was first built in 2008 at HP Labs, a wealth of devices and models have been presented, and new applications appear frequently. However, integrating the device at the circuit level is not straightforward, because available models are still immature and/or impose high computational loads, making their simulation long and cumbersome. This study assists circuit/system designers in integrating memristors into their applications, while aiding model developers in validating their proposals. We introduce a memristor application framework to support the work of both the model developer and the circuit designer. First, the framework includes a library with the best-known memristor models, easily extensible with upcoming models. Systematic modifications have been applied to these models to provide better convergence and significant simulation speedups. Second, a quick device simulator allows the study of the response of the models under different scenarios, helping the designer with stimuli and operation-time selection. Third, fine-tuning of the device, including parameter variations and threshold determination, is also supported. Finally, SPICE/Spectre subcircuit generation is provided to ease the integration of the devices in application circuits. The framework provides the designer with total control over convergence, computational load, and the evolution of system variables, overcoming the usual problems in the integration of memristive devices.
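For readers unfamiliar with memristor models, the classic HP linear-drift model (Strukov et al., 2008) can be integrated in a few lines; the parameter values below are illustrative, and this sketch is not part of the framework described above:

```python
# Minimal Euler integration of the HP linear-drift memristor model.
# Parameter values are illustrative assumptions (no window function).
R_ON, R_OFF = 100.0, 16000.0   # low/high resistance states (ohms)
D = 10e-9                      # device thickness (m)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, x0=0.1):
    """Integrate the state variable x = w/D under a voltage waveform."""
    x, currents = x0, []
    for v in voltage:
        r = R_ON * x + R_OFF * (1.0 - x)   # series resistance model
        i = v / r
        x += MU_V * R_ON / D ** 2 * i * dt  # linear dopant drift
        x = min(max(x, 0.0), 1.0)           # hard state bounds
        currents.append(i)
    return currents

# constant 1 V bias: the device state drifts, so resistance drops
i_t = simulate([1.0] * 1000, dt=1e-3)
```

Under the constant bias, the current grows as the doped region widens, which is the history-dependent behaviour that makes circuit-level simulation of these devices delicate.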


Nowadays, there is rising social pressure on big companies to incorporate elements of so-called social responsibility into their decision-making processes. Among the many implications of this fact, a relevant one is the need to include this new element in classic portfolio selection models. This paper meets this challenge by formulating a model that combines goal programming with "goal games" against nature, in a scenario where social responsibility is defined through a battery of sustainability indicators amalgamated into a synthetic index. In this way, we obtain an efficient model that only requires solving a small number of linear programming problems. The proposed approach is tested and illustrated with a case study on the selection of securities in international markets.
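The flavor of combining goals with a "game against nature" can be shown with a minimal sketch; the securities, scenario returns, sustainability scores, goals, and grid search below are invented assumptions, not the paper's formulation:

```python
# Minimal sketch (not the paper's model): choose portfolio weights that
# minimise the worst-case under-achievement of a return goal and a
# sustainability goal across scenarios ("states of nature").
RETURNS = {"boom": [0.10, 0.14, 0.05],   # scenario returns per security
           "bust": [0.02, -0.03, 0.04]}
SUSTAIN = [0.9, 0.4, 0.7]                # synthetic sustainability index
GOAL_RETURN, GOAL_SUSTAIN = 0.04, 0.70

def worst_deviation(w):
    """Worst over scenarios of the total goal under-achievement."""
    sus_dev = max(0.0, GOAL_SUSTAIN - sum(s * x for s, x in zip(SUSTAIN, w)))
    return max(
        max(0.0, GOAL_RETURN - sum(r * x for r, x in zip(ret, w))) + sus_dev
        for ret in RETURNS.values())

# coarse grid search over the weight simplex in 5% steps
candidates = [(i / 20, j / 20, (20 - i - j) / 20)
              for i in range(21) for j in range(21 - i)]
best = min(candidates, key=worst_deviation)
```

The paper solves linear programs instead of a grid search, but the decision logic is the same: under-achievement of each goal becomes a deviation variable, and nature picks the least favourable scenario.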


This PhD dissertation is framed within the emergent fields of Reverse Logistics and Closed-Loop Supply Chain (CLSC) management. This subarea of supply chain management has gained researchers' and practitioners' attention over the last 15 years to become a fully recognized subdiscipline of the Operations Management field. More specifically, among all the activities included within the CLSC area, this dissertation focuses on direct reuse. Its main contribution to current knowledge is twofold. First, a framework for the so-called reuse CLSC is developed. This conceptual model is grounded in a set of six case studies conducted by the author in real industrial settings, and has been contrasted with the existing literature as well as with academic and professional experts on the topic. The framework encompasses four building blocks. In the first block, a typology for reusable articles is put forward, distinguishing between Returnable Transport Items (RTI), Reusable Packaging Materials (RPM), and Reusable Products (RP). In the second block, the common characteristics that make reuse CLSCs difficult to manage from a logistical standpoint are identified, namely fleet shrinkage, significant investment, and limited visibility. In the third block, the main problems arising in the management of reuse CLSCs are analyzed: (1) defining the fleet size, (2) controlling cycle time and promoting article rotation, (3) controlling the return rate and preventing shrinkage, (4) defining purchase policies for new articles, (5) planning and controlling reconditioning activities, and (6) balancing inventory between depots. Finally, in the fourth block, solutions to these issues are developed. Firstly, problems (2) and (3) are addressed through a comparative analysis of alternative strategies for controlling cycle time and return rate. Secondly, a methodology for calculating the required fleet size is elaborated (problem (1)). This methodology is valid for different configurations of the physical flows in the reuse CLSC. Likewise, some directions are pointed out for the further development of a similar method for defining purchase policies for new articles (problem (4)). The second main contribution of this dissertation is embedded in the solutions part (block 4) of the conceptual framework and comprises a two-level decision problem integrating two mixed-integer linear programming (MILP) models, formulated and solved to optimality using AIMMS as the modeling language, CPLEX as the solver, and an Excel spreadsheet for data input and output presentation. The results are analyzed to measure, in a client-supplier system, the economic impact of two alternative control strategies (recovery policies) in the context of reuse. In addition, the models support decision-making on the selection of the appropriate recovery policy given the characteristics of the demand pattern and the structure of the relevant costs in the system. The triangulation of methods used in this thesis has made it possible to address the same research topic with different approaches, thereby strengthening the robustness of the results obtained.
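Problem (1), fleet sizing, can be approximated before resorting to optimisation with a simple stock-and-flow estimate; the formula and safety factor below are illustrative assumptions, not the dissertation's methodology or MILP models:

```python
# Back-of-the-envelope fleet sizing for returnable transport items (RTI):
# items in circulation = shipment rate x cycle time, inflated for
# shrinkage and buffered for peaks. Purely illustrative.

def required_fleet(shipments_per_day, cycle_time_days,
                   return_rate, safety_factor=1.1):
    """Estimate the RTI fleet needed to sustain a given shipment rate.

    return_rate: fraction of issued items that eventually come back.
    safety_factor: assumed buffer for demand peaks and repair downtime.
    """
    if not 0 < return_rate <= 1:
        raise ValueError("return_rate must be in (0, 1]")
    in_circulation = shipments_per_day * cycle_time_days
    return int(in_circulation / return_rate * safety_factor + 0.5)

# e.g. 200 pallets/day, 12-day cycle, 95% return rate
fleet = required_fleet(200, 12, 0.95)
```

The estimate shows why the problems above interact: shortening the cycle time (problem 2) or raising the return rate (problem 3) directly shrinks the fleet that must be purchased (problems 1 and 4).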


El agotamiento, la ausencia o, simplemente, la incertidumbre sobre la cantidad de las reservas de combustibles fósiles se añaden a la variabilidad de los precios y a la creciente inestabilidad en la cadena de aprovisionamiento para crear fuertes incentivos para el desarrollo de fuentes y vectores energéticos alternativos. El atractivo del hidrógeno como vector energético es muy alto en un contexto que abarca, además, fuertes inquietudes por parte de la población sobre la contaminación y las emisiones de gases de efecto invernadero. Debido a su excelente impacto ambiental, la aceptación pública del nuevo vector energético dependería, a priori, del control de los riesgos asociados a su manipulación y almacenamiento. Entre estos, la existencia de un innegable riesgo de explosión aparece como el principal inconveniente de este combustible alternativo. Esta tesis investiga la modelización numérica de explosiones en grandes volúmenes, centrándose en la simulación de la combustión turbulenta en grandes dominios de cálculo en los que la resolución alcanzable está fuertemente limitada. En la introducción se aborda una descripción general de los procesos de explosión y se concluye que las restricciones en la resolución de los cálculos hacen necesario el modelado de los procesos de turbulencia y de combustión. Posteriormente, se realiza una revisión crítica de las metodologías disponibles tanto para turbulencia como para combustión, señalando las fortalezas, deficiencias e idoneidad de cada una de ellas. Como conclusión de esta investigación se obtiene que la única estrategia viable para el modelado de la combustión, teniendo en cuenta las limitaciones existentes, es la utilización de una expresión que describa la velocidad de combustión turbulenta en función de distintos parámetros.
Este tipo de modelos se denominan modelos de velocidad de llama turbulenta y permiten cerrar una ecuación de balance para la variable de progreso de combustión. También se concluye que la solución más adecuada para la simulación de la turbulencia es la utilización de diferentes metodologías, LES o RANS, en función de la geometría y de las restricciones en la resolución de cada problema particular. Sobre la base de estos hallazgos, se acomete la creación de un modelo de combustión en el marco de los modelos de velocidad de llama turbulenta. La metodología propuesta es capaz de superar las deficiencias existentes en los modelos disponibles para aquellos problemas en los que se precisa realizar cálculos con una resolución moderada o baja. Particularmente, el modelo utiliza un algoritmo heurístico para impedir el crecimiento del espesor de la llama, una deficiencia que lastraba el célebre modelo de Zimont. Bajo este enfoque, el énfasis del análisis se centra en la determinación de la velocidad de combustión, tanto laminar como turbulenta. La velocidad de combustión laminar se determina a través de una nueva formulación capaz de tener en cuenta la influencia simultánea de la relación de equivalencia, la temperatura, la presión y la dilución con vapor de agua. La formulación obtenida es válida para un dominio de temperaturas, presiones y diluciones con vapor de agua más extenso que el de cualquiera de las formulaciones previamente disponibles. Por otra parte, el cálculo de la velocidad de combustión turbulenta puede abordarse mediante correlaciones que permiten determinar esta magnitud en función de distintos parámetros. Con el objetivo de seleccionar la formulación más adecuada, se ha realizado una comparación entre los resultados obtenidos con diversas expresiones y los resultados experimentales.
Se concluye que la ecuación debida a Schmidt es la más adecuada teniendo en cuenta las condiciones del estudio. A continuación, se analiza la importancia de las inestabilidades de la llama en la propagación de los frentes de combustión. Su relevancia resulta significativa para mezclas pobres en combustible en las que la intensidad de la turbulencia permanece moderada, condiciones importantes por ser habituales en los accidentes que ocurren en las centrales nucleares. Por ello, se lleva a cabo la creación de un modelo que permite estimar el efecto de las inestabilidades, y en concreto de la inestabilidad acústica-paramétrica, en la velocidad de propagación de la llama. El modelado incluye la derivación matemática de la formulación heurística de Bauwens et al. para el cálculo del incremento de la velocidad de combustión debido a las inestabilidades de la llama, así como el análisis de la estabilidad de las llamas con respecto a una perturbación cíclica. Por último, los resultados se combinan para concluir el modelado de la inestabilidad acústica-paramétrica. Tras finalizar esta fase, la investigación se centró en la aplicación del modelo desarrollado a varios problemas de importancia para la seguridad industrial y en el posterior análisis de los resultados y su comparación con los datos experimentales correspondientes. Concretamente, se abordó la simulación de explosiones en túneles y en contenedores, con y sin gradiente de concentración y ventilación. Como resultado general, se logra validar el modelo, confirmando su idoneidad para estos problemas. Como última tarea, se ha realizado un análisis en profundidad de la catástrofe de Fukushima-Daiichi, con el objetivo de determinar la cantidad de hidrógeno que explotó en el reactor número uno, en contraste con otros estudios sobre el tema, centrados en la determinación de la cantidad de hidrógeno generado durante el accidente.
Como resultado de la investigación, se determinó que la cantidad más probable de hidrógeno consumida durante la explosión fue de 130 kg. Es notable que la combustión de una cantidad relativamente pequeña de hidrógeno pueda causar un daño tan significativo, lo que da muestra de la importancia de este tipo de investigaciones. Las ramas de la industria para las que el modelo desarrollado será de interés abarcan la totalidad de la futura economía del hidrógeno (pilas de combustible, vehículos, almacenamiento energético, etc.), con un impacto especial en los sectores del transporte y de la energía nuclear, tanto en las tecnologías de fisión como en las de fusión. ABSTRACT The exhaustion, absolute absence, or simply the uncertainty about the amount of fossil-fuel reserves, added to the variability of their prices and the increasing instability and difficulties of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen is very high in a context that additionally comprehends concerns about pollution and emissions. Due to its excellent environmental impact, the public acceptance of the new energy vector will depend on controlling the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken, and it is concluded that the restrictions on resolution make the modeling of the turbulence and combustion processes necessary. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies.
As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling is the utilization of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, i.e., a model of the turbulent flame speed kind. It is also concluded that, depending on the particular resolution restrictions of each problem and on its geometry, the utilization of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology, able to overcome the deficiencies of the available models for low-resolution problems. Particularly, the model utilizes a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution, and pressure. The formulation obtained is valid for a larger domain of temperatures, steam dilutions, and pressures than any of the previously available formulations. On the other hand, a certain number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared against experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate.
These are important conditions, typical of accidents in nuclear power plants. Therefore, the creation of a model to account for the instabilities, and concretely the acoustic-parametric instability, is undertaken. This encompasses the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The following task in this research was to apply the developed model to several problems significant for industrial safety, analyze the results, and compare them with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a last and final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage; this is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fission and fusion technology.
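The closure strategy described in the abstract (an algebraic turbulent flame speed closing a progress-variable transport equation) is commonly written as follows; this is the generic Zimont-type formulation, not necessarily the thesis's exact equations:

```latex
% Favre-averaged transport of the progress variable c, closed with a
% turbulent flame speed S_T (generic Zimont-type formulation):
\frac{\partial (\bar{\rho}\,\tilde{c})}{\partial t}
  + \nabla \cdot \left( \bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c} \right)
  = \nabla \cdot \left( \frac{\mu_t}{\mathrm{Sc}_t}\,\nabla \tilde{c} \right)
  + \rho_u\, S_T\, \lvert \nabla \tilde{c} \rvert,
\qquad
S_T = A\, u'^{3/4}\, S_L^{1/2}\, \alpha_u^{-1/4}\, l_t^{1/4}
```

where $\rho_u$ is the unburnt density, $S_L$ the laminar flame speed, $u'$ the turbulence intensity, $\alpha_u$ the unburnt thermal diffusivity, $l_t$ the integral length scale, and $A$ a model constant. The thesis's contributions target the two inputs of this closure: a wider-range correlation for $S_L$ and a ranked choice of $S_T$ correlations.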


The present research is framed within the project MODIFICA (MODelo predictivo - edIFIcios - Isla de Calor urbanA), aimed at developing a predictive model for dwelling energy performance under the urban heat island effect, in order to implement it in the evaluation of the real energy demand and consumption of dwellings, as well as in the selection of energy retrofitting strategies. It is funded by Programa de I+D+i orientada a los retos de la sociedad 'Retos Investigación' 2013. Although great advances in building energy performance have been achieved in recent years, available climate data are derived from weather stations placed on the outskirts of the city. Hence, the urban heat island effect is not considered in energy simulations, which implies an important lack of accuracy. Since the 1980s, several international studies have been conducted on the urban heat island (UHI) phenomenon, which modifies the atmospheric conditions of urban centres due to urban agglomeration [1][2][3][4]. In the particular case of Madrid, multiple maps have been generated using different methodologies during the last two decades [5][6][7]. These maps allow us to study the UHI phenomenon from a wide perspective, offering, however, a static representation of it. Consequently, a dynamic model of the Madrid UHI is proposed, in order to evaluate it in a continuous way and to integrate it into building energy simulations.


An evolutionary process is simulated with a simple spin-glass-like model of proteins to examine the origin of folding ability. At each generation, sequences are randomly mutated and subjected to a simulation of the folding process based on the model. According to the frequency of local configurations at the active sites, sequences are selected and passed to the next generation. After a few hundred generations, a sequence capable of folding globally into a native conformation emerges. Moreover, the selected sequence has a distinct energy minimum and an anisotropic funnel on the energy surface, which are essential features for fast folding of proteins. The proposed model reveals that functional selection on local configurations leads a sequence to fold globally into its native conformation at a faster rate.
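The mutate-and-select loop described above can be caricatured in a few lines; the per-generation "folding simulation" is replaced here by a crude similarity-to-native proxy, so everything below is an illustrative assumption rather than the paper's spin-glass model:

```python
import random

# Caricature of the evolutionary loop: mutate, score local configurations
# against a fixed "native" state, keep the top half. Sequence length,
# alphabet, population size, and mutation rate are invented.
random.seed(0)
L, ALPHABET, POP = 20, 4, 30
TARGET = [random.randrange(ALPHABET) for _ in range(L)]   # "native" state

def fitness(seq):
    """Count of sites matching the native local configuration."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    return [random.randrange(ALPHABET) if random.random() < rate else s
            for s in seq]

pop = [[random.randrange(ALPHABET) for _ in range(L)] for _ in range(POP)]
for _ in range(200):                       # a few hundred generations
    pop = [mutate(s) for s in pop]
    pop.sort(key=fitness, reverse=True)
    pop = pop[:POP // 2] * 2               # truncation selection + copy

best = max(fitness(s) for s in pop)
```

Even this crude loop reproduces the qualitative point of the abstract: selecting only on local configurations is enough to drive the population toward sequences that match the global native state.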


Widespread interest in producing transgenic organisms is balanced by concern over ecological hazards, such as species extinction if such organisms were to be released into nature. An ecological risk associated with the introduction of a transgenic organism is that the transgene, though rare, can spread in a natural population. An increase in transgene frequency is often assumed to be unlikely because transgenic organisms typically have some viability disadvantage. Reduced viability is assumed to be common because transgenic individuals are best viewed as macromutants that lack any history of selection that could reduce negative fitness effects. However, these arguments ignore the potential advantageous effects of transgenes on some aspect of fitness such as mating success. Here, we examine the risk to a natural population after release of a few transgenic individuals when the transgene trait simultaneously increases transgenic male mating success and lowers the viability of transgenic offspring. We obtained relevant life history data by using the small cyprinodont fish, Japanese medaka (Oryzias latipes) as a model. Our deterministic equations predict that a transgene introduced into a natural population by a small number of transgenic fish will spread as a result of enhanced mating advantage, but the reduced viability of offspring will cause eventual local extinction of both populations. Such risks should be evaluated with each new transgenic animal before release.
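The qualitative dynamics (spread through mating advantage despite a viability cost) can be reproduced with a one-locus haploid caricature; the model form and parameter values are illustrative, not the paper's deterministic equations:

```python
# One-locus haploid caricature of the dynamics described above (often
# called "Trojan gene" dynamics): a transgene confers a male mating
# advantage m but reduces offspring viability v. Values are illustrative.

def next_freq(p, m=3.0, v=0.7):
    """Advance the transgene frequency by one generation."""
    # mating stage: transgenic parents are over-represented among matings
    p_zygote = m * p / (m * p + (1.0 - p))
    # viability stage: transgenic offspring survive less often
    return v * p_zygote / (v * p_zygote + (1.0 - p_zygote))

p, history = 0.01, []
for _ in range(50):
    history.append(p)
    p = next_freq(p)
# since m * v = 2.1 > 1, the transgene spreads despite the viability cost
```

Each generation multiplies the transgene's odds by m * v, so the gene spreads whenever m * v > 1; the paper's extinction outcome additionally requires tracking population size under the viability drag, which this sketch omits.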


A minimal hypothesis is proposed concerning the brain processes underlying effortful tasks. It distinguishes two main computational spaces: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative, and attentional processors. Workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice. They selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously coactivated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. We outline predictions for spatio-temporal activation patterns during brain imaging, particularly about the contribution of dorsolateral prefrontal cortex and anterior cingulate to the workspace.


Sequence-selective transcription by bacterial RNA polymerase (RNAP) requires σ factor that participates in both promoter recognition and DNA melting. RNAP lacking σ (core enzyme) will initiate RNA synthesis from duplex ends, nicks, gaps, and single-stranded regions. We have used DNA templates containing short regions of heteroduplex (bubbles) to compare initiation in the presence and absence of various σ factors. Using bubble templates containing the σD-dependent flagellin promoter, with or without its associated upstream promoter (UP) element, we demonstrate that UP element stimulation occurs efficiently even in the absence of σ. This supports a model in which the UP element acts primarily through the α subunit of core enzyme to increase the initial association of RNAP with the promoter. Core and holoenzyme do differ substantially in the template positions chosen for initiation: σD restricts initiation to sites 8–9 nucleotides downstream of the conserved −10 element. Remarkably, σA also has a dramatic effect on start-site selection even though the σA holoenzyme is inactive on the corresponding homoduplexes. The start sites chosen by the σA holoenzyme are located 8 nucleotides downstream of sequences on the nontemplate strand that resemble the conserved −10 hexamer recognized by σA. Thus, σA appears to recognize the −10 region even in a single-stranded state. We propose that in addition to its described roles in promoter recognition and start-site melting, σ also localizes the transcription start site.


The genotypic proportions for major histocompatibility complex loci, HLA-A and HLA-B, of progeny in families in 23 South Amerindian tribes in which segregation for homozygotes and heterozygotes could occur are examined. Overall, there is a large deficiency of homozygotes compared with Mendelian expectations (for HLA-A, 114 observed vs. 155.50 expected; for HLA-B, 110 observed vs. 144.75 expected), consistent with strong balancing selection favoring heterozygotes. There is no evidence that these deficiencies were associated with particular alleles or with the age of the individuals sampled. When these families were divided into four mating types, there was strong selection against homozygotes, averaging 0.462 for three of the mating types over the two loci. For the other mating type in which the female parent is homozygous and shares one allele with the heterozygous male parent, there was no evidence of selection against homozygotes. A theoretical model incorporating these findings surprisingly does not result in a stable polymorphism for two alleles but does result in an excess of heterozygotes and a minimum fitness at intermediate allele frequencies. However, for more than two alleles, balancing selection does occur and the model approaches the qualities of the symmetrical heterozygote advantage model as the number of alleles increases.


Immature CD4+CD8+ thymocytes expressing T-cell antigen receptors (TCR) are selected by TCR-mediated recognition of peptides associated with major histocompatibility complex molecules on thymic stromal cells. Selection ensures reactivity of the mature cells to foreign antigens and tolerance to self. Although much has been learned about the factors that determine whether a thymocyte with a given specificity will be positively or negatively selected, selection as an aspect of the developmental process as a whole is less well understood. Here we invoke a model in which thymocytes tune their response characteristics individually and dynamically in the course of development. Cellular development and selection are driven by receptor-mediated metabolic perturbations. Perturbation is a measure of the net intracellular change induced by external stimulation. It results from the integration of several signals and countersignals over time and therefore depends on the environment and the maturation stage of the cell. Individual cell adaptation limits the range of perturbations. Such adaptation renders thymocytes less sensitive to the level of stimulation per se, but responsive to environmental changes in that level. This formulation begins to explain the mechanisms that link developmental and selection events to each other.


The spermatogonial stem cell initiates and maintains spermatogenesis in the testis. To perform this role, the stem cell must self-replicate as well as produce daughter cells that can expand and differentiate to form spermatozoa. Despite the central importance of the spermatogonial stem cell to male reproduction, little is known about its morphological or biochemical characteristics. This results, in part, from the fact that spermatogonial stem cells are an extremely rare cell population in the testis, and techniques for their enrichment are just beginning to be established. In this investigation, we used a multiparameter selection strategy, combining the in vivo cryptorchid testis model with in vitro fluorescence-activated cell sorting analysis. Cryptorchid testis cells were fractionated by fluorescence-activated cell sorting analysis based on light-scattering properties and expression of the cell surface molecules α6-integrin, αv-integrin, and the c-kit receptor. Two important observations emerged from these analyses. First, spermatogonial stem cells from the adult cryptorchid testis express little or no c-kit. Second, the most effective enrichment strategy, in this study, selected cells with low side scatter light-scattering properties, positive staining for α6-integrin, and negative or low αv-integrin expression, and resulted in a 166-fold enrichment of spermatogonial stem cells. Identification of these characteristics will allow further purification of these valuable cells and facilitate the investigation of molecular mechanisms governing spermatogonial stem cell self renewal and hierarchical differentiation.