958 results for Process Modeling


Relevance:

30.00%

Publisher:

Abstract:

Food processes must ensure safe, high-quality products for growing consumer demand, creating the need for better knowledge of their unit operations. Computational fluid dynamics (CFD) has been widely used to better understand thermal processing of foods, which is one of the safest and most frequently used methods of food preservation. However, no study in the literature describes the thermal processing of liquid foods in a brick-shaped package. The present study evaluated such a process and the influence of package orientation on process lethality. It demonstrated the potential of CFD for evaluating thermal processes of liquid foods and the importance of rheological characterization and convection in their thermal processing. It also showed that packaging orientation does not result in different sterilization values during thermal processing of the evaluated fluids in the brick-shaped package.
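The "sterilization values" compared in such studies are typically accumulated lethalities (F0). As a minimal sketch of the standard formula (not code from the study), F0 sums the contribution of each time step of the cold-spot temperature history, weighted by 10^((T − Tref)/z) with Tref = 121.1 °C and z = 10 °C; the temperature values below are illustrative.

```python
# Standard accumulated-lethality (F0) calculation; the temperature
# history used here is illustrative, not data from the study.

def f0_value(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Accumulated lethality F0 (min): sum of dt * 10**((T - Tref)/z)
    over a cold-spot temperature history sampled every dt_min minutes."""
    return sum(dt_min * 10.0 ** ((t - t_ref) / z) for t in temps_c)

# One minute exactly at the reference temperature contributes 1 min of F0;
# a minute 10 degC cooler is ten times less lethal (~0.1 min).
f0_ref = f0_value([121.1], 1.0)
f0_low = f0_value([111.1], 1.0)
```

Comparing F0 at the cold spot for different package orientations is one way the "no difference in sterilization values" conclusion could be quantified.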


The purpose of this study was to investigate and model the water absorption process of corn kernels with different levels of mechanical damage. Corn kernels of the AG 1510 variety with a moisture content of 14.2% (d.b.) were used. The different mechanical damage levels were evaluated indirectly by electrical conductivity measurements. The absorption process was based on the industrial corn wet-milling process, in which the product was soaked in a solution of 0.2% sulfur dioxide (SO2) and 0.55% lactic acid (C3H6O3) in distilled water, at controlled temperatures of 40, 50, 60, and 70 ºC and different mechanical damage levels. The Peleg model was used for the analysis and modeling of the water absorption process. The conclusion is that the structural changes caused by mechanical damage to the corn kernels influenced the initial rates of water absorption, which were higher for the most damaged kernels, and also changed the equilibrium moisture contents of the kernels. The Peleg model fitted the experimental data well, presenting satisfactory values for the analyzed statistical parameters at all temperatures, regardless of the damage level of the corn kernels.
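The Peleg model referred to above is M(t) = M0 + t / (k1 + k2·t), where 1/k1 is the initial absorption rate and M0 + 1/k2 is the equilibrium moisture content. A minimal sketch of fitting it via the linearized form t/(M − M0) = k1 + k2·t follows; the soaking data and constants are illustrative, not the study's measurements.

```python
# Fit the Peleg water-absorption model via its linearized form.
# Data below are synthetic (hypothetical constants), for illustration only.

def fit_peleg(times, moistures, m0):
    """Estimate Peleg constants k1, k2 by ordinary least squares on the
    linearized form y = t / (M - M0) = k1 + k2 * t."""
    ys = [t / (m - m0) for t, m in zip(times, moistures)]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    k2 = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    k1 = (sy - k2 * sx) / n                          # intercept
    return k1, k2

# Synthetic soaking data: initial moisture 14.2 % d.b., hypothetical k1, k2.
m0 = 14.2
k1_true, k2_true = 2.0, 0.02
times = [2.0, 4.0, 8.0, 12.0, 24.0]                       # hours
moist = [m0 + t / (k1_true + k2_true * t) for t in times]  # % d.b.

k1, k2 = fit_peleg(times, moist, m0)
me = m0 + 1.0 / k2   # equilibrium moisture content implied by the fit
```

Higher mechanical damage would show up as a smaller fitted k1 (faster initial uptake) and a changed k2 (shifted equilibrium moisture), matching the study's conclusions.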


The aim of this work was to evaluate a non-agitated process of bioethanol production from soybean molasses and the kinetic parameters of fermentation using a strain of Saccharomyces cerevisiae (ATCC® 2345). The kinetic experiment was conducted in a medium with 30% (w v-1) soluble solids, without supplementation or pH adjustment. The maximum ethanol concentration was reached at 44 hours; ethanol productivity was 0.946 g L-1 h-1, the yield over total initial sugars (Y1) was 47.87%, the yield over consumed sugars (Y2) was 88.08%, and the specific cell production rate was 0.006 h-1. A polynomial model was fitted to the experimental data and provided very similar estimates of yield and productivity. Based on this study, one ton of soybean molasses can produce 103 kg of anhydrous bioethanol.
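The reported figures can be related with two standard conventions (a sketch, not the authors' calculation): volumetric productivity is usually final ethanol concentration divided by fermentation time, and the per-ton figure is a simple mass yield.

```python
# Relating the abstract's reported kinetic figures under the usual
# definitions (sketch only; conventions assumed, numbers from the abstract).
productivity = 0.946          # g L^-1 h^-1 (reported)
t_peak = 44.0                 # h, time of maximum ethanol (reported)
ethanol_g_per_l = productivity * t_peak   # implied peak concentration, ~41.6 g/L

# 103 kg anhydrous ethanol per ton of molasses is a 10.3% mass yield.
ethanol_kg = 103.0
molasses_kg = 1000.0
mass_yield_pct = 100.0 * ethanol_kg / molasses_kg
```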


Building Information Modeling (BIM) is spreading widely in the architecture, engineering, and construction (AEC) industries. Manufacturers of building elements are also starting to provide more and more BIM objects of their products. The ideal availability and distribution of these models has not yet stabilized. A manufacturer's usual goal is to get its model into a design as early as possible, and finding ways to satisfy customer needs with a superior service would help achieve this goal. This study seeks to determine what the case company's customers want from a model, what they consider the ideal way to obtain these models, and what the desired functionalities of such a service are. This master's thesis uses a modified version of the lead user method to understand what the needs are in the longer term. Within this framework, current solutions and their common model functions are also benchmarked. Empirical data were collected through a survey and interviews. As a result, this thesis provides an understanding of what information a customer uses when obtaining a model, what kind of model is expected, and how the process should optimally function. Based on these results, an ideal service is outlined.


Financial time series have a tendency to change their behavior abruptly and maintain the new behavior for several consecutive periods, and commodity futures returns are no exception. This property suggests that nonlinear models, as opposed to linear models, can describe returns and volatility more accurately. Markov regime switching models are able to capture this behavior and have become a popular way to model financial time series. This study uses a Markov regime switching model to describe the behavior of energy futures returns at the level of individual commodities, because studies show that commodity futures are a heterogeneous asset class. The purpose of this thesis is twofold: first, to determine how many regimes characterize individual energy commodities' returns at different return frequencies; second, to study the characteristics of these regimes. We extend previous studies on the subject in two ways: we allow for the possibility that the number of regimes may exceed two, and we conduct the research on individual commodities rather than on commodity indices or their subgroups. We use daily, weekly, and monthly time series of Brent crude oil, WTI crude oil, natural gas, heating oil, and gasoil futures returns over 1994–2014, where available. We apply the likelihood ratio test to determine the sufficient number of regimes for each commodity and data frequency. The time series are then modeled with a Markov regime switching model to obtain the return distribution characteristics of each regime, as well as the transition probabilities of moving between regimes. The results suggest that daily energy futures return series consist of three to six regimes, whereas weekly and monthly returns for all energy commodities display only two regimes. When the number of regimes exceeds two, the time series of energy commodities tend to form groups of regimes.
These groups are usually quite persistent as a whole, because the probability of a regime switch within the group is high. However, individual regimes in these groups are not persistent, and the process oscillates between them frequently. Regimes that are not part of any group are generally persistent but show low ergodic probability, i.e., they rarely prevail in the market. This study also suggests that energy futures return series characterized by two regimes do not necessarily display persistent bull and bear regimes; in fact, for the majority of time series, the bearish regime is considerably less persistent.
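The persistence and ergodic-probability notions used above follow directly from the estimated transition matrix: the expected duration of regime i is 1/(1 − p_ii), and the long-run (ergodic) probabilities solve π = πP. A sketch with a hypothetical two-regime matrix (not the thesis's estimates) makes the "less persistent bear regime" finding concrete:

```python
# Hypothetical two-regime Markov chain: a persistent bull regime and a
# less persistent bear regime (probabilities are illustrative only).
p_bull_stay = 0.98   # P(stay in bull | bull)
p_bear_stay = 0.90   # P(stay in bear | bear)

# Expected duration of regime i is 1 / (1 - p_ii).
dur_bull = 1.0 / (1.0 - p_bull_stay)   # ~50 periods
dur_bear = 1.0 / (1.0 - p_bear_stay)   # ~10 periods

# For a 2-state chain, the ergodic probabilities solving pi = pi * P are
# pi_bull = (1 - p_bear_stay) / ((1 - p_bull_stay) + (1 - p_bear_stay)).
pi_bull = (1.0 - p_bear_stay) / ((1.0 - p_bull_stay) + (1.0 - p_bear_stay))
pi_bear = 1.0 - pi_bull
```

With these numbers the bull regime lasts five times longer on average and prevails about five-sixths of the time, which is the pattern the thesis reports qualitatively for most two-regime series.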


This study has two main objectives. First, the phlebotomy process at the St. Catharines Site of the Niagara Health System is investigated; it starts when an order for a blood test is placed and ends when the specimen arrives at the lab. The performance measure is the flow time of the process, which reflects the concerns and interests of both the hospital and the patients. Three popular operational methodologies are applied to reduce the flow time and improve the process: DMAIC from Six Sigma, lean principles, and simulation modeling. Potential suggestions are provided for the St. Catharines Site that could reduce the average flow time by seven minutes. The second objective addresses the fact that these three methodologies had not previously been combined in a process improvement effort. A structured framework combining them is developed to benefit future studies of phlebotomy and other hospital processes.


Heterotachy, the variation of substitution rates across time and across sites, has been shown to be a frequent phenomenon in real data. Failure to model heterotachy can potentially cause phylogenetic artifacts. Several models currently handle heterotachy: the mixture of branch lengths (MBL) model and various forms of the covarion model. In this project, our goal is to find a model that efficiently accounts for the heterotachous signals present in the data and thereby improves phylogenetic inference. To this end, two studies were carried out. In the first, we compare the MBL model with the covarion model and the homogeneous model using the AIC and BIC criteria as well as cross-validation. From our results, we conclude that the MBL model is not necessary for sites whose branch lengths differ across the whole tree, because in real data the heterotachous signals that interfere with phylogenetic inference are generally concentrated in a limited area of the tree. In the second study, we relax the assumption that the covarion model is homogeneous across sites and develop a mixture model based on a Dirichlet process. To evaluate different heterogeneous models, we define several posterior predictive discrepancy tests that examine various aspects of molecular evolution through stochastic mappings. These tests show that the covarion mixture model combined with a gamma distribution adequately captures substitution-rate variation both within and between sites. Our research provides a detailed description of heterotachy in real data and suggests directions for future heterotachous models.
The posterior predictive discrepancy tests provide diagnostic tools for evaluating models in detail. Moreover, our two studies reveal the non-specificity of heterogeneous models and, consequently, the presence of interactions between different heterogeneous models. Our studies strongly suggest that the data contain different heterogeneous features that should be accounted for simultaneously in phylogenetic analyses.


Hardware/software systems are becoming indispensable in every aspect of daily life. The growing presence of these systems in various products and services motivates the search for methods to develop them efficiently. However, efficient design of these systems is limited by several factors, among them the growing complexity of applications, increasing integration density, the heterogeneous nature of products and services, and shrinking time to market. Transaction-level modeling (TLM) is considered a promising paradigm for managing design complexity, providing means to explore and validate design alternatives at high levels of abstraction. This research proposes a methodology for expressing time in TLM based on an analysis of timing constraints. We propose to combine two development paradigms to accelerate design: TLM on the one hand, and a methodology for expressing time between transactions on the other. This synergy allows us to combine high-performance simulation methods and formal analytical methods in a single environment. We propose a new timing-verification algorithm based on a linearization procedure for min/max constraints, together with an optimization technique that improves the algorithm's efficiency. We complete the mathematical description of all the constraint types presented in the literature. We developed exploration and refinement methods for the communication system that allowed us to apply the timing-verification algorithms at different TLM levels.
Since several definitions of TLM exist, within our research we defined a specification and simulation methodology for hardware/software systems based on the TLM paradigm. In this methodology, several modeling concepts can be considered separately. Based on modern software-engineering technologies such as XML, XSLT, XSD, object-oriented programming, and several others provided by the .Net environment, the proposed methodology makes it possible to reuse intermediate models in order to cope with the time-to-market constraint. It provides a general approach to system modeling that separates design aspects such as the models of computation used to describe the system at multiple levels of abstraction. As a result, the system's functionality can be clearly identified in the model without details tied to the development platforms, which improves the portability of the application model.


Pain is a perceptual experience with many dimensions. These dimensions are interrelated and recruit neural networks that process the corresponding information. Elucidating the functional architecture supporting the different perceptual aspects of the experience is therefore a fundamental step toward understanding the functional role of the various regions of the cerebral pain matrix in the cortical circuits underlying the subjective experience of pain. Among the brain regions involved in processing nociceptive information, the primary and secondary somatosensory cortices (S1 and S2) are the main regions generally associated with processing the sensory-discriminative aspect of pain. However, the functional organization of these somatosensory regions is not entirely clear, and relatively few studies have directly examined the integration of information between them. Several questions thus remain concerning the hierarchical relationship between S1 and S2 and the functional role of inter-hemispheric connections between homologous somatosensory regions. Likewise, serial versus parallel processing within the somatosensory system is another open question requiring further examination. The goal of the present study was to test a number of hypotheses about causality in the functional interactions between S1 and S2 while subjects received painful electric shocks. We implemented a connectivity-modeling method that uses a causal description of the system's dynamics to study interactions between activation sites defined from a functional imaging dataset.
Our paradigm consisted of three experimental sessions using electric shocks at three intensity levels: moderately painful (level 3), slightly painful (level 2), and completely non-painful (level 1). This paradigm allowed us to study how stimulus intensity is encoded in our network of interest and how the connectivity of the different regions is modulated across stimulation conditions. Our results favor a serial mode of processing of nociceptive somatosensory information, with a predominant thalamocortical input to S1 contralateral to the stimulation site. They imply that information propagates from contralateral S1 through our network of interest, composed of bilateral S1 and S2. Our analysis indicates that the S1→S2 connection is strengthened by pain, suggesting that S2 is higher in the pain-processing hierarchy than S1, consistent with previous neurophysiological and magnetoencephalography findings. Finally, our analysis provides evidence that somatosensory information enters the hemisphere contralateral to the stimulation side, with inter-hemispheric connections responsible for transferring the information to the ipsilateral hemisphere.


One of the major concerns of scoliotic patients undergoing spinal correction surgery is the trunk's external appearance after the surgery. This paper presents a novel incremental approach for simulating postoperative trunk shape in scoliosis surgery. Preoperative and postoperative trunk shape data were obtained using three-dimensional medical imaging techniques for seven patients with adolescent idiopathic scoliosis. Results of qualitative and quantitative evaluations, based on the comparison of simulated and actual postoperative trunk surfaces, showed adequate accuracy of the method. Our approach provides a candidate simulation tool for use in a clinical environment during the surgical planning process.


This study is concerned with autoregressive moving average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models that represents stationary time series, a phenomenon encountered most often in practice by engineers, scientists, and economists. It is always desirable to employ models that use parameters parsimoniously; ARMA models achieve parsimony because they have only a finite number of parameters. Although the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series, which can be transformed into stationary time series. Time series models, built from present and past data, are used for forecasting future values. Both the physical and the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management (finance, marketing, production, and business economics) as well as signal processing, communication engineering, chemical processes, electronics, etc. This broad applicability of time series is the motivation for this study.
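The parsimony point above can be made concrete with the simplest mixed case, an ARMA(1,1): x_t = φ·x_{t-1} + e_t + θ·e_{t-1}, where two parameters (with |φ| < 1 for stationarity) describe the whole dependence structure, and the one-step-ahead forecast is the conditional expectation φ·x_t + θ·e_t. The sketch below uses arbitrary illustrative parameter values, not results from the study.

```python
import random

def simulate_arma11(phi, theta, n, seed=0):
    """Simulate n observations of an ARMA(1,1) process
    x_t = phi*x_{t-1} + e_t + theta*e_{t-1} with N(0,1) innovations."""
    rng = random.Random(seed)
    x, e_prev, out = 0.0, 0.0, []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0)
        x = phi * x + e + theta * e_prev
        out.append(x)
        e_prev = e
    return out

def forecast_one_step(phi, theta, x_last, e_last):
    """One-step-ahead forecast: E[x_{t+1} | data up to t], since the
    future innovation e_{t+1} has zero mean."""
    return phi * x_last + theta * e_last

# Illustrative parameters; a stationary series of 200 observations.
series = simulate_arma11(phi=0.5, theta=0.3, n=200)
```

In practice φ and θ would be estimated from the data (e.g. by maximum likelihood) before forecasting; the recursion itself is the same.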


Metal matrix composites (MMC) having aluminium (Al) in the matrix phase and silicon carbide particles (SiCp) in the reinforcement phase, i.e., Al-SiCp type MMC, have gained popularity in the recent past. In this competitive age, manufacturing industries strive to produce superior-quality products at a reasonable price. This is possible by achieving higher productivity while machining at optimum combinations of process variables. The low-weight, high-strength MMC are found suitable for a variety of components.


Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements, and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and resulting land-use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition; on the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g., object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing, and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be used and manipulated by modeling applications. The scripting-language interpreter is embedded in SITE.
The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests, and analysis of simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability, and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace-gas emissions, the DAYCENT agro-ecosystem model was integrated. This case study showed that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through a specific component. The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map-comparison algorithm capable of comparing a simulation result to a reference map.
Several map-optimization and map-comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map-comparison measure as the objective function. The calibration period ranged from 1981 to 2002, for which reference land-use maps were compiled. It was shown that efficient automated model calibration with SITE is possible; nevertheless, the selection of calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenue per hectare of up to 14%. However, the study also showed that ecological and economic value can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
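The figure-of-merit measure used as the calibration objective can be sketched generically (this is the standard definition of the measure, not SITE's own code): it scores agreement on cells that changed, as hits / (hits + misses + false alarms).

```python
# Generic figure-of-merit map comparison (standard definition; not code
# from SITE). Inputs are per-cell booleans: did the cell change in the
# reference maps, and did it change in the simulation?

def figure_of_merit(reference_change, simulated_change):
    """Figure of merit = hits / (hits + misses + false alarms), where a
    hit is a cell that changed in both maps, a miss changed only in the
    reference, and a false alarm changed only in the simulation."""
    hits = misses = false_alarms = 0
    for ref, sim in zip(reference_change, simulated_change):
        if ref and sim:
            hits += 1
        elif ref:
            misses += 1
        elif sim:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

# Toy maps flattened to cell lists: 2 hits, 1 miss, 1 false alarm -> 0.5.
fom = figure_of_merit([1, 1, 0, 0, 1], [1, 0, 1, 0, 1])
```

A genetic algorithm, as used for the STORMA model, would then search the model-parameter space for the simulation that maximizes this score against the reference maps.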


In this work, we present an atomistic-continuum model for simulating ultrafast laser-induced melting processes in semiconductors, using silicon as an example. The kinetics of transient non-equilibrium phase-transition mechanisms is addressed with the molecular dynamics (MD) method at the atomic level, whereas laser-light absorption, the strong generated electron-phonon non-equilibrium, fast heat conduction, and photo-excited free-carrier diffusion are accounted for with a continuum TTM-like model (called nTTM). First, we independently consider the application of nTTM and MD to the description of silicon, and then construct the combined MD-nTTM model. Its development and thorough testing are followed by a comprehensive computational study of the fast non-equilibrium processes induced in silicon by ultrashort laser irradiation. The new model allowed us to investigate the effect of laser-induced pressure and lattice temperature on the melting kinetics. Two competing melting mechanisms, heterogeneous and homogeneous, were identified in our large-scale simulations. Apart from the classical heterogeneous melting mechanism, homogeneous nucleation of the liquid phase inside the material contributes significantly to the melting process. The simulations showed that, due to the open diamond structure of the crystal, the laser-generated internal compressive stresses reduce the crystal's stability against homogeneous melting. Consequently, the latter can take on a massive character within several picoseconds of laser heating. Due to silicon's large negative volume of melting, the material contracts upon the phase transition, relaxing the compressive stresses, and the subsequent melting proceeds heterogeneously until the excess thermal energy is consumed.
A series of simulations over a range of absorbed fluences allowed us to find the threshold fluence at which homogeneous liquid nucleation starts contributing to the classical heterogeneous propagation of the solid-liquid interface. A series of simulations over a range of material thicknesses showed that the sample width chosen in our simulations (800 nm) corresponds to a thick sample. Additionally, to support the main conclusions, the results were verified with a different interatomic potential. Possible improvements of the model to account for non-thermal effects are discussed, and certain restrictions on suitable interatomic potentials are identified. As a first step towards including these effects in MD-nTTM, we performed nanometer-scale MD simulations with a new interatomic potential, designed to reproduce ab initio calculations at a laser-induced electronic temperature of 18,946 K. The simulations demonstrated that, similarly to thermal melting, the non-thermal phase transition occurs through nucleation. A series of simulations showed that higher (lower) initial pressure reinforces (hinders) the creation and growth of non-thermal liquid nuclei. Using Si as an example, the laser-melting kinetics of semiconductors was found to differ noticeably from that of metals with a face-centered cubic crystal structure. The results of this study therefore have important implications for the interpretation of experimental data on the melting kinetics of semiconductors.


Numerous psychophysical experiments have shown an important role for attentional modulations in vision. Behaviorally, allocation of attention can improve performance in object detection and recognition tasks. At the neural level, attention increases the firing rates of neurons in visual cortex whose preferred stimulus is currently attended. However, it is not yet known how these two phenomena are linked, i.e., how the visual system could be "tuned" in a task-dependent fashion to improve task performance. To answer this question, we performed simulations with the HMAX model of object recognition in cortex [45]. We modulated the firing rates of model neurons in accordance with experimental results on the effects of feature-based attention on single neurons and measured changes in the model's performance on a variety of object recognition tasks. It turned out that recognition performance could be improved only under very limited circumstances, and that attentional influences on the process of object recognition per se tend to lack specificity or raise false alarm rates. These observations lead us to postulate a new role for the observed attention-related neural response modulations.