914 results for Multi-phase Modelling
Abstract:
Canonical Monte Carlo simulations of the Au(210)/H2O interface, using a force field recently proposed by us, are reported. The results exhibit the main features normally observed in simulations of water molecules in contact with different noble metal surfaces. The calculations also assess the influence of the surface topography on the structural aspects of the adsorbed water and on the distribution of the water molecules in the direction normal to the metal surface plane. Adsorption occurs preferentially at sites in the first layer of the metal. The analysis of the density profiles and dipole moment distributions points to two predominant orientations: most of the molecules are adsorbed with the molecular plane parallel to the surface, while others adsorb with one O-H bond parallel to the surface and the other bond pointing towards the bulk liquid phase. There is also evidence of hydrogen bond formation between the first and second solvent layers at the interface. (c) 2007 Elsevier B.V. All rights reserved.
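Canonical (NVT) ensemble sampling of the kind described rests on the Metropolis acceptance rule. A minimal sketch in Python; the reduced units and the `metropolis_accept` helper are illustrative, not the authors' force field or code:

```python
import math
import random

def metropolis_accept(delta_e, temperature, k_b=1.0, rng=random.random):
    """Canonical (NVT) Metropolis criterion: accept a trial move with
    energy change delta_e with probability min(1, exp(-dE / kT))."""
    if delta_e <= 0.0:
        return True                      # downhill moves always accepted
    return rng() < math.exp(-delta_e / (k_b * temperature))
```

A trial move here would be a random displacement or rotation of one water molecule; uphill moves are accepted with Boltzmann probability, which is what lets the simulation sample orientations rather than just minimize energy.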
Abstract:
The objective of this thesis work is to propose an algorithm to detect faces in a digital image with a complex background. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes or an open mouth; facial features therefore form an important basis for detection. The current thesis work focuses on detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components in the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth or a half-profile face. The experimental results demonstrate a detection accuracy of around 85.4% and a higher detection speed compared to the neural network method and other techniques.
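The chrominance-based skin test in the segmentation phase can be sketched as follows. The BT.601 RGB-to-YCbCr conversion is standard, but the Cb/Cr thresholds are illustrative values commonly quoted in the skin-detection literature, not necessarily those used in the thesis:

```python
def rgb_to_cbcr(r, g, b):
    """ITU-R BT.601 chrominance components of an 8-bit RGB pixel."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its chrominance falls inside a
    Cb/Cr box; the box bounds here are illustrative assumptions."""
    cb, cr = rgb_to_cbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Working in chrominance rather than RGB is what makes the test largely invariant to brightness, which is why eyes-closed or open-mouth faces (which defeat feature-template detectors) still segment correctly.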
Abstract:
The study reported here is part of a large project for evaluation of the Thermo-Chemical Accumulator (TCA), a technology under development by the Swedish company ClimateWell AB. The studies concentrate on the use of the technology for comfort cooling. This report concentrates on measurements in the laboratory, modelling and system simulation. The TCA is a three-phase absorption heat pump that stores energy in the form of crystallised salt, in this case Lithium Chloride (LiCl) with water being the other substance. The process requires vacuum conditions as with standard absorption chillers using LiBr/water. Measurements were carried out in the laboratories at the Solar Energy Research Center SERC, at Högskolan Dalarna as well as at ClimateWell AB. The measurements at SERC were performed on a prototype version 7:1 and showed that this prototype had several problems resulting in poor and unreliable performance. The main results were that: there was significant corrosion leading to non-condensable gases that in turn caused very poor performance; unwanted crystallisation caused blockages as well as inconsistent behaviour; poor wetting of the heat exchangers resulted in relatively high temperature drops there. A measured thermal COP for cooling of 0.46 was found, which is significantly lower than the theoretical value. These findings resulted in a thorough redesign for the new prototype, called ClimateWell 10 (CW10), which was tested briefly by the authors at ClimateWell. The data collected here was not large, but enough to show that the machine worked consistently with no noticeable vacuum problems. It was also sufficient for identifying the main parameters in a simulation model developed for the TRNSYS simulation environment, but not enough to verify the model properly. This model was shown to be able to simulate the dynamic as well as static performance of the CW10, and was then used in a series of system simulations. 
A single system model was developed as the basis of the system simulations, consisting of a CW10 machine, 30 m2 of flat plate solar collectors with a backup boiler, and an office in Stockholm with a design cooling load of 50 W/m2, resulting in a 7.5 kW design load for the 150 m2 floor area. Two base cases were defined from this: one for Stockholm using a dry cooler with a design cooling rate of 30 kW, and one for Madrid with a cooling tower with a design cooling rate of 34 kW. A number of parametric studies were performed on these two base cases. These showed that the temperature lift is a limiting factor for cooling at higher ambient temperatures and for charging with a fixed-temperature source such as district heating. The simulated evacuated tube collector performs only marginally better than a good flat plate collector when considering the gross area, the margin being greater for larger solar fractions. For the 30 m2 collector, solar fractions of 49% and 67% were achieved for the Stockholm and Madrid base cases respectively. The average annual efficiency of the collector in Stockholm (12%) was much lower than that in Madrid (19%). The thermal COP was simulated to be approximately 0.70, but it has not been possible to verify this against measured data. The annual electrical COP was shown to be very dependent on the cooling load, as a large proportion of the electricity is used by components that are permanently on. For the cooling loads studied, the annual electrical COP ranged from 2.2 for a 2000 kWh cooling load to 18.0 for a 21000 kWh cooling load. There is, however, potential to reduce the electricity consumption of the machine, which would improve these figures significantly. It was shown that a cooling tower is necessary for the Madrid climate, whereas a dry cooler is sufficient for Stockholm, although a cooling tower does improve performance. The simulation study was only a shallow one and has identified a number of areas that are important to study in more depth.
One such area is advanced control strategy, which is necessary to mitigate the weakness of the technology (low temperature lift for cooling) and to optimally use its strength (storage).
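The performance figures quoted above (thermal COP of 0.46 measured and 0.70 simulated, electrical COP of 2.2 to 18.0) are simple ratios. A sketch, with an illustrative parasitic power figure rather than the report's measured loads:

```python
def thermal_cop(q_cold_kwh, q_charge_kwh):
    """Thermal COP for cooling: cold delivered per unit of driving heat."""
    return q_cold_kwh / q_charge_kwh

def electrical_cop(q_cold_kwh, parasitic_kw, hours_on, variable_kwh=0.0):
    """Annual electrical COP. A fixed parasitic draw from permanently-on
    components dominates at small annual cooling loads, which is why the
    electrical COP is so load-dependent."""
    elec_kwh = parasitic_kw * hours_on + variable_kwh
    return q_cold_kwh / elec_kwh
```

With an assumed 0.1 kW permanent draw over a full year, a 2000 kWh load gives an electrical COP roughly an order of magnitude lower than a 21000 kWh load, reproducing the trend (though not the exact figures) reported above.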
Abstract:
As a first step in assessing the potential of thermal energy storage in Swedish buildings, this paper discusses the current state of the Swedish building stock and different storage methods. Many buildings date from the 1960s or earlier and have relatively high energy demand, creating opportunities for large energy savings. The major means of heating are electricity for detached houses and district heating for multi-dwelling houses and premises. Cooling needs are relatively low but steadily increasing, emphasizing the need to consider energy storage for both heat and cold. The thermal mass of a building is important for passive storage of thermal energy, but this has not been considered much when constructing buildings in Sweden. Instead, the common ways of storing thermal energy in Swedish buildings today are water storage tanks or boreholes in the ground, while latent thermal energy storage is still very uncommon.
Abstract:
This paper reports the findings of using a multi-agent-based simulation model to evaluate sawmill yard operations within a large privately owned sawmill in Sweden, Bergkvist-Insjön AB in the current case. Conventional working routines within the sawmill yard threaten overall efficiency and thereby limit the profit margin of the sawmill. Deploying dynamic work routines in the sawmill yard is not readily feasible in real time, so a discrete-event simulation model has been investigated to report the optimal work order depending on the situation. Preliminary investigations indicate that the results achieved by the simulation model are promising. It is expected that these results will support Bergkvist-Insjön AB in making optimal decisions by deploying an efficient work order in the sawmill yard.
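A discrete-event simulation of the kind described runs on a time-ordered event queue: pop the earliest event, execute it, and schedule any follow-up events. A minimal sketch; the truck-arrival and unload handlers are hypothetical stand-ins, not Bergkvist-Insjön's actual yard operations:

```python
import heapq

def simulate(events):
    """Minimal discrete-event loop over (time, seq, handler) tuples.
    Each handler returns a list of (delay, follow_up_handler) pairs."""
    queue = list(events)
    heapq.heapify(queue)
    seq = len(queue)                  # tie-breaker so handlers never compare
    log = []
    while queue:
        time, _, handler = heapq.heappop(queue)
        log.append((time, handler.__name__))
        for dt, follow_up in handler(time):
            heapq.heappush(queue, (time + dt, seq, follow_up))
            seq += 1
    return log

def unload(t):
    return []                         # unloading ends this event chain

def truck_arrives(t):
    return [(2, unload)]              # hypothetical: unloading takes 2 units
```

Here `simulate([(0, 0, truck_arrives)])` yields `[(0, 'truck_arrives'), (2, 'unload')]`; in a yard model the handlers would contend for shared resources (cranes, sorting stations), which is where alternative work orders can be compared.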
Abstract:
Determining the provenance of data, i.e. the process that led to that data, is vital in many disciplines. For example, in science, the process that produced a given result must be demonstrably rigorous for the result to be deemed reliable. A provenance system supports applications in recording adequate documentation about process executions to answer queries regarding provenance, and provides functionality to perform those queries. Several provenance systems are being developed, but all focus on systems in which the components are reactive, for example Web Services that act on the basis of a request, job submission systems, etc. This limitation means that questions regarding the motives of autonomous actors, or agents, in such systems remain unanswerable in the general case. Such questions include: who was ultimately responsible for a given effect, what was their reason for initiating the process, and does the effect of a process match what was intended to occur by those initiating the process? In this paper, we address this limitation by integrating two solutions: a generic, re-usable framework for representing the provenance of data in service-oriented architectures and a model for describing the goal-oriented delegation and engagement of agents in multi-agent systems. Using these solutions, we present algorithms to answer common questions regarding responsibility and success of a process and evaluate the approach with a simulated healthcare example.
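A "who was ultimately responsible" query of the kind listed above reduces, in the simplest case, to walking "acted on behalf of" delegation edges back to their root. A sketch under that assumption; the agent names are hypothetical, and the paper's algorithms operate over a far richer provenance graph:

```python
def ultimately_responsible(delegations, actor):
    """Walk delegation edges to the root agent. `delegations` maps each
    delegatee to its delegator (i.e. who it acted on behalf of)."""
    seen = {actor}
    while actor in delegations:
        actor = delegations[actor]
        if actor in seen:             # guard against cyclic records
            raise ValueError("cyclic delegation chain")
        seen.add(actor)
    return actor

# Hypothetical healthcare-style chain: a reporting service acting for a
# lab system, which in turn acts for a clinician.
chain = {"reporting_service": "lab_system", "lab_system": "dr_jones"}
```

With this chain, responsibility for any effect produced by `reporting_service` traces back to `dr_jones`.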
Abstract:
Mirroring the paper versions exchanged between businesses today, electronic contracts offer the possibility of dynamic, automatic creation and enforcement of restrictions and compulsions on agent behaviour that are designed to ensure business objectives are met. However, where there are many contracts within a particular application, it can be difficult to determine whether the system can reliably fulfil them all; computer-parsable electronic contracts may allow such verification to be automated. In this paper, we describe a conceptual framework and architecture specification in which normative business contracts can be electronically represented, verified, established, renewed, etc. In particular, we aim to allow systems containing multiple contracts to be checked for conflicts and violations of business objectives. We illustrate the framework and architecture with an aerospace example.
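Checking a system of contracts for conflicts can be sketched, in the simplest case, as detecting an obligation and a prohibition bearing on the same agent and action. This minimal pairwise check is illustrative only and falls far short of the paper's framework, which also handles establishment, renewal and violation of business objectives:

```python
def conflicting(norm_a, norm_b):
    """Two norms conflict when one obliges and the other forbids the
    same agent to perform the same action (a minimal check only)."""
    same_target = (norm_a["agent"], norm_a["action"]) == \
                  (norm_b["agent"], norm_b["action"])
    opposed = {norm_a["modality"], norm_b["modality"]} == {"obliged", "forbidden"}
    return same_target and opposed

# Hypothetical aerospace-flavoured norms from two separate contracts:
deliver = {"agent": "supplier", "action": "ship_part", "modality": "obliged"}
embargo = {"agent": "supplier", "action": "ship_part", "modality": "forbidden"}
```

A verifier would run such a check over all norm pairs drawn from the active contracts; machine-parsable contract representations are what make this automation possible.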
Abstract:
The presented work deals with the calibration of a 2D numerical model for the simulation of long-term bed load transport. A settling basin along an alpine stream was used as a case study. The focus is to parameterise the multi-fractional transport model such that a dynamically balanced behaviour regarding erosion and deposition is reached. The 2D hydrodynamic model uses a multi-fraction, multi-layer approach to simulate morphological changes and bed load transport. The mass balancing is performed between three layers: a top mixing layer, an intermediate subsurface layer and a bottom layer. This approach imposes computational limitations on calibration. Due to the high computational demands, the calibration strategy is crucial not only for the result, but also for the time required for calibration. Brute-force methods such as Monte Carlo type methods may require too many model runs. All calibration strategies tested here used multiple model runs, each utilising the parameterization and/or results from the previous run. One concept was to reset to the initial bed elevations after each run, allowing the re-sorting process to converge to stable conditions. As an alternative, or in combination, the roughness was adapted based on the nodal grading curves resulting from the previous run. Since the adaptation is a spatial process, the whole model domain is subdivided into sections that are homogeneous regarding hydraulics and morphological behaviour. For faster optimization, the parameters are adapted section-wise. Additionally, a systematic variation was performed, considering results from previous runs and the interaction between sections. The approach can be considered similar to evolutionary calibration approaches, but uses analytical links instead of random parameter changes.
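The section-wise adaptation loop described above can be sketched as follows. The fixed-step nudge rule and the toy model in the usage note are illustrative assumptions standing in for the paper's analytical links and the actual 2D morphodynamic model:

```python
def calibrate_roughness(sections, run_model, target, step=0.05,
                        max_runs=20, tol=0.01):
    """Section-wise calibration sketch: after each model run, nudge each
    homogeneous section's roughness up or down depending on whether its
    simulated bed change undershoots or overshoots the target."""
    ks = {s: 0.1 for s in sections}          # initial roughness per section
    for _ in range(max_runs):
        result = run_model(ks)               # section -> simulated bed change
        if all(abs(result[s] - target[s]) <= tol for s in sections):
            break                            # all sections within tolerance
        for s in sections:
            ks[s] += step if result[s] < target[s] else -step
    return ks
```

With a toy model where the simulated bed change simply equals the roughness, `calibrate_roughness(["A"], lambda k: dict(k), {"A": 0.3})` converges to 0.3 in a handful of runs; the point of the sketch is only the run-reuse structure, since each real model run is expensive.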
Abstract:
In this paper the architecture of an experimental multiparadigmatic programming environment is sketched, showing how its parts combine with application modules in order to integrate program modules written in different programming languages and paradigms. Adaptive automata are special self-modifying formal state machines used as a design and implementation tool in the representation of complex systems. Adaptive automata have been proven to have the same formal power as Turing Machines; therefore, at least in theory, arbitrarily complex systems may be modeled with adaptive automata. The present work briefly introduces this formal tool and presents case studies showing how to use it in two very different situations: first, in the name management module of a multi-paradigmatic, multi-language programming environment, and second, in an application program implementing an adaptive automaton that accepts a context-sensitive language.
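The self-modifying idea can be illustrated minimally: a transition may carry an adaptive action that rewrites the transition table at run time. The three-state "reading 'a' teaches the machine 'b'" machine below is a toy of my own construction, not one of the paper's automata:

```python
class AdaptiveAutomaton:
    """Tiny sketch of a self-modifying automaton: transitions may carry
    an adaptive action that edits the transition table while running."""
    def __init__(self, start, accepting):
        self.delta = {}               # (state, symbol) -> (next_state, action)
        self.state = start
        self.accepting = set(accepting)

    def add(self, state, symbol, nxt, action=None):
        self.delta[(state, symbol)] = (nxt, action)

    def accepts(self, word):
        for sym in word:
            if (self.state, sym) not in self.delta:
                return False          # no transition defined (yet)
            self.state, action = self.delta[(self.state, sym)]
            if action:
                action(self)          # the automaton modifies itself
        return self.state in self.accepting

def make_machine():
    """Accepts 'ab' but not 'b': the b-transition only exists after an
    adaptive action triggered by reading 'a' has inserted it."""
    aut = AdaptiveAutomaton("q0", {"q2"})
    def learn_b(m):
        m.add("q1", "b", "q2")
    aut.add("q0", "a", "q1", learn_b)
    return aut
```

Chains of such insertions are what let adaptive automata count unboundedly and hence recognize context-sensitive languages like a^n b^n c^n, despite each snapshot looking like a finite automaton.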
Abstract:
Since equipment maintenance is the major cost factor in industrial plants, the development of fault-prediction techniques is very important. Three-phase induction motors are key electrical equipment in industrial applications, mainly because of their low cost and high robustness; they are nevertheless subject to fault types such as shorted windings and broken bars. Several acquisition, processing and signal analysis methods are applied to improve their diagnosis; the most efficient techniques use current sensors and current signature analysis. In this dissertation, starting from these sensors, signal analysis is performed through Park's vector, which provides good visualization capability. Since the acquisition of fault data is an arduous task, a methodology for building a fault database is developed. Park's transform in the stationary reference frame is applied to model the machine and solve its differential equations. Fault detection requires a detailed analysis of the variables and their influences, which makes diagnosis more complex. Pattern recognition allows systems to be generated automatically, based on patterns and data concepts that are in most cases undetectable by specialists, supporting decision tasks. Classification algorithms with diverse learning paradigms (k-Nearest Neighbours, Neural Networks, Decision Trees and Naïve Bayes) are used for pattern recognition of machine faults. Multi-classifier systems are used to reduce classification errors: the homogeneous algorithms Bagging and Boosting and the heterogeneous algorithms Vote, Stacking and StackingC were inspected. The results show the effectiveness of the constructed model for fault modelling, as well as the possibility of using multi-classifier algorithms for fault classification.
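The Park's vector referred to above maps the three stator phase currents onto two orthogonal components; for a healthy, balanced machine its locus is a circle, and faults distort that circle, which is what gives the method its visualization capability. A standard-form sketch:

```python
import math

def parks_vector(ia, ib, ic):
    """Park's vector components (i_D, i_Q) from the three stator phase
    currents, in the usual power-invariant form. For balanced sinusoidal
    currents of amplitude I the locus is a circle of radius sqrt(3/2)*I."""
    i_d = math.sqrt(2 / 3) * ia - ib / math.sqrt(6) - ic / math.sqrt(6)
    i_q = (ib - ic) / math.sqrt(2)
    return i_d, i_q
```

In a monitoring application the (i_D, i_Q) points are plotted over a cycle; deviations from a circle (e.g. an elliptical locus under a shorted winding) become the input features for the classifiers listed in the abstract.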
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper presents a hybrid approach, mixing the time and frequency domains, for transmission line modelling. The proposed methodology handles a steady fundamental signal mixed with fast and slow transients, including impulsive and oscillatory behaviour. A transmission line model is developed based on a lumped-element representation and state-space techniques. The proposed methodology represents an easy and practical procedure to model a three-phase transmission line directly in the time domain, without the explicit use of inverse transforms. It takes into account the frequency-dependent parameters of the line, considering the soil and skin effects; in order to include these effects in the state matrices, a fitting method is applied. Furthermore, the accuracy of the developed model is verified in the frequency domain by a simple methodology based on the line's distributed parameters and the transfer function relating the input/output signals of the lumped-parameter representation. In addition, this article proposes the use of a fast and robust analytic integration procedure to solve the state equations, enabling transient and steady-state simulations. The results are compared with those obtained by the commercial software Microtran (EMTP), taking into account a three-phase transmission line typical of the Brazilian transmission system.
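Solving state equations of the form x' = Ax + Bu can be illustrated with the scalar trapezoidal step below, the implicit scheme that underlies EMTP-type tools. The single-state form is my simplification for illustration, not the paper's analytic procedure or its multi-state three-phase matrices:

```python
def trapezoidal_step(a, b, x, u_now, u_next, dt):
    """One trapezoidal-rule step of the scalar state equation
    x' = a*x + b*u. Implicit and A-stable, so large time steps stay
    stable even for stiff line parameters."""
    k = 0.5 * dt
    return ((1 + k * a) * x + k * b * (u_now + u_next)) / (1 - k * a)
```

For a series R-L branch (L di/dt = -R i + v, i.e. a = -R/L, b = 1/L) driven by a constant voltage, repeated steps converge to the steady-state current v/R, covering both the transient and steady-state regimes mentioned in the abstract.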
Abstract:
Background: The effects of gonadotrophin-releasing hormone agonist (GnRH-a) administered in the luteal phase remain controversial. This meta-analysis aimed to evaluate the effect of the administration of a single dose of GnRH-a in the luteal phase on ICSI clinical outcomes. Methods: The research strategy included an online search of databases. Only randomized studies were included. The outcomes analyzed were implantation rate, clinical pregnancy rate (CPR) per transfer and ongoing pregnancy rate. The fixed-effects model was used for the odds ratio. In all trials, a single dose of GnRH-a was administered at day 5/6 after the ICSI procedure. Results: All cycles presented statistically significantly higher rates of implantation (P < 0.0001), CPR per transfer (P = 0.006) and ongoing pregnancy (P = 0.02) in the group that received luteal-phase GnRH-a administration than in the control group (without luteal-phase GnRH-a administration). When the meta-analysis was restricted to trials that had used the long GnRH-a ovarian stimulation protocol, CPR per transfer (P = 0.06) and ongoing pregnancy (P = 0.23) rates were not significantly different between the groups, but the implantation rate was significantly higher (P = 0.02) in the group that received luteal-phase GnRH-a administration. On the other hand, the results from trials that had used the GnRH antagonist multi-dose ovarian stimulation protocol showed statistically significantly higher implantation (P = 0.0002), CPR per transfer (P = 0.04) and ongoing pregnancy (P = 0.04) rates in the luteal-phase GnRH-a administration group. The majority of the results presented heterogeneity. Conclusions: These findings demonstrate that luteal-phase single-dose GnRH-a administration can increase the implantation rate in all cycles, and the CPR per transfer and ongoing pregnancy rate in cycles with the GnRH antagonist ovarian stimulation protocol.
Nevertheless, by considering the heterogeneity between the trials, it seems premature to recommend the use of GnRH-a in the luteal phase. Additional randomized controlled trials are necessary before evidence-based recommendations can be provided.
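The fixed-effects pooling mentioned in the Methods can be sketched as an inverse-variance weighted average of per-trial log odds ratios. This is a generic textbook implementation (zero-cell continuity corrections omitted), not the analysis software used by the authors, and the trial counts in the test are made up:

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effects (inverse-variance) pooled odds ratio from 2x2 tables
    given as (events_trt, n_trt, events_ctl, n_ctl) per trial."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_or = math.log((a * d) / (b * c))  # per-trial log odds ratio
        weight = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)  # inverse variance
        num += weight * log_or
        den += weight
    return math.exp(num / den)
```

Because the weights are inverse variances, larger trials dominate the pooled estimate; when between-trial heterogeneity is substantial, as reported above, a random-effects model would usually be preferred, which is consistent with the authors' caution.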
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Driven by the challenges involved in the development of new advanced materials with unusual drug delivery profiles capable of improving the therapeutic and toxicological properties of existing cancer chemotherapy, the one-pot sol-gel synthesis of flexible, transparent and insoluble urea-cross-linked polyether-siloxane hybrids has been recently developed. In this one-pot synthesis, the strong interaction between the antitumor cisplatin (CisPt) molecules and the ureasil-poly(propylene oxide) (PPO) hybrid matrix gives rise to the incorporation and release of an unknown CisPt-derived species, hindering the quantitative determination of the drug release pattern by the conventional UV-Vis absorption technique. In this article, we report the use of an original synchrotron radiation calibration method based on the combination of XAS and UV-Vis for the quantitative determination of the amount of Pt-based molecules released in water. Thanks to the combination of UV-Vis, XAS and Raman techniques, we demonstrated that both the CisPt molecules and the CisPt-derived species are loaded into a ureasil-PPO/ureasil-poly(ethylene oxide) (PEO) hybrid blend matrix. The experimentally determined molar extinction coefficient of the CisPt-derived species loaded into the ureasil-PPO hybrid matrix enabled the simultaneous time-resolved monitoring of each Pt species released from this hybrid blend matrix.
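The quantitative release determination ultimately rests on the Beer-Lambert law: once the molar extinction coefficient of the released species is calibrated, a measured absorbance converts directly to concentration. A minimal sketch; the coefficient value in the test is illustrative, not the paper's calibrated value:

```python
def released_concentration(absorbance, epsilon, path_cm=1.0):
    """Beer-Lambert law: A = epsilon * c * l, so the molar concentration
    of the released species is c = A / (epsilon * l), with epsilon in
    L mol^-1 cm^-1 and the optical path length l in cm."""
    return absorbance / (epsilon * path_cm)
```

This is exactly why the unknown CisPt-derived species was a problem: without its epsilon, the measured absorbance could not be turned into a released amount, and determining that coefficient via the combined XAS/UV-Vis calibration is what the article contributes.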