958 results for EFFICIENT SIMULATION
Abstract:
This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of improving performance over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and the dissolved ions makes up the major part of the computational effort, so efficient modeling of both components is essential for the performance of a method. In the novel method we treat the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. We thus combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties; here we also include semidilute solutions, which are computationally demanding. Second, we study electro-osmotic flow on superhydrophobic surfaces, where we perform a detailed comparison to theoretical predictions.
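For context, the electrokinetic equations that such hybrid particle/field methods effectively solve are, in their standard continuum form, the coupled Nernst-Planck, Poisson, and Stokes equations (a generic sketch in textbook notation; the thesis's specific coarse-grained discretization is not reproduced here):

```latex
% Standard electrokinetic (Poisson-Nernst-Planck-Stokes) system;
% notation is generic, not taken from the thesis.
\begin{align}
  \partial_t \rho_k &= -\nabla\cdot\Big[-D_k\Big(\nabla\rho_k
      + \frac{z_k e\,\rho_k}{k_B T}\nabla\Phi\Big) + \rho_k\,\mathbf{u}\Big]
      && \text{(Nernst--Planck, ion species $k$)}\\
  \nabla^2\Phi &= -\frac{1}{\varepsilon}\sum_k z_k e\,\rho_k
      && \text{(Poisson)}\\
  \eta\,\nabla^2\mathbf{u} &= \nabla p + \sum_k z_k e\,\rho_k\,\nabla\Phi,
  \qquad \nabla\cdot\mathbf{u} = 0
      && \text{(Stokes, incompressible)}
\end{align}
```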
Abstract:
The rapid development in the field of lighting and illumination allows for low energy consumption and has driven a rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, they are predicted to become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of many frequent, small data packets from vehicles exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm that enables efficient environmental monitoring and improved traffic management. In most cases the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. My thesis discusses the opportunity to take advantage of short-range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable cellular resource saving, considering the impact of the percentage of vehicles equipped with short-range communication devices, of the number of deployed road side units, and of the adopted routing protocol. Where short-range communications are concerned, WAVE/IEEE 802.11p is considered as the standard for VANETs. Its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network. The study is conducted by simulation, considering both SHINE (simulation platform for heterogeneous interworking networks), developed within the Wireless Communication Laboratory (Wilab) of the University of Bologna and CNR, and the network simulator ns-3, trying to realistically represent all the wireless network communication aspects. Specifically, a simulation of the vehicular VLC system was implemented and introduced in ns-3 as a new module for the simulator; this module will help to study VLC applications in VANETs. The final observations should encourage further research in the area and help optimize the performance of VLC system applications in the future.
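For illustration, VLC simulation studies of this kind commonly model the line-of-sight optical channel with the standard Lambertian LED model. The sketch below is a generic textbook formulation, not code from the thesis, and all parameter values are hypothetical:

```python
import math

def lambertian_los_gain(d, phi, psi, half_power_angle, area,
                        psi_fov, t_filter=1.0, n_lens=1.5):
    """DC channel gain of a line-of-sight Lambertian VLC link.

    d: transmitter-receiver distance (m)
    phi: irradiance angle at the LED (rad)
    psi: incidence angle at the photodiode (rad)
    half_power_angle: LED semi-angle at half power (rad)
    area: photodiode active area (m^2)
    psi_fov: receiver field-of-view half-angle (rad)
    t_filter: optical filter transmission
    n_lens: refractive index of the concentrator lens
    """
    if psi > psi_fov:
        return 0.0  # outside the receiver's field of view
    # Lambertian order from the half-power semi-angle
    m = -math.log(2) / math.log(math.cos(half_power_angle))
    # Gain of an idealized non-imaging concentrator
    g = (n_lens ** 2) / (math.sin(psi_fov) ** 2)
    return ((m + 1) * area / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * t_filter * g * math.cos(psi))

# Example: 10 m V2V link, 30 deg angles, 60 deg semi-angle, 1 cm^2 photodiode
h = lambertian_los_gain(10.0, math.radians(30), math.radians(30),
                        math.radians(60), 1e-4, math.radians(45))
print(f"channel gain: {h:.3e}")
```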
Abstract:
In this thesis I present a new coarse-grained model suitable for investigating the phase behavior of rod-coil block copolymers on mesoscopic length scales. In this model the rods are represented by hard spherocylinders, whereas the coil block consists of interconnected beads. The interactions between the constituents are based on local densities, which facilitates an efficient Monte Carlo sampling of the phase space. I verify the applicability of the model and the simulation approach by means of several examples. I treat pure rod systems and mixtures of rod and coil polymers. Then I append coils to the rods and investigate the role of the different model parameters. Furthermore, I compare different implementations of the model. I show that the rod-coil block copolymers in our model exhibit typical micro-phase-separated configurations as well as extraordinary phases, such as the wavy lamellar state, percolating structures, and clusters. Additionally, I demonstrate the metastability of the observed zigzag phase in our model. A central point of this thesis is the examination of the phase behavior of the rod-coil block copolymers in dependence on different chain lengths and interaction strengths between rods and coil. The observations of these studies are summarized in a phase diagram for rod-coil block copolymers. Furthermore, I validate a stabilization of the smectic phase with increasing coil fraction. In the second part of this work I present a side project in which I derive a model permitting the simulation of tetrapods with and without grafted semiconducting block copolymers. The effect of these polymers is added in an implicit manner through effective interactions between the tetrapods. While the depletion interaction is described approximately within the Asakura-Oosawa model, the free-energy penalty for the brush compression is calculated within the Alexander-de Gennes model. Recent experiments with CdSe tetrapods show that grafted tetrapods are clearly much better dispersed in the polymer matrix than bare tetrapods. My simulations confirm that bare tetrapods tend to aggregate in the matrix of excess polymers, while clustering is significantly reduced after grafting polymer chains to the tetrapods. Finally, I propose a possible extension enabling the simulation of a system with fluctuating volume and demonstrate its basic functionality. This study originated in a cooperation with an experimental group, with the goal of analyzing the morphology of these systems in order to find the ideal morphology for hybrid solar cells.
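For reference, the two effective interactions named here take the following standard forms in the simplest geometries (a sketch in generic notation for spheres and planar brushes; the thesis applies them to tetrapods):

```latex
% Asakura-Oosawa depletion potential between two hard spheres of radius R
% in a bath of ideal depletants of radius r_p and number density n_p:
\begin{align}
  U_{\mathrm{AO}}(r) &= -\Pi_p\, V_{\mathrm{ov}}(r), \qquad
      \Pi_p = n_p k_B T,\\
  V_{\mathrm{ov}}(r) &= \frac{4\pi}{3}\,(R + r_p)^3
      \left[1 - \frac{3r}{4(R + r_p)} + \frac{r^3}{16\,(R + r_p)^3}\right],
      \quad 2R \le r \le 2(R + r_p).
\end{align}
% Alexander-de Gennes pressure between two brush-coated plates at
% separation D < 2L_0 (L_0: equilibrium brush height, s: grafting spacing):
\begin{equation}
  P(D) \simeq \frac{k_B T}{s^3}
  \left[\left(\frac{2L_0}{D}\right)^{9/4}
      - \left(\frac{D}{2L_0}\right)^{3/4}\right].
\end{equation}
```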
Abstract:
Globalization has increased the pressure on organizations and companies to operate in the most efficient and economic way. This tendency leads companies to concentrate more and more on their core businesses and to outsource less profitable departments and services to reduce costs. In contrast to earlier times, companies are highly specialized and have a low real net output ratio. To be able to provide consumers with the right products, these companies have to collaborate with other suppliers and form large supply chains. A drawback of large supply chains is high stocks and stockholding costs. This fact has led to the rapid spread of Just-in-Time logistic concepts aimed at minimizing stock while simultaneously maintaining high availability of products. These concurrent goals demand high availability of the production systems, so that an incoming order can be processed immediately. Besides design aspects and the quality of the production system, maintenance has a strong impact on production system availability. In the last decades there have been many attempts to create maintenance models for availability optimization. Most of them concentrated on the availability aspect alone, without incorporating further aspects such as logistics and profitability of the overall system. However, the production system operator's main intention is to optimize the profitability of the production system, not its availability. Thus classic models, limited to representing and optimizing maintenance strategies in the light of availability, fail. A novel approach that incorporates all financially relevant processes of and around a production system is needed. The proposed model is subdivided into three parts: a maintenance module, a production module, and a connection module. This subdivision provides easy maintainability and simple extensibility. Within these modules, all aspects of the production process are modeled. The main part of the work lies in the extended maintenance and failure module, which offers a representation of different maintenance strategies and also incorporates the effects of over-maintaining and failed maintenance (maintenance-induced failures). Order release and seizing of the production system are modeled in the production part. Due to limitations in computational power, it was not possible to run the simulation and the optimization with the fully developed production model; thus, the production model was reduced to a black box with a lower degree of detail.
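As a toy illustration of why profitability rather than availability is the right objective, the sketch below compares preventive-maintenance intervals by expected profit: too-frequent maintenance costs money and risks maintenance-induced failures, while too-rare maintenance causes breakdowns and downtime. The model and all numbers are hypothetical, not the thesis's simulation:

```python
import random

def simulate_profit(pm_interval, horizon=10_000.0, scale=400.0, shape=2.0,
                    pm_cost=50.0, pm_induced_failure_p=0.02,
                    repair_cost=500.0, repair_time=20.0,
                    revenue_per_hour=10.0, seed=1):
    """Monte Carlo profit of one machine under periodic preventive maintenance.

    Time-to-failure is Weibull with shape > 1 (wear-out), so preventive
    maintenance (PM) that resets component age genuinely helps; a PM can
    also itself induce a failure (over-maintaining). All parameters are
    illustrative, not calibrated to real data.
    """
    rng = random.Random(seed)
    t, profit = 0.0, 0.0
    while t < horizon:
        ttf = rng.weibullvariate(scale, shape)  # age-dependent failure
        if ttf < pm_interval:
            # breakdown before the next PM: lost production plus repair cost
            profit += ttf * revenue_per_hour - repair_cost
            t += ttf + repair_time
        else:
            # PM reached first; it resets age but may induce a failure
            profit += pm_interval * revenue_per_hour - pm_cost
            t += pm_interval
            if rng.random() < pm_induced_failure_p:
                profit -= repair_cost
                t += repair_time
    return profit

for interval in (50, 100, 200, 400, 800):
    runs = [simulate_profit(interval, seed=s) for s in range(50)]
    print(f"PM every {interval:4d} h -> mean profit {sum(runs)/len(runs):10.0f}")
```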
Abstract:
We propose a computationally efficient and biomechanically relevant soft-tissue simulation method for cranio-maxillofacial (CMF) surgery. A template-based facial muscle reconstruction was introduced to minimize the effort of preparing a patient-specific model. A transversely isotropic mass-tensor model (MTM) was adopted to capture the directional properties of facial muscles in reasonable computation time. Additionally, sliding contact around the teeth and mucosa was considered for a more realistic simulation. A retrospective validation study with the postoperative scan of a real patient showed considerable improvements in simulation accuracy when template-based facial muscle anatomy and sliding contact were incorporated.
Abstract:
A large number of proposals for estimating the bivariate survival function under random censoring have been made. In this paper we discuss nonparametric maximum likelihood estimation and the bivariate Kaplan-Meier estimator of Dabrowska. We show how these estimators are computed, present their intuitive background, and compare their practical performance under different levels of dependence and censoring, based on extensive simulation results, which leads to practical advice.
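For reference, Dabrowska's bivariate Kaplan-Meier estimator admits the well-known product representation below, with the marginals given by ordinary Kaplan-Meier estimators and $\hat\Lambda_{10}$, $\hat\Lambda_{01}$, $\hat\Lambda_{11}$ the estimated single- and double-failure cumulative hazards (a sketch in the standard notation of Dabrowska (1988)):

```latex
\begin{equation}
  \hat S(t_1, t_2) \;=\; \hat S(t_1, 0)\, \hat S(0, t_2)
  \prod_{0 < s_1 \le t_1,\; 0 < s_2 \le t_2}
  \bigl\{ 1 - \hat L(\Delta s_1, \Delta s_2) \bigr\},
\end{equation}
\begin{equation}
  \hat L(\Delta s_1, \Delta s_2) \;=\;
  \frac{\hat\Lambda_{10}(\Delta s_1, s_2^-)\,\hat\Lambda_{01}(s_1^-, \Delta s_2)
        - \hat\Lambda_{11}(\Delta s_1, \Delta s_2)}
       {\bigl(1 - \hat\Lambda_{10}(\Delta s_1, s_2^-)\bigr)
        \bigl(1 - \hat\Lambda_{01}(s_1^-, \Delta s_2)\bigr)}.
\end{equation}
```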
Abstract:
In biostatistical applications, interest often focuses on the estimation of the distribution of the time T between two consecutive events. If the initial event time is observed and the subsequent event time is only known to be larger or smaller than an observed monitoring time C, then the data are described by the well-known singly-censored current status model, also known as interval-censored data, case I. We extend this current status model by allowing the presence of a time-dependent covariate process, which is partly observed, and by allowing C to depend on T through the observed part of this time-dependent process. Because of the high dimension of the covariate process, no globally efficient estimators exist with good practical performance at moderate sample sizes. We follow the approach of Robins and Rotnitzky (1992) by modeling the censoring variable, given the time variable and the covariate process (i.e., the missingness process), under the restriction that it satisfies coarsening at random. We propose a generalization of the simple current status estimator of the distribution of T and of smooth functionals of the distribution of T, based on an estimate of the missingness process. In this estimator the covariates enter only through the estimate of the missingness process. Due to the coarsening-at-random assumption, the estimator has the interesting property that if we estimate the missingness process more nonparametrically, then we improve its efficiency. We show that by local estimation of an optimal model or optimal function of the covariates for the missingness process, the generalized current status estimator for smooth functionals becomes locally efficient, meaning that it is efficient if the right model or covariate is consistently estimated and that it is consistent and asymptotically normal in general. Estimation of the optimal model requires estimation of the conditional distribution of T given the covariates. Any (prior) knowledge of this conditional distribution can be used at this stage without any risk of losing root-n consistency. We also propose locally efficient one-step estimators. Finally, we show some simulation results.
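To fix ideas, the simple current status model and the identity its estimator exploits can be written as follows (generic notation; a sketch of the setup, with the extension indicated in the comments):

```latex
% Observed data in the simple (singly-censored) current status model:
\begin{equation}
  (C,\ \Delta), \qquad \Delta = I(T \le C).
\end{equation}
% If C is independent of T, the distribution F of T satisfies
\begin{equation}
  E\{\Delta \mid C = c\} \;=\; P(T \le c) \;=\; F(c),
\end{equation}
% so F is estimable by monotone regression of \Delta on C, and a smooth
% functional \mu = \int g\, dF by substitution. The generalization instead
% models the conditional distribution of the monitoring time C given T and
% the covariate process \bar X (the missingness process), restricted by
% coarsening at random so that it depends on the unobserved T only through
% the observed part \bar X(C).
```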
Abstract:
Estimation for bivariate right-censored data is a problem that has been studied extensively over the past 15 years. In this paper we propose a new class of estimators for the bivariate survival function based on locally efficient estimation. We introduce the locally efficient estimator for bivariate right-censored data, present an asymptotic theorem, report the results of simulation studies, and perform a brief data analysis illustrating the use of the locally efficient estimator.
Abstract:
Currently, photon Monte Carlo treatment planning (MCTP) for a patient stored in the patient database of a treatment planning system (TPS) can usually only be performed using a cumbersome multi-step procedure in which many user interactions are needed, so automation is required for use in clinical routine. In addition, because of the long computing times in MCTP, optimization of the MC calculations is essential. For these purposes a new graphical user interface (GUI)-based photon MC environment has been developed, resulting in a very flexible framework. By this means, appropriate MC transport methods are assigned to different geometric regions while still benefiting from the features included in the TPS. In order to provide a flexible MC environment, the MC particle transport has been divided into different parts: the source, the beam modifiers, and the patient. The source part includes the phase-space source, source models, and full MC transport through the treatment head. The beam modifier part consists of one module for each beam modifier. To simulate the radiation transport through each individual beam modifier, one of three full MC transport codes can be selected independently; additionally, for each beam modifier a simple or an exact geometry can be chosen. Thereby, different complexity levels of radiation transport are applied during the simulation. For the patient dose calculation, two different MC codes are available. A special plug-in in Eclipse, providing all necessary information by means of DICOM streams, is used to start the developed MC GUI. The implementation of this framework separates the MC transport from the geometry, and the modules pass the particles in memory; hence, no files are used as the interface. The implementation is realized for 6 and 15 MV beams of a Varian Clinac 2300 C/D. Several applications demonstrate the usefulness of the framework. Apart from applications dealing with the beam modifiers, two patient cases are shown, in which comparisons are performed between MC-calculated dose distributions and those calculated by a pencil-beam or the AAA algorithm. Interfacing this flexible and efficient MC environment with Eclipse allows widespread use for all kinds of investigations, from timing and benchmarking studies to clinical patient studies. Additionally, it is possible to add modules, keeping the system highly flexible and efficient.
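As an illustration of the described modular architecture, the sketch below (hypothetical class names and toy physics, not the thesis's code) chains source, beam-modifier, and patient modules into a single pipeline in which particles are passed in memory rather than through intermediate files:

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Particle:
    energy: float      # MeV
    position: tuple    # (x, y, z) in cm
    direction: tuple   # unit vector
    weight: float = 1.0

class Source:
    """Stands in for a phase-space source, a source model, or full
    MC transport through the treatment head."""
    def emit(self, n: int) -> Iterator[Particle]:
        for _ in range(n):
            yield Particle(6.0, (0.0, 0.0, 100.0), (0.0, 0.0, -1.0))

class BeamModifier:
    """One module per modifier; in the real framework the transport code
    and the geometry detail level are selectable per module."""
    def __init__(self, transmission: float):
        self.transmission = transmission
    def transport(self, beam: Iterable[Particle]) -> Iterator[Particle]:
        for p in beam:
            p.weight *= self.transmission  # crude attenuation stand-in
            yield p

class Patient:
    """Stands in for the patient dose-calculation MC code."""
    def score(self, beam: Iterable[Particle]) -> float:
        return sum(p.weight * p.energy for p in beam)  # toy 'dose'

# Modules pass particles in memory; no files are used as the interface.
beam = Source().emit(10_000)
for modifier in (BeamModifier(0.98), BeamModifier(0.95)):
    beam = modifier.transport(beam)
print("scored energy:", Patient().score(beam))
```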
Abstract:
PURPOSE: To compare objective fellow and expert efficiency indices for an interventional radiology renal artery stenosis skill set with the use of a high-fidelity simulator. MATERIALS AND METHODS: The Mentice VIST simulator was used for three renal artery stenosis simulations of varying difficulty, which were used to grade performance. Fellows' indices at three intervals throughout 1 year were compared to expert baseline performance. Seventy-four simulated procedures were performed, 63 of which were captured as audiovisual recordings. Three levels of fellow experience were analyzed: 1, 6, and 12 months of dedicated interventional radiology fellowship. The recordings were compiled on a computer workstation and analyzed. Distinct measurable events in the procedures were identified with task analysis, and data regarding efficiency were extracted. Total scores were calculated as the product of procedure time, fluoroscopy time, number of tools used, and contrast agent volume. The lowest scores, which reflected efficient use of tools, radiation, and time, were considered to indicate proficiency. Subjective analysis of participants' procedural errors was not included in this analysis. RESULTS: Fellows' mean scores diminished from 1 month to 12 months (42,960 at 1 month, 18,726 at 6 months, and 9,636 at 12 months). The experts' mean score was 4,660. In addition, the range of scores narrowed with increasing experience (from 5,940-120,156 at 1 month to 2,436-85,272 at 6 months and 2,160-32,400 at 12 months). Expert scores ranged from 1,450 to 10,800. CONCLUSIONS: Objective efficiency indices for simulated procedures yield scores that correspond directly to the level of clinical experience.
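The total score described is straightforward to compute as the stated product; a minimal sketch (units and example values are hypothetical):

```python
def efficiency_score(procedure_min, fluoro_min, n_tools, contrast_ml):
    """Product-based efficiency index; lower scores indicate more
    efficient use of tools, radiation, and time (units hypothetical)."""
    return procedure_min * fluoro_min * n_tools * contrast_ml

# e.g. a 40 min case with 12 min fluoroscopy, 5 tools, 18 ml contrast
print(efficiency_score(40, 12, 5, 18))  # -> 43200, within the 1-month range
```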
Abstract:
This paper studies the energy efficiency and service characteristics of a recently developed energy-efficient MAC protocol for wireless sensor networks, both in simulation and on a real sensor hardware testbed. We seize this opportunity to illustrate how simulation models can be verified by cross-comparing simulation results with real-world experiment results. The paper demonstrates that, by careful calibration of simulation model parameters, the inevitable gap between simulation models and real-world conditions can be reduced. It concludes with guidelines for a methodology for the calibration and validation of sensor network simulation models.
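A minimal sketch of the kind of calibration loop this implies (toy energy model, hypothetical parameter and measurement; not the paper's protocol or data): tune a simulation-model parameter until the simulated energy consumption matches the testbed measurement as closely as possible.

```python
def simulate_energy(idle_current_ma, duty_cycle=0.01, hours=24.0,
                    active_current_ma=20.0, voltage=3.0):
    """Toy energy model (mWh) of a duty-cycled sensor node."""
    avg_ma = duty_cycle * active_current_ma + (1 - duty_cycle) * idle_current_ma
    return avg_ma * voltage * hours

measured_mwh = 62.0  # hypothetical testbed measurement over 24 h

# Calibrate the idle-current parameter by brute-force search
candidates = [x / 1000 for x in range(1, 2001)]  # 0.001 .. 2.0 mA
best = min(candidates, key=lambda c: abs(simulate_energy(c) - measured_mwh))
print(f"calibrated idle current: {best:.3f} mA, "
      f"model error: {abs(simulate_energy(best) - measured_mwh):.2f} mWh")
```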
Abstract:
I modeled the cumulative impact of hydroelectric projects, with and without commercial fishing weirs and water-control dams, on the production, survival to the sea, and potential fecundity of migrating female silver-phase American eels, Anguilla rostrata, in the Kennebec River basin, Maine. This river basin has 22 hydroelectric projects, 73 water-control dams, and 15 commercial fishing weir sites. The modeled area included an 8,324 km² segment of the drainage area between Merrymeeting Bay and the upper limit of American eel distribution in the basin. One set of inputs (assumed or real values) concerned population structure (i.e., population density and sex ratio changes throughout the basin, female length-class distribution, and drainage area between dams). Another set concerned factors influencing survival and potential fecundity of migrating American eels (i.e., pathway sequences through projects, survival rate per project by length-class, and length-fecundity relationship). Under baseline conditions about 402,400 simulated silver female American eels would be produced annually; reductions in their numbers due to dams and weirs would reduce the realized fecundity (i.e., the number of eggs produced by all females that survived the migration). Without weirs or water-control dams, about 63% of the simulated silver-phase American eels survived their freshwater spawning migration run to the sea when the survival rate at each hydroelectric dam was 90%, 40% survived at 80% survival per dam, and 18% survived at 60% survival per dam. Removing the lowermost hydroelectric dam on the Kennebec River increased survival by 6.0-7.6% for the basin. Efficient commercial weirs reduced survival to the sea to 69-76% of what it would have been without weirs, regardless of survival rates at hydroelectric dams. Water-control dams had little impact on production in this basin because most were located in the upper reaches of tributaries. Sensitivity analysis led to the conclusion that small changes in population density and female length distribution had greater effects on survival and realized fecundity than similar changes in turbine survival rate; the latter became more important as turbine survival rate decreased. Therefore, it might be more fruitful to determine population distribution in basins of interest than to determine mortality rate at each hydroelectric project.
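The core bookkeeping in such a model is multiplicative: survival to the sea is the product of per-project survival rates along an eel's downstream pathway, and realized fecundity sums a length-dependent fecundity over the survivors. A minimal sketch with hypothetical cohorts (the model's real inputs and length-fecundity relationship are not reproduced here):

```python
def survival_to_sea(pathway_survival_rates):
    """Cumulative survival through the sequence of dams/weirs an eel passes."""
    s = 1.0
    for rate in pathway_survival_rates:
        s *= rate
    return s

def realized_fecundity(cohorts, dam_survival, weir_survival=1.0):
    """Sum fecundity over survivors; each cohort is
    (n_eels, n_dams_on_pathway, fecundity_per_female)."""
    total = 0.0
    for n, n_dams, fecundity in cohorts:
        total += n * weir_survival * dam_survival ** n_dams * fecundity
    return total

# Hypothetical cohorts: (count, dams passed, eggs per female)
cohorts = [(200_000, 1, 2.0e6), (150_000, 3, 2.5e6), (50_000, 6, 3.0e6)]
for s in (0.90, 0.80, 0.60):
    print(f"per-dam survival {s:.0%}: realized fecundity "
          f"{realized_fecundity(cohorts, s):.3e} eggs")
```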
Abstract:
We consider a large quantum system with spins 1/2 whose dynamics is driven entirely by measurements of the total spin of spin pairs. This gives rise to a dissipative coupling to the environment. When one averages over the measurement results, the corresponding real-time path integral does not suffer from a sign problem. Using an efficient cluster algorithm, we study the real-time evolution from an initial antiferromagnetic state of the two-dimensional Heisenberg model, which is driven to a disordered phase, not by a Hamiltonian, but by sporadic measurements or by continuous Lindblad evolution.
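For reference, the continuous Lindblad evolution referred to has the standard master-equation form below (generic notation; in the setting studied the Hamiltonian term is absent and the jump operators derive from the measured pair-spin observables):

```latex
\begin{equation}
  \frac{d\rho}{dt} \;=\; -\,\frac{i}{\hbar}\,[H, \rho]
  \;+\; \sum_{k} \gamma_k \Bigl( L_k\, \rho\, L_k^{\dagger}
  \;-\; \tfrac{1}{2}\bigl\{ L_k^{\dagger} L_k,\ \rho \bigr\} \Bigr),
  \qquad H = 0 \ \text{here}.
\end{equation}
```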
Abstract:
During the last decade wireless mobile communications have progressively become part of people's daily lives, leading users to expect to be "always-best-connected" to the Internet, regardless of their location or time of day. This is indeed motivated by the fact that wireless access networks are increasingly ubiquitous, through different types of service providers, together with an outburst of thoroughly portable devices, namely laptops, tablets, and mobile phones, among others. The "anytime and anywhere" connectivity criterion raises new challenges regarding the devices' battery lifetime management, as energy becomes the most noteworthy restriction on end-users' satisfaction. This wireless access context has also stimulated the development of novel multimedia applications with high network demands, although lacking in energy-aware design. Therefore, the relationship between energy consumption and the quality of the multimedia applications perceived by end-users should be carefully investigated. This dissertation addresses energy-efficient multimedia communications in the IEEE 802.11 standard, which is the most widely used wireless access technology. It advances the literature by proposing a unique empirical assessment methodology and new power-saving algorithms, always bearing in mind the end-users' feedback and evaluating quality perception. The new EViTEQ framework proposed in this thesis, for measuring video transmission quality and energy consumption simultaneously, in an integrated way, reveals the importance of having an empirical and high-accuracy methodology to assess the trade-off between quality and energy consumption raised by the new end-users' requirements. Extensive evaluations conducted with the EViTEQ framework revealed its flexibility and capability to accurately report both video transmission quality and energy consumption, as well as to be employed in rigorous investigations of network interface energy consumption patterns, regardless of the wireless access technology. Following the need to enhance the trade-off between energy consumption and application quality, this thesis proposes the Optimized Power save Algorithm for continuous Media Applications (OPAMA). By using the end-users' feedback to establish a proper trade-off between energy consumption and application performance, OPAMA aims at enhancing the energy efficiency of end-users' devices accessing the network through IEEE 802.11. OPAMA performance has been thoroughly analyzed within different scenarios and application types, including a simulation study and a real deployment in an Android testbed. When compared with the most popular standard power-saving mechanisms defined in the IEEE 802.11 standard, the obtained results revealed OPAMA's capability to enhance energy efficiency while keeping end-users' Quality of Experience within the defined bounds. Furthermore, OPAMA was optimized to enable superior energy savings in multiple-station environments, resulting in a new proposal called Enhanced Power Saving Mechanism for Multiple station Environments (OPAMA-EPS4ME). The results of this thesis highlight the relevance of having a highly accurate methodology to assess energy consumption and application quality when aiming to optimize the trade-off between energy and quality. Additionally, the obtained results, based on both simulation and testbed evaluations, show clear benefits from employing user-driven power-saving techniques, such as OPAMA, instead of the IEEE 802.11 standard power-saving approaches.
Abstract:
In regression analysis, covariate measurement error occurs in many applications, and the error-prone covariates are often referred to as latent variables. In this study, we extend the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression model. We present an approach that applies the Monte Carlo method in a Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study shows that the method produces an estimator that is efficient in the multiple regression model, especially when the measurement-error variance of the surrogate variable is large.
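As a toy illustration of the underlying problem (this is not the proposed Bayesian Monte Carlo estimator): with a surrogate W = X + U observed in place of the latent covariate X, the naive least-squares slope is attenuated, and knowledge of the measurement-error variance permits a regression-calibration-style correction. Shown for simple regression for clarity; all numbers are hypothetical.

```python
import random

random.seed(0)
n, beta0, beta1 = 5000, 1.0, 2.0
sigma_x, sigma_u, sigma_e = 1.0, 0.8, 0.5  # latent, measurement, residual SDs

x = [random.gauss(0, sigma_x) for _ in range(n)]        # latent covariate
w = [xi + random.gauss(0, sigma_u) for xi in x]         # observed surrogate
y = [beta0 + beta1 * xi + random.gauss(0, sigma_e) for xi in x]

def ols_slope(u, v):
    """Least-squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    sxy = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    sxx = sum((a - mu) ** 2 for a in u)
    return sxy / sxx

naive = ols_slope(w, y)
# Attenuation: E[naive] = beta1 * sigma_x^2 / (sigma_x^2 + sigma_u^2)
reliability = sigma_x ** 2 / (sigma_x ** 2 + sigma_u ** 2)
corrected = naive / reliability
print(f"naive slope:     {naive:.3f}")      # biased toward 0
print(f"corrected slope: {corrected:.3f}")  # close to beta1 = 2.0
```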