Abstract:
This thesis presents a universal model of documents and deltas. The model formalizes what it means to find differences between documents and provides a single shared formalization that any algorithm can use to describe the differences found between any kind of comparable documents. The main scientific contribution of this thesis is a universal delta model that can be used to represent the changes found by an algorithm. The main parts of this model are the formal definitions of changes (the pieces of information that record that something has changed), operations (the definitions of the kinds of change that happened) and deltas (coherent summaries of what has changed between two documents). The fundamental mechanism that makes the universal delta model a very expressive tool is the use of encapsulation relations between changes. In the universal delta model, changes are not always simple records of what has changed; they can also be combined into more complex changes that reflect the detection of more meaningful modifications. In addition to the main entities (i.e., changes, operations and deltas), the model also describes and defines documents and the concept of equivalence between documents. As a corollary to the model, there is also an extensible catalog of possible operations that algorithms can detect, used to create a common library of operations, and a UML serialization of the model, useful as a reference when implementing APIs that deal with deltas. The universal delta model presented in this thesis acts as the formal groundwork upon which algorithms can be based and libraries can be implemented. It removes the need to create a new delta model and terminology whenever a new algorithm is devised. It also alleviates the problems that toolmakers face when adapting their software to new diff algorithms.
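To make the relationship between the three main entities concrete, here is a minimal Python sketch of changes, operations and deltas with encapsulation; all class names, fields and the location syntax are illustrative choices, not the thesis's actual serialization.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:
    """The kind of change that happened (e.g., insertion, deletion)."""
    name: str

@dataclass
class Change:
    """A record that something changed, tied to an operation.

    Encapsulation: a change may contain finer-grained changes, so
    low-level edits can be grouped into one more meaningful change."""
    operation: Operation
    target: str                      # location in the document (illustrative)
    encapsulates: List["Change"] = field(default_factory=list)

@dataclass
class Delta:
    """A coherent summary of what changed between two documents."""
    source: str
    result: str
    changes: List[Change] = field(default_factory=list)

# Two character-level insertions encapsulated into one word-level change
ins = Operation("insert-text")
word = Change(Operation("insert-word"), "para[1]",
              encapsulates=[Change(ins, "para[1]/char[4]"),
                            Change(ins, "para[1]/char[5]")])
delta = Delta("doc-v1", "doc-v2", changes=[word])
```

In this reading, a diff algorithm emits low-level changes and may then wrap them into higher-level ones, while the delta stays a single coherent summary that any tool can consume.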
Abstract:
In this thesis, three nitroxide-based ionic systems were used to investigate the structure and dynamics of their respective solutions in mixed solvents by means of electron paramagnetic resonance (EPR) and electron nuclear double resonance (ENDOR) spectroscopy at X- and W-band (9.5 and 94.5 GHz, respectively).

First, the solvation of the inorganic radical Fremy's salt (K2ON(SO3)2) in isotope-substituted binary solvent mixtures (methanol/water) was investigated by means of high-field (W-band) pulse ENDOR spectroscopy and molecular dynamics (MD) simulations. From the analysis of orientation-selective 1H and 2H ENDOR spectra, the principal components of the hyperfine coupling (hfc) tensor for chemically different protons (alcoholic methyl vs. exchangeable protons) were obtained. The methyl protons of the organic solvent approach at a mean distance of 3.5 Å, perpendicular to the approximate plane spanned by the ON(S)2 moiety of the probe molecule. Exchangeable protons were found to be distributed isotropically, approaching Fremy's salt most closely from the hydrogen-bonded network around the sulfonate groups. The distribution of exchangeable and methyl protons found in the MD simulations is in full agreement with the ENDOR results. The solvation was found to be similar for the studied solvent ratios between 1:2.3 and 2.3:1 and to be dominated by an interplay of H-bond (electrostatic) interactions and steric effects, with the NO group only marginally involved in H-bonds.

Further, the conformation of spin-labeled poly(diallyldimethylammonium chloride) (PDADMAC) in aqueous alcohol (methanol, ethanol, n-propanol, ethylene glycol, glycerol) mixtures, in dependence on the divalent salt sodium sulfate, was investigated with double electron-electron resonance (DEER) spectroscopy. The DEER data were analyzed using the worm-like chain model, which suggests that in organic-water solvent mixtures the polymer backbones are preferentially solvated by the organic solvent. We found a less severe impact of salt on conformational changes than is usually predicted in polyelectrolyte theory, which stresses the importance of a delicate balance of hydrophobic and electrostatic interactions, in particular in the presence of organic solvents.

Finally, the structure and dynamics of miniemulsions and polymer dispersions prepared with anionic surfactants, partially replaced by a spin-labeled fatty acid, in the presence and absence of a lanthanide beta-diketonate complex were characterized by CW EPR spectroscopy. Such miniemulsions form multilayers with the surfactant head group bound to the lanthanide ion. Beta-diketonates were formerly used as NMR shift reagents and nowadays find application as luminescent materials in OLEDs and LCDs and as contrast agents in MRI. Embedding the complex into a polymer matrix results in an easily processable material. It was found that the structure formation takes place in the miniemulsion and is preserved during polymerization. For surfactants with a carboxyl head group, a higher order of the alkyl chains and less lateral diffusion is found than for sulfate head groups, suggesting a more uniform and stronger coordination to the metal ion. The stability of these bilayers depends on the temperature and the surfactant used, which should be considered when choosing the polymerization temperature if a maximum yield of structured regions is desired.
Abstract:
The ability of block copolymers to spontaneously self-assemble into a variety of ordered nanostructures not only makes them a scientifically interesting system for the investigation of order-disorder phase transitions, but also offers a wide range of nanotechnological applications. The architecture of a diblock is the simplest among block copolymer systems; hence it is often used as a model system in both experiment and theory. We introduce a new soft-tetramer model for efficient computer simulations of diblock copolymer melts. The instantaneous non-spherical shape of polymer chains in the molten state is incorporated by modeling each of the two blocks as two soft spheres. The interactions between the spheres are modeled such that the diblock melt tends to microphase separate with decreasing temperature. Using Monte Carlo simulations, we determine the equilibrium structures at variable values of the two relevant control parameters, the diblock composition and the incompatibility of unlike components. The simplicity of the model allows us to scan the control parameter space with a completeness that has not been reached in previous molecular simulations. The resulting phase diagram shows clear similarities with the phase diagram found in experiments. Moreover, we show that structural details of block copolymer chains can be reproduced by our simple model. We develop a novel method for the identification of the observed diblock copolymer mesophases that formalizes the usual approach of direct visual observation, using the characteristic geometry of the structures. A cluster analysis algorithm is used to determine clusters of each component of the diblock, and the number and shape of the clusters can be used to determine the mesophase. We also employ methods from integral geometry for the identification of mesophases and compare their usefulness to the cluster analysis approach. To probe the properties of our model in confinement, we perform molecular dynamics simulations of atomistic polyethylene melts confined between graphite surfaces. The results from these simulations are used as input for an iterative coarse-graining procedure that yields a surface interaction potential for the soft-tetramer model. Using the interaction potential derived in this way, we perform an initial study of the behavior of the soft-tetramer model in confinement. Comparing with experimental studies, we find that our model can reflect basic features of confined diblock copolymer melts.
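As an illustration of the cluster-analysis idea, the following Python sketch counts clusters of one block's spheres under periodic boundary conditions; the distance cutoff and the mapping from cluster statistics to mesophases are illustrative assumptions, not the thesis's tuned procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def count_clusters(positions, box, cutoff):
    """Count clusters of one block's spheres: two spheres belong to the
    same cluster if their separation is below `cutoff`; periodic
    boundaries are handled via the KD-tree's boxsize argument."""
    tree = cKDTree(positions, boxsize=box)
    parent = list(range(len(positions)))
    def find(i):                           # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in tree.query_pairs(cutoff):
        parent[find(i)] = find(j)
    return len({find(i) for i in range(len(positions))})

# Illustrative reading of the cluster statistics: many compact clusters
# suggest the spherical phase, a few elongated ones the cylindrical
# phase, and a single spanning cluster a lamellar/bicontinuous phase.
rng = np.random.default_rng(0)
pos = rng.random((500, 3)) * 10.0          # toy coordinates in a 10^3 box
n = count_clusters(pos, box=10.0, cutoff=0.8)
```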
Abstract:
Until a few years ago, 3D modelling was a topic confined to professional environments. Nowadays, technological innovations, most notably the 3D printer, have attracted novice users to this application field. This sudden breakthrough was not supported by adequate software solutions: the 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which is largely built on two-dimensional input/output devices and strongly affected by obvious geometrical constraints. We have identified three main phases that characterize the creation and management of 3D virtual models. We investigated these directions, evaluating and simplifying the classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. In pursuing these goals, we asked how new gesture-based interaction technologies can be successfully employed in a 3D modelling environment, how we could improve depth perception and interaction in 3D environments, and which operations could be developed to simplify the classical virtual model editing paradigm. Our main aim was to propose a set of solutions with which a common user can realize an idea in a 3D virtual model, drawing in the air just as they would on paper. Moreover, we tried to use gestures and mid-air movements to explore and interact with 3D virtual environments, and we studied simple and effective 3D form transformations. The work was carried out adopting the discrete representation of the models, thanks to its intuitiveness, but especially because it is full of open challenges.
Abstract:
In condensed matter systems, the interfacial tension plays a central role in a multitude of phenomena. It is the driving force for nucleation processes, determines the shape and structure of crystals and is important for industrial applications. Despite its importance, the interfacial tension is hard to determine in experiments and also in computer simulations. While sophisticated simulation methods exist to compute liquid-vapor interfacial tensions, current methods for solid-liquid interfaces produce unsatisfactory results.

As a first approach to this topic, the influence of the interfacial tension on nuclei is studied within the three-dimensional Ising model. This model is well suited because, despite its simplicity, one can learn much about the nucleation of crystalline nuclei. Below the so-called roughening temperature, nuclei in the Ising model are no longer spherical but become cubic because of the anisotropy of the interfacial tension. This is similar to crystalline nuclei, which are in general not spherical but more like a convex polyhedron with flat facets on the surface. In this context, the problem of distinguishing between the two bulk phases in the vicinity of the diffuse droplet surface is addressed. A new definition is found which correctly determines the volume of a droplet in a given configuration when compared to the volume predicted by simple macroscopic assumptions.

To compute the interfacial tension of solid-liquid interfaces, a new Monte Carlo method called the "ensemble switch method" is presented, which allows the interfacial tension of liquid-vapor as well as solid-liquid interfaces to be computed with great accuracy. In the past, the dependence of the interfacial tension on the finite size and shape of the simulation box has often been neglected, although there is a nontrivial dependence on the box dimensions. As a consequence, one needs to systematically increase the box size and extrapolate to infinite volume in order to accurately predict the interfacial tension. Therefore, a thorough finite-size scaling analysis is established in this thesis. Logarithmic corrections to the finite-size scaling are motivated and identified; they are of leading order and therefore must not be neglected. The astounding feature of these logarithmic corrections is that they do not depend at all on the model under consideration. Using the ensemble switch method, the validity of a finite-size scaling ansatz containing the aforementioned logarithmic corrections is carefully tested and confirmed. Combining the finite-size scaling theory with the ensemble switch method, the interfacial tension of several model systems, ranging from the Ising model to colloidal systems, is computed with great accuracy.
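A finite-size scaling ansatz with leading logarithmic corrections of the kind described typically takes a form like the following (schematic; the coefficients and the exact correction structure used in the thesis are not reproduced here):

```latex
\gamma(L) \;=\; \gamma_\infty \;+\; \frac{a \ln L + b}{L^{\,d-1}}
```

where gamma(L) is the tension measured in a box of linear dimension L, gamma_infinity its infinite-volume limit, and d the spatial dimension; fitting this form and extrapolating to L to infinity removes the correction terms that would otherwise bias the estimate.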
Abstract:
In this thesis, the emergence of critical events was studied in a simple integrate-and-fire neural model based on Markovian stochastic dynamical processes defined on a network. The electrical neural signal was modeled as a flow of particles. Attention was focused on the transient phase of the system, seeking to identify phenomena similar to neural synchronization, which can be considered a critical event. Particularly simple networks were studied, and the proposed model was found to be capable of producing "cascade" effects in the neural activity, due to Self-Organized Criticality (the self-organization of the system into unstable states); these effects are not observed in random walks on the same networks. A small random stimulus was seen to be capable of generating considerable fluctuations in the activity of the network, especially when the system is in a phase at the edge of equilibrium. The activity peaks detected in this way were interpreted as avalanches of neural signal, a phenomenon attributable to synchronization.
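As a rough illustration of how particle flow plus firing thresholds can produce cascades, here is a sandpile-style sketch in Python; it is an analogue of this class of models with illustrative choices (the leak probability, the network, degree-based thresholds), not the thesis's actual stochastic Markovian model.

```python
import random
import networkx as nx   # illustrative; any adjacency structure would do

LEAK = 0.05  # probability a transmitted particle is lost; provides dissipation

def step(G, load):
    """One update of a sandpile-style integrate-and-fire process:
    a random node receives one particle; any node whose load reaches
    its degree fires, sending one particle to each neighbour, which
    may trigger an avalanche (a cascade of further firings)."""
    load[random.choice(list(G))] += 1
    avalanche, unstable = 0, list(G)
    while unstable:
        n = unstable.pop()
        d = G.degree(n)
        if d == 0 or load[n] < d:
            continue
        load[n] -= d                       # the node fires
        avalanche += 1
        for m in G[n]:
            if random.random() > LEAK:     # particle transmitted to neighbour
                load[m] += 1
                unstable.append(m)
        if load[n] >= d:                   # node may remain above threshold
            unstable.append(n)
    return avalanche

G = nx.gnp_random_graph(200, 0.03, seed=1)
load = {n: 0 for n in G}
sizes = [step(G, load) for _ in range(10_000)]   # avalanche-size statistics
```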
Abstract:
Osteoarticular allograft transplantation is a popular treatment method in wide surgical resections with large defects, and for this reason hospitals are building bone data banks. Performing the optimal allograft selection from a bone bank is crucial to the surgical outcome and patient recovery; however, current approaches are very time-consuming, hindering an efficient selection. We present an automatic method based on registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm. This term replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed with two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the most optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries and choices of weights, since this fine-tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed approach provides a clear anatomically motivated modeling contribution through the specific three-component transformation model, and clearly shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons approach. In addition, this simple model improves the robustness of the demons algorithm while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
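For orientation, in the log-Euclidean polyaffine framework the fusion of a few regional transformations into one smooth deformation is often written as a velocity field of the following form (schematic; the thesis's exact weights and parametrization may differ):

```latex
v(x) \;=\; \sum_{i=1}^{3} w_i(x)\, \log(T_i)\, \tilde{x}
```

where the T_i are the three regional transformations (femoral head, condyles, shaft) in homogeneous coordinates, log(T_i) their matrix logarithms, x-tilde the homogeneous coordinate of x, and the w_i(x) normalized spatial weights summing to one; the deformation itself is recovered by integrating (exponentiating) this velocity field.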
Abstract:
We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.
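As background, a Markov Random Field energy of the kind minimized here typically decomposes into unary and pairwise terms over the mesh-free nodes (schematic form; the actual potentials are not specified in the abstract):

```latex
E(u) \;=\; \sum_{i} \psi_i(u_i) \;+\; \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(u_i, u_j)
```

where u_i is the displacement of node i, the unary terms drive the displacements toward the biomechanically expected mass-effect, and the pairwise terms over neighboring node pairs N enforce a smooth, physically plausible deformation.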
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
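To illustrate what a second-order linear dynamic constraint model does, the following Python sketch filters a commanded trajectory into an achieved one; the natural frequency and damping values are illustrative placeholders, not identified engine dynamics.

```python
import numpy as np

def second_order_lag(u, dt, wn, zeta):
    """Discrete simulation of a second-order linear constraint model,
    y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u, translating a commanded
    trajectory u(t) into the actually achieved trajectory y(t)."""
    y, yd = u[0], 0.0
    out = np.empty_like(u)
    for k, uk in enumerate(u):
        ydd = wn**2 * (uk - y) - 2.0 * zeta * wn * yd
        yd += ydd * dt           # explicit Euler integration
        y += yd * dt
        out[k] = y
    return out

# A commanded step in, e.g., an air-handling parameter cannot be
# achieved instantly; the achieved value lags and feeds the models:
t = np.arange(0.0, 5.0, 0.01)
commanded = np.where(t > 1.0, 1.0, 0.0)
achieved = second_order_lag(commanded, dt=0.01, wn=4.0, zeta=0.9)
```

Feeding `achieved` rather than `commanded` into the transient emission and torque models is what keeps the optimizer from returning quasi-static solutions.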
Abstract:
Releasing captive-bred fish into natural environments (stocking) is common in fisheries worldwide. Although stocking is believed to have a positive effect on fish abundance over the short term, little is known about the long-term consequences of recurrent stocking and its influence on natural populations. In fact, there are growing concerns that genetically maladapted captive-bred fish can eventually reduce the abundance of the natural population. In this study, we develop a simple model to quantitatively investigate the conditions under which recurrent stocking has long-term effects on the natural population. Using a population dynamics model that takes into account density-dependent recruitment, a gene responsible for the fitness difference between wild and captive-bred fish, and hybridization between them, we show that recurrent stocking contributes little or nothing to stock enhancement without a replacement of the wild gene pool by the captive-bred gene pool. The model further predicts that stocking at an intermediate level causes a reduction, rather than an enhancement, of population size over the long term. The population decline due to stocking is attributed to the fitness disadvantage of captive-bred fish and strong overcompensation at the recruitment stage. These results suggest that it would be difficult to simultaneously attain population size recovery and conservation of the local gene pool when captive-bred fish have a fitness disadvantage in the wild, although caution is needed when applying predictions from the simplified model to a specific species or population.
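Strong overcompensation at recruitment of the kind invoked here is commonly represented by a Ricker-type recruitment function (shown schematically; the paper's exact functional form is not reproduced here):

```latex
R_t \;=\; \alpha S_t\, e^{-\beta S_t}
```

with S_t the parental stock (wild plus surviving stocked fish), alpha the density-independent productivity and beta the strength of overcompensation; because recruitment R_t declines for large S_t, heavy stocking can push total spawner density into the regime where recruitment, and hence long-term population size, decreases.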
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
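The two committee ideas can be sketched as follows in Python; the array shapes and the per-sample decision rule are a plausible reading of the abstract, not the paper's exact procedure, and `models`/`spaces` are hypothetical stand-ins for the trained regressors and GT-Power-derived transformations.

```python
import numpy as np

def simple_committee(models, spaces, x):
    """Average the predictions of one model per transformed input space.
    `spaces` maps the raw engine inputs x into each transformed space;
    `models` are the regressors trained in those spaces."""
    preds = [m.predict(t(x)) for m, t in zip(models, spaces)]
    return np.mean(preds, axis=0)

def min_variance_committee(method_preds):
    """Pick, per sample, the input space where the three modeling
    methods disagree least (their prediction variance is smallest).
    method_preds: array of shape (n_spaces, n_methods, n_samples)."""
    disagreement = method_preds.var(axis=1)      # (n_spaces, n_samples)
    best_space = disagreement.argmin(axis=0)     # best space per sample
    mean_preds = method_preds.mean(axis=1)       # (n_spaces, n_samples)
    return mean_preds[best_space, np.arange(method_preds.shape[2])]
```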
Abstract:
Acute stress reactions (ASR) and postpartum depressive symptoms (PDS) are frequent after childbirth. The present study addresses the change and overlap of ASR and PDS from 1 to 3 weeks postpartum and examines the interplay of caregiver support and subjective birth experience with regard to the development of ASR/PDS within a longitudinal path model.
Abstract:
Neurons generate spikes reliably with millisecond precision if driven by a fluctuating current; is it then possible to predict the spike timing knowing the input? We determined the parameters of an adapting threshold model using data recorded in vitro from 24 layer-5 pyramidal neurons from rat somatosensory cortex, stimulated intracellularly by a fluctuating current simulating synaptic bombardment in vivo. The model generates output spikes whenever the membrane voltage (a filtered version of the input current) reaches a dynamic threshold. We find that for input currents with large fluctuation amplitude, up to 75% of the spike times can be predicted with a precision of ±2 ms. Some of the intrinsic neuronal unreliability can be accounted for by a noisy threshold mechanism. Our results suggest that, under random current injection into the soma, (i) neuronal behavior in the subthreshold regime can be well approximated by a simple linear filter; and (ii) most of the nonlinearities are captured by a simple threshold process.
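A minimal Python sketch of such an adapting-threshold model follows, assuming illustrative time constants, threshold jump and reset behavior rather than the fitted parameters reported in the paper.

```python
import numpy as np

def adaptive_threshold_spikes(I, dt, tau_m=10.0, tau_th=50.0,
                              th0=1.0, dth=0.5):
    """Adapting-threshold model: the 'voltage' is the input current
    passed through a leaky (exponential) filter, and a spike is emitted
    whenever it crosses a dynamic threshold that jumps by `dth` at each
    spike and decays back to the baseline `th0`."""
    v, th = 0.0, th0
    spikes = []
    for k, i_k in enumerate(I):
        v += dt * (-v + i_k) / tau_m        # filtered version of the input
        th += dt * (th0 - th) / tau_th      # threshold relaxes to baseline
        if v >= th:
            spikes.append(k * dt)           # record the spike time
            th += dth                       # spike-triggered adaptation
            v = 0.0                         # reset after the spike
    return spikes

rng = np.random.default_rng(0)
I = 1.2 + 0.8 * rng.standard_normal(100_000)   # fluctuating drive, dt = 0.1 ms
times = adaptive_threshold_spikes(I, dt=0.1)
```

Adding noise to the threshold crossing would model the residual, intrinsically unreliable fraction of spikes mentioned above.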
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm, by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme is simply applying it twice within each cycle of the extrapolation method. Here we focus on first-order or rank-one extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1, while avoiding both near breakdown and stagnation. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood, its gradient or its Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow and each EM step is computationally demanding.
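The squaring idea can be sketched as follows in Python; this is a schematic of a squared first-order update using one common steplength choice and a basic safeguard, omitting the stagnation and near-breakdown handling that the hybrid scheme adds.

```python
import numpy as np

def squarem_step(theta, em_update):
    """One squared extrapolation cycle built on a basic EM map.
    `em_update` performs a single EM iteration on the parameter
    vector `theta` (a NumPy array). Convergence tests omitted."""
    theta1 = em_update(theta)
    theta2 = em_update(theta1)
    r = theta1 - theta                     # first difference
    v = (theta2 - theta1) - r              # second difference
    # one common steplength choice for squared first-order schemes
    alpha = -np.sqrt(r @ r) / np.sqrt(v @ v)
    alpha = min(alpha, -1.0)               # safeguard: alpha = -1 is plain EM
    # apply the one-step extrapolation twice ("squaring")
    theta_new = theta - 2.0 * alpha * r + alpha ** 2 * v
    return em_update(theta_new)            # stabilizing EM step at the end
```

Note that only the EM map itself is called, which is why no gradient, Hessian or complete-data log-likelihood is needed; with alpha = -1 the update reduces exactly to two EM steps.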
Abstract:
In Malani and Neilsen (1992) we proposed alternative estimates of the survival function (for time to disease) using a simple marker that describes time to some intermediate stage in a disease process. In this paper we derive the asymptotic variance of one such proposed estimator using two different methods and compare terms of order 1/n when there is no censoring. In the absence of censoring, the asymptotic variance obtained using the Greenwood-type approach converges to the exact variance up to terms involving 1/n. But the asymptotic variance obtained using the theory of counting processes and results from Voelkel and Crowley (1984) on semi-Markov processes has a different term of order 1/n. It is not clear to us at this point why the variance formulae from the latter approach give different results.
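For reference, the Greenwood-type approach referred to here builds on Greenwood's classical variance formula for a survival estimator (stated schematically):

```latex
\widehat{\operatorname{Var}}\bigl(\hat S(t)\bigr) \;=\; \hat S(t)^2 \sum_{t_i \le t} \frac{d_i}{n_i (n_i - d_i)}
```

where d_i events occur among the n_i subjects at risk at time t_i; expanding such expressions in powers of 1/n isolates the order-1/n terms whose disagreement across the two derivations is discussed above.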