989 results for Ideal (model)
Abstract:
Even though the Standard Model with a Higgs mass m_H = 125 GeV possesses no bulk phase transition, its thermodynamics still experiences a "soft point" at temperatures around T = 160 GeV, with a deviation from ideal gas thermodynamics. Such a deviation may have an effect on precision computations of weakly interacting dark matter relic abundances if their mass is in the few TeV range, or on leptogenesis scenarios operating in this temperature range. Making use of results from lattice simulations based on a dimensionally reduced effective field theory, we estimate the relevant thermodynamic functions across the crossover. The results are tabulated in a numerical form permitting their insertion as a background equation of state into cosmological particle production/decoupling codes. We find that Higgs dynamics induces a non-trivial "structure" visible e.g. in the heat capacity, but that in general the largest radiative corrections originate from QCD effects, reducing the energy density by a couple of percent from the free value even at T > 160 GeV.
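As an illustration of how such a tabulated equation of state might be consumed by a particle production/decoupling code, here is a minimal interpolation sketch. All grid values below are hypothetical placeholders, not the paper's actual table; only the qualitative feature (energy density a few percent below the free value) follows the abstract.

```python
import bisect

# Hypothetical tabulated deviation of the energy density from the free
# (ideal gas) value, e(T)/e_free, across the electroweak crossover.
# The numbers are illustrative only, not taken from the paper.
T_grid = [140.0, 150.0, 160.0, 170.0, 180.0]   # temperature in GeV
ratio = [0.965, 0.968, 0.970, 0.972, 0.973]    # e(T)/e_free, a few percent below 1

def energy_density_ratio(T):
    """Piecewise-linear interpolation of the tabulated e/e_free at T (GeV)."""
    if T <= T_grid[0]:
        return ratio[0]
    if T >= T_grid[-1]:
        return ratio[-1]
    i = bisect.bisect_right(T_grid, T) - 1
    frac = (T - T_grid[i]) / (T_grid[i + 1] - T_grid[i])
    return ratio[i] + frac * (ratio[i + 1] - ratio[i])
```

A decoupling code would evaluate such a function once per integration step in place of the free-field expression.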
Abstract:
This study provides a theoretical assessment of the potential bias due to differential lateral transport in multi-proxy studies based on a range of marine microfossils. Microfossils preserved in marine sediments are at the centre of numerous proxies for paleoenvironmental reconstructions. The precision of these proxies rests on the assumption that they accurately represent the overlying water column properties and faunas. Here we assess the possibility of a syn-depositional bias in sediment assemblages caused by horizontal drift in the water column, due to differential settling velocities of sedimenting particles based on their shape, size and density, and due to differences in current velocities. Specifically, we calculate the post-mortem lateral transport undergone by planktic foraminifera and a range of other biological proxy carriers (diatoms, radiolaria and fecal pellets transporting coccolithophores) in several regions with high current velocities. We find that lateral transport of different planktic foraminiferal species is minimal due to their high settling velocities. No significant shape- or size-dependent sorting occurs before reaching the sediment, making planktic foraminifera ideal proxy carriers. In contrast, diatoms, radiolaria and fecal pellets can be transported up to 500 km in some areas. In the Agulhas Current, for example, transport can lead to differences of up to 2°C between temperature reconstructions from different proxies, as a result of their differing settling velocities. Therefore, sediment samples are likely to contain different proportions of local and imported particles, decreasing the precision of proxies based on these groups and the accuracy of the temperature reconstruction.
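The scale of this bias can be illustrated with a back-of-the-envelope drift estimate: Stokes settling through a water column with a uniform horizontal current. All parameter values below (sizes, densities, depth, current speed) are hypothetical and much simpler than the study's actual calculations, which account for shape and regional current profiles.

```python
def stokes_settling_velocity(diameter, particle_density, water_density=1027.0,
                             viscosity=1.4e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) for a small sphere of the
    given diameter (m) and density (kg/m^3) in seawater."""
    return g * (particle_density - water_density) * diameter**2 / (18.0 * viscosity)

def lateral_drift(depth, settling_velocity, current_velocity):
    """Horizontal distance (m) travelled while sinking through `depth` (m)
    in a uniform horizontal current (m/s)."""
    sinking_time = depth / settling_velocity
    return sinking_time * current_velocity

# A fast-sinking foraminifer-sized particle vs. a slow-sinking diatom-sized one
# (illustrative sizes/densities only):
w_foram = stokes_settling_velocity(3e-4, 1500.0)   # ~300 um calcite shell
w_diatom = stokes_settling_velocity(3e-5, 1100.0)  # ~30 um low-density frustule
```

Because drift scales inversely with settling velocity, the slow sinkers can travel orders of magnitude farther before deposition, which is the mechanism behind the proxy offsets described above.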
Abstract:
Calcareous nannoplankton assemblages and benthic δ18O isotopes of Pliocene deep-sea sediments from ODP Site 1172 (east of Tasmania) have been studied to improve our knowledge of Southern Ocean paleoceanography. Our study site is located just north of the Subtropical Front (STF), an ideal setting to monitor migrations of the STF during our study period, between 3.45 and 2.45 Ma. The assemblage identified at ODP Site 1172 has been interpreted as characteristic of the transitional-zone water mass, located south of the STF, based on: (i) the low abundances (< 1%) of subtropical taxa, (ii) relatively high percentages of Coccolithus pelagicus, a subpolar species, (iii) abundances of 2-10% of Calcidiscus leptoporus, a species that frequently inhabits the zone south of the STF, and (iv) the high abundances of small Noelaerhabdaceae, which at present dominate the zone south of the STF. Across our interval the calcareous nannoplankton manifests glacial-interglacial variability. We have identified cold events, characterized by high abundances of C. pelagicus, which coincide with glacial periods, except during G7. After 3.1 Ma cold events become more frequent, in concordance with global cooling trends. Around 2.75 Ma, the interglacial stage G7 is characterized by anomalously low temperatures, which are most likely linked to the definitive closure of the Central American Seaway (CAS), an event that is believed to have had global consequences. A gradual increase of very small Reticulofenestra across our section marks a significant trend in the small Noelaerhabdaceae species group and has been linked to generally enhanced mixing of the water column, in agreement with previous studies. It is suggested that a rapid decline of small Gephyrocapsa after isotopic stage G7 might be related to the cooling observed at our study site after the closure of the CAS.
Abstract:
This paper analyzes the noise and gain measurement of microwave differential amplifiers using two passive baluns. A general model of the baluns is considered, including potential losses and phase/amplitude unbalances. This analysis allows de-embedding the actual gain and noise performance of the isolated amplifier by using single-ended measurements of the cascaded system and baluns. Finally, measured results from two amplifier prototypes are used to validate the theoretical principles.
Abstract:
Performance studies of actual parallel systems usually tend to concentrate on the effectiveness of a given implementation. This is often done in the absolute, without quantitative reference to the potential parallelism contained in the programs from the point of view of the execution paradigm. We feel that studying the parallelism inherent to the programs is interesting, as it gives information about the best possible behavior of any implementation and thus allows contrasting the results obtained. We propose a method for obtaining ideal speedups for programs through a combination of sequential or parallel execution and simulation, together with the algorithms that implement the method. Our approach is novel and, we argue, more accurate than previously proposed methods, in that a crucial part of the data - the execution times of tasks - is obtained from actual executions, while speedup is computed by simulation. This allows obtaining speedup (and other) data under controlled and ideal assumptions regarding issues such as the number of processors, the scheduling algorithm, overheads, etc. The results obtained can be used, for example, to evaluate the ideal parallelism that a program contains for a given model of execution and to compare such "perfect" parallelism to that obtained by a given implementation of that model. We also present a tool, IDRA, which implements the proposed method, and results obtained with IDRA for benchmark programs, which are then compared with those obtained in actual executions on real parallel systems.
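The core idea, measured task times fed into a scheduling simulation under idealized assumptions, can be sketched as follows. This is a toy list scheduler for independent tasks, not IDRA itself (which handles the dependence structure of the execution model); the zero-overhead assumption and longest-task-first policy are illustrative choices.

```python
import heapq

def ideal_speedup(task_times, num_processors):
    """Simulate executing independent tasks, whose durations were measured in
    an actual run, on an idealized machine with zero scheduling overhead
    (longest-task-first list scheduling), and return the speedup over
    sequential execution."""
    sequential = sum(task_times)
    # Each processor is represented by the time at which it next becomes free.
    procs = [0.0] * num_processors
    heapq.heapify(procs)
    for t in sorted(task_times, reverse=True):
        free_at = heapq.heappop(procs)      # earliest-free processor
        heapq.heappush(procs, free_at + t)  # assign the task to it
    parallel = max(procs)                   # makespan of the simulated schedule
    return sequential / parallel
```

Comparing this ideal figure with the speedup measured on a real system quantifies how much of the available parallelism the implementation actually exploits.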
Abstract:
End-user development (EUD) is much hyped, and its impact has outstripped even the most optimistic forecasts. Even so, the vision of end users programming their own solutions has not yet materialized. This will continue to be so unless we, in both industry and the research community, set ourselves the ambitious challenge of devising, end to end, an end-user application development model for a new age of EUD tools. We have embarked on this venture, and this paper presents the main insights and outcomes of our research and development efforts as part of a number of successful EU research projects. Our proposal aims not only to reshape software engineering to meet the needs of EUD but also to refashion its components as solution building blocks instead of programs and software developments. This way, end users will really be empowered to build solutions based on artefacts akin to their expertise and understanding of ideal solutions.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work focuses on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed within different processing units. In this way, both multicore and multiple many-core processing units can be efficiently exploited. Tests have been conducted with probability models that take into account noncolinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR.
The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality in terms of signal-to-noise ratio as compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques, aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control of the trade-off between speed and quality during reconstruction.
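The contour condition described in the Methods, namely including a voxel in the ROR when its kernel probability exceeds a fixed cut-off, can be sketched for an elliptical Gaussian kernel evaluated in the LOR-aligned frame. The kernel widths below are hypothetical, and the real algorithm searches voxels dynamically rather than testing them one by one.

```python
import math

def kernel_probability(y, z, sigma_y, sigma_z):
    """Unnormalized elliptical Gaussian response, evaluated in a reference
    frame whose x-axis is aligned with the ideal LOR (y, z are the
    transverse distances from the LOR)."""
    return math.exp(-0.5 * ((y / sigma_y) ** 2 + (z / sigma_z) ** 2))

def in_ror(y, z, sigma_y, sigma_z, threshold):
    """A voxel belongs to the region of response (ROR) if its kernel
    probability exceeds the dynamically chosen cut-off threshold."""
    return kernel_probability(y, z, sigma_y, sigma_z) >= threshold
```

Raising the threshold shrinks the ROR and speeds up reconstruction at some cost in accuracy, which is the speed/quality trade-off the Conclusions refer to.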
Abstract:
Structure from Motion (SfM) is a new form of photogrammetry that automates the rendering of georeferenced 3D models of objects using digital photographs and independently surveyed Ground Control Points (GCPs). This project seeks to quantify the error found in Digital Elevation Models (DEMs) produced using SfM. I modeled a rockslide at the Cadman Quarry (Monroe, Washington) because the surface is vegetation-free, which is ideal for SfM and Terrestrial LiDAR Scanner (TLS) surveys. By conducting SfM, TLS, and GPS surveys at the same time, I attempted to find the deviation of the SfM model from the TLS model and the GPS points. From these deviations, I computed the Root-Mean-Square Error (RMSE) between the SfM DEM and the GPS positions. The RMSE of the SfM model compared to the surveyed GPS points is 17 cm. I propagated the uncertainty of the GPS points with the RMSE of the SfM model to find the uncertainty of the SfM model relative to the NAD 1984 datum, which is 27 cm. This study did not produce a model from the TLS with sufficient resolution on horizontal surfaces to compare to the surveyed GPS points.
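The two statistics reported above, an RMSE against check points followed by quadrature propagation of the control-point uncertainty, can be reproduced generically. The functions are standard formulas; the sample values in the test are arbitrary, not the project's survey data.

```python
import math

def rmse(model_values, reference_values):
    """Root-mean-square error between model elevations and surveyed
    check-point elevations (same units, paired element-wise)."""
    n = len(model_values)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(model_values, reference_values)) / n)

def propagate(rmse_model, sigma_reference):
    """Combine the model-vs-check-point RMSE with the uncertainty of the
    check points themselves, in quadrature, to obtain the model
    uncertainty relative to the underlying datum."""
    return math.sqrt(rmse_model ** 2 + sigma_reference ** 2)
```

This is how a 17 cm RMSE against GPS points can grow to a larger datum-relative uncertainty once the GPS points' own error is folded in.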
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
The Virtual Learning Environment (VLE) is one of the fastest growing areas in educational technology research and development. To achieve learning effectiveness, ideal VLEs should be able to identify learning needs and customize solutions, with or without an instructor to supplement instruction. Such systems are called Personalized VLEs (PVLEs). For PVLEs to succeed, comprehensive conceptual models of PVLEs are essential; such conceptual modeling is important because it facilitates early detection and correction of system development errors. Therefore, in order to capture PVLE knowledge explicitly, this paper focuses on the development of conceptual models for PVLEs, including models of knowledge primitives in terms of learner, curriculum, and situational models; models of VLEs on general pedagogical bases; and, in particular, the definition of an ontology of PVLEs on the constructivist pedagogical principle. Based on these comprehensive conceptual models, a prototype multiagent-based PVLE has been implemented. A field experiment was conducted to investigate learning achievements by comparing personalized and non-personalized systems. The results indicate that the PVLE we developed under our comprehensive ontology provides significant learning gains. These comprehensive models also provide a solid knowledge representation framework for PVLE development practice, guiding the analysis, design, and development of PVLEs. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
In traditional TOPSIS, the ideal solutions are assumed to be located at the endpoints of the data interval. However, not all performance attributes possess ideal values at the endpoints. We term performance attributes that have ideal values at the extreme points Type-1 attributes. Type-2 attributes, in contrast, possess ideal values somewhere within the data interval rather than at the extreme endpoints. This poses a preference-ranking problem when all attributes are computed under the assumption that they are of the Type-1 kind. To overcome this issue, we propose a new fuzzy DEA method for computing the ideal values and distance function of Type-2 attributes within a TOPSIS methodology. Our method allows Type-1 and Type-2 attributes to be included in an evaluation system without compromising the ranking quality. The efficacy of the proposed model is illustrated with a vendor evaluation case for a high-tech investment decision-making exercise. A comparative analysis with traditional TOPSIS is also presented. © 2012 Springer Science+Business Media B.V.
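The Type-1/Type-2 distinction can be made concrete with the distance-to-ideal step of TOPSIS. This is a simplified crisp sketch with hypothetical numbers; the paper's actual contribution is a fuzzy DEA method for locating the Type-2 ideal values, which is not reproduced here.

```python
def distance_to_ideal(value, ideal, attr_range):
    """Range-normalized distance of an attribute value from its ideal value.
    For a Type-1 attribute the ideal sits at an endpoint of the data
    interval; for a Type-2 attribute it lies strictly inside the interval."""
    lo, hi = attr_range
    return abs(value - ideal) / (hi - lo)

# Type-1 benefit attribute: the ideal is the upper endpoint of the interval.
d1 = distance_to_ideal(7.0, 10.0, (0.0, 10.0))
# Type-2 attribute (e.g. an optimal operating setting): the ideal is interior,
# so a value of 7.0 is penalized for overshooting 5.0, not rewarded.
d2 = distance_to_ideal(7.0, 5.0, (0.0, 10.0))
```

Treating the Type-2 attribute as Type-1 would wrongly rank 7.0 above 5.0, which is exactly the ranking distortion the proposed method removes.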
Abstract:
The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ("noise check") or were blank ("blank check"). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when the signal was presented in the fovea. For small check sizes, however, pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise-check and blank-check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
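The two-stage combination described at the end of the abstract can be illustrated with a minimal numerical sketch. The window size, internal-noise level, and relevance mask below are hypothetical parameters for illustration, not the fitted model from the study.

```python
import random

def two_stage_pool(local_signals, pool_size, relevant, noise_sd=0.1, rng=None):
    """Two-stage pooling sketch: (1) mandatory integration of local filter
    responses over fixed-size windows, each limited by internal noise;
    (2) task-dependent combination restricted to task-relevant windows."""
    rng = rng or random.Random(0)
    # Stage 1: average within consecutive fixed-size windows and add
    # internal noise at each pooled location.
    stage1 = []
    for i in range(0, len(local_signals), pool_size):
        window = local_signals[i:i + pool_size]
        stage1.append(sum(window) / len(window) + rng.gauss(0.0, noise_sd))
    # Stage 2: combine only the first-stage outputs flagged as relevant
    # (an ideal observer's selection; humans fail at this for small checks).
    picked = [s for s, r in zip(stage1, relevant) if r]
    return sum(picked) / len(picked)
```

Setting the relevance mask to all-True mimics the indiscriminate pooling observed for small check sizes.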
Abstract:
Batch-mode reverse osmosis (batch-RO) operation is considered a promising desalination method due to its low energy requirement compared to other RO system arrangements. To improve and predict batch-RO performance, studies of concentration polarization (CP) were carried out. The Kimura-Sourirajan mass-transfer model is applied and validated by experiments with two different spiral-wound RO elements. Explicit analytical Sherwood correlations are derived from the experimental results. For batch-RO operation, a new genetic algorithm method is developed to estimate the Sherwood correlation parameters, taking into account the effects of variation in operating parameters. Analytical procedures are presented; mass transfer coefficient models are then developed for the different operation modes, i.e., batch and continuous RO. The CP-related energy loss in batch-RO operation is quantified from the resulting relationship between feed flow rates and mass transfer coefficients. It is found that CP increases energy consumption in batch-RO by about 25% compared to the ideal case in which CP is absent. For the continuous RO process, the derived Sherwood correlation predicts CP accurately. In addition, we determine the optimum feed flow rate of our batch-RO system.
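The chain from a Sherwood correlation to a CP factor can be sketched with the standard film-model relations. The correlation coefficients below are generic placeholders, not the values fitted by the study's genetic algorithm.

```python
import math

def sherwood(Re, Sc, a=0.065, b=0.875, c=0.25):
    """Generic power-law Sherwood correlation, Sh = a * Re^b * Sc^c
    (coefficients are illustrative placeholders, not fitted values)."""
    return a * Re**b * Sc**c

def mass_transfer_coefficient(Sh, D, d_h):
    """Film mass transfer coefficient k = Sh * D / d_h, with solute
    diffusivity D (m^2/s) and hydraulic diameter d_h (m)."""
    return Sh * D / d_h

def cp_factor(Jw, k):
    """Film-model concentration polarization factor exp(Jw / k): the ratio
    of membrane-wall concentration to bulk concentration at permeate
    flux Jw (m/s)."""
    return math.exp(Jw / k)
```

Higher feed flow raises Re, hence Sh and k, and so suppresses CP; this trade-off against pumping losses is what makes an optimum feed flow rate exist.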
Abstract:
We present an analytical model describing the complex dynamics of a hybrid system consisting of resonantly coupled classical resonators and quantum structures. Classical resonators in our model correspond to plasmonic metamaterials of various geometries, as well as other types of nano- and microstructures whose optical responses can be described classically. Quantum resonators are represented by atoms or molecules, or their aggregates (for example, quantum dots, carbon nanotubes, dye molecules, polymers or biomolecules, etc.), which can be accurately modelled only with a quantum mechanical approach. Our model is based on a set of equations that combines the well-established density matrix formalism appropriate for quantum systems with harmonic-oscillator equations ideal for modelling sub-wavelength plasmonic and optical resonators. As a particular example of the application of our model, we show that the saturation nonlinearity of carbon nanotubes increases multifold in the resonantly enhanced near field of a metamaterial. Within the framework of our model, we discuss the effect of inhomogeneity of the carbon-nanotube layer (bandgap value distribution) on the nonlinearity enhancement. © 2012 IOP Publishing Ltd.
Abstract:
This thesis deals with the evaporation of non-ideal liquid mixtures using a multicomponent mass transfer approach. It develops the concept of evaporation maps as a convenient way of representing the dynamic composition changes of ternary mixtures during an evaporation process. Evaporation maps represent the residual composition of evaporating ternary non-ideal mixtures over the full range of composition, and are analogous to the commonly-used residue curve maps of simple distillation processes. The evaporation process initially considered in this work involves gas-phase limited evaporation from a liquid or wetted-solid surface, over which a gas flows at known conditions. Evaporation may occur into a pure inert gas, or into one pre-loaded with a known fraction of one of the ternary components. To explore multicomponent mass-transfer effects, a model is developed that uses an exact solution to the Maxwell-Stefan equations for mass transfer in the gas film, with a lumped approach applied to the liquid phase. Solutions to the evaporation model take the form of trajectories in temperature-composition space, which are then projected onto a ternary diagram to form the map. Novel algorithms are developed for computation of pseudo-azeotropes in the evaporating mixture, and for calculation of the multicomponent wet-bulb temperature at a given liquid composition. A numerical continuation method is used to track the bifurcations which occur in the evaporation maps, where the composition of one component of the pre-loaded gas is the bifurcation parameter. The bifurcation diagrams can in principle be used to determine the required gas composition to produce a specific terminal composition in the liquid. A simple homotopy method is developed to track the locations of the various possible pseudo-azeotropes in the mixture. The stability of pseudo-azeotropes in the gas-phase limited case is examined using a linearized analysis of the governing equations. 
Algorithms for the calculation of separation boundaries in the evaporation maps are developed using an optimization-based method, as well as a method employing eigenvectors derived from the linearized analysis. The flexure of the wet-bulb temperature surface is explored, and it is shown how evaporation trajectories cross ridges and valleys, so that ridges and valleys of the surface do not coincide with separation boundaries. Finally, the assumption of gas-phase limited mass transfer is relaxed, by employing a model that includes diffusion in the liquid phase. A finite-volume method is used to solve the system of partial differential equations that results. The evaporation trajectories for the distributed model reduce to those of the lumped (gas-phase limited) model as the diffusivity in the liquid increases; under the same gas-phase conditions the permissible terminal compositions of the distributed and lumped models are the same.
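The trajectories that make up such a map can be sketched, in the simplest lumped limit, by integrating residue-curve-style equations dx_i/dξ = x_i - y_i. The constant-relative-volatility vapour model below is a toy stand-in for the thesis's full Maxwell-Stefan treatment, and the volatilities and starting composition are hypothetical.

```python
def vapor_composition(x, alpha):
    """Vapour mole fractions for a constant-relative-volatility toy model:
    y_i = alpha_i * x_i / sum_j(alpha_j * x_j)."""
    s = sum(a * xi for a, xi in zip(alpha, x))
    return [a * xi / s for a, xi in zip(alpha, x)]

def evaporation_trajectory(x0, alpha, step=0.01, n_steps=500):
    """Forward-Euler integration of dx_i/dxi = x_i - y_i, returning the
    liquid-composition path (the curve projected onto a ternary diagram)."""
    x = list(x0)
    path = [tuple(x)]
    for _ in range(n_steps):
        y = vapor_composition(x, alpha)
        x = [xi + step * (xi - yi) for xi, yi in zip(x, y)]
        total = sum(x)
        x = [xi / total for xi in x]   # keep mole fractions normalized
        path.append(tuple(x))
    return path
```

With this ideal vapour model every trajectory drains toward the least volatile component; the non-ideal mixtures studied in the thesis instead develop pseudo-azeotropes and separation boundaries, which is what the map machinery is designed to capture.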