987 results for MACROSCOPIC QUANTUM PHENOMENA IN MAGNETIC SYSTEMS


Relevance: 100.00%

Abstract:

Cooper, J. & Urquhart, C. (2004). Confidentiality issues in information systems in social care. In K. Grant, D.A. Edgar & M. Jordan (Eds.), Reflections on the past, making sense of today and predicting the future of information systems, 9th annual UKAIS (UK Academy of Information Systems) conference proceedings, 5-7 May 2004, Glasgow Caledonian University (CD-ROM). Glasgow: Glasgow Caledonian University for UKAIS. Sponsorship: AHRC.

Relevance: 100.00%

Abstract:

Wilson, M.S. and Neal, M.J., 'Diminishing Returns of Engineering Effort in Telerobotic Systems', IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, September 2001, volume 31, number 5, pp. 459-465, IEEE Robotics and Automation Society, ed. Dautenhahn, K., Special Issue on Socially Intelligent Agents - The Human in the Loop.

Relevance: 100.00%

Abstract:

We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment. Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol in which peers prefetch and store portions of the streaming media ahead of their playout time, not only turning themselves into possible sources for other peers but also gaining prefetched data that allows them to overcome the departure of their source-peer. This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, disrupting their service and increasing server and network load. Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other children peers, and the efficacy of prefetching in the face of peer departures. We confirm the scalability of our dPAM protocol, which is shown to significantly reduce server load.
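
As a rough, hypothetical illustration of the prefetching idea (not the actual dPAM protocol; the stream rate, download rate, departure time and recovery delay below are made-up values), the following Python sketch tracks a peer's prefetch buffer and checks whether it can ride out a source-peer departure without falling back to the server:

    # Minimal sketch: a peer downloads faster than it plays out, building a
    # prefetch buffer that can cover the gap when its source-peer departs.
    # All rates and times are illustrative, not taken from the paper.
    def simulate_peer(play_rate=1.0, download_rate=1.2, source_departs_at=60.0,
                      recovery_delay=10.0, dt=0.1, duration=200.0):
        """Return True if playback never stalls despite the source departure."""
        buffered = 0.0                  # seconds of media prefetched but not yet played
        t = 0.0
        source_alive = True
        reconnect_at = None
        while t < duration:
            if source_alive and t >= source_departs_at and reconnect_at is None:
                source_alive = False
                reconnect_at = t + recovery_delay   # time to attach to a new source-peer
            if not source_alive and t >= reconnect_at:
                source_alive = True                 # reattached to another peer
            if source_alive:
                buffered += download_rate * dt      # prefetch ahead of playout time
            buffered -= play_rate * dt              # playout consumes the buffer
            if buffered < 0:
                return False                        # stall: would have to go to the server
            t += dt
        return True

    print(simulate_peer())  # True: the prefetched data covers the departure gap

With these illustrative numbers the peer accumulates roughly 12 seconds of media before the departure, which exceeds the 10-second recovery gap, so playback continues uninterrupted.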

Relevance: 100.00%

Abstract:

We study the implications of the effectuation concept for socio-technical artifact design as part of the design science research (DSR) process in information systems (IS). Effectuation logic is the opposite of causal logic: effectuation does not focus on the causes needed to achieve a particular effect, but on the possibilities that can be achieved with extant means and resources. Viewing socio-technical IS DSR through an effectuation lens highlights the possibility of designing the future even without set goals. We suggest that effectuation may be a useful perspective for design in dynamic social contexts, leading to a more differentiated view on the instantiation of mid-range artifacts for specific local application contexts. Design science researchers can draw on this paper's conclusions to view their DSR projects through a fresh lens and to reexamine their research design and execution. The paper also offers avenues for future research to develop more concrete application possibilities of effectuation in socio-technical IS DSR and, thus, enrich the discourse.

Relevance: 100.00%

Abstract:

As the Internet has changed communication, commerce, and the distribution of information, so it is changing Information Systems Research (ISR). The goal of this paper is to bring the application and reliability of online research into the focus of ISR by exploring the extension of online research methods (ORM) into its popular publication outlets. A total of 513 articles from highly ranked ISR publication outlets from the last decade were analyzed using online content analysis. The findings show that online research methods are applied in ISR despite the missing discussion of whether theories and methods that were defined offline remain valid in the new environment and amid its associated challenges.

Relevance: 100.00%

Abstract:

PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike the case with flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. The current work describes a feasible solution to generating plate-specific gain maps. METHODS: Ten high-exposure open field images were taken with an RQA5 spectrum, using a sixth generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates registered using fiducial dots on the plate, the ten images averaged, and then high-pass filtered to remove low frequency contributions from field inhomogeneity. A gain-map was then produced by converting all pixel values in the average into fractions with a mean of one. The resultant gain-map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure levels, beam voltage/spectrum, CR reader used, and registration were investigated. RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the noncorrected case over the range of frequencies from 0.15 to 2.5 mm^-1. At high exposure (40 mR), NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain-map with subsequent images using small fiducial dots, because of slight misregistration during scanning. Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra. CONCLUSIONS: This study demonstrates that a simple gain-map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code and the gain-map for all plates associated with a reader could be stored for future retrieval. These experiments indicated that an improvement in NPS (and hence, DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
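
A minimal numerical sketch of the gain-map procedure described above (the Gaussian high-pass, filter width, image sizes and synthetic data below are illustrative assumptions, not the paper's exact processing chain):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Illustrative sketch of a plate-specific gain map: average registered
    # flat-field exposures, high-pass filter to drop low-frequency field
    # inhomogeneity, normalize to a mean of one, then divide subsequent images.
    def build_gain_map(flat_fields, sigma=50.0):
        """flat_fields: 2D arrays already converted to exposure and registered."""
        avg = np.mean(flat_fields, axis=0)
        low_freq = gaussian_filter(avg, sigma=sigma)       # assumed low-frequency estimate
        fixed_pattern = avg - low_freq + low_freq.mean()   # high-pass: keep plate structure
        return fixed_pattern / fixed_pattern.mean()        # fractions with a mean of one

    def apply_gain_map(image, gain_map):
        """Normalize a single exposure-linearized image by the plate's gain map."""
        return image / gain_map

    # Hypothetical usage with synthetic data standing in for ten open-field images.
    rng = np.random.default_rng(0)
    plate_gain = 1.0 + 0.02 * rng.standard_normal((512, 512))     # fixed-pattern variation
    flats = [1000.0 * plate_gain + 30.0 * rng.standard_normal((512, 512)) for _ in range(10)]
    gain_map = build_gain_map(flats)
    raw = 800.0 * plate_gain + 28.0 * rng.standard_normal((512, 512))
    corrected = apply_gain_map(raw, gain_map)
    print(raw.std() / raw.mean(), corrected.std() / corrected.mean())  # relative noise drops

In this toy example the relative noise of the corrected image is lower than that of the raw image because the fixed-pattern component has been divided out, which is the mechanism behind the NNPS improvement reported above.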

Relevance: 100.00%

Abstract:

The binary A₈B phase (prototype Pt₈Ti) has been experimentally observed in 11 systems. A high-throughput search over all the binary transition intermetallics, however, reveals 59 occurrences of the A₈B phase: Au₈Zn†, Cd₈Sc†, Cu₈Ni†, Cu₈Zn†, Hg₈La, Ir₈Os†, Ir₈Re, Ir₈Ru†, Ir₈Tc, Ir₈W†, Nb₈Os†, Nb₈Rh†, Nb₈Ru†, Nb₈Ta†, Ni₈Fe, Ni₈Mo†*, Ni₈Nb†*, Ni₈Ta*, Ni₈V*, Ni₈W, Pd₈Al†, Pd₈Fe, Pd₈Hf, Pd₈Mn, Pd₈Mo*, Pd₈Nb, Pd₈Sc, Pd₈Ta, Pd₈Ti, Pd₈V*, Pd₈W*, Pd₈Zn, Pd₈Zr, Pt₈Al†, Pt₈Cr*, Pt₈Hf, Pt₈Mn, Pt₈Mo, Pt₈Nb, Pt₈Rh†, Pt₈Sc, Pt₈Ta, Pt₈Ti*, Pt₈V*, Pt₈W, Pt₈Zr*, Rh₈Mo, Rh₈W, Ta₈Pd, Ta₈Pt, Ta₈Rh, V₈Cr†, V₈Fe†, V₈Ir†, V₈Ni†, V₈Pd, V₈Pt, V₈Rh, and V₈Ru† († = metastable, * = experimentally observed). This is surprising given the wealth of new occurrences that are predicted, especially in well-characterized systems (e.g., Cu-Zn). By verifying all experimental results while offering additional predictions, our study serves as a striking demonstration of the power of the high-throughput approach. The practicality of the method is demonstrated in the Rh-W system, where a cluster-expansion-based Monte Carlo model reveals a relatively high order-disorder transition temperature.

Relevance: 100.00%

Abstract:

Localized molecular orbitals (LMOs) are much more compact representations of electronic degrees of freedom than canonical molecular orbitals (CMOs). The most compact representation is provided by nonorthogonal localized molecular orbitals (NOLMOs), which are linearly independent but not orthogonal. Both LMOs and NOLMOs are thus useful for linear-scaling calculations of electronic structures for large systems. Recently, NOLMOs have been successfully applied to linear-scaling calculations with density functional theory (DFT) and to reformulating time-dependent density functional theory (TDDFT) for calculations of excited states and spectroscopy. However, a challenge remains: NOLMO construction from CMOs is still inefficient for large systems. In this work, we develop an efficient method to accelerate NOLMO construction by using predefined centroids of the NOLMOs, thereby removing the nonlinear equality constraints of the original method (J. Chem. Phys. 2004, 120, 9458 and J. Chem. Phys. 2000, 112, 4). NOLMO construction thus becomes an unconstrained optimization. Its efficiency is demonstrated for selected saturated and conjugated molecules. Our method for fast NOLMO construction should lead to efficient DFT and NOLMO-TDDFT applications to large systems.

Relevance: 100.00%

Abstract:

A Fermi gas of atoms with resonant interactions is predicted to obey universal hydrodynamics, in which the shear viscosity and other transport coefficients are universal functions of the density and temperature. At low temperatures, the viscosity has a universal quantum scale ħn, where n is the density and ħ is Planck's constant h divided by 2π, whereas at high temperatures the natural scale is p_T^3/ħ^2, where p_T is the thermal momentum. We used breathing mode damping to measure the shear viscosity at low temperature. At high temperature T, we used anisotropic expansion of the cloud to find the viscosity, which exhibits precise T^(3/2) scaling. In both experiments, universal hydrodynamic equations including friction and heating were used to extract the viscosity. We estimate the ratio of the shear viscosity to the entropy density and compare it with that of a perfect fluid.
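
For orientation, a small Python sketch of the two viscosity scales quoted above; the density, temperatures and the exact definition of the thermal momentum (taken here as p_T = sqrt(m k_B T), which is convention-dependent up to numerical factors) are illustrative assumptions:

    import numpy as np

    hbar = 1.055e-34           # J*s
    k_B = 1.381e-23            # J/K
    m = 6.015 * 1.6605e-27     # kg, mass of a 6Li atom (a typical atom for such experiments)

    n = 1e19                   # m^-3, an illustrative cloud density
    print(f"low-T quantum scale  hbar*n = {hbar * n:.3e} Pa*s")

    for T in (1e-6, 4e-6, 16e-6):          # illustrative temperatures in kelvin
        p_T = np.sqrt(m * k_B * T)         # thermal momentum (illustrative definition)
        high_T_scale = p_T**3 / hbar**2    # proportional to T^(3/2)
        print(f"T = {T:.0e} K:  p_T^3 / hbar^2 = {high_T_scale:.3e} Pa*s")

    # Each factor-of-4 increase in T multiplies the high-temperature scale by 8,
    # i.e. the T^(3/2) scaling observed in the expansion measurements.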

Relevance: 100.00%

Abstract:

To maintain a strict balance between demand and supply in the US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. This model determines, for each time period and power plant, the startup and shutdown times, the amount of power production, and the provisioning of spinning and non-spinning power generation reserves. Such a deterministic optimization model takes as input the characteristics of all the generating units, such as their installed power generation capacity, ramp rates, minimum up and down time requirements, and marginal production costs, as well as the forecast of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. This reserve requirement is determined based on the likelihood of outages on the supply side and on the level of forecast errors in demand and intermittent generation. With increased installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic market clearing models. Rather than using fixed reserve targets as an input, stochastic market clearing models take different scenarios of wind power into consideration and determine the reserve schedule as an output. Using a scaled version of the power generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from BPA (Bonneville Power Administration) data, this paper compares the performance of a stochastic and a deterministic model in market clearing. The two models are compared in their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. The process of building and testing the models indicates that a fair comparison is difficult to obtain because of the multi-dimensional performance metrics considered here and the difficulty of setting up the model parameters in a way that does not advantage or disadvantage one modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic vs. deterministic market clearing models.
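
As a toy illustration of the deterministic model structure described above (a single period, three hypothetical generators, a fixed reserve target, and no startup/shutdown or ramping constraints; all numbers are made up and the real ISO models are far richer), a linear program in Python:

    import numpy as np
    from scipy.optimize import linprog

    costs = np.array([20.0, 35.0, 60.0])        # $/MWh marginal production cost per unit
    res_cost = np.array([2.0, 3.0, 5.0])        # $/MW cost of holding spinning reserve
    caps = np.array([400.0, 300.0, 200.0])      # MW installed capacity per unit
    demand = 650.0                              # MW forecast load
    reserve = 80.0                              # MW fixed reserve requirement

    # Decision vector x = [p1, p2, p3, r1, r2, r3]
    c = np.concatenate([costs, res_cost])
    A_eq = np.array([[1, 1, 1, 0, 0, 0]])       # power balance: p1 + p2 + p3 = demand
    b_eq = np.array([demand])
    A_ub = np.vstack([
        np.hstack([np.eye(3), np.eye(3)]),      # p_i + r_i <= cap_i
        [[0, 0, 0, -1, -1, -1]],                # -(r1 + r2 + r3) <= -reserve
    ])
    b_ub = np.concatenate([caps, [-reserve]])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 6, method="highs")
    p, r = res.x[:3], res.x[3:]
    print("dispatch MW:", p, "reserve MW:", r, "total cost $:", res.fun)

A stochastic variant would replace the single fixed reserve target with wind scenarios and scenario-dependent recourse variables, which mirrors the modeling difference discussed above.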

Relevance: 100.00%

Abstract:

A 3D model of the melt pool created by a moving arc-type heat source has been developed. The model solves the equations of turbulent fluid flow, heat transfer and the electromagnetic field to demonstrate the flow behaviour and phase change in the pool. The coupled effects of buoyancy, capillary (Marangoni) and electromagnetic (Lorentz) forces are included within an unstructured finite volume mesh environment. The movement of the welding arc along the workpiece is accomplished via a moving co-ordinate system. Additionally, a method enabling movement of the weld pool surface by fluid convection is presented, whereby the mesh in the liquid region is allowed to move with the free surface. The surface grid lines move to restore equilibrium at the end of each computational time step, and the interior grid points then adjust following the solution of a Laplace equation.
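
As a schematic illustration of the final step above (interior grid points adjusting via a Laplace equation once the surface nodes have moved), here is a minimal 2D Python sketch using Jacobi relaxation; the grid size, the imposed surface displacement and the convergence tolerance are illustrative assumptions, not the paper's actual scheme:

    import numpy as np

    ni, nj = 21, 21
    x, y = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")

    # Deform the top boundary as a stand-in for the weld pool surface moving.
    y[:, -1] += 0.05 * np.sin(np.pi * x[:, -1])

    def laplace_smooth(f, iters=2000, tol=1e-10):
        """Relax interior values toward the average of their four neighbours,
        keeping boundary values fixed (a discrete Laplace equation)."""
        f = f.copy()
        for _ in range(iters):
            new = f.copy()
            new[1:-1, 1:-1] = 0.25 * (f[2:, 1:-1] + f[:-2, 1:-1] +
                                      f[1:-1, 2:] + f[1:-1, :-2])
            if np.max(np.abs(new - f)) < tol:
                return new
            f = new
        return f

    x_new, y_new = laplace_smooth(x), laplace_smooth(y)
    print("max interior node adjustment:", np.max(np.abs(y_new - y)[1:-1, 1:-1]))

The boundary rows and columns stay fixed, while interior node coordinates relax to a smooth distribution that follows the displaced surface, which is the role the Laplace solve plays for the moving mesh described above.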

Relevance: 100.00%

Abstract:

Computational Fluid Dynamics (CFD) is gradually becoming a powerful and almost essential tool for the design, development and optimization of engineering applications. However, the mathematical modelling of erratic turbulent motion remains the key issue when tackling such flow phenomena. The reliability of CFD analysis depends heavily on the turbulence model employed together with the wall functions implemented. In order to resolve the abrupt changes in the turbulent energy and other parameters in near-wall regions, a particularly fine mesh is necessary, which inevitably increases the computer storage and run-time requirements. Turbulence modelling can be considered one of the three key elements in CFD; precise mathematical theories have evolved for the other two key elements, grid generation and algorithm development. The principal objective of turbulence modelling is to provide computationally efficient and accurate procedures that reproduce the main structures of three-dimensional fluid flows. The flow within an electronic system can be characterized as being in a transitional state due to the low velocities and relatively small dimensions encountered. This paper presents simulated CFD results for an investigation into the predictive capability of turbulence models when considering both fluid flow and heat transfer phenomena. A new two-layer hybrid k-ε/k-l turbulence model for electronic application areas is also presented, which has the advantage of being cheap in terms of the computational mesh required and economical with regard to run-time.

Relevance: 100.00%

Abstract:

This paper describes recent developments made to the stress analysis module within FLOTHERM, extending its capability to handle viscoplastic behavior. It also presents the validation of this approach and results obtained for an SMT resistor as an illustrative example. Lifetime predictions are made using the creep strain energy based models of Darveaux. Comment is made about the applicability of the damage model to the geometry of the joint under study.
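
For orientation, a minimal sketch of a Darveaux-style creep strain energy lifetime estimate of the kind referred to above; the constants K1-K4, the per-cycle strain energy density and the joint diameter are illustrative placeholders, not values from the paper or from FLOTHERM:

    # Sketch of a Darveaux-type lifetime estimate: cycles to crack initiation plus
    # cycles for the crack to grow across the joint, both correlated with the
    # inelastic (creep) strain energy density accumulated per thermal cycle.
    # All numbers below are illustrative placeholders.
    def darveaux_life(delta_W, joint_diameter,
                      K1=50.0, K2=-1.5, K3=5e-7, K4=1.2):
        """Characteristic cycles to failure from a creep strain energy metric.

        delta_W        -- volume-averaged creep strain energy density per cycle
        joint_diameter -- distance the crack must grow to sever the joint
        K1..K4         -- correlation constants (placeholders; units must match delta_W)
        """
        n_initiation = K1 * delta_W ** K2       # cycles to crack initiation
        growth_rate = K3 * delta_W ** K4        # crack growth per cycle
        return n_initiation + joint_diameter / growth_rate

    # Hypothetical usage with a strain energy value from a coupled thermal/stress run.
    print(darveaux_life(delta_W=0.05, joint_diameter=0.5e-3))

The two-term form (initiation plus growth) is the general shape of Darveaux's correlation; the fitted constants and the strain energy extraction would come from the viscoplastic stress analysis described in the paper.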