987 results for theoretical methods
Abstract:
The construction industry is characterised by fragmentation and suffers from a lack of collaboration, often adopting adversarial working practices to achieve deliverables. For the UK Government and the construction industry, BIM is a game changer intended to rectify this fragmentation and promote collaboration. However, it has become clear that there is an essential need for better controls and definitions of both data deliverables and data classification. Traditional methods and techniques for collating and inputting data have been shown to be time-consuming and to add little value to the overall task of improving deliverables. Hence the need arose in the industry to develop a Digital Plan of Work (DPoW) toolkit that would aid the decision-making process, provide the required control over project workflows and data deliverables, and enable better collaboration through transparency of need and delivery. The specification for the existing DPoW was to be an industry-standard method of describing geometric, requirements and data deliveries at key stages of the project cycle, together with a structured and standardised information classification system. However, surveys and interviews conducted within this research indicate that the current DPoW resembles a digitised version of pre-existing plans of work and does not push towards the data-enriched decision-making abilities that advancements in technology now offer. A digital framework is not simply the digitisation of current or historic standard methods and procedures; it is a new, intelligence-driven digital system that uses new tools, processes, procedures and workflows to eradicate waste and increase efficiency. In addition to reporting on the surveys above, this paper presents a theoretical investigation into the use of intelligent decision support systems within a digital plan of work framework. Furthermore, it presents findings on the suitability of utilising advancements in intelligent decision-making system frameworks and artificial intelligence for a UK BIM framework, which should form the foundations of decision-making for projects implemented at BIM Level 2. The gap identified in this paper is that the current digital toolkit does not incorporate the intelligent characteristics available in other industries, despite the advancements in technology and the vast amounts of data that a digital plan of work framework could access, from which it could develop, learn and adapt its decision-making through the live interaction of project stakeholders.
Abstract:
Includes tables.
Abstract:
"Although this manual is intended primarily as the practical companion to Professor A. P. Mathews' textbook, nevertheless it contains considerable explanatory matter in order to help correlate the theoretical and laboratory aspects of the subject matter."--Pref.
Abstract:
"Suggestions for further reading": v. 1, p. 669-683; v. 2, p. 565-578.
Abstract:
The C-13 NMR data of five iminopropadienones R-N=C=C=C=O as well as carbon suboxide, C3O2, have been examined theoretically and experimentally. The best theoretical results were obtained using the GIAO/B3LYP/6-31+G**//MP2/6-31G* level of theory, which reproduces the chemical shifts of the iminopropadienone substituents extremely well while underestimating those of the cumulenic carbons by 5-10 ppm. The computationally faster GIAO/HF/6-31+G**//B3LYP/6-31G* level is also adequate. (C) 2004 Elsevier B.V. All rights reserved.
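As a brief methodological note (not part of the abstract above): GIAO calculations return isotropic magnetic shieldings, and the computed shifts compared with experiment are usually obtained by referencing to TMS calculated at the same level of theory, e.g.

\delta_{\mathrm{calc}}(i) \;=\; \sigma_{\mathrm{iso}}(\mathrm{TMS}) \;-\; \sigma_{\mathrm{iso}}(i),
\qquad
\Delta(i) \;=\; \delta_{\mathrm{calc}}(i) - \delta_{\mathrm{exp}}(i),

so the reported 5-10 ppm underestimation of the cumulenic carbons corresponds to \Delta \approx -5 to -10 ppm for those centres.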
Abstract:
We introduce a new class of quantum Monte Carlo methods, based on a Gaussian quantum operator representation of fermionic states. The methods enable first-principles dynamical or equilibrium calculations in many-body Fermi systems, and, combined with the existing Gaussian representation for bosons, provide a unified method of simulating Bose-Fermi systems. As an application relevant to the Fermi sign problem, we calculate finite-temperature properties of the two-dimensional Hubbard model and the dynamics in a simple model of coherent molecular dissociation.
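For orientation, the two-dimensional Hubbard model referred to above is conventionally written (standard textbook form, not a detail taken from the paper) as

H \;=\; -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
\;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow}
\;-\; \mu \sum_{i,\sigma} n_{i\sigma},

with nearest-neighbour hopping t, on-site repulsion U and chemical potential \mu; finite-temperature properties follow from averages taken with respect to e^{-\beta H} at inverse temperature \beta = 1/k_B T.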
Abstract:
Single-phase Ba(Cd1/3Ta2/3)O3 powder was produced using conventional solid-state reaction methods. Ba(Cd1/3Ta2/3)O3 ceramics with 2 wt % ZnO as a sintering additive, sintered at 1550 °C, exhibited a dielectric constant of approximately 32 and a loss tangent of 5x10^-5 at 2 GHz. X-ray diffraction and thermogravimetric measurements were used to characterize the structural and thermodynamic properties of the material. Ab initio electronic structure calculations were used to give insight into the unusual properties of Ba(Cd1/3Ta2/3)O3, as well as the similar and more widely used material Ba(Zn1/3Ta2/3)O3. While both compounds have a hexagonal Bravais lattice, the P321 space group of Ba(Cd1/3Ta2/3)O3 is reduced from the P-3m1 space group of Ba(Zn1/3Ta2/3)O3 as a result of a distortion of oxygen away from the symmetric position between the Ta and Cd ions. Both compounds have a conduction band minimum and valence band maximum composed mostly of weakly itinerant Ta 5d and Zn 3d/Cd 4d levels, respectively. The covalent nature of the directional d-electron bonding in these high-Z oxides plays an important role in producing a more rigid lattice with higher melting points and enhanced phonon energies, and is suggested to play an important role in producing materials with a high dielectric constant and low microwave loss. (C) 2005 American Institute of Physics.
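To put the quoted loss tangent in the units commonly used for microwave dielectrics (an illustrative conversion, not a figure stated by the authors), the dielectric quality factor is the reciprocal of the loss tangent:

Q \;=\; \frac{1}{\tan\delta} \;=\; \frac{1}{5\times 10^{-5}} \;=\; 2\times 10^{4},
\qquad
Q \cdot f \;\approx\; 2\times 10^{4} \times 2\ \mathrm{GHz} \;=\; 4\times 10^{4}\ \mathrm{GHz}.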
Abstract:
In a recent study, severe distortions were observed in the proton images of an excised, fixed, human brain in an 11.1 Tesla/40 cm MR instrument, and the effect was modeled on phantom images using a finite difference time domain (FDTD) model. In the present study, we extend these simulations to a complete human head, employing a hybrid FDTD and method of moments (MoM) approach, which provides a validated method for simulating biological samples in coil structures. The effect of fixative on the image distortions is explored. Importantly, temperature distributions within the head are also simulated using a bioheat method based on parameters derived from the electromagnetic simulations. The MoM/FDTD simulations confirm that the transverse magnetic field (B1) from a ReCav resonator exhibits good homogeneity in air but strong inhomogeneity when loaded with the head, with or without fixative. The fixative increases the distortions, but they remain significant for the in vivo simulations. The simulated signal intensity (SI) distribution within the sample confirms that the distortions in the experimental images are caused by the complex interactions of the incident electromagnetic fields with tissue, which is heterogeneous in terms of conductivity and permittivity. The temperature distribution is likewise heterogeneous, raising concerns about hot spot generation in the sample that may exceed acceptable levels in future in vivo studies. As human imaging at 11.1 T is some time away, simulations are important for predicting potential safety issues as well as for evaluating practical concerns about image quality. Simulation of a whole human head at 11.1 T implies that the wave behavior presents significant engineering challenges for ultra-high-field (UHF) MRI. Novel strategies will have to be employed in imaging techniques and resonator design for UHF MRI to achieve the theoretical signal-to-noise ratio (SNR) improvements it offers over lower-field systems. (C) 2005 Wiley Periodicals, Inc.
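The abstract does not state which bioheat formulation was used; the Pennes bioheat equation is the form most commonly driven by SAR maps from electromagnetic simulations, and it is sketched here purely as an illustration:

\rho c \,\frac{\partial T}{\partial t}
\;=\; \nabla\!\cdot\!\left(k\,\nabla T\right)
\;+\; \rho_b c_b\,\omega_b\,(T_b - T)
\;+\; Q_{\mathrm{met}}
\;+\; \rho\,\mathrm{SAR},

where \rho, c and k are the tissue density, specific heat and thermal conductivity, the perfusion term models heat exchange with blood at temperature T_b and perfusion rate \omega_b, Q_{\mathrm{met}} is metabolic heating, and \rho\,\mathrm{SAR} converts the specific absorption rate (W/kg) from the field solution into a volumetric heat source.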
Abstract:
Purpose - In many scientific and engineering fields, large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered; heat transfer from the mantle into the upper crust of the Earth is a typical example. The main purpose of this paper is to develop and present a new combined methodology for solving large-scale heat transfer problems with temperature-dependent pore-fluid densities at the lithospheric and crustal scales. Design/methodology/approach - The theoretical approach is used to determine the thickness and the related thermal boundary conditions of the continental crust at the lithospheric scale, so that important information can be provided accurately for establishing a numerical model at the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust at the crustal scale. The main advantage of the proposed combination of theoretical and numerical approaches is that, if the thermal distribution in the crust is of primary interest, the use of a reasonable numerical model at the crustal scale can result in a significant reduction in computational effort. Findings - From the ore body formation and mineralization points of view, the present analytical and numerical solutions demonstrate that the conductive-and-advective lithosphere with variable pore-fluid density is the most favorable lithosphere, because it may result in the thinnest lithosphere, so that the temperature near the surface of the crust can be hot enough to generate shallow ore deposits there. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot materials from the mantle may further reduce the thickness of the lithosphere. Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems at the crustal scale; and investigate the fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be used effectively to account for the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
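The governing equations are not reproduced in the abstract; a standard steady-state formulation for conductive-and-advective heat transfer with a temperature-dependent pore-fluid density (a textbook form, given only for orientation) is

\rho_f(T) \;=\; \rho_{0}\left[\,1 - \beta\,(T - T_{0})\,\right],
\qquad
\mathbf{q} \;=\; -\frac{K}{\mu}\left(\nabla p - \rho_f(T)\,\mathbf{g}\right),
\qquad
\lambda_e \nabla^{2} T \;=\; \rho_f c_{pf}\,\mathbf{q}\cdot\nabla T,

where \beta is the volumetric thermal expansion coefficient of the pore fluid, K the permeability, \mu the dynamic viscosity, \mathbf{q} the Darcy velocity and \lambda_e the effective thermal conductivity of the fluid-saturated rock; an imposed vertical \mathbf{q} plays the role of the upward throughflow (mantle mass flux) discussed above.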
Abstract:
Land-surface processes include a broad class of models that operate at a landscape scale. Current modelling approaches tend to be specialised towards one type of process, yet it is the interaction of processes that is increasingly seen as important for obtaining a more integrated approach to land management. This paper presents a technique and a tool that may be applied generically to landscape processes. The technique tracks moving interfaces across landscapes for processes such as water flow, biochemical diffusion, and plant dispersal. Its theoretical development applies a Lagrangian approach to motion over a Eulerian grid space by tracking quantities across a landscape as an evolving front. An algorithm for this technique, called the level set method, is implemented in a geographical information system (GIS). It fits the field data model in GIS and is implemented as operators in map algebra. The paper describes an implementation of the level set method in a map algebra programming language, called MapScript, and gives example program scripts for applications in ecology and hydrology.
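The level set method evolves a front implicitly as the zero contour of a function \phi satisfying \partial\phi/\partial t + F\,|\nabla\phi| = 0 for a front speed F. The MapScript operators themselves are not given in the abstract; the following NumPy fragment is a minimal, hypothetical sketch of one explicit upwind time step for an outward-moving front on a square raster grid:

import numpy as np

def level_set_step(phi, F=1.0, dx=1.0, dt=0.1):
    """One explicit step of d(phi)/dt + F*|grad(phi)| = 0 (first-order Godunov upwind, F >= 0)."""
    # One-sided differences (np.roll gives periodic boundaries, adequate for a sketch)
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in y
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in y
    # Godunov upwind approximation of the gradient magnitude for an expanding front
    grad = np.sqrt(np.maximum(dmx, 0.0)**2 + np.minimum(dpx, 0.0)**2 +
                   np.maximum(dmy, 0.0)**2 + np.minimum(dpy, 0.0)**2)
    return phi - dt * F * grad

# Example: a circular front, represented as the zero contour of a signed distance raster
y, x = np.mgrid[0:100, 0:100]
phi = np.sqrt((x - 50.0)**2 + (y - 50.0)**2) - 10.0   # negative inside the front
for _ in range(20):                                   # the front expands outwards over 20 steps
    phi = level_set_step(phi)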
Abstract:
Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been completed and many resulting protein sequences still lack detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE, with a bias towards the subcellular localizations underrepresented in SwissProt, was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. All methods showed lower performance on the LOCATE dataset, and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
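For reference, the sensitivity and specificity reported per subcellular location follow from one-vs-rest confusion counts; the snippet below is a generic illustration with made-up labels, not code or data from the evaluation described above:

def sensitivity_specificity(y_true, y_pred, location):
    """One-vs-rest sensitivity and specificity for a single subcellular location."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == location and p == location)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == location and p != location)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != location and p == location)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != location and p != location)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")  # fraction of true members recovered
    spec = tn / (tn + fp) if (tn + fp) else float("nan")  # fraction of non-members rejected
    return sens, spec

# Hypothetical toy example covering three of the nine compartments
y_true = ["nucleus", "cytosol", "mitochondrion", "nucleus", "cytosol"]
y_pred = ["nucleus", "nucleus", "mitochondrion", "nucleus", "cytosol"]
print(sensitivity_specificity(y_true, y_pred, "nucleus"))  # (1.0, 0.666...)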
Abstract:
The n-tuple recognition method is briefly reviewed, summarizing the main theoretical results. Large-scale experiments carried out on StatLog project datasets confirm this method as a viable competitor to more popular methods, owing to its speed, simplicity, and accuracy on the majority of a wide variety of classification problems. A further investigation into the failure of the method on certain datasets finds the problem to be largely due to a mismatch between the scales which describe generalization and data sparseness.
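The abstract does not spell the algorithm out; in its classic (WISARD-style) form, each class is scored by counting how many randomly chosen n-tuples of binary input positions reproduce a sub-pattern seen during training. The sketch below is a minimal, generic illustration of that idea, not the authors' implementation:

import random

class NTupleClassifier:
    """Minimal n-tuple (WISARD-style) classifier for binary feature vectors."""

    def __init__(self, n_bits, n_tuples=50, tuple_size=4, seed=0):
        rng = random.Random(seed)
        # Random, fixed groupings of input positions shared by all classes
        self.tuples = [tuple(rng.sample(range(n_bits), tuple_size)) for _ in range(n_tuples)]
        self.memory = {}  # class label -> one set of observed sub-patterns per tuple

    def fit(self, X, y):
        for x, label in zip(X, y):
            tables = self.memory.setdefault(label, [set() for _ in self.tuples])
            for table, idx in zip(tables, self.tuples):
                table.add(tuple(x[i] for i in idx))   # remember the sub-pattern seen at these positions
        return self

    def predict(self, x):
        def score(tables):
            # Number of tuples whose sub-pattern in x was seen during training for this class
            return sum(tuple(x[i] for i in idx) in table
                       for table, idx in zip(tables, self.tuples))
        return max(self.memory, key=lambda label: score(self.memory[label]))

# Hypothetical toy usage on 8-bit patterns
clf = NTupleClassifier(n_bits=8).fit([[1,1,1,1,0,0,0,0], [0,0,0,0,1,1,1,1]], ["A", "B"])
print(clf.predict([1,1,1,0,0,0,0,0]))  # most likely "A"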
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
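As a concrete illustration of the simplest case discussed here (a textbook example, not an excerpt from the book): the naive mean field approximation replaces the intractable joint distribution by a factorized one and solves self-consistency equations for the single-variable statistics. For an Ising-type model with couplings J_{ij} and fields h_i this gives

q(\mathbf{s}) \;\approx\; \prod_i q_i(s_i),
\qquad
m_i \;\equiv\; \langle s_i \rangle \;=\; \tanh\!\Big(\beta h_i + \beta \sum_{j\neq i} J_{ij}\, m_j\Big),

while the TAP approach adds the Onsager reaction term -\beta^{2} m_i \sum_{j} J_{ij}^{2}\,(1 - m_j^{2}) inside the hyperbolic tangent, which is the effective reaction term mentioned above.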
Abstract:
Purpose - The purpose of this paper is to examine consumer emotions and the social science and observation measures that can be utilised to capture the emotional experiences of consumers. The paper does not set out to resolve the theoretical debate surrounding emotion research, but rather to provide an assessment of the methodological options available to researchers to aid their investigation into both the structure and content of the consumer emotional experience, acknowledging both the conscious and subconscious elements of that experience. Design/methodology/approach - A wide range of prior research from the fields of marketing, consumer behaviour, psychology and neuroscience is reviewed to identify the different observation methods available to marketing researchers in the study of consumer emotion. The review also considers the self-report measures available to researchers and identifies the main theoretical debates concerning emotion, in order to provide a comprehensive overview of the issues surrounding the capture of emotional responses in a marketing context and to highlight the benefits that observation methods offer this area of research. Findings - The paper evaluates three observation methods and four widely used self-report measures of emotion used in a marketing context. Whilst it is recognised that marketers have shown a preference for self-report measures in prior research, mainly owing to their ease of implementation, it is posited that the benefits of observation methodology, and the wealth of data that can be obtained using such methods, can complement prior research. In addition, observation methods can not only enhance our understanding of the consumer emotion experience but also enable collaboration with researchers from other fields in order to make progress in understanding emotion. Originality/value - This paper brings perspectives and methods together to provide an up-to-date consideration of emotion research for marketers. In order to generate valuable research in this area, there is an identified need for discussion and implementation of the observation techniques available to marketing researchers working in this field. An evaluation of a variety of methods is undertaken as a starting point for discussion and consideration of different observation techniques and how they can be utilised.