942 results for logical structure method
Abstract:
"A compendious history of anatomy" and "The Ruyschian art and method of making preparations to exhibit the structure of the human body" (32 p. at front of v. 1) are by Robert Hooper, and are reprinted, with slight changes in text, from his The anatomist's vade-mecum, 4th ed., London, 1802.
Abstract:
Objective: Expectancies about the outcomes of alcohol consumption are widely accepted as important determinants of drinking. This construct is increasingly recognized as a significant element of psychological interventions for alcohol-related problems. Much effort has been invested in producing reliable and valid instruments to measure this construct for research and clinical purposes, but very few have had their factor structure subjected to adequate validation. Among them, the Drinking Expectancies Questionnaire (DEQ) was developed to address some theoretical and design issues with earlier expectancy scales. Exploratory factor analyses, in addition to validity and reliability analyses, were performed when the original questionnaire was developed. The objective of this study was to undertake a confirmatory analysis of the factor structure of the DEQ. Method: Confirmatory factor analysis with LISREL 8 was performed using a randomly split sample of 679 drinkers. Results: The results suggested that a new 5-factor model, which differs slightly from the original 6-factor version, was a more robust measure of expectancies. A new method of scoring the DEQ consistent with this factor structure is presented. Conclusions: The present study shows more robust psychometric properties of the DEQ using the new factor structure.
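As an illustration of the confirmatory step described above, the sketch below fits a small CFA model in Python. The semopy package is assumed as a stand-in for LISREL 8, and the factor and item names are hypothetical placeholders rather than the actual DEQ factors or items.

```python
# Illustrative confirmatory factor analysis (CFA) sketch. semopy is assumed as
# a stand-in for LISREL 8; factor and item names are hypothetical placeholders,
# not the actual DEQ factors or items.
import pandas as pd
import semopy

# Hypothetical measurement model in lavaan-style syntax: two latent expectancy
# factors, each indicated by three questionnaire items.
model_desc = """
SocialExpectancy =~ item1 + item2 + item3
TensionReduction =~ item4 + item5 + item6
"""

data = pd.read_csv("responses.csv")   # placeholder respondent-by-item data file

model = semopy.Model(model_desc)
model.fit(data)                       # estimate loadings and factor covariances
print(model.inspect())                # parameter estimates
print(semopy.calc_stats(model))       # fit indices such as CFI and RMSEA
```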
Abstract:
Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters because there will be so many in even moderately sized structures. This paper describes how the parameters could be obtained from data instead, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners, but the tree has several thousand nodes, all requiring an estimation of their relative influence on the assessment process. The method described in the paper shows how these parameters can be obtained from about 200 cases instead. It greatly reduces the experts' elicitation tasks and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of node siblings are part of the parameter space.
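The paper's own estimation procedure is not reproduced here, but the hedged sketch below illustrates one simple way relative sibling weights could be derived from a small set of cases: normalise the absolute correlation between each child node's values and the overall judgement. Column names and the data file are hypothetical.

```python
# Illustrative sketch only (not the method of [1]): derive relative weights for
# sibling nodes from assessed cases by normalising the absolute correlation
# between each node's values and the overall risk judgement.
import numpy as np
import pandas as pd

def sibling_weights(cases: pd.DataFrame, sibling_cols: list[str],
                    outcome_col: str) -> dict[str, float]:
    """Return weights for sibling nodes that sum to one."""
    corrs = np.nan_to_num(np.array(
        [abs(cases[c].corr(cases[outcome_col])) for c in sibling_cols]))
    if corrs.sum() == 0:                       # constant or uninformative columns
        return {c: 1.0 / len(sibling_cols) for c in sibling_cols}
    return dict(zip(sibling_cols, corrs / corrs.sum()))

# Hypothetical example: weights for three child nodes of a risk concept,
# estimated from roughly 200 assessed cases.
cases = pd.read_csv("grist_cases.csv")         # placeholder file name
print(sibling_weights(cases, ["hopelessness", "past_attempts", "impulsivity"],
                      outcome_col="clinician_risk"))
```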
Abstract:
Purpose: To analyse the relationship between measured intraocular pressure (IOP) and central corneal thickness (CCT), corneal hysteresis (CH) and corneal resistance factor (CRF) in ocular hypertension (OHT), primary open-angle glaucoma (POAG) and normal tension glaucoma (NTG) eyes using multiple tonometry devices. Methods: Right eyes of patients diagnosed with OHT (n=47), NTG (n=17) and POAG (n=50) were assessed. IOP was measured in random order with four devices: Goldmann applanation tonometry (GAT); Pascal® dynamic contour tonometer (DCT); Reichert® ocular response analyser (ORA); and Tono-Pen® XL. CCT was then measured using a hand-held ultrasonic pachymeter. CH and CRF were derived from the air pressure to corneal reflectance relationship of the ORA data. Results: Compared to the GAT, the Tono-Pen and the ORA Goldmann-equivalent (IOPg) and corneal-compensated (IOPcc) readings were higher (F=19.351, p<0.001), particularly in NTG (F=12.604, p<0.001). DCT was closest to Goldmann IOP and had the lowest variance. CCT was significantly different (F=8.305, p<0.001) between the three conditions, as were CH (F=6.854, p=0.002) and CRF (F=19.653, p<0.001). IOPcc measures were not affected by CCT. The DCT was generally not affected by corneal biomechanical factors. Conclusion: This study suggests that, as the true pressure of the eye cannot be determined non-invasively, measurements from any tonometer should be interpreted with care, particularly when alterations in the corneal tissue are suspected.
Abstract:
The relative distribution of rare-earth ions R3+ (Dy3+ or Ho3+) in the phosphate glass RAl0.30P3.05O9.62 was measured by employing the method of isomorphic substitution in neutron diffraction and, by taking the role of Al into explicit account, a self-consistent model of the glass structure was developed. The glass network is found to be made from corner sharing PO4 tetrahedra in which there are, on average, 2.32(9) terminal oxygen atoms, OT, at 1.50(1) Å and 1.68(9) bridging oxygen atoms, OB, at 1.60(1) Å. The network modifying R3+ ions bind to an average of 6.7(1) OT and are distributed such that 7.9(7) R–R nearest neighbours reside at 5.62(6) Å. The Al3+ ion also has a network modifying role in which it helps to strengthen the glass through the formation of OT–Al–OT linkages. The connectivity of the R-centred coordination polyhedra in (M2O3)x(P2O5)1−x glasses, where M3+ denotes a network modifying cation (R3+ or Al3+), is quantified in terms of a parameter fs. Methods for reducing the clustering of rare-earth ions in these materials are then discussed, based on a reduction of fs via the replacement of R3+ by Al3+ at fixed total modifier content or via a change of x to increase the number of OT available per network modifying M3+ cation.
Abstract:
Neutron diffraction was used to measure the total structure factors for several rare-earth ion R3+ (La3+ or Ce3+) phosphate glasses with composition close to RAl0.35P3.24O10.12. By assuming isomorphic structures, difference function methods were employed to separate, essentially, those correlations involving R3+ from the remainder. A self-consistent model of the glass structure was thereby developed in which the Al correlations were taken into explicit account. The glass network was found to be made from interlinked PO4 tetrahedra having 2.2(1) terminal oxygen atoms, OT, at 1.51(1) Å, and 1.8(1) bridging oxygen atoms, OB, at 1.60(1) Å. Rare-earth cations bonded to an average of 7.5(2) OT nearest neighbors in a broad and asymmetric distribution. The Al3+ ion acted as a network modifier and formed OT-Al-OT linkages that helped strengthen the glass. The connectivity of the R-centered coordination polyhedra was quantified in terms of a parameter fs and used to develop a model for the dependence on composition of the Al-OT coordination number in R-Al-P-O glasses. By using recent 27Al nuclear-magnetic-resonance data, it was shown that this connectivity decreases monotonically with increasing Al content. The chemical durability of the glasses appeared to be at a maximum when the connectivity of the R-centered coordination polyhedra was at a minimum. The relation of fs to the glass transition temperature, Tg, was discussed.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
The transfer coefficients for momentum and heat have been determined for 10 m neutral wind speeds ($U_{10n}$) between 0 and 12 m/s using data from the Surface of the Ocean, Fluxes and Interactions with the Atmosphere (SOFIA) and Structure des Echanges Mer-Atmosphere, Proprietes des Heterogeneites Oceaniques: Recherche Experimentale (SEMAPHORE) experiments. The inertial dissipation method was applied to wind and pseudo virtual temperature spectra from a sonic anemometer, mounted on a platform (ship) which was moving through the turbulence field. Under unstable conditions the assumptions concerning the turbulent kinetic energy (TKE) budget appeared incorrect. When a bulk estimate was used for the stability parameter $Z/L$ (where $Z$ is the height and $L$ is the Obukhov length), the resulting drag coefficients were anomalously low compared to neutral conditions. When $Z/L$ was determined iteratively, the rate of convergence was low. It was concluded that the divergence of the turbulent transport of TKE was not negligible under unstable conditions. By minimizing the dependence of the calculated neutral drag coefficient on stability, this term was estimated at about $-0.65\,Z/L$. The resulting turbulent fluxes were then in close agreement with other studies at moderate wind speed. The drag and exchange coefficients for low wind speeds were found to be $C_{en} \times 10^3 = 2.79\,U_{10n}^{-1} + 0.66$ ($U_{10n} < 5.2$ m/s), $C_{en} \times 10^3 = C_{hn} \times 10^3 = 1.2$ ($U_{10n} \geq 5.2$ m/s), and $C_{dn} \times 10^3 = 11.7\,U_{10n}^{-2} + 0.668$ ($U_{10n} < 5.5$ m/s), which imply a rapid increase of the coefficient values as the wind decreased within the smooth flow regime. The frozen turbulence hypothesis and the assumptions of isotropy and an inertial subrange were found to remain valid at these low wind speeds for these shipboard measurements. Incorporation of a free convection parameterization had little effect.
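The piecewise fits quoted above translate directly into code. The sketch below simply evaluates the reported neutral exchange and drag coefficients as functions of the 10 m neutral wind speed; the function names are ours, and the fits are applied only over the wind-speed ranges given in the abstract.

```python
# Evaluate the neutral exchange (C_en) and drag (C_dn) coefficients quoted above
# as functions of the 10 m neutral wind speed U10n (m/s). Function names are
# ours; the numerical fits come from the abstract and are valid only over the
# stated wind-speed ranges.

def c_en(u10n: float) -> float:
    """Neutral heat/moisture exchange coefficient C_en (not multiplied by 10^3)."""
    if u10n <= 0:
        raise ValueError("U10n must be positive")
    if u10n < 5.2:
        return (2.79 / u10n + 0.66) * 1e-3
    return 1.2e-3                       # C_en = C_hn for U10n >= 5.2 m/s

def c_dn(u10n: float) -> float:
    """Neutral drag coefficient C_dn; the quoted fit covers only U10n < 5.5 m/s."""
    if not 0 < u10n < 5.5:
        raise ValueError("the quoted fit applies only for 0 < U10n < 5.5 m/s")
    return (11.7 / u10n**2 + 0.668) * 1e-3

for u in (1.0, 2.0, 3.0, 5.0):
    print(f"U10n = {u:3.1f} m/s   C_en = {c_en(u):.2e}   C_dn = {c_dn(u):.2e}")
```

At 1 m/s the fits give C_en of roughly 3.5 x 10^-3 and C_dn of roughly 1.2 x 10^-2, dropping towards about 1.2 x 10^-3 and 1.1 x 10^-3 by 5 m/s, which illustrates the rapid increase of the coefficients as the wind decreases in the smooth flow regime.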
Abstract:
In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The thesis work involved the development and implementation of numerical algorithms, data structures, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. Convergence and accuracy of the methodology were validated against analytical solutions. The research focused on fluid-structure interaction solutions in complex, mesh-resistant domains, as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting-edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions, making it possible to exactly capture the effects of the fluid field on the solid structure.
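For orientation, the following is a minimal, generic D2Q9 BGK lattice Boltzmann sketch on a periodic domain. It illustrates only the fluid solver named above, with no solid boundaries or distance-field coupling, and is not the thesis's coupled algorithm.

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch on a periodic domain (no solid
# boundaries, no distance-field coupling): illustration only.
import numpy as np

nx, ny, tau = 64, 64, 0.8                                  # grid size, relaxation time
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])         # D2Q9 lattice velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)               # lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

# Initial state: uniform density with a small sinusoidal shear velocity.
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny) * np.ones((nx, 1))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    rho = f.sum(axis=0)                                    # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho       # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f -= (f - equilibrium(rho, ux, uy)) / tau              # BGK collision
    for i in range(9):                                     # streaming step (periodic)
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("mean density:", rho.mean(), " max |u|:", np.hypot(ux, uy).max())
```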
Abstract:
Since the precise linear actuators of a compliant parallel manipulator cannot tolerate transverse motion/load during multi-axis motion, actuation isolation should be considered in the compliant manipulator design to eliminate the transverse motion at the point of actuation. This paper presents an effective design method for constructing compliant parallel manipulators with actuation isolation by adding the same number of actuation legs as the number of degrees of freedom (DOF) of the original mechanism. The method is demonstrated by two design case studies, one of which is studied quantitatively by analytical modelling. The modelling results confirm possible inherent issues of the proposed structure design method, such as increased primary stiffness, extra parasitic motions, and cross-axis coupling motions.
Abstract:
The mixed double-decker Eu[Pc(15C5)4](TPP) (1) was obtained by base-catalysed tetramerisation of 4,5-dicyanobenzo-15-crown-5 using the half-sandwich complex Eu(TPP)(acac) (acac = acetylacetonate), generated in situ, as the template. For comparative studies, the mixed triple-decker complexes Eu2[Pc(15C5)4](TPP)2 (2) and Eu2[Pc(15C5)4]2(TPP) (3) were also synthesised by the raise-by-one-story method. These mixed ring sandwich complexes were characterised by various spectroscopic methods. Up to four one-electron oxidations and two one-electron reductions were revealed by cyclic voltammetry (CV) and differential pulse voltammetry (DPV). As shown by electronic absorption and infrared spectroscopy, supramolecular dimers (SM1 and SM3) were formed from the corresponding double-decker 1 and triple-decker 3 in the presence of potassium ions in MeOH/CHCl3.
Abstract:
For a sustainable building industry, not only environmental and economic indicators but also societal indicators should be evaluated for a building. Current indicators can conflict with one another, which makes it difficult for decision making to clearly quantify and assess sustainability. For a sustainable building, the objectives of decreasing adverse environmental impact and decreasing cost are in conflict. In addition, even when both objectives are satisfied, building management systems may present other problems, such as occupant convenience, building flexibility, or technical maintenance, which are difficult to quantify as exact assessment data. These conflicting problems confronting building managers or planners make building management more difficult. This paper presents a methodology for evaluating a sustainable building that considers the socio-economic and environmental characteristics of buildings, and it is intended to assist decision making by building planners or practitioners. The suggested methodology employs three main concepts: linguistic variables, fuzzy numbers, and the analytic hierarchy process. The linguistic variables are used to represent the degree of appropriateness of qualitative indicators, which are vague or uncertain. These linguistic variables are then translated into fuzzy numbers to reflect their uncertainties and aggregated into a final fuzzy decision value using a hierarchical structure. Through a case study, the suggested methodology is applied to the evaluation of a building. The result demonstrates that the suggested approach can be a useful tool for evaluating a building for sustainability.
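A toy sketch of the three concepts named above is given below: linguistic ratings are mapped to triangular fuzzy numbers and aggregated up a small two-level hierarchy with AHP-style weights. All indicators, ratings, and weights are invented for illustration and are not the paper's case-study values.

```python
# Toy sketch: linguistic ratings -> triangular fuzzy numbers -> weighted
# aggregation up a two-level hierarchy. All indicators, ratings and weights are
# invented for illustration; they are not the paper's case-study values.

# Linguistic variable -> triangular fuzzy number (low, mode, high) on a 0-10 scale.
LINGUISTIC = {
    "very poor": (0.0, 0.0, 2.5), "poor": (0.0, 2.5, 5.0), "fair": (2.5, 5.0, 7.5),
    "good": (5.0, 7.5, 10.0), "very good": (7.5, 10.0, 10.0),
}

def weighted_sum(terms):
    """Aggregate (weight, triangular fuzzy number) pairs into one fuzzy number."""
    return tuple(sum(w * tfn[k] for w, tfn in terms) for k in range(3))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Hypothetical criterion weights (e.g. from AHP pairwise comparisons) and
# linguistic ratings of sub-indicators for one building.
environment = weighted_sum([(0.6, LINGUISTIC["good"]), (0.4, LINGUISTIC["fair"])])
economy     = weighted_sum([(1.0, LINGUISTIC["fair"])])
society     = weighted_sum([(0.5, LINGUISTIC["very good"]), (0.5, LINGUISTIC["good"])])

overall = weighted_sum([(0.4, environment), (0.3, economy), (0.3, society)])
print("fuzzy score:", overall, "-> crisp:", round(defuzzify(overall), 2))
```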
Abstract:
XML document clustering is essential for many document handling applications such as information storage, retrieval, integration and transformation. An XML clustering algorithm should process both the structural and the content information of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. This paper introduces a novel approach that first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. The proposed method reduces the high dimensionality of input data by using only the structure-constrained content. The empirical analysis reveals that the proposed method can effectively cluster even very large XML datasets and outperform other existing methods.
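A greatly simplified sketch of this two-stage idea follows: frequent root-to-leaf tag paths are used here as a cheap proxy for frequent subtrees, the text is restricted to those frequent paths, and the structure-constrained content is then clustered. scikit-learn is assumed, and the file names are placeholders.

```python
# Simplified two-stage sketch: (1) mine frequent structural features (root-to-
# leaf tag paths used as a cheap proxy for frequent subtrees), (2) cluster the
# content constrained to those structures. File names are placeholders.
from collections import Counter
import xml.etree.ElementTree as ET
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def leaf_paths(elem, prefix=""):
    """Yield (tag-path, text) pairs for every leaf element of an XML tree."""
    path = f"{prefix}/{elem.tag}"
    children = list(elem)
    if not children:
        yield path, (elem.text or "")
    for child in children:
        yield from leaf_paths(child, path)

names = ("doc1.xml", "doc2.xml", "doc3.xml")
per_doc = [list(leaf_paths(ET.parse(name).getroot())) for name in names]

# Stage 1: structural features occurring in at least half of the documents.
support = Counter(path for pairs in per_doc for path in {p for p, _ in pairs})
frequent = {path for path, count in support.items() if count >= len(names) / 2}

# Stage 2: cluster the structure-constrained content with TF-IDF and k-means.
texts = [" ".join(text for path, text in pairs if path in frequent) for pairs in per_doc]
X = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(names, labels)))
```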