59 results for virtual topology, decomposition, hex meshing algorithms
Abstract:
In this paper, a genetic algorithm (GA) is applied to the optimum design of reinforced concrete liquid-retaining structures, which involves three discrete design variables: slab thickness, reinforcement diameter and reinforcement spacing. The GA, a search technique based on the mechanics of natural genetics, couples a Darwinian survival-of-the-fittest principle with a random yet structured information exchange amongst a population of artificial chromosomes. As a first step, a penalty-based strategy is employed to transform the constrained design problem into an unconstrained one suitable for GA application. A numerical example is then used to demonstrate the strength and capability of the GA in this problem domain. It is shown that near-optimal solutions are obtained at an extremely fast rate of convergence after exploring only a minute portion of the search space. The method can be extended to more complex optimization problems in other domains.
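To make the penalty-based transformation concrete, here is a minimal Python sketch that evolves the three discrete design variables with a penalized fitness. The cost and capacity functions and all numeric values are hypothetical placeholders, not the paper's design equations:

    import random

    # Hypothetical discrete design spaces (illustrative units, not the paper's).
    THICKNESS = [200, 250, 300, 350]   # slab thickness, mm
    DIAMETER = [12, 16, 20, 25]        # reinforcement diameter, mm
    SPACING = [100, 150, 200, 250]     # reinforcement spacing, mm

    def cost(t, d, s):
        # Placeholder cost: thicker slabs and denser steel cost more.
        return t * 1.0 + (d ** 2 / s) * 50.0

    def violation(t, d, s):
        # Placeholder strength constraint: capacity must exceed a demand of 400.
        capacity = t * d ** 2 / s
        return max(0.0, 400.0 - capacity)

    def fitness(ind):
        # Penalty term converts the constrained problem into an unconstrained one.
        return cost(*ind) + 1000.0 * violation(*ind)

    def evolve(pop_size=30, generations=50):
        pop = [(random.choice(THICKNESS), random.choice(DIAMETER),
                random.choice(SPACING)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)                 # survival of the fittest
            parents = pop[:pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                child = tuple(random.choice(g) for g in zip(a, b))   # crossover
                if random.random() < 0.1:                            # mutation
                    child = (random.choice(THICKNESS), child[1], child[2])
                children.append(child)
            pop = parents + children
        return min(pop, key=fitness)

    print(evolve())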
Abstract:
In recent work, the concentration index has been widely used as a measure of income-related health inequality. The purpose of this note is to illustrate two different methods for decomposing the overall health concentration index using data collected from a Short Form (SF-36) survey of the general Australian population conducted in 1995. For simplicity, we focus on the physical functioning scale of the SF-36. First, we examine decomposition 'by component' by separating the concentration index for the physical functioning scale into the ten items on which it is based. The results show that the items contribute differently to the overall inequality measure; for example, two of the items contributed 13% and 5%, respectively, to the overall measure. Second, to illustrate the 'by subgroup' method, we decompose the concentration index by employment status. This involves separating the population into two groups: individuals currently in employment, and individuals not currently employed. We find that the inequality between these groups is about five times greater than the inequality within each group. These methods provide insights into the nature of inequality that can be used to inform policy design to reduce income-related health inequalities. Copyright (C) 2002 John Wiley & Sons, Ltd.
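As an illustration of the underlying measure, here is a short Python sketch of the standard "convenient covariance" form of the concentration index, C = 2*cov(h, r)/mean(h), where r is the fractional income rank. The data are toy values, and the note's exact estimator and decomposition details may differ:

    import numpy as np

    def concentration_index(health, income):
        # C = 2 * cov(h, r) / mean(h), r being the fractional income rank.
        h = np.asarray(health, dtype=float)
        order = np.argsort(income)
        r = np.empty(len(h))
        r[order] = (np.arange(len(h)) + 0.5) / len(h)   # rank 0..1 by income
        return 2.0 * np.cov(h, r, bias=True)[0, 1] / h.mean()

    # Toy data: health scores and incomes for eight people (illustrative only).
    health = [60, 70, 65, 80, 85, 90, 75, 95]
    income = [10, 12, 15, 20, 25, 30, 18, 40]
    print(concentration_index(health, income))   # > 0: pro-rich inequality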
Abstract:
Antigen recognition by cytotoxic CD8 T cells is dependent upon a number of critical steps in MHC class I antigen processing, including proteasomal cleavage, TAP transport into the endoplasmic reticulum, and MHC class I binding. Based on extensive experimental data relating to each of these steps, there is now the capacity to model individual antigen processing steps with a high degree of accuracy. This paper demonstrates the potential to bring together models of individual antigen processing steps, for example proteasome cleavage, TAP transport, and MHC binding, to build highly informative models of functional pathways. In particular, we demonstrate how an artificial neural network model of TAP transport was used to mine an HLA-binding database so as to identify HLA-binding peptides transported by TAP. This integrated model of antigen processing provided the unique insight that HLA class I alleles apparently constitute two separate classes: those that are TAP-efficient for peptide loading (HLA-B27, -A3, and -A24) and those that are TAP-inefficient (HLA-A2, -B7, and -B8). Hence, using this integrated model we were able to generate novel hypotheses regarding antigen processing, and these hypotheses are now capable of being tested experimentally. This model confirms the feasibility of constructing a virtual immune system, whereby each additional step in antigen processing is incorporated into a single modular model. Accurate models of antigen processing have implications for the study of basic immunology as well as for the design of peptide-based vaccines and other immunotherapies. (C) 2004 Elsevier Inc. All rights reserved.
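A minimal sketch of the kind of model described, assuming a generic one-hidden-layer network trained on one-hot-encoded 9-mer peptides. The data here are synthetic stand-ins and the architecture is illustrative, not the authors' actual TAP-transport network:

    import numpy as np

    rng = np.random.default_rng(0)
    AMINO = "ACDEFGHIKLMNPQRSTVWY"

    def encode(pep):
        # One-hot encoding of a 9-mer peptide: 20 features per position.
        x = np.zeros(9 * 20)
        for i, aa in enumerate(pep):
            x[i * 20 + AMINO.index(aa)] = 1.0
        return x

    # Synthetic stand-in data: random 9-mers with random "transported" labels.
    peps = ["".join(rng.choice(list(AMINO), 9)) for _ in range(200)]
    X = np.array([encode(p) for p in peps])
    y = rng.integers(0, 2, len(peps)).astype(float)

    # One hidden layer, logistic output, plain gradient descent on cross-entropy.
    W1 = rng.normal(0, 0.1, (180, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.1, 16); b2 = 0.0
    for _ in range(500):
        h = np.tanh(X @ W1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        g = (p - y) / len(y)                    # output-layer error signal
        W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum()
        gh = np.outer(g, W2) * (1.0 - h ** 2)   # backpropagated hidden error
        W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(axis=0)

    # Score an arbitrary peptide: estimated probability of TAP transport.
    s = 1.0 / (1.0 + np.exp(-(np.tanh(encode(peps[0]) @ W1 + b1) @ W2 + b2)))
    print(s)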
Abstract:
Oral squamous cell carcinoma (OSCC) is associated with high morbidity and mortality, due at least in part to late detection. Precancerous and cancerous oral lesions may mimic any number of benign oral lesions, and as such may be left without investigation and treatment until they are well advanced. Over the past several years there has been renewed interest in oral cytology as an adjunct clinical tool in the investigation of oral mucosal lesions. The purpose of the present study was to compare the usefulness of ploidy analysis of Feulgen-stained cytological thin-prep specimens with traditional incisional biopsy and routine histopathological examination for assessing the pre-malignant potential of oral mucosal lesions. Analysis of the cytological specimens was undertaken with virtual microscopy, which allowed rapid and thorough analysis of the complete cytological specimen. One hundred healthy individuals between 30 and 70 years of age, who were non-smokers, non-drinkers and not taking any medication, had cytological specimens collected from both the buccal mucosa and the lateral margin of the tongue to establish normal cytology parameters within a control population. Patients with a presumptive clinical diagnosis of lichen planus, leukoplakia or OSCC had lesional cytological samples taken prior to their diagnostic biopsy. Standardised thin preparations were prepared and each specimen was stained by both the Feulgen and Papanicolaou methods. High-speed scanning of the complete slide at 40X magnification was undertaken using the Aperio ScanScope, and the green channel of the resultant image was analysed after threshold segmentation to isolate only the nuclei; the integrated optical density of each nucleus was taken as a gross measure of DNA content (ploidy). Preliminary results reveal that ploidy assessment of oral cytology holds great promise as an adjunctive prognostic factor in the analysis of the malignant potential of oral mucosal lesions.
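A rough Python sketch of the described image-analysis step, assuming scipy is available. The threshold, the optical-density conversion, and the toy image are illustrative, not the study's calibrated protocol:

    import numpy as np
    from scipy import ndimage

    def nuclear_iod(green, threshold=100):
        # Optical density from transmitted intensity: OD = -log10(I / I_white).
        od = -np.log10(np.clip(green.astype(float), 1, 255) / 255.0)
        mask = green < threshold                 # dark (stained) pixels = nuclei
        labels, n = ndimage.label(mask)          # connected components = nuclei
        # Integrated optical density per nucleus, a gross proxy for DNA content.
        return ndimage.sum(od, labels, index=np.arange(1, n + 1))

    # Toy 8-bit "green channel": two dark nuclei on a bright background.
    img = np.full((64, 64), 230, dtype=np.uint8)
    img[10:20, 10:20] = 60
    img[40:52, 35:50] = 40
    print(nuclear_iod(img))   # one IOD value per detected nucleus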
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ^(2) parametric waveguides. This example uses a non-diagonal coherent-state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
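For orientation, here is a simplified semi-implicit central-difference step for a single toy stochastic heat equation with multiplicative noise. The paper's robust algorithm for coupled equations is considerably more involved, so treat this only as a sketch of the implicit-diffusion, explicit-noise idea; all parameter values are arbitrary:

    import numpy as np

    # Toy 1-D stochastic heat equation with multiplicative noise,
    #   du/dt = D * d2u/dx2 + g * u * xi(x, t),
    # stepped with central spatial differences: diffusion implicit, noise explicit.
    N, D, g, dx, dt, steps = 128, 1.0, 0.2, 0.1, 1e-3, 1000
    rng = np.random.default_rng(1)

    # Central second-difference operator with periodic boundaries.
    L = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
         + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / dx ** 2
    A = np.eye(N) - dt * D * L           # implicit part: (I - dt*D*L) u_new = rhs

    x = dx * np.arange(N)
    u = np.exp(-((x - x.mean()) ** 2))   # Gaussian initial condition
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt / dx), N)    # delta-correlated noise
        u = np.linalg.solve(A, u + g * u * dW)

    print(u.mean(), u.var())   # sample-path statistics after integration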
Abstract:
Recent advances in computer technology have made it possible to create virtual plants by simulating the details of the structural development of individual plants. Software has been developed that processes plant models expressed in a special-purpose mini-language based on the Lindenmayer-system formalism. These models can be extended from their architectural basis to capture plant physiology by integrating them with crop models, which estimate biomass production as a consequence of environmental inputs. Through this process, virtual plants will gain the ability to react to broad environmental conditions, while crop models will gain a visualisation component. This integration requires the resolution of the fundamentally different time scales underlying the two approaches. Architectural models are usually based on physiological time: each time step encompasses the same amount of development in the plant, without regard to the passage of real time. In contrast, physiological models are based in real time: the amount of development in a time step depends on the environmental conditions during that period. This paper provides a background on the plant modelling language, then describes how the widely used concept of thermal time can be implemented to resolve these time-scale differences. The process is illustrated using a case study. (C) 1997 Elsevier Science Ltd.
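A minimal sketch of how thermal time (degree-day accumulation) can map real time onto physiological development steps. The base temperature, step size, and trigger rule are hypothetical, not the paper's calibration:

    # Thermal-time (growing degree-day) accumulation used to map real time
    # onto physiological time. Both constants are illustrative.
    T_BASE = 8.0       # degC, hypothetical base temperature
    STEP_DD = 20.0     # degree-days per architectural development step

    def degree_days(t_max, t_min, t_base=T_BASE):
        return max(0.0, (t_max + t_min) / 2.0 - t_base)

    def development_steps(daily_temps):
        # Advance the architectural model one step per STEP_DD degree-days.
        accumulated, steps = 0.0, 0
        for t_max, t_min in daily_temps:
            accumulated += degree_days(t_max, t_min)
            while accumulated >= STEP_DD:
                accumulated -= STEP_DD
                steps += 1   # trigger one L-system derivation step here
        return steps

    # A warm week advances development faster than a cool one.
    warm = [(30, 18)] * 7
    cool = [(18, 8)] * 7
    print(development_steps(warm), development_steps(cool))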
Abstract:
The concept of parameter-space size adjustment is proposed in order to enable the successful application of genetic algorithms to continuous optimization problems. The performance of genetic algorithms with six different combinations of selection and reproduction mechanisms, with and without parameter-space size adjustment, was rigorously tested on eleven multiminima test functions. The algorithm with the best performance was employed for the determination of the model parameters of the optical constants of Pt, Ni and Cr.
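One plausible reading of parameter-space size adjustment, sketched in Python: the search interval for each variable is periodically re-centred and shrunk around the best individual found so far. The shrink factor, the genetic operators, and the test function (Rastrigin, a standard multiminima benchmark) are illustrative and may differ from the paper's exact mechanism:

    import math, random

    def rastrigin(x):
        # A standard multiminima test function; global minimum 0 at the origin.
        return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

    def ga_shrinking(dim=2, pop=40, gens=80, shrink=0.95):
        lo, hi = [-5.12] * dim, [5.12] * dim
        P = [[random.uniform(lo[i], hi[i]) for i in range(dim)] for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=rastrigin)
            best = P[0]
            # Parameter-space size adjustment (one plausible form): shrink the
            # allowed interval for each variable around the current best point.
            for i in range(dim):
                half = (hi[i] - lo[i]) * shrink / 2
                lo[i], hi[i] = best[i] - half, best[i] + half
            children = []
            while len(children) < pop // 2:
                a, b = random.sample(P[:pop // 2], 2)
                c = [(ai + bi) / 2 for ai, bi in zip(a, b)]            # crossover
                c = [min(hi[i], max(lo[i], v + random.gauss(0, 0.1)))  # mutation,
                     for i, v in enumerate(c)]                         # clipped
                children.append(c)
            P = P[:pop // 2] + children
        return min(P, key=rastrigin)

    b = ga_shrinking()
    print(b, rastrigin(b))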
Abstract:
We suggest a new notion of behaviour-preserving refinement based on partial-order semantics, which we call transition refinement. We introduced transition refinement for elementary (low-level) Petri nets in earlier work. For modelling and verifying complex distributed algorithms, high-level (algebraic) Petri nets are usually used. In this paper, we define transition refinement for algebraic Petri nets. This notion is more powerful than transition refinement for elementary Petri nets because it corresponds to the simultaneous refinement of several transitions in an elementary Petri net. Transition refinement is particularly suitable for refinement steps that increase the degree of distribution of an algorithm, e.g. when synchronous communication is replaced by asynchronous message passing. We study how to prove that a replacement of a transition is a transition refinement.
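A toy illustration (hypothetical encoding, not the paper's formalism) of the refinement step mentioned above: a single synchronous communication transition is replaced by send and receive transitions with an intermediate buffer place:

    # Elementary net as {transition: (preset, postset)}; place names are made up.
    net = {
        "comm": ({"p_ready_A", "p_ready_B"}, {"p_done_A", "p_done_B"}),
    }

    def refine_comm(net):
        # Replace the synchronous "comm" by asynchronous message passing.
        net.pop("comm")
        net["send"] = ({"p_ready_A"}, {"buffer", "p_done_A"})
        net["receive"] = ({"buffer", "p_ready_B"}, {"p_done_B"})
        return net

    def enabled(net, marking):
        # A transition is enabled when its whole preset is marked.
        return [t for t, (pre, _) in net.items() if pre <= marking]

    marking = {"p_ready_A", "p_ready_B"}
    print(enabled(refine_comm(net), marking))   # only "send" fires first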