965 results for artificial linear structures
Abstract:
The design of liquid-retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgement, codes of practice and previous experience. Structural design problems are often ill-structured, and there is a need to develop programming environments that can incorporate engineering judgement along with algorithmic tools. Recent developments in artificial intelligence have made it possible to develop an expert system that can provide expert advice to the user in the selection of design criteria and design parameters. This paper introduces the development of an expert system for the design of liquid-retaining structures using a blackboard architecture. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. It is a coupled system combining symbolic processing with traditional numerical processing. The expert system is based on the British Standards Code of Practice BS8007. Explanations are provided to assist inexperienced designers and civil engineering students in learning how to design liquid-retaining structures effectively and sustainably in their design practice. The use of this expert system in disseminating heuristic knowledge and experience to practitioners and engineering students is discussed.
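To illustrate the kind of rule-based reasoning an expert-system shell like the one described provides, here is a minimal forward-chaining sketch in Python. The rules and facts below are illustrative assumptions for a liquid-retaining wall, not taken from BS8007 or the paper's actual knowledge base.

```python
# A minimal forward-chaining rule engine: repeatedly fire rules whose
# conditions are all satisfied until no new facts can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical design heuristics (illustrative only).
rules = [
    ({"retains_liquid"}, "limit_crack_width"),
    ({"limit_crack_width", "severe_exposure"}, "increase_cover"),
    ({"increase_cover"}, "check_reinforcement_spacing"),
]

derived = forward_chain({"retains_liquid", "severe_exposure"}, rules)
print(sorted(derived))
```

A real shell adds conflict resolution, explanation facilities, and coupling to numerical design routines; this sketch shows only the inference core.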
Abstract:
We derive necessary and sufficient conditions for the existence of bounded or summable solutions to systems of linear equations associated with Markov chains. This substantially extends a famous result of G. E. H. Reuter, which provides a convenient means of checking various uniqueness criteria for birth-death processes. Our result allows chains with much more general transition structures to be accommodated. One application is to give a new proof of an important result of M. F. Chen concerning upwardly skip-free processes. We then use our generalization of Reuter's lemma to prove new results for downwardly skip-free chains, such as the Markov branching process and several of its many generalizations. This permits us to establish uniqueness criteria for several models, including the general birth, death, and catastrophe process, extended branching processes, and asymptotic birth-death processes, the latter being neither upwardly skip-free nor downwardly skip-free.
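The simplest criterion of the family discussed above is the classical explosion test for a pure birth process with rates λ_n: the process is non-explosive (and the associated system of equations has a unique solution) if and only if Σ 1/λ_n diverges. The sketch below illustrates this numerically with two illustrative rate choices, truncating the series; it is not an implementation of the paper's generalized lemma.

```python
# Truncated series sum(1/lambda_n) for two pure birth processes.
# Convergence of the series signals explosion (non-uniqueness issues);
# divergence signals an honest, unique process.

def partial_sum(rates, n_terms):
    return sum(1.0 / rates(n) for n in range(n_terms))

# Quadratic rates: sum 1/(n+1)^2 converges (to pi^2/6), so the process explodes.
explosive = partial_sum(lambda n: (n + 1) ** 2, 100000)

# Linear rates: sum 1/(n+1) is the harmonic series and diverges; no explosion.
honest = partial_sum(lambda n: n + 1, 100000)

print(explosive, honest)  # the first stays bounded, the second keeps growing
```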
Abstract:
Bilateral corneal blindness accounts for a quarter of all blindness worldwide. The artificial cornea, in assorted forms, was developed to replace opaque, non-functional corneas and to restore sight in otherwise hopeless cases not amenable to corneal grafts, believed to be 2% of the corneal blind. Despite technological advances in materials design and tissue engineering, no artificial cornea has provided absolute, long-term success. Formidable problems exist, due to a combination of unpredictable wound healing and unmanageable pathology. To guarantee reliable success, an artificial cornea must possess three attributes: an optical window to replace the opaque cornea; a strong, long-term union with the surrounding ocular tissue; and the ability to induce desired host responses. One unique artificial cornea possesses all three functional attributes: the Osteo-odonto-keratoprosthesis (OOKP). The OOKP has a high success rate and can survive for up to twenty years, but it is complicated both in structure and in surgical procedure; it is expensive and not universally available. The aim of this project was to develop a synthetic substitute for the OOKP based upon key features of tooth and bone structure, thereby reducing surgical complexity and biological complications. Analysis of the biological effectiveness of the OOKP showed that the structure of bone was the most crucial component for implant retention. An experimental semi-rigid hydroxyapatite framework was fabricated with a complex bone-like architecture, which could be fused to the optical window. The first method for making such a framework was pressing and sintering of hydroxyapatite powders; however, it was not possible to fabricate a void architecture with the correct pore sizes and uniformity. Ceramers were then synthesised using alternative pore-forming methods, providing improved mechanical properties and stronger attachment to the plastic optical window.
Naturally occurring skeletal structures closely match the structural features of all forms of natural bone. Synthetic casts of desirable natural artifacts, such as coral and sponges, were fabricated using the replamineform process. The final method of construction bypassed ceramic fabrication in favour of pre-formed coral derivatives and focused on methods for polymer infiltration, adhesion and fabrication. Prototypes were constructed and evaluated, and a fully penetrative synthetic OOKP analogue was fabricated according to the dimensions of the OOKP. Fabrication of a cornea-shaped synthetic OOKP analogue was also attempted.
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone: they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to the more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
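The thesis's temporal finite elements are not reproduced here, but a conventional time-stepping baseline of the kind such methods are compared against can be sketched briefly: the Newmark average-acceleration method for an undamped single-degree-of-freedom oscillator m·x'' + k·x = 0. All parameter values are illustrative.

```python
import math

# Newmark average-acceleration time stepping (beta = 1/4, gamma = 1/2),
# which is unconditionally stable and amplitude-preserving for this problem.

def newmark_sdof(m, k, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    x, v = x0, v0
    a = -k * x / m                                  # initial acceleration
    for _ in range(steps):
        # predictors from the known state
        x_pred = x + dt * v + dt**2 * (0.5 - beta) * a
        v_pred = v + dt * (1.0 - gamma) * a
        # enforce m*a_new + k*(x_pred + beta*dt^2*a_new) = 0
        a_new = -k * x_pred / (m + k * beta * dt**2)
        x = x_pred + beta * dt**2 * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return x, v

m, k = 1.0, 4.0 * math.pi**2                        # natural period T = 1 s
x_end, _ = newmark_sdof(m, k, x0=1.0, v0=0.0, dt=1.0 / 200, steps=200)
print(x_end)  # close to 1.0 after one full period
```

With 200 steps per period the amplitude is preserved and the period error is of order (ω·dt)²/12, so the displacement returns almost exactly to its starting value.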
Abstract:
Knowledge of the molecular structure of solid dispersions is vital, yet, despite thousands of reports in this area, it remains unclear. The aim of this research is to investigate the molecular structure of solid dispersions prepared by the hot-melt method, using simulated annealing. Simulation results showed that linear polymer chains form random coils under heat and the drug molecules adhere to the surface of the polymer coils, while within amorphous low-molecular-weight carriers the drug molecules are dispersed molecularly but irregularly. This research presents more plausible molecular images of solid dispersions than existing theory.
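The general optimisation scheme behind the study can be sketched compactly. This is a minimal simulated-annealing loop in Python; the one-dimensional energy function and all parameters are illustrative stand-ins, not the molecular force field of the paper.

```python
import math
import random

# Minimal simulated annealing: random local moves, downhill moves always
# accepted, uphill moves accepted with Boltzmann probability, geometric cooling.

def anneal(energy, x0, t_start=5.0, t_end=1e-3, cooling=0.995, seed=0):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t_start
    while t > t_end:
        cand = x + rng.uniform(-1.0, 1.0)           # random local move
        e_cand = energy(cand)
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                                # cool the temperature
    return best_x, best_e

best_x, best_e = anneal(lambda x: (x - 3.0) ** 2, x0=-10.0)
print(best_x, best_e)  # near the minimum at x = 3
```

In the molecular setting, the state is a full molecular configuration and the energy a force-field evaluation, but the accept/reject and cooling logic is the same.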
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which halts the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions begin to increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
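The Tikhonov step used inside each iteration can be illustrated in isolation: minimise ||A x − b||² + λ||x||², solved via the normal equations (AᵀA + λI) x = Aᵀb. A 2×2 example with an explicit Cramer solve keeps the sketch dependency-free; the matrix and data are illustrative, not an MFS system.

```python
# Tikhonov-regularized least squares for a 2x2 system, solved directly.

def tikhonov_2x2(A, b, lam):
    # normal-equations matrix M = A^T A + lam*I and right-hand side A^T b
    m00 = A[0][0] ** 2 + A[1][0] ** 2 + lam
    m01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    m11 = A[0][1] ** 2 + A[1][1] ** 2 + lam
    r0 = A[0][0] * b[0] + A[1][0] * b[1]
    r1 = A[0][1] * b[0] + A[1][1] * b[1]
    det = m00 * m11 - m01 * m01
    # Cramer's rule on the symmetric 2x2 system M x = r
    return [(m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det]

A = [[1.0, 0.0], [0.0, 1.0]]
x = tikhonov_2x2(A, [1.0, 2.0], lam=1.0)
print(x)  # [0.5, 1.0]: the identity system, damped by the regularizer
```

For A = I the solution is b/(1 + λ), showing how the regularizer shrinks the unregularized answer; in the paper λ is instead chosen automatically by GCV.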
Abstract:
Whether to assess the functionality of equipment or as a determinant of the accuracy of assays, reference standards are essential for the purposes of standardisation and validation. The ELISPOT assay, developed over thirty years ago, has emerged as a leading immunological assay for the assessment of efficacy in the development of novel vaccines. However, with its widespread use, there is a growing demand for a greater level of standardisation across different laboratories. One of the major difficulties in achieving this goal has been the lack of definitive reference standards. This is partly due to the ex vivo nature of the assay, which relies on cells being placed directly into the wells. Thus, the aim of this thesis was to produce an artificial reference standard, based on liposomes, for use within the assay. Liposomes are spherical bilayer vesicles with an enclosed aqueous compartment and are therefore models for biological membranes. Initial work examined pre-design considerations in order to produce an optimal formulation that would closely mimic the action of the cells ordinarily placed in the assay. Recognition of the structural differences between liposomes and cells led to the formulation of liposomes with increased density, achieved by using a synthesised cholesterol analogue. By incorporating this cholesterol analogue into liposomes, increased sedimentation rates were observed within the first few hours. The optimal liposome formulation from these studies was composed of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC), cholesterol (Chol) and brominated cholesterol (Brchol) at a 16:4:12 µmol ratio, based on significantly higher (p < 0.01) sedimentation (a percentage transmission of 59 ± 5.9 % compared with 29 ± 12 % for the control formulation after four hours). By considering a range of liposome formulations, 'proof of principle' for using liposomes as ELISPOT reference standards was shown: recombinant IFN-γ cytokine was successfully entrapped within vesicles of different lipid compositions, which were able to promote spot formation within the ELISPOT assay. Using optimised liposome formulations composed of phosphatidylcholine with or without cholesterol (16 µmol total lipid), further development was undertaken to produce an optimised, scalable protocol for the production of liposomes as reference standards. A linear increase in spot number through manipulation of the cytokine and/or lipid concentrations was not possible, potentially due to saturation occurring at the base of the wells. Investigations into storage of the formulations demonstrated the feasibility of freezing and lyophilisation with disaccharide cryoprotectants, but also highlighted the need for further protocol optimisation to achieve a robust reference standard upon storage. Finally, the transfer of small-scale production to a medium lab-scale batch (40 mL) demonstrated that this was feasible within the laboratory using the optimised protocol.
Abstract:
We study the existence, stability, and dynamics of linear and nonlinear stationary modes propagating in radially symmetric multicore waveguides with balanced gain and loss. We demonstrate that, in general, the system can be reduced to an effective PT-symmetric dimer with asymmetric coupling. In the linear case, we find that two modes have real propagation constants before the onset of PT-symmetry breaking, while the other modes always have propagation constants with nonzero imaginary parts. This leads to stable (unstable) propagation of the modes when gain is localized in the core (ring) of the waveguiding structure. In the case of nonlinear response, we show that the interplay between nonlinearity, gain, and loss induces a high degree of instability, with only small windows in the parameter space where quasistable propagation is observed. We propose a novel stabilization mechanism based on a periodic modulation of both gain and loss along the propagation direction that allows bounded light propagation in multicore waveguiding structures.
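The PT-breaking threshold of such an effective dimer can be written down in closed form: for gain/loss rate γ and (possibly asymmetric) couplings κ₁, κ₂, the propagation constants are β = ±√(κ₁κ₂ − γ²), real below the threshold κ₁κ₂ = γ² and complex above it. The numbers below are illustrative, not from the paper.

```python
import cmath

# Propagation constants of a two-mode PT-symmetric dimer with asymmetric
# coupling: eigenvalues of [[i*gamma, k1], [k2, -i*gamma]] are +/- sqrt(k1*k2 - gamma^2).

def dimer_betas(k1, k2, gamma):
    root = cmath.sqrt(k1 * k2 - gamma**2)
    return root, -root

below, _ = dimer_betas(1.0, 1.0, 0.5)   # below threshold: real propagation constants
above, _ = dimer_betas(1.0, 1.0, 2.0)   # above threshold: imaginary parts appear
print(below, above)
```

A nonzero imaginary part of β means exponential growth of one mode, which is the instability mechanism the abstract describes.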
Abstract:
Binary distributed representations of vector data (numerical, textual, visual) are investigated in classification tasks. A comparative analysis of results for various methods and tasks using artificial and real-world data is given.
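One common way to form binary distributed representations of numerical vectors is random-projection thresholding (random hyperplanes): each bit records which side of a random hyperplane the vector lies on, so similar inputs agree in many bits and Hamming distance serves as a similarity measure for classifiers. The sketch below is a generic illustration of this idea; dimensions and data are illustrative, and it is not the specific encoding scheme of the paper.

```python
import random

def binarize(vec, planes):
    # one bit per random hyperplane: the sign of the projection
    return [1 if sum(w * x for w, x in zip(plane, vec)) > 0 else 0
            for plane in planes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(42)
dim, n_bits = 8, 256
planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

base = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 1.5, 0.0]
near = [x + 0.05 for x in base]                    # small perturbation
far = [-x for x in base]                           # opposite vector

d_near = hamming(binarize(base, planes), binarize(near, planes))
d_far = hamming(binarize(base, planes), binarize(far, planes))
print(d_near, d_far)  # nearby vectors share far more bits than opposite ones
```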
Abstract:
Two jamming cancellation algorithms are developed based on a stable solution of the least squares problem (LSP) provided by regularization. They are based on a filtered singular value decomposition (SVD) and on modifications of the Greville formula. Both algorithms allow an efficient hardware implementation. Testing results on artificial data modelling difficult real-world situations are also provided.
Abstract:
The paper presents a new network-flow interpretation of Łukasiewicz's logic, based on models with increased effectiveness. The results obtained show that the presented network-flow models may in principle work for many-valued logics with more than three states of the variables, i.e. with a finite set of states in the interval from 0 to 1. The described models make it possible to formulate various logical functions. If the results from a given model, contained in the obtained values of the arc-flow functions, are used as input data for other models, then other sophisticated logical structures in Łukasiewicz's logic can also be interpreted successfully. The obtained models allow Łukasiewicz's logic to be studied with the specific, effective methods of network-flow programming; in particular, the specific properties and results pertaining to the traffic capacity of the network arcs can be exploited successfully. Based on the introduced network-flow approach, it is possible to interpret other many-valued logics as well, such as those of E. Post, L. Brauer, Kolmogorov, etc.
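For reference, the basic connectives of Łukasiewicz's many-valued logic take truth values in [0, 1]; restricting the values to {0, 1/(n−1), …, 1} gives the finite-valued variants the network-flow models address. A direct implementation of the standard definitions:

```python
# Standard Łukasiewicz connectives on truth values in [0, 1].

def l_neg(x):           # negation: 1 - x
    return 1.0 - x

def l_and(x, y):        # strong conjunction: max(0, x + y - 1)
    return max(0.0, x + y - 1.0)

def l_or(x, y):         # strong disjunction: min(1, x + y)
    return min(1.0, x + y)

def l_imp(x, y):        # implication: min(1, 1 - x + y)
    return min(1.0, 1.0 - x + y)

print(l_imp(0.3, 0.7), l_imp(0.8, 0.3), l_and(0.6, 0.7))
```

The network-flow models of the paper realise these same functions through arc-flow values rather than direct arithmetic.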
Abstract:
The paper develops the basic concepts for constructing many-valued intelligent systems that are adequate to the principal problems of human activity. These systems use hybrid tools in which the many-valued intelligent systems are two-valued at the implementation level, but simulate neural processes of spatial summation that differ in their level of action, in the inertial and threshold properties of the neuron membranes, and in the frequency modulation of the sequences of transmitted messages. All of the enumerated properties and functions are, in essence, not only discrete in time but also many-valued.
Abstract:
One major drawback of coherent optical orthogonal frequency-division multiplexing (CO-OFDM) that hitherto remains unsolved is its vulnerability to nonlinear fiber effects due to its high peak-to-average power ratio. Several digital signal processing techniques have been investigated for the compensation of fiber nonlinearities, e.g., digital back-propagation, nonlinear pre- and post-compensation, and nonlinear equalizers (NLEs) based on the inverse Volterra-series transfer function (IVSTF). Alternatively, nonlinearities can be mitigated using nonlinear decision classifiers such as artificial neural networks (ANNs) based on a multilayer perceptron. In this paper, an ANN-NLE is presented for a 16-QAM CO-OFDM system. The capability of the proposed approach to compensate for fiber nonlinearities is numerically demonstrated at up to 100 Gb/s over 1000 km of transmission and compared to the benchmark IVSTF-NLE. Results show that, in terms of Q-factor, at 100 Gb/s and 1000 km of transmission the ANN-NLE outperforms linear equalization and the IVSTF-NLE by 3.2 dB and 1 dB, respectively.
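The core of a multilayer-perceptron equalizer is a small network trained to learn a nonlinear decision rule. The toy sketch below trains a one-hidden-layer network by gradient descent on XOR as a stand-in for the nonlinear symbol-decision task; the sizes, learning rate, and data are illustrative assumptions, not the paper's ANN-NLE configuration.

```python
import math
import random

# One-hidden-layer perceptron (tanh hidden units, linear output) trained
# by per-sample gradient descent on a nonlinear target (XOR).

random.seed(1)
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
H = 4                                               # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = loss()
lr = 0.1
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        err = 2.0 * (y - t)                         # d(loss)/dy for this sample
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            for i in range(2):
                w1[j][i] -= lr * grad_h * x[i]
            b1[j] -= lr * grad_h
        b2 -= lr * err

final = loss()
print(initial, final)  # the loss drops as the network learns the nonlinearity
```

A real ANN-NLE operates on received complex symbols and feeds its decisions back into the equalization chain, but the training loop has the same shape.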
Abstract:
As traffic congestion grows and new roadway construction is severely constrained by the limited availability of land, the high cost of land acquisition, and communities' opposition to the building of major roads, new solutions have to be sought to either make roadway use more efficient or reduce travel demand. There is general agreement that travel demand is affected by land use patterns. However, traditional aggregate four-step models, currently the prevailing modeling approach, assume when estimating trip generation that traffic conditions do not affect people's decision on whether to make a trip. Existing survey data indicate, however, that trip rates differ across geographic areas. The reasons for such differences have not been carefully studied, and attempts to quantify the influence of land use on travel demand beyond employment, households, and their characteristics have had too little success to be useful in the traditional four-step models. There may be a number of reasons: for example, the representation of the influence of land use on travel demand is aggregate rather than explicit, and land use variables such as density and mix, and accessibility as measured by travel time and congestion, have not been adequately considered. This research employs the artificial neural network (ANN) technique to investigate the potential effects of land use and accessibility on trip productions. Sixty-two variables that may potentially influence trip production are studied, including demographic, socioeconomic, land use and accessibility variables. Different ANN architectures are tested. Sensitivity analysis of the models shows that land use does have an effect on trip production, as does traffic condition. The ANN models are compared with linear regression models and cross-classification models using the same data. The results show that the ANN models outperform the linear regression and cross-classification models in terms of RMSE. Future work may focus on finding a representation of traffic condition based on existing network data and population data that would be available when the variables are needed for prediction.
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It also eliminates the main parameter of the HPM, the penalty factor, which is responsible for weighting the terms of the objective function; adjusting this parameter controls the way the model recovers the hidden category structures present in the data, and depends on a broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programming problems, from which lower bounds were obtained for the PFHPM. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving both artificially generated instances from a Monte Carlo simulation and real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm, the PFHPM, can recover the original category structures related to heterogeneous individuals' perceptions more accurately than the original model and algorithm, the HPM. Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities for it, such as extending the model to fuzzy environments.
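The local-search component typically embedded in VNS algorithms for p-median problems can be sketched briefly: assign each point to its nearest median and greedily apply improving swaps between medians and non-medians. The one-dimensional instance below is illustrative and far simpler than the heterogeneous sorting-task data of the paper.

```python
# Swap-based local search for the classical p-median problem.

def cost(points, medians):
    # each point is served by its nearest median
    return sum(min(abs(p - m) for m in medians) for p in points)

def swap_local_search(points, medians):
    medians = list(medians)
    best = cost(points, medians)
    improved = True
    while improved:
        improved = False
        for i in range(len(medians)):
            for p in points:
                if p in medians:
                    continue
                trial = medians[:i] + [p] + medians[i + 1:]
                c = cost(points, trial)
                if c < best:                        # first-improvement swap
                    medians, best = trial, c
                    improved = True
    return medians, best

points = [0, 1, 2, 10, 11, 12]
medians, c = swap_local_search(points, [0, 10])    # deliberately poor start
print(sorted(medians), c)  # the two cluster centres, total cost 4
```

VNS wraps such a local search in systematic "shaking" moves across neighborhood structures to escape local optima; the HPM/PFHPM additionally model per-individual heterogeneity.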