41 results for Linear Multi-step Formulae
Abstract:
The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100 °C, by the direct passage of a sinusoidal alternating current. Three cases are considered: 1. an isolated solid or tubular conductor; 2. a concentric arrangement of a tube and a solid return conductor; 3. a concentric arrangement of two tubes. These cases find application in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined and the conditions under which various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above, confirming the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic. It is also shown that there is a difference in the electrical behaviour of a small-diameter conductor or thin tube between resistance and induction heating conditions.
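For reference, one standard magnetic-linear result of the kind reviewed above (it assumes a constant permeability, so it is background to, rather than part of, the non-linear treatment developed in the thesis) is the Bessel-function expression for the a.c./d.c. resistance ratio of an isolated solid round conductor:

\[
\frac{R_{\mathrm{ac}}}{R_{\mathrm{dc}}}
= \frac{q}{2}\,
\frac{\operatorname{ber}(q)\operatorname{bei}'(q)-\operatorname{bei}(q)\operatorname{ber}'(q)}
     {\operatorname{ber}'(q)^{2}+\operatorname{bei}'(q)^{2}},
\qquad
q=\frac{\sqrt{2}\,a}{\delta},
\qquad
\delta=\sqrt{\frac{2}{\omega\mu\sigma}},
\]

where \(a\) is the conductor radius, \(\delta\) the skin depth, \(\omega\) the angular frequency, \(\mu\) the (assumed constant) permeability and \(\sigma\) the conductivity.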
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational-approximation-assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
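A minimal sketch of the kind of sampler described above is given below, assuming a target log-density log_post and a Gaussian approximation with mean m and Cholesky factor L obtained elsewhere (e.g. from a variational fit); it is purely illustrative and is not the authors' implementation or blocking scheme.

```python
import numpy as np

def gauss_logpdf(x, m, L):
    """Log-density of N(m, L @ L.T) at x (L is the lower-triangular Cholesky factor)."""
    z = np.linalg.solve(L, x - m)
    return -0.5 * z @ z - np.log(np.diag(L)).sum() - 0.5 * len(m) * np.log(2 * np.pi)

def mh_step(x, log_post, m, L, beta=0.5, rw_scale=0.1, rng=np.random):
    """One Metropolis-Hastings step mixing an independence proposal drawn from the
    Gaussian approximation with a symmetric random-walk proposal."""
    if rng.rand() < beta:                                   # independence proposal
        x_new = m + L @ rng.randn(len(m))
        log_ratio = (log_post(x_new) - log_post(x)
                     + gauss_logpdf(x, m, L) - gauss_logpdf(x_new, m, L))
    else:                                                   # random-walk proposal
        x_new = x + rw_scale * rng.randn(len(m))
        log_ratio = log_post(x_new) - log_post(x)
    accept = np.log(rng.rand()) < log_ratio
    return (x_new, True) if accept else (x, False)
```

Because each of the two kernels individually satisfies detailed balance with its own acceptance ratio, choosing between them at random leaves the target distribution invariant.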
Abstract:
In Information Filtering (IF) a user may be interested in several topics in parallel. But IF systems have been built on representational models derived from Information Retrieval and Text Categorization, which assume independence between terms. The linearity of these models results in user profiles that can only represent one topic of interest. We present a methodology that takes into account term dependencies to construct a single profile representation for multiple topics, in the form of a hierarchical term network. We also introduce a series of non-linear functions for evaluating documents against the profile. Initial experiments produced positive results.
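As a toy illustration of the idea (the profile structure, weights and scoring rule below are invented for the example, not taken from the paper), a child term in the hierarchical term network might only contribute evidence when its parent term is also present, making the document score a non-linear function of term occurrences:

```python
# Hypothetical two-level profile: top-level topic terms with dependent sub-terms.
profile = {
    "python": {"weight": 1.0, "children": {
        "pandas": {"weight": 0.6, "children": {}},
        "numpy":  {"weight": 0.5, "children": {}},
    }},
}

def score(doc_terms, nodes):
    s = 0.0
    for term, node in nodes.items():
        if term in doc_terms:                        # parent must occur...
            s += node["weight"]
            s += score(doc_terms, node["children"])  # ...before its children can add evidence
    return s

print(score({"python", "pandas"}, profile))  # 1.6
print(score({"pandas"}, profile))            # 0.0 -- the child alone does not fire
```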
Combinatorial approach to multi-substituted 1,4-Benzodiazepines as novel non-peptide CCK-antagonists
Abstract:
For the drug discovery process, a library of 168 multi-substituted 1,4-benzodiazepines was prepared by a 5-step solid-phase combinatorial approach. Substituents were varied at the 3-, 5-, 7- and 8-positions of the benzodiazepine scaffold. The combinatorial library was evaluated in a CCK radiolabelled binding assay, and CCKA (alimentary) and CCKB (brain) selective lead structures were discovered. The template of CCKA-selective 1,4-benzodiazepin-2-ones bearing the tryptophan moiety was chemically modified by selective alkylation and acylation reactions. These studies provided a series of analogues of the natural product Asperlicin. The fully optimised Asperlicin-related compound possessed a CCKA activity similar to that of the naturally occurring compound. 3-Alkylated 1,4-benzodiazepines with selectivity towards the CCKB receptor subtype were optimised on A) the lipophilic side chain and B) the 2-aminophenyl-ketone moiety, together with some stereochemical changes. A C3 unit in the 3-position of 1,4-benzodiazepines gave CCKB activity within the nanomolar range. Further SAR optimisation at the N1-position by selective alkylation resulted in improved CCKB binding with potentially decreased activity at the GABAA/benzodiazepine receptor complex. The in vivo studies revealed two N1-alkylated compounds containing unsaturated alkyl groups with anxiolytic properties. Alternative chemical approaches have been developed, including a route suitable for scale-up of the desired target molecule in order to provide sufficient quantities for further in vivo evaluation.
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points with high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at one time. We believe that a latent trait model (LTM), a latent variable model with a non-linear mapping from the latent space to the data space, is a preferred choice for visualising a complex high-dimensional data set. The latent trait model can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can gain insight into the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive (top-down) fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E- and M-steps. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
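To make the E-step/M-step alternation mentioned above concrete, the following generic EM skeleton fits a simple one-dimensional two-component Gaussian mixture; it only illustrates the structure of the algorithm and is not the latent trait model or the hierarchical training procedure used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])  # synthetic data

pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: posterior responsibility of each component for each data point
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters to maximise the expected complete-data log-likelihood
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi.round(2), mu.round(2), sigma.round(2))
```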
Abstract:
This exploratory study is concerned with the integrated appraisal of multi-storey dwelling blocks which incorporate large concrete panel systems (LPS). The first step was to look at the U.K. multi-storey dwelling stock in general, and at that under the management of Birmingham City Council in particular. The information was taken from the databases of three departments in the City of Birmingham and rearranged in a new database, using a suite of PC software called 'PROXIMA', for clarity and analysis. One hundred blocks of this stock were built using large concrete panel systems. Thirteen LPS blocks were chosen as case studies for the purpose of this study, depending mainly on the height and age of the block. A new integrated appraisal technique has been created for the LPS dwelling blocks, which takes into account the most important physical and social factors affecting the condition and acceptability of these blocks. This appraisal technique is built up in a hierarchical form moving from the general approach to particular elements (a tree model). It comprises two main approaches: physical and social. In the physical approach, the building is viewed as a series of manageable elements and sub-elements covering every physical or environmental factor of the block, from which the condition of the block is analysed. A quality score system has been developed which depends mainly on the qualitative and quantitative condition of each category in the appraisal tree model, and leads to a physical ranking order of the study blocks. In the social appraisal approach, the residents' satisfaction with and attitude towards their multi-storey dwelling block were analysed in relation to: a. biographical and housing-related characteristics; and b. social, physical and environmental factors associated with this sort of dwelling, block and estate in general. The random sample consisted of 268 residents living in the 13 case-study blocks. The data collected were analysed using frequency counts, percentages, means, standard deviations, Kendall's tau, r-correlation coefficients, t-tests, analysis of variance (ANOVA) and multiple regression analysis. The analysis showed a marginally positive satisfaction and attitude towards living in the block. The five most significant factors associated with the residents' satisfaction and attitude, in descending order, were: the estate in general; the service categories in the block, including the heating system and lift services; vandalism; the neighbours; and the security system of the block. An important attribute of this method is that it is relatively inexpensive to implement, especially when compared with alternatives adopted by some local authorities and the BRE. It is designed to save time, money and effort, to aid decision making, and to provide a ranked priority for the multi-storey dwelling stock, in addition to many other advantages. A series of solution options to the problems of the block was sought for selection and testing before implementation. The traditional solutions have usually resulted in either demolition or costly physical maintenance and social improvement of the blocks. However, a new solution has now emerged which is particularly suited to structurally sound units. The solution of 're-cycling' might incorporate the reuse of an entire block or part of it, by removing panels, slabs and so forth from the upper floors in order to reconstruct them as low-rise accommodation.
Abstract:
An equivalent step index fibre with a silica core and air cladding is used to model photonic crystal fibres with large air holes. We model this fibre for linear polarisation (we focus on the lowest few transverse modes of the electromagnetic field). The equivalent step index radius is obtained by equating the lowest two eigenvalues of the model to those calculated numerically for the photonic crystal fibres. The step index parameters thus obtained can then be used to calculate nonlinear parameters like the nonlinear effective area of a photonic crystal fibre or to model nonlinear few-mode interactions using an existing model.
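For reference, the equivalent step-index fibre is characterised by the usual normalised frequency, and the nonlinear effective area referred to above has its conventional definition in terms of the transverse mode field (standard textbook expressions, stated here only for context):

\[
V=\frac{2\pi a_{\mathrm{eq}}}{\lambda}\sqrt{n_{\mathrm{core}}^{2}-n_{\mathrm{clad}}^{2}},
\qquad
A_{\mathrm{eff}}=\frac{\left(\displaystyle\iint |F(x,y)|^{2}\,\mathrm{d}x\,\mathrm{d}y\right)^{2}}
                      {\displaystyle\iint |F(x,y)|^{4}\,\mathrm{d}x\,\mathrm{d}y},
\]

where \(a_{\mathrm{eq}}\) is the equivalent core radius obtained from the eigenvalue matching, \(\lambda\) the wavelength and \(F(x,y)\) the transverse mode field.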
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software with application to contemporary logistics distribution network design, using an integrated multiple criteria decision making approach. Unlike traditional optimization techniques, the proposed approach, combining the analytic hierarchy process (AHP) and goal programming (GP), considers both quantitative and qualitative factors. In the integrated approach, AHP is used to determine the relative importance weightings, or priorities, of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. Then, a GP model incorporating the constraints of system, resource and AHP priority is formulated to select the best set of warehouses without exceeding the limited available resources. To facilitate the use of the integrated multiple criteria decision making approach by SAS users, an ORMCDM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear programming models based on the selected GP model. An example is given to illustrate how one could use the code to design the logistics distribution network.
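The paper's implementation is a SAS macro; purely as an illustration of the AHP weighting step it automates, the priorities of the alternative warehouses are the normalised principal eigenvector of a pairwise comparison matrix, e.g. (the judgement values below are invented for the example):

```python
import numpy as np

# Pairwise comparison matrix: A[i, j] = judged importance of warehouse i relative to j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # AHP priority weights

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))  # RI = 0.58 for a 3x3 matrix
```

The resulting weights would then enter the goal programming model as priority coefficients, alongside the system and resource constraints.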
Abstract:
Three novel solar thermal collector concepts derived from the Linear Fresnel Reflector (LFR) are developed and evaluated through a multi-criteria decision-making methodology comprising the following techniques: Quality Function Deployment (QFD), the Analytical Hierarchy Process (AHP) and the Pugh selection matrix. Criteria are specified by technical and customer requirements gathered from Gujarat, India. The concepts are compared to a standard LFR for reference, and as a result a novel 'Elevation Linear Fresnel Reflector' (ELFR) concept using elevating mirrors is selected. A detailed version of this concept is proposed and compared against two standard LFR configurations, one using constant and the other variable horizontal mirror spacing. Annual performance is analysed for a typical meteorological year. Financial assessment is made through the construction of a prototype. The novel LFR has an annual optical efficiency of 49% and increases exergy by 13-23%. Operational hours above a target temperature of 300 °C are increased by 9-24%. A 17% reduction in land usage is also achievable. However, the ELFR suffers from additional complexity and a 16-28% increase in capital cost. It is concluded that this novel design is particularly promising for industrial applications and for locations with restricted land availability or high land costs. The decision analysis methodology adopted is considered to have wider potential for applications in the fields of renewable energy and sustainable design. © 2013 Elsevier Ltd. All rights reserved.
Abstract:
This study critically discusses findings from a research project involving four European countries. The project had two main aims. The first was to develop a systematic procedure for assessing the balance between knowledge and competencies acquired in higher, further and vocational education and the specific needs of the labor market. The second aim was to develop and test a set of meta-level quality indicators aimed at evaluating the linkages between education and employment. The project was designed to address the lack of employer input concerning the requirements of business graduates for successful workplace performance and the need for more specific industry-driven feedback to guide administrative heads at universities and personnel at quality assurance agencies in curriculum development and revision. Approach: The project was distinctive in that it combined different partners from higher education, vocational training, industry and quality assurance. Project partners designed and implemented an innovative approach, based on literature review, qualitative interviews and surveys in the four countries, in order to identify and confirm key knowledge and competency requirements. This study presents this step-by-step approach, as well as survey findings from a sample of 900 business graduates and employers. In addition, it introduces two Partial Least Squares (PLS) path models for predicting satisfaction with work performance and satisfaction with business education. Results: Survey findings revealed that employers were not very confident regarding business graduates’ abilities in key knowledge areas and in key generic competencies. In subsequent analysis, these graduate abilities were tested and identified as important predictors of employers’ satisfaction with graduates’ work performance. Conclusion: The industry-driven approach introduced in this study can serve as a guide to assist different types of educational institutions to better align study programs with changing labor market requirements. Recommendations for curriculum improvement are discussed.
Abstract:
Digital back-propagation (DBP) has recently been proposed for the comprehensive compensation of channel nonlinearities in optical communication systems. While DBP is attractive for its flexibility and performance, it poses significant challenges in terms of computational complexity. Alternatively, phase conjugation or spectral inversion has previously been employed to mitigate nonlinear fibre impairments. Though spectral inversion is relatively straightforward to implement in the optical or electrical domain, it requires precise positioning and a symmetrised link power profile in order to realise the full benefit. In this paper, we directly compare ideal and low-precision single-channel DBP with single-channel spectral inversion, both with and without symmetry correction via dispersive chirping. We demonstrate that for all the dispersion maps studied, spectral inversion approaches the performance of ideal DBP with 40 steps per span and exceeds the performance of electronic dispersion compensation by ~3.5 dB in Q-factor, enabling up to a 96% reduction in complexity in terms of required DBP stages relative to low-precision, one-step-per-span DBP. For maps where quasi-phase matching is a significant issue, spectral inversion significantly outperforms ideal DBP by ~3 dB.
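A minimal split-step sketch of the single-channel DBP operation being compared here is shown below: the received field is propagated through a virtual fibre with the signs of dispersion and nonlinearity inverted. Loss is omitted and all parameter values are illustrative placeholders rather than those used in the paper.

```python
import numpy as np

def dbp(field, dt, n_spans=10, steps_per_span=40, span_len=80e3,
        beta2=-21.7e-27, gamma=1.3e-3):
    """Back-propagate a complex baseband field (numpy array) through a virtual fibre."""
    omega = 2 * np.pi * np.fft.fftfreq(len(field), dt)
    dz = span_len / steps_per_span
    lin = np.exp(0.5j * (-beta2) * omega ** 2 * dz)       # inverted dispersion, per step
    for _ in range(n_spans * steps_per_span):
        field = np.fft.ifft(np.fft.fft(field) * lin)      # linear step (frequency domain)
        field = field * np.exp(1j * (-gamma) * np.abs(field) ** 2 * dz)  # inverted Kerr step
    return field
```

The computational cost scales with the total number of steps (spans × steps per span), which is why the reduction in required DBP stages quantified above matters.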
Abstract:
Drawing on the perceived organizational membership theoretical framework and the social identity view of dissonance theory, I examined in this study the dynamics of the relationship between psychological contract breach and organizational identification. I included group-level transformational and transactional leadership as well as procedural justice in the hypothesized model as key antecedents for organizational membership processes. I further explored the mediating role of psychological contract breach in the relationship between leadership, procedural justice climate, and organizational identification and proposed separateness–connectedness self-schema as an important moderator of the above mediated relationship. Hierarchical linear modeling results from a sample of 864 employees from 162 work units in 10 Greek organizations indicated that employees' perception of psychological contract breach negatively affected their organizational identification. I also found psychological contract breach to mediate the impact of transformational and transactional leadership on organizational identification. Results further provided support for moderated mediation and showed that the indirect effects of transformational and transactional leadership on identification through psychological contract breach were stronger for employees with a low connectedness self-schema.
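In the generic notation usual for such analyses (the coefficients below are placeholders, not estimates from the study), the mediation component corresponds to

\[
\text{Breach}_{ij}=a\,\text{Leadership}_{j}+e^{(1)}_{ij},
\qquad
\text{Identification}_{ij}=c'\,\text{Leadership}_{j}+b\,\text{Breach}_{ij}+e^{(2)}_{ij},
\]

with indirect effect \(ab\); under moderated mediation the first-stage coefficient becomes \(a=a_{0}+a_{1}W_{ij}\), where \(W\) is the separateness–connectedness self-schema, so the indirect effect \((a_{0}+a_{1}W)b\) is conditional on \(W\), consistent with the finding that it is stronger for employees with a low connectedness self-schema.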
Abstract:
Linear programming (LP) is the most widely used optimization technique for solving real-life problems because of its simplicity and efficiency. Although conventional LP models require precise data, managers and decision makers dealing with real-world optimization problems often do not have access to exact values. Fuzzy sets have been used in the fuzzy LP (FLP) problems to deal with the imprecise data in the decision variables, objective function and/or the constraints. The imprecisions in the FLP problems could be related to (1) the decision variables; (2) the coefficients of the decision variables in the objective function; (3) the coefficients of the decision variables in the constraints; (4) the right-hand-side of the constraints; or (5) all of these parameters. In this paper, we develop a new stepwise FLP model where fuzzy numbers are considered for the coefficients of the decision variables in the objective function, the coefficients of the decision variables in the constraints and the right-hand-side of the constraints. In the first step, we use the possibility and necessity relations for fuzzy constraints without considering the fuzzy objective function. In the subsequent step, we extend our method to the fuzzy objective function. We use two numerical examples from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and the computational efficiency of the procedures and algorithms. © 2013-IOS Press and the authors. All rights reserved.
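As a much-simplified illustration of working with such imprecise data (this is a plain alpha-cut reduction to a crisp LP, not the paper's stepwise possibility/necessity procedure, and all numbers are invented for the example):

```python
import numpy as np
from scipy.optimize import linprog

alpha = 0.7
def cut(tri, side):
    lo, mid, hi = tri                       # triangular fuzzy number (lo, mid, hi)
    return mid - (1 - alpha) * (mid - lo) if side == "lo" else mid + (1 - alpha) * (hi - mid)

c_fuzzy = [(3, 4, 5), (1, 2, 3)]            # fuzzy objective coefficients (maximise)
A_fuzzy = [[(1, 2, 3), (2, 3, 4)],          # fuzzy constraint coefficients
           [(3, 4, 5), (0, 1, 2)]]
b_fuzzy = [(10, 12, 14), (8, 9, 10)]        # fuzzy right-hand sides

c = [-cut(t, "hi") for t in c_fuzzy]        # negate because linprog minimises
A = [[cut(t, "lo") for t in row] for row in A_fuzzy]
b = [cut(t, "hi") for t in b_fuzzy]         # optimistic crisp LP at the chosen alpha level

res = linprog(c, A_ub=A, b_ub=b)            # decision variables >= 0 by default
print(res.x, -res.fun)
```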
Abstract:
We present a new class of multi-channel Fiber Bragg grating, which provides the characteristics of channelized dispersion but does so with only a single reflection band. Such gratings can provide pure phase control of optical pulses without introducing any deleterious insertion-loss-variation. © 2006 Optical Society of America.
Abstract:
Bio-impedance analysis (BIA) provides a rapid, non-invasive technique for body composition estimation. BIA offers a convenient alternative to standard techniques such as MRI, CT scan or DEXA scan for selected types of body composition analysis. The accuracy of BIA is limited because it is an indirect method of composition analysis: it relies on linear relationships between measured impedance and morphological parameters such as height and weight to derive estimates. To overcome these underlying limitations of BIA, a multi-frequency segmental bio-impedance device was constructed through a series of iterative enhancements and improvements of existing BIA instrumentation. Key features of the design included an easy-to-construct current source and a compact PCB design. The final device was trialled with 22 human volunteers, and the measured impedance was compared against body composition estimates obtained by DEXA scan. This enabled the development of newer techniques for making BIA predictions. To add a 'visual aspect' to BIA, volunteers were scanned in 3D using an inexpensive scattered-light gadget (Xbox Kinect controller), and 3D volumes of their limbs were compared with BIA measurements to further improve BIA predictions. A three-stage digital filtering scheme was also implemented to enable extraction of heart-rate data from the recorded bio-electrical signals. Additionally, modifications were introduced to measure changes in bio-impedance with motion; these could be adapted to further improve the accuracy and veracity of limb composition analysis. The findings in this thesis aim to give new direction to the prediction of body composition using BIA. The design development and refinement applied to BIA in this research programme suggest new opportunities to enhance the accuracy and clinical utility of BIA for the prediction of body composition. In particular, the use of bio-impedance to predict limb volumes would provide an additional metric for body composition measurement and help distinguish between fat and muscle content.
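Purely as an illustration of one such filtering stage (the thesis uses a three-stage scheme; the cut-off frequencies and filter order below are assumptions, not the values used in that work), a zero-phase band-pass filter around typical cardiac frequencies could isolate the pulse component of a recorded bio-impedance signal:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def heart_band(signal, fs, lo=0.8, hi=3.0, order=4):
    """Zero-phase Butterworth band-pass over the typical heart-rate band (48-180 bpm)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, signal)

# Synthetic example: a 1.2 Hz "pulse" buried in slow drift and noise, sampled at 100 Hz.
fs = 100
t = np.arange(0, 30, 1 / fs)
raw = (0.05 * np.sin(2 * np.pi * 1.2 * t)      # cardiac component
       + 0.5 * np.sin(2 * np.pi * 0.05 * t)    # respiratory / motion drift
       + 0.02 * np.random.randn(len(t)))       # measurement noise
filtered = heart_band(raw, fs)
print(filtered.std())
```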