Abstract:
The finite element process is now used almost routinely as a tool of engineering analysis. From the early days, significant effort has been devoted to developing simple, cost-effective elements which adequately fulfill accuracy requirements. In this thesis we describe the development and application of one of the simplest elements available for the statics and dynamics of axisymmetric shells. A semi-analytic truncated cone stiffness element has been formulated and implemented in a computer code: it has two nodes with five degrees of freedom at each node; circumferential variations in the displacement field are described in terms of trigonometric series; transverse shear is accommodated by means of a penalty function; and rotary inertia is allowed for. The element has been tested in a variety of applications in the statics and dynamics of axisymmetric shells subjected to a variety of boundary conditions. Good results have been obtained for both thin and thick shell cases.
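The circumferential series expansion mentioned above can be sketched as follows. This is an illustrative Python sketch of a truncated trigonometric (Fourier) series for one displacement component, not the element's actual formulation; the coefficient values are invented.

```python
import math

# Illustrative sketch: circumferential variation of one displacement
# component as a truncated trigonometric series,
#   u(theta) = sum_n a_n*cos(n*theta) + sum_n b_n*sin(n*theta).
# In the element each harmonic carries its own set of nodal degrees of
# freedom; the coefficients here are arbitrary illustrative numbers.

def displacement(theta, cos_coeffs, sin_coeffs):
    """Evaluate the truncated series at circumferential angle theta."""
    u = sum(a * math.cos(n * theta) for n, a in enumerate(cos_coeffs))
    u += sum(b * math.sin(n * theta) for n, b in enumerate(sin_coeffs, start=1))
    return u
```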
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
Abstract:
The conventional design of forming rolls depends heavily on the individual skill of roll designers, which is based on intuition and knowledge gained from previous work. Roll design is normally a trial-and-error procedure; however, with the progress of computer technology, CAD/CAM systems for the cold roll-forming industry have been developed. Generally, however, these CAD systems can only provide a flower pattern based on the knowledge obtained from previously successful flower patterns. In the production of ERW (Electric Resistance Welded) tube and pipe, the need for a theoretical simulation of the roll-forming process, which can not only predict the occurrence of edge buckling but also obtain the optimum forming condition, has been recognised. A new simulation system named "CADFORM" has been devised that can carry out a consistent forming simulation for this tube-making process. The CADFORM system applies an elastic-plastic stress-strain analysis and evaluates edge buckling by using a simplified model of the forming process. The results can also be visualised graphically. The calculated longitudinal strain is obtained by considering the deformation of lateral elements and takes into account the reduction in strains due to the fin-pass roll. These calculated strains correspond quite well with the experimental results. Using the calculated strains, the stresses in the strip can be estimated. The addition of the fin-pass roll reduction significantly reduces the longitudinal compressive stress and therefore effectively suppresses edge buckling. If the calculated longitudinal stress is controlled, by altering the forming flower pattern so that it does not exceed the buckling stress of the material, then the occurrence of edge buckling can be avoided.
CADFORM predicts the occurrence of edge buckling of the strip in tube-making and uses this information to suggest an appropriate flower pattern and forming conditions which will suppress edge buckling.
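The buckling criterion described above can be sketched as a simple check. This is a hypothetical illustration: the abstract does not give CADFORM's actual buckling formula, so the classical elastic plate-buckling expression and all numbers below are assumptions.

```python
import math

# Hypothetical sketch of the edge-buckling check described above. The
# classical elastic plate-buckling stress
#   sigma_cr = k * pi^2 * E / (12 * (1 - nu^2)) * (t / b)^2
# stands in for CADFORM's actual criterion, which the abstract does not give.

def critical_buckling_stress(E, nu, t, b, k=4.0):
    """Elastic buckling stress of a plate of thickness t and width b."""
    return k * math.pi ** 2 * E / (12.0 * (1.0 - nu ** 2)) * (t / b) ** 2

def edge_buckling_predicted(sigma_longitudinal, sigma_cr):
    """Buckling is flagged when the compressive (negative) longitudinal
    stress magnitude exceeds the critical value."""
    return sigma_longitudinal < 0 and abs(sigma_longitudinal) > sigma_cr
```

Lowering the compressive stress below `sigma_cr` by altering the flower pattern is exactly the control step the abstract describes.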
Abstract:
Purpose - To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods - A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds, and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. Results - The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, provided certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible, and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. Conclusion - The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool.
Abstract:
Sponsorship fit is frequently mentioned and empirically examined as a success factor of sponsorship. While sponsorship fit has been considered a determinant of sponsorship success, little is known about the antecedents of sponsorship fit. In the present paper, individual and firm-level antecedents of sponsorship fit are examined in a single hierarchical linear model. Results show that sponsorship fit is influenced by the perception of benefits, the firm's regional identification, its sincerity, its relatedness to the sponsored activity, and its dominance. On a partnership level, results show that contract length contributes to sponsorship fit while contract value is found to be unrelated.
Abstract:
Shopping behavior is often studied exclusively through consumer purchases, since they are an easily measurable output. Still, the observation of in-store physical behavior (path, moves and actions) is crucial, as is the quantification of its impact on purchases. Using an innovative PDA tool to precisely record and time-stamp consumers' moves and actions, we extend classical Market Basket Analysis (MBA) by integrating this new information: associations between product categories are measured not only from purchases but also from consumer physical behavior. We compare the results of our new method with classical MBA results and show a significant improvement.
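A minimal sketch of the association measurement being extended, assuming a toy representation in which each shopping trip is a set of categories either purchased or (in the extended method) physically visited. The category names and baskets are invented for illustration.

```python
from itertools import combinations
from collections import Counter

def pair_support(baskets):
    """Support of each unordered category pair: the fraction of trips in
    which both categories appear together."""
    counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: c / len(baskets) for pair, c in counts.items()}

# Purchases only (classical MBA) versus purchases plus visited categories
# (physical behavior); both lists are invented toy data.
purchases = [{"dairy", "bread"}, {"dairy", "snacks"}, {"bread"}]
visited   = [{"dairy", "bread", "snacks"}, {"dairy", "snacks"}, {"bread", "snacks"}]
```

Comparing `pair_support(purchases)` with `pair_support(visited)` surfaces category pairs that co-occur in movement data but never in purchases, which is the kind of extra association the extended method captures.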
Abstract:
In Statnote 9, we described a one-way analysis of variance (ANOVA) ‘random effects’ model in which the objective was to estimate the degree of variation of a particular measurement and to compare different sources of variation in space and time. The illustrative scenario involved the role of computer keyboards in a University communal computer laboratory as a possible source of microbial contamination of the hands. The study estimated the aerobic colony count of ten selected keyboards, with samples taken from two keys per keyboard determined at 9am and 5pm. This type of design is often referred to as a ‘nested’ or ‘hierarchical’ design, and the ANOVA estimated the degree of variation: (1) between keyboards, (2) between keys within a keyboard, and (3) between sample times within a key. An alternative to this design is a 'fixed effects' model in which the objective is not to measure sources of variation per se but to estimate differences between specific groups or treatments, which are regarded as 'fixed' or discrete effects. This Statnote describes two scenarios utilizing this type of analysis: (1) measuring the degree of bacterial contamination on 2p coins collected from three types of business property, viz., a butcher’s shop, a sandwich shop, and a newsagent, and (2) the effectiveness of drugs in the treatment of a fungal eye infection.
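The fixed-effects one-way ANOVA underlying both scenarios reduces to comparing between-group and within-group mean squares. A minimal sketch (the data are invented, not the coin or eye-infection measurements):

```python
def one_way_anova_f(groups):
    """F statistic of a fixed-effects one-way ANOVA: between-group mean
    square divided by within-group (error) mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(x for g in groups for x in g) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# E.g. colony counts from three hypothetical groups of samples:
f = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates real differences between the fixed groups.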
Abstract:
Sentiment analysis over Twitter offers organisations a fast and effective way to monitor the public's feelings towards their brand, business, directors, etc. A wide range of features and methods for training sentiment classifiers for Twitter datasets have been researched in recent years, with varying results. In this paper, we introduce a novel approach of adding semantics as additional features into the training set for sentiment analysis. For each entity extracted from tweets (e.g. iPhone), we add its semantic concept (e.g. Apple product) as an additional feature, and measure the correlation of the representative concept with negative/positive sentiment. We apply this approach to predict sentiment for three different Twitter datasets. Our results show an average increase in F (harmonic mean) score for identifying both negative and positive sentiment of around 6.5% and 4.8% over the baselines of unigrams and part-of-speech features respectively. We also compare against an approach based on sentiment-bearing topic analysis, and find that semantic features produce better Recall and F score when classifying negative sentiment, and better Precision with lower Recall and F score in positive sentiment classification.
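The semantic-augmentation step can be sketched as below. The entity-to-concept map is a toy stand-in for the entity extraction and concept lookup used in the paper, and the feature encoding is an assumption.

```python
# Toy entity -> concept map standing in for the paper's semantic lookup.
CONCEPTS = {"iphone": "apple_product", "ipad": "apple_product",
            "galaxy": "samsung_product"}

def features(tweet):
    """Unigram features, augmented with one semantic-concept feature for
    each recognised entity in the tweet."""
    tokens = tweet.lower().split()
    feats = set(tokens)
    for tok in tokens:
        if tok in CONCEPTS:
            feats.add("CONCEPT=" + CONCEPTS[tok])
    return feats
```

Tweets mentioning iPhone and iPad then share the `CONCEPT=apple_product` feature, letting the classifier generalise sentiment across entities of the same concept.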
Abstract:
Link adaptation (LA) plays an important role in adapting an IEEE 802.11 network to wireless link conditions and maximizing its capacity. However, there is a lack of theoretical analysis of IEEE 802.11 LA algorithms. In this article, we propose a Markov chain model for an 802.11 LA algorithm (the ONOE algorithm), aiming to identify its problems and find the scope for improvement of LA algorithms. We systematically model the impacts of frame corruption and collision on IEEE 802.11 network performance. The proposed analytic model was verified by computer simulations. With the analytic model, it can be observed that the ONOE algorithm's performance is highly dependent on the initial bit rate and parameter configurations. The algorithm may perform badly even under light channel congestion; thus, the ONOE algorithm's parameters should be configured carefully to ensure satisfactory system performance. Copyright © 2011 John Wiley & Sons, Ltd.
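Steady-state analysis of such a Markov chain model can be sketched as below. The 3-state chain over bit-rate levels and its transition probabilities are illustrative only, not the article's actual ONOE model.

```python
def stationary(P, iters=2000):
    """Approximate the stationary distribution of a Markov chain by
    power iteration: repeatedly apply pi <- pi * P to a uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Illustrative transition matrix over (low, mid, high) bit-rate states,
# as would be derived from frame success/failure probabilities.
P = [[0.7, 0.3, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.4, 0.6]]
pi = stationary(P)
```

The long-run fraction of time spent at each bit-rate level follows directly from `pi`, which is the kind of quantity such an analytic model makes observable without simulation.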
Abstract:
This paper reports the findings of a two-year study concerning the development and implementation of a general-purpose computer-based assessment (CBA) system at a UK university. Data gathering took place over a period of nineteen months and involved a number of formative and summative assessments. Approximately 1,000 students, drawn from undergraduate courses, were involved in the exercise. The techniques used in gathering data included questionnaires, observation, interviews and an analysis of student scores in both conventional examinations and computer-based assessments. Comparisons with conventional assessment methods suggest that the use of CBA techniques may improve the overall performance of students. However, it is clear that the technique must not be seen as a "quick fix" for problems such as rising student numbers. If one accepts that current systems test only a relatively narrow range of skills, then the hasty implementation of CBA systems will result in a distorted and inaccurate view of student performance. In turn, this may serve to reduce the overall quality of courses and - ultimately - detract from the student learning experience. On the other hand, if one adopts a considered and methodical approach to computer-based assessment, positive benefits might include increased efficiency and quality, leading to improved student learning.
Abstract:
Population measures for genetic programs are defined and analysed in an attempt to better understand the behaviour of genetic programming. Some measures are simple, but do not provide sufficient insight. The more meaningful ones are complex and take extra computation time. Here we present a unified view on the computation of population measures through an information hypertree (iTree). The iTree allows for a unified and efficient calculation of population measures via a basic tree traversal. © Springer-Verlag 2004.
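The idea of computing several population measures in one tree traversal can be sketched as follows. The nested-tuple program representation and the two measures shown (size and depth) are illustrative, not the iTree itself.

```python
def size_and_depth(tree):
    """Single traversal returning (node count, depth) of a program tree
    given as (operator, child, ...) tuples with string terminals."""
    if not isinstance(tree, tuple):
        return 1, 1
    size, max_child_depth = 1, 0
    for child in tree[1:]:
        s, d = size_and_depth(child)
        size += s
        max_child_depth = max(max_child_depth, d)
    return size, max_child_depth + 1

# Aggregate measures over a (toy) population in one pass per program:
population = [("+", "x", ("*", "x", "y")), "y"]
mean_size = sum(size_and_depth(p)[0] for p in population) / len(population)
```

Returning several measures from the same traversal is the efficiency point: the tree is walked once, not once per measure.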
Abstract:
Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r2 = 0.35-0.99), palpebral hyperaemia (r2 = 0.71-0.99), palpebral roughness (r2 = 0.30-0.94), and corneal staining (r2 = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Palpebral hyperaemia and staining grading is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
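The 3 × 3 edge-detection step can be sketched as a plain convolution. The specific kernel below (a Laplacian-style mask) is an assumption, since the abstract does not state which kernel the program used.

```python
# Laplacian-style 3x3 mask (an assumption; the paper's kernel is not given).
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def convolve3x3(img, kernel=KERNEL):
    """Apply a 3x3 kernel to a grayscale image (a list of lists of
    numbers), leaving the one-pixel border at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out
```

Uniform regions map to zero while local intensity changes (edges) produce large responses, which is what makes the output usable for objective grading.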
Abstract:
Purpose - To generate a reflectance model of the fundus that allows an accurate non-invasive quantification of blood and pigments. Methods - A Monte Carlo simulation was used to produce a mathematical model of light interaction with the fundus at different wavelengths. The model predictions were compared with fundus images from normal volunteers in several spectral bands (peaks at 507, 525, 552, 585, 596 and 611 nm). The model was then used to calculate the concentration and distribution of the known absorbing components of the fundus. Results - The shape of the statistical distribution of the image data generally corresponded to that of the model data; the model, however, appears to overestimate the reflectance of the fundus in the longer wavelength region. As the absorption by xanthophyll has no significant effect on light transport above 534 nm, its distribution in the fundus was quantified: the wavelengths where both shape and distribution of image and model data matched (<553 nm) were used to train a neural network which was then applied to every point in the image data. The xanthophyll distribution thus found was in agreement with published literature data in normal subjects. Conclusion - We have developed a method for optimising multi-spectral imaging of the fundus and a computer image analysis capable of estimating information about the structure and properties of the fundus. The technique successfully calculates the distribution of xanthophyll in the fundus of healthy volunteers. Further improvement of the model is required to allow the deduction of other parameters from images; investigations in known pathology models are also necessary to establish whether this method is of clinical use in detecting early chorioretinopathies, hence providing a useful screening and diagnostic tool.
Abstract:
In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that represents the peaks and valleys as well as the spatial structure of a waveform symbolically in the form of trees. An initial approximation to the RT, called the Skeletal Tree (ST), is first generated algorithmically. The exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSE values of 0.56 and 0.71 for peaks and valleys respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness respectively.