878 results for COBOL (Computer program language)


Relevance: 100.00%

Abstract:

Reversed-phase high-performance liquid chromatographic (HPLC) methods were developed for the assay of indomethacin, its decomposition products, ibuprofen and its (tetrahydro-2-furanyl)methyl-, (tetrahydro-2-(2H)pyranyl)methyl- and cyclohexylmethyl esters. The development and application of these HPLC systems were studied. A number of physico-chemical parameters that affect percutaneous absorption were investigated. The pKa values of indomethacin and ibuprofen were determined using the solubility method. Potentiometric titration and the Taft equation were also used for ibuprofen. The incorporation of ethanol or propylene glycol in the solvent resulted in an improvement in the aqueous solubility of these compounds. The partition coefficients were evaluated in order to establish the affinity of these drugs towards the stratum corneum. The stability of indomethacin and of the ibuprofen esters was investigated, and the effect of temperature and pH on the decomposition rates was studied. The effect of cetyltrimethylammonium bromide on the alkaline degradation of indomethacin was also followed. In the presence of alcohol, indomethacin alcoholysis was observed; the kinetics of decomposition were subjected to non-linear regression analysis and the rate constants for the various pathways were quantified. The non-isothermal, surfactant non-isoconcentration and non-isopH degradation of indomethacin were investigated. The analysis of the data was undertaken using NONISO, a BASIC computer program. The degradation profiles obtained from both non-iso and iso-kinetic studies show that there is close concordance in the results. The metabolic biotransformation of ibuprofen esters was followed using esterases from hog liver and rat skin homogenates. The results showed that the esters were very labile under these conditions. The presence of propylene glycol affected the rates of enzymic hydrolysis of the ester.
The hydrolysis is modelled using an equation involving the dielectric constant of the medium. The percutaneous absorption of indomethacin and of ibuprofen and its esters was followed from solutions using an in vitro excised human skin model. The absorption profiles followed first order kinetics. The diffusion process was related to their solubility and to the human skin/solvent partition coefficient. The percutaneous absorption of two ibuprofen esters from suspensions in 20% propylene glycol-water was also followed through rat skin, with only ibuprofen being detected in the receiver phase. The sensitivity of ibuprofen esters to enzymic hydrolysis compared with chemical hydrolysis may prove valuable in the formulation of topical delivery systems.
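The first-order absorption kinetics reported above imply that ln(C/C0) falls linearly with time, so the rate constant can be recovered from a log-linear fit. A minimal sketch (the data and the 0.5 h⁻¹ constant are illustrative, not values from the thesis):

```python
import numpy as np

def first_order_rate(t, amount):
    """Estimate a first-order rate constant k from ln(C/C0) = -k*t."""
    slope = np.polyfit(t, np.log(amount / amount[0]), 1)[0]
    return -slope

# Synthetic first-order decay with k = 0.5 h^-1 (illustrative values only)
t = np.linspace(0.0, 10.0, 20)
c = 100.0 * np.exp(-0.5 * t)
print(round(first_order_rate(t, c), 3))  # → 0.5
```

The same log-linear fit applies to the ester degradation profiles, provided the decay is genuinely first order.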

Relevance: 100.00%

Abstract:

Computers have, over the past 10 to 15 years, become an integral part of many activities carried out by British community pharmacists. This thesis employs quantitative and qualitative research methods to explore the use of computers and other forms of information technology (IT) in a number of these activities. Mail questionnaires were used to estimate the level of IT use among British community pharmacists in 1989 and 1990. Comparison of the results suggests that the percentage of community pharmacists using computers and other forms of IT is increasing, and that the range of applications to which pharmacy computers are put is expanding. The use of an electronic, on-line information service, PINS, by community pharmacists was investigated using mail questionnaires. The majority of community pharmacists who subscribed to the service, and who responded to the questionnaire, claimed to use PINS less than they had expected to. In addition, most did not find it user-friendly. A computer program to aid pharmacists when responding to their patients' symptoms was investigated using interviews and direct observation. The aid was not found to help pharmacists in responding to patients' symptoms because of impracticalities involved in its operation. Use of the same computer program by members of the public without the involvement of a pharmacist was also studied. In this setting, the program was favourably accepted by the majority of those who used it. Provision of computer-generated information leaflets from pharmacies was investigated using mail questionnaires and interviews. The leaflets were found to be popular with the majority of recipients interviewed. Since starting to give out the leaflets, 27 out of 55 pharmacists who responded to the questionnaire had experienced an increase in the number of prescriptions they dispensed, and 46 had experienced an increase in the number of patient enquiries they received.

Relevance: 100.00%

Abstract:

This research is concerned with the application of operational research techniques in the development of a long-term waste management policy by an English waste disposal authority. The main aspects which have been considered are the estimation of future waste production and the assessment of the effects of proposed systems. Only household and commercial wastes have been dealt with in detail, though suggestions are made for the extension of the effect assessment to cover industrial and other wastes. Similarly, the only effects considered in detail have been costs, but possible extensions are discussed. An important feature of the study is that it was conducted in close collaboration with a waste disposal authority, and so pays more attention to the actual needs of the authority than is usual in such research. A critical examination of previous waste forecasting work leads to the use of simple trend extrapolation methods, with some consideration of seasonal effects. The possibility of relating waste production to other social and economic indicators is discussed. It is concluded that, at present, large uncertainties in predictions are inevitable; waste management systems must therefore be designed to cope with this uncertainty. Linear programming is used to assess the overall costs of proposals. Two alternative linear programming formulations of this problem are used and discussed. The first is a straightforward approach, which has been implemented as an interactive computer program. The second is more sophisticated and represents the behaviour of incineration plants more realistically. Careful attention is paid to the choice of appropriate data and the interpretation of the results. Recommendations are made on methods for immediate use, on the choice of data to be collected for future plans, and on the most useful lines for further research and development.
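The linear programming assessment of overall costs can be sketched as a small transportation-style problem; the two-facility layout, costs and tonnages below are hypothetical, not the authority's data:

```python
from scipy.optimize import linprog

# Hypothetical data: waste from two collection areas routed to two facilities.
# Decision variables: x = [area1->tip, area1->incinerator, area2->tip, area2->incinerator]
cost = [4.0, 7.0, 6.0, 3.0]          # £/tonne (transport + treatment), illustrative
A_eq = [[1, 1, 0, 0],                # all waste from area 1 must be disposed of
        [0, 0, 1, 1]]                # all waste from area 2 must be disposed of
b_eq = [100.0, 80.0]                 # tonnes produced per area
A_ub = [[0, 1, 0, 1]]                # incinerator throughput limit
b_ub = [90.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)  # minimum total cost: 640.0 (area 2 incinerated, area 1 tipped)
```

The more sophisticated formulation in the thesis adds a more realistic model of incinerator behaviour; the structure of the constraints stays the same.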

Relevance: 100.00%

Abstract:

A study of information available on the settlement characteristics of backfill in restored opencast coal mining sites and other similar earthworks projects has been undertaken. In addition, the methods of opencast mining, compaction controls, monitoring and test methods have been reviewed. To consider and develop the methods of predicting the settlement of fill, three sites in the West Midlands have been examined; at each, the backfill had been placed in a controlled manner. In addition, use has been made of a finite element computer program to compare a simple two-dimensional linear elastic analysis with field observations of surface settlements in the vicinity of buried highwalls. On controlled backfill sites, settlement predictions have been accurately made, based on a linear relationship between settlement (expressed as a percentage of fill height) and the logarithm of time. This `creep' settlement was found to be effectively complete within 18 months of restoration. A decrease of this percentage settlement was observed with increasing fill thickness; this is believed to be related to the speed with which the backfill is placed. A rising water table within the backfill is indicated to cause additional gradual settlement. A prediction method, based on settlement monitoring, has been developed and used to determine the pattern of settlement across highwalls and buried highwalls. The zone of appreciable differential settlement was found to be mainly limited to the highwall area; its magnitude was dictated by the highwall inclination. With a backfill cover of about 15 metres over a buried highwall, the magnitude of differential settlement was negligible. Use has been made of the proposed settlement prediction method and monitoring to control the re-development of restored opencast sites. The specifications, tests and monitoring techniques developed in recent years have been used to aid this.
Such techniques have been valuable in restoring land previously derelict due to past underground mining.
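The core of the prediction method above — settlement, as a percentage of fill height, linear against the logarithm of time — can be sketched as a simple extrapolation (the readings and the slope are synthetic, not site data):

```python
import numpy as np

def predict_settlement(t_months, s_percent, t_future):
    """Extrapolate creep settlement (% of fill height) linearly in log(time)."""
    a, b = np.polyfit(np.log10(t_months), s_percent, 1)
    return a * np.log10(t_future) + b

# Illustrative monitoring data over the first year of restoration
t = np.array([1, 2, 4, 6, 9, 12])          # months since placement
s = 0.6 * np.log10(t) + 0.2                # synthetic: 0.6 % per log cycle
print(round(predict_settlement(t, s, 18), 3))  # → 0.953
```

Fitting the slope from early readings and extrapolating to 18 months matches the observation that creep settlement is effectively complete by then.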

Relevance: 100.00%

Abstract:

The research concerns the development and application of an analytical computer program, SAFE-ROC, that models the material behaviour and structural behaviour of a slender reinforced concrete column that is part of an overall structure and is subjected to elevated temperatures as a result of exposure to fire. The analysis approach used in SAFE-ROC is non-linear. Computer calculations are used that take account of restraint and continuity, and the interaction of the column with the surrounding structure during the fire. Within a given time step an iterative approach is used to find a deformed shape for the column which results in equilibrium between the forces associated with the external loads and internal stresses and degradation. Non-linear geometric effects are taken into account by updating the geometry of the structure during deformation. The structural response program SAFE-ROC includes a total strain model which takes account of the compatibility of strain due to temperature and loading. The total strain model represents a constitutive law that governs the material behaviour for concrete and steel. The material behaviour models employed for concrete and steel take account of the dimensional changes caused by the temperature differentials and changes in the material mechanical properties with changes in temperature. Non-linear stress-strain laws are used that take account of loading to a strain greater than that corresponding to the peak stress of the concrete stress-strain relation, and model the inelastic deformation associated with unloading of the steel stress-strain relation. The cross section temperatures caused by the fire environment are obtained by a preceding non-linear thermal analysis using the computer program FIRES-T.

Relevance: 100.00%

Abstract:

The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present, very little is known about these mechanisms and therefore it is instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured and, by resolution, particle centroid forces can be calculated. Dividing particle forces by their respective masses gives particle accelerations, from which particle centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for this method and allows data input and output, and also provides testing control. By using this numerical apparatus, tests have been carried out on disc assemblies and many new interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data which was extracted from the simulation tests.
These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
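The Distinct Element cycle described above — contact forces resolved to particle centroid forces, divided by mass, then integrated for velocities and displacements — can be sketched as one explicit time step with a linear contact spring (all stiffness and particle values are illustrative):

```python
import numpy as np

def linear_contact_force(overlap, k=1.0e4):
    """Linear spring contact law: F = k * overlap, compression only."""
    return k * max(overlap, 0.0)

def dem_step(x, v, m, forces, dt):
    """One explicit Distinct Element update:
    centroid forces -> accelerations -> velocities -> displacements."""
    a = forces / m          # Newton's second law per particle
    v_new = v + a * dt      # integrate velocity over the time step
    x_new = x + v_new * dt  # integrate position
    return x_new, v_new

# One disc approaching a wall with 0.01 overlap (illustrative units)
f = linear_contact_force(0.01)  # 100.0, repulsive
x, v = dem_step(np.array([0.0]), np.array([1.0]), np.array([2.0]),
                np.array([-f]), dt=1e-3)
print(v, x)  # the disc decelerates as the contact pushes back
```

In the full program this step runs for every particle each cycle, with contact forces recomputed from the updated geometry.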

Relevance: 100.00%

Abstract:

This thesis encompasses an investigation of the behaviour of a concrete frame structure under localised fire scenarios, implementing a constitutive model within a finite-element computer program. The investigation covered the properties of materials at elevated temperature, a description of the computer program, and thermal and structural analyses. Transient thermal properties of the materials have been employed in this study to achieve reasonable results. The finite-element package ANSYS is utilized in the present analyses to examine the effect of fire on the concrete frame under five different fire scenarios. In addition, a report on a full-scale BRE Cardington concrete building, designed to Eurocode 2 and BS 8110 and subjected to a realistic compartment fire, is also presented. The transient analyses of the present model added specific heat to the base value for dry concrete at 100°C and 200°C. The combined convective-radiation heat transfer coefficient and transient thermal expansion have also been considered in the analyses. For the analyses with transient strains included, the constitutive model is based on the empirical formula in the full thermal strain-stress model proposed by Li and Purkiss (2005). Comparisons between the models with and without transient strains included are also discussed. Results of the present study indicate that the behaviour of the complete structure is significantly different from the behaviour of the individual isolated members on which current design methods are based. Although the current tabulated design procedures are conservative when the entire building performance is considered, the beneficial and detrimental effects of thermal expansion in complete structures should be taken into account. It is therefore essential to develop new fire engineering methods from the study of complete structures rather than from individual isolated member behaviour.
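The combined convective-radiation coefficient mentioned above is typically the sum of a convective term and a linearised radiative term; a sketch using commonly assumed values (h_conv = 25 W/m²K and ε = 0.7 are assumptions here, not the thesis's inputs):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def combined_coefficient(t_gas_c, t_surf_c, h_conv=25.0, emissivity=0.7):
    """Combined convective-radiative heat transfer coefficient, W/(m^2 K).
    h_conv and emissivity are illustrative, commonly assumed values."""
    tg, ts = t_gas_c + 273.15, t_surf_c + 273.15   # convert to kelvin
    h_rad = emissivity * SIGMA * (tg**2 + ts**2) * (tg + ts)
    return h_conv + h_rad

# Gas at 600 C, surface at 100 C: radiation dominates at fire temperatures
print(round(combined_coefficient(600.0, 100.0), 1))
```

Because h_rad grows roughly with the cube of absolute temperature, the combined coefficient rises steeply as the compartment fire develops.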

Relevance: 100.00%

Abstract:

Particulate solids are complex redundant systems which consist of discrete particles. The interactions between the particles are complex and have been the subject of many theoretical and experimental investigations. Investigations of particulate material have been restricted by the lack of quantitative information on the mechanisms occurring within an assembly. Laboratory experimentation is limited as information on the internal behaviour can only be inferred from measurements on the assembly boundary, or the use of intrusive measuring devices. In addition, comparisons between test data are uncertain due to the difficulty in reproducing exact replicas of physical systems. Nevertheless, theoretical and technological advances require more detailed material information. However, numerical simulation affords access to information on every particle and hence the micro-mechanical behaviour within an assembly, and can replicate desired systems. To use a computer program to numerically simulate material behaviour accurately it is necessary to incorporate realistic interaction laws. This research programme used the finite difference simulation program `BALL', developed by Cundall (1971), which employed linear spring force-displacement laws, and it was thus necessary to incorporate more realistic interaction laws. Therefore, this research programme was primarily concerned with the implementation of the normal force-displacement law of Hertz (1882) and the tangential force-displacement laws of Mindlin and Deresiewicz (1953). Within this thesis the contact mechanics theories employed in the program are developed and the adaptations which were necessary to incorporate these laws are detailed. Verification of the new contact force-displacement laws was achieved by simulating a quasi-static oblique contact and single particle oblique impact.
Applications of the program to the simulation of large assemblies of particles are given, and the problems in undertaking quasi-static shear tests, along with the results from two successful shear tests, are described.
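The Hertz (1882) normal force-displacement law that replaced the linear springs is straightforward to state; the Mindlin and Deresiewicz (1953) tangential law is load-history dependent and is omitted from this sketch. All material values are illustrative:

```python
import math

def hertz_normal_force(delta, r_eff, e_star):
    """Hertz (1882) normal force between elastic spheres:
    F = (4/3) * E* * sqrt(R_eff) * delta^(3/2), delta = contact overlap."""
    return (4.0 / 3.0) * e_star * math.sqrt(r_eff) * delta ** 1.5

# Illustrative values: effective radius 10 mm, effective modulus 70 GPa.
# Doubling the overlap multiplies the force by 2^1.5 — the nonlinear
# stiffening that the linear spring law cannot capture.
f1 = hertz_normal_force(1.0e-6, 0.01, 7.0e10)
f2 = hertz_normal_force(2.0e-6, 0.01, 7.0e10)
print(round(f2 / f1, 4))  # → 2.8284
```

Replacing the linear spring with this law changes the contact stiffness continuously with load, which is why the adaptations to the program described above were needed.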

Relevance: 100.00%

Abstract:

Myopia is a refractive condition and develops because either the optical power of the eye is abnormally great or the eye is abnormally long, the optical consequence being that the focal length of the eye is too short for the physical length of the eye. The increase in axial length has been shown to match closely the dioptric error of the eye, in that a 1 mm increase in axial length usually generates 2 to 3 D of myopia. The most common form of myopia is early-onset myopia (EOM), which occurs between 6 and 14 years of age. The second most common form of myopia is late-onset myopia (LOM), which emerges in the late teens or early twenties, at a time when the eye should have ceased growing. The prevalence of LOM is increasing and research has indicated a link with excessive and sustained nearwork. The aim of this thesis was to examine the ocular biometric correlates associated with LOM and EOM development and progression. Biometric data were recorded on 50 subjects, aged 16 to 26 years. The group was divided into 26 emmetropic subjects and 24 myopic subjects. Keratometry, corneal topography, ultrasonography, lens shape, central and peripheral refractive error, ocular blood flow and assessment of accommodation were measured on three occasions during an 18-month to 2-year longitudinal study. Retinal contours were derived using a specially derived computer program. The thesis shows that myopia progression is related to an increase in vitreous chamber depth, a finding which supports previous work. The myopes exhibited hyperopic relative peripheral refractive error (PRE) and the emmetropes exhibited myopic relative PRE. Myopes demonstrated a prolate retinal shape and the retina became more prolate with myopia progression. The results show that a longitudinal, rather than equatorial, increase in the posterior segment is the principal structural correlate of myopia.
Retinal shape, relative PRE and the ratio of axial length to corneal curvature have been indicated, in this thesis, as predictive factors for myopia onset and development. Data from this thesis demonstrates that myopia progression in the LOM group is the result of an increase in anterior segment power, owing to an increase in lens thickness, in conjunction with posterior segment elongation. Myopia progression in the EOM group is the product of a long posterior segment, which over-compensates for a weak anterior segment power. The weak anterior segment power in the EOM group is related to a combination of crystalline lens thinning and surface flattening. The results presented in this thesis confirm that posterior segment elongation is the main structural correlate in both EOM and LOM progression. The techniques and computer programs employed in the thesis are reproducible and robust providing a valuable framework for further myopia research and assessment of predictive factors.

Relevance: 100.00%

Abstract:

A critical review of previous research revealed that visual attention tests, such as the Useful Field of View (UFOV) test, provided the best means of detecting age-related changes to the visual system that could potentially increase crash risk. However, the question was raised as to whether the UFOV, which was regarded as a static visual attention test, could be improved by inclusion of kinetic targets that more closely represent the driving task. A computer program was written to provide more information about the derivation of UFOV test scores. Although this investigation succeeded in providing new information, some of the commercially protected UFOV test procedures still remain unknown. Two kinetic visual attention tests (DRTS1 and 2), developed at Aston University to investigate the inclusion of kinetic targets in visual attention tests, were introduced. The UFOV was found to be more repeatable than either of the kinetic visual attention tests, and learning effects or age did not influence these findings. Determinants of static and kinetic visual attention were explored. Increasing target eccentricity led to reduced performance on the UFOV and DRTS1 tests. The DRTS2 was not affected by eccentricity but this may have been due to the style of presentation of its targets. This might also have explained why only the DRTS2 showed laterality effects (i.e. better performance for targets presented on the left hand side of the road). Radial location, explored using the UFOV test, showed that subjects responded best to targets positioned on the horizontal meridian. Distraction had opposite effects on static and kinetic visual attention. While UFOV test performance declined with distraction, DRTS1 performance increased. Previous research had shown that this striking difference was to be expected.
Whereas the detection of static targets is attenuated in the presence of distracting stimuli, distracting stimuli that move in a structured flow field enhance the detection of moving targets. Subjects reacted more slowly to kinetic than to static targets, to longitudinal than to angular motion, and to increased self-motion. However, the effects of longitudinal motion, angular motion, self-motion and even target eccentricity were caused by target edge-speed variations arising from optic flow field effects. The UFOV test was more able to detect age-related changes to the visual system than were either of the kinetic visual attention tests. The driving samples investigated were too limited to draw firm conclusions. Nevertheless, the results presented showed that neither the DRTS2 nor the UFOV tests were powerful tools for the identification of drivers prone to crashes or poor driving performance.

Relevance: 100.00%

Abstract:

Many workers have studied the ocular components which occur in eyes exhibiting differing amounts of central refractive error, but few have ever considered the additional information that could be derived from a study of peripheral refraction. Before now, peripheral refraction has either been measured in real eyes or modelled in schematic eyes of varying levels of sophistication. Several differences occur between measured and modelled results which, if accounted for, could give rise to more information regarding the nature of the optical and retinal surfaces and their asymmetries. Measurements of ocular components and peripheral refraction, however, have never been made in the same sample of eyes. In this study, ocular component and peripheral refractive measurements were made in a sample of young near-emmetropic, myopic and hyperopic eyes. The data for each refractive group were averaged. A computer program was written to construct spherical-surfaced schematic eyes from these data. More sophisticated eye models were developed making use of a linear algebraic ray-tracing program. This method allowed rays to be traced through toroidal aspheric surfaces which were translated or rotated with respect to each other. For simplicity, the gradient index optical nature of the crystalline lens was neglected. Various alterations were made in these eye models to reproduce the measured peripheral refractive patterns. Excellent agreement was found between the modelled and measured peripheral refractive values over the central 70° of the visual field. This implied that the additional biometric features incorporated in each eye model were representative of those which were present in the measured eyes. As some of these features are not otherwise obtainable using in vivo techniques, it is proposed that the variation of refraction in the periphery offers a very useful optical method for studying human ocular component dimensions.
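A flavour of the ray tracing used to build such schematic eyes can be given with paraxial transfer matrices (the study traces exact rays through toroidal aspheric surfaces; the single-surface reduced eye below uses textbook-style values, not the measured biometry):

```python
import numpy as np

def refraction_matrix(n1, n2, r):
    """Paraxial refraction at a spherical surface of radius r (metres);
    state vector is [ray height y, reduced angle n*u]."""
    return np.array([[1.0, 0.0], [-(n2 - n1) / r, 1.0]])

def translation_matrix(d, n):
    """Paraxial transfer over distance d (metres) in a medium of index n."""
    return np.array([[1.0, d / n], [0.0, 1.0]])

# Reduced-eye sketch: one refracting surface, r = 5.55 mm, n' = 4/3,
# axial length 22.2 mm, giving power (4/3 - 1)/0.00555 ≈ 60 D.
system = (translation_matrix(0.0222, 4.0 / 3.0)
          @ refraction_matrix(1.0, 4.0 / 3.0, 0.00555))
ray = np.array([1.0e-3, 0.0])        # incoming ray: height 1 mm, parallel to axis
height_at_retina = (system @ ray)[0]
print(abs(height_at_retina) < 1e-9)  # parallel rays focus on the retina
```

Oblique field angles and decentred or toroidal surfaces break this simple matrix picture, which is why the study needed a full algebraic ray trace.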

Relevance: 100.00%

Abstract:

With the competitive challenge facing business today, the need to keep cost down and quality up is a matter of survival. One way in which wire manufacturers can meet this challenge is to possess a thorough understanding of deformation, friction and lubrication during the wire drawing process, and therefore to make good decisions regarding the selection and application of lubricants as well as the die design. Friction, lubrication and die design during wire drawing thus become the subject of this study. Although theoretical and experimental investigations have been carried out ever since the establishment of wire drawing technology, many problems remain unsolved. It is therefore necessary to conduct further research on traditional and fundamental subjects such as the mechanics of deformation, friction, lubrication and die design in wire drawing. Drawing experiments were carried out on an existing bull-block under different cross-sectional area reductions, different speeds and different lubricants. The instrumentation to measure drawing load and drawing speed was set up and connected to the wire drawing machine, together with a data acquisition system. A die box connected to the existing die holder for using dry soap lubricant was designed and tested. The experimental results, in terms of drawing stress versus percentage area reduction curves under different drawing conditions, were analysed and compared. The effects on drawing stress of friction, lubrication, drawing speed and the pressure die nozzle are discussed. In order to determine the flow stress of the material during deformation, tensile tests were performed on an Instron universal test machine, using the wires drawn under different area reductions. A polynomial function is used to correlate the flow stress of the material with the plastic strain, for which a general computer program has been written to find the coefficients of the stress-strain function.
The residual lubricant film on the steel wire after drawing was examined both radially and longitudinally using an SEM and an optical microscope. The lubricant film on the drawn wire was clearly observed. Micro-analysis by SEM therefore provides a means of assessing friction and lubrication in wire drawing.
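The polynomial correlation of flow stress with plastic strain described above can be sketched with `numpy.polyfit`; the quadratic form and its coefficients are synthetic, not the measured tensile data:

```python
import numpy as np

def fit_flow_stress(strain, stress, degree=2):
    """Fit sigma = f(epsilon_p) as a polynomial; the degree is a modelling
    choice (the abstract does not specify one)."""
    return np.polyfit(strain, stress, degree)

# Synthetic tensile data following sigma = 300 + 500*eps - 400*eps^2 (MPa)
eps = np.linspace(0.0, 0.2, 10)
sigma = 300.0 + 500.0 * eps - 400.0 * eps**2
coeffs = fit_flow_stress(eps, sigma)
print(np.round(coeffs, 1))  # highest power first: recovers -400, 500, 300
```

With real data the fitted polynomial then supplies the flow stress at any plastic strain reached during drawing, which is what the die-design calculations need.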

Relevance: 100.00%

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r2 = 0.35-0.99), palpebral hyperaemia (r2 = 0.71-0.99), palpebral roughness (r2 = 0.30-0.94), and corneal staining (r2 = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Palpebral hyperaemia and staining grading is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
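The conclusion that scale increments follow a quadratic rather than a linear function can be illustrated by comparing the r² of both fits on hypothetical grade-versus-measure data (the numbers below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# Hypothetical image-analysis measure at each printed grade 0-4
grades = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
measure = np.array([10.0, 13.0, 18.0, 25.0, 34.0])  # grows faster than linearly

def r2(fit):
    """Coefficient of determination for a polynomial fit."""
    residuals = measure - np.polyval(fit, grades)
    return 1.0 - np.sum(residuals**2) / np.sum((measure - measure.mean())**2)

lin = np.polyfit(grades, measure, 1)
quad = np.polyfit(grades, measure, 2)
print(round(r2(lin), 3), round(r2(quad), 3))  # quadratic fits markedly better
```

A linear model systematically under-predicts at the scale ends and over-predicts in the middle when increments accelerate, which is exactly what a higher quadratic r² flags.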

Relevance: 100.00%

Abstract:

Aim: To examine the use of image analysis to quantify changes in ocular physiology. Method: A purpose designed computer program was written to objectively quantify bulbar hyperaemia, tarsal redness, corneal staining and tarsal staining. Thresholding, colour extraction and edge detection paradigms were investigated. The repeatability (stability) of each technique to changes in image luminance was assessed. A clinical pictorial grading scale was analysed to examine the repeatability and validity of the chosen image analysis technique. Results: Edge detection using a 3 × 3 kernel was found to be the most stable to changes in image luminance (2.6% over a +60 to -90% luminance range) and correlated well with the CCLRU scale images of bulbar hyperaemia (r = 0.96), corneal staining (r = 0.85) and the staining of palpebral roughness (r = 0.96). Extraction of the red colour plane demonstrated the best correlation-sensitivity combination for palpebral hyperaemia (r = 0.96). Repeatability variability was <0.5%. Conclusions: Digital imaging, in conjunction with computerised image analysis, allows objective, clinically valid and repeatable quantification of ocular features. It offers the possibility of improved diagnosis and monitoring of changes in ocular physiology in clinical practice. © 2003 British Contact Lens Association. Published by Elsevier Science Ltd. All rights reserved.
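A minimal version of such a 3 × 3 edge-detection measure might look like the following (the kernel is one common Laplacian-style choice; the paper's exact kernel weights are not given here):

```python
import numpy as np

# One common 3x3 Laplacian-style kernel (an illustrative choice)
KERNEL = np.array([[-1.0, -1.0, -1.0],
                   [-1.0,  8.0, -1.0],
                   [-1.0, -1.0, -1.0]])

def edge_strength(gray):
    """Sum of absolute 3x3 convolution responses over the image interior."""
    h, w = gray.shape
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            total += abs(np.sum(gray[i-1:i+2, j-1:j+2] * KERNEL))
    return total

flat = np.full((5, 5), 7.0)                            # uniform image: no edges
step = np.hstack([np.zeros((5, 3)), np.ones((5, 2))])  # vertical edge
print(edge_strength(flat), edge_strength(step) > 0)    # 0.0 True
```

A uniform region convolves to zero because the kernel weights sum to zero, so the measure responds only where intensity changes — the property that makes it stable against overall luminance shifts.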

Relevance: 100.00%

Abstract:

The aim of this paper is to determine the network capacity (the number of internal switching lines required) from detailed user behaviour and the demanded quality of service parameters in an overall telecommunication system. We consider a detailed conceptual model, and its corresponding analytical traffic model, of a telecommunication system with (virtual) circuit switching, in a stationary state with generalized input flow, repeated calls, a limited number of homogeneous terminals, and losses due to abandoned and interrupted dialing, blocked and interrupted switching, an unavailable intended terminal, blocked and abandoned ringing (absent called user), and abandoned conversation. We propose an analytical-numerical solution for finding the number of internal switching lines and the values of some basic traffic parameters as functions of the telecommunication system state. These parameters are required to maintain the demanded level of network quality of service (QoS). Dependencies based on the numerical-analytical results are shown graphically. For the proposed conceptual model and its corresponding analytical model, a network dimensioning task (NDT) is formulated, and the solvability of the NDT and the necessary conditions for an analytical solution are investigated. A rule (algorithm) and a computer program are proposed for calculating the corresponding number of internal switching lines, as well as the corresponding values of the traffic parameters, making the management of QoS easier.
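The model here (repeated calls, abandonment, a finite number of terminals) is much richer than classical loss models, but the basic dimensioning step — the smallest number of internal switching lines meeting a blocking target — can be sketched with the Erlang-B recurrence as a baseline:

```python
def erlang_b(lines, traffic_erl):
    """Erlang-B blocking probability via the numerically stable recurrence
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, lines + 1):
        b = traffic_erl * b / (n + traffic_erl * b)
    return b

def dimension_lines(traffic_erl, target_blocking):
    """Smallest number of internal lines meeting the blocking target."""
    n = 1
    while erlang_b(n, traffic_erl) > target_blocking:
        n += 1
    return n

print(dimension_lines(20.0, 0.01))  # lines needed for 20 Erl at 1% blocking
```

The paper's NDT replaces this simple blocking criterion with the full set of loss causes (abandoned dialing, blocked ringing, and so on), but the search structure — increase capacity until every QoS target is met — is the same.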