944 results for Connected sum of surfaces


Relevance: 100.00%

Abstract:

Development data for eggs and pupae of Xyleborus fornicatus Eichh. (Coleoptera: Scolytidae), the shot-hole borer of tea in Sri Lanka, at constant temperatures were used to evaluate a linear model and seven nonlinear models of insect development. Model evaluation was based on fit to the data (residual sum of squares and coefficient of determination or coefficient of nonlinear regression), the number of measurable parameters, the biological value of the fitted coefficients, and accuracy in the estimation of thresholds. Of the nonlinear models, the Lactin model fitted the experimental data well and, along with the linear model, can be used to describe the temperature-dependent development of this species.
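Model comparison on the criteria named above (residual sum of squares and coefficient of determination) can be sketched for the linear case. A minimal Python illustration follows; the temperatures, rates and the derived threshold are hypothetical values, not the paper's data:

```python
# Compare a linear development-rate model against data using the
# residual sum of squares (RSS) and the coefficient of
# determination (R^2). All values below are illustrative.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def rss_r2(y, y_hat):
    """RSS and R^2 for observed y versus fitted y_hat."""
    my = sum(y) / len(y)
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    tss = sum((yi - my) ** 2 for yi in y)
    return rss, 1.0 - rss / tss

# Hypothetical temperatures (deg C) and development rates (1/days)
temps = [15, 18, 21, 24, 27, 30]
rates = [0.020, 0.034, 0.049, 0.062, 0.078, 0.091]

a, b = fit_linear(temps, rates)
rss, r2 = rss_r2(rates, [a + b * t for t in temps])
# For the linear model, the x-intercept -a/b estimates the lower
# developmental threshold temperature.
threshold = -a / b
print(round(r2, 3), round(threshold, 1))
```

For a nonlinear model such as the Lactin model, the same RSS/R² comparison applies once the model has been fitted by nonlinear least squares.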

Relevance: 100.00%

Abstract:

We sought to determine the relative impact of myocardial scar and viability on post-infarct left ventricular (LV) remodeling in medically treated patients with LV dysfunction. Forty patients with chronic ischemic heart disease (age 64 ± 9 years, EF 40 ± 11%) underwent rest-redistribution Tl-201 SPECT (scar = 50% transmural extent). A global index of scarring for each patient (CMR scar score) was calculated as the sum of the transmural extent scores in all segments. LV end-diastolic volumes (LVEDV) and LV end-systolic volumes (LVESV) were measured by real-time three-dimensional echo at baseline and at a median of 12 months' follow-up. There was a significant positive correlation between the change in LVEDV and the number of scar segments by all three imaging techniques (LVEDV: SPECT scar, r = 0.62, p < 0.001; DbE scar, r = 0.57, p < 0.001; CMR scar, r = 0.52, p < 0.001), but the change in LV volumes did not correlate with the number of viable segments. ROC curve analysis showed that remodeling (LVEDV > 15%) was predicted by SPECT scars (AUC = 0.79), DbE scars (AUC = 0.76), CMR scars (AUC = 0.70), and the CMR scar score (AUC = 0.72). There were no significant differences between any of the ROC curves (Z score
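The AUC values reported above are, in effect, Mann-Whitney statistics: the probability that a randomly chosen remodeling patient has a higher predictor value (e.g. more scar segments) than a randomly chosen non-remodeling patient, with ties counted as one half. A small sketch with hypothetical patient data, not the study's measurements:

```python
# Area under the ROC curve via the Mann-Whitney formulation:
# AUC = P(score of a positive case > score of a negative case),
# counting ties as 1/2. Patient values below are hypothetical.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical scar-segment counts for patients who did (pos) and
# did not (neg) show remodeling (>15% increase in LVEDV).
remodelers     = [6, 7, 4, 8, 5]
non_remodelers = [2, 3, 1, 4, 2, 5]

print(auc(remodelers, non_remodelers))
```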

Relevance: 100.00%

Abstract:

On 20 October 1997 the London Stock Exchange introduced a new trading system called SETS, replacing the dealer system SEAQ, which had been in operation since 1986. Using the iterated cumulative sum of squares (ICSS) test introduced by Inclan and Tiao (1994), we investigate whether there was a change in the unconditional variance of opening and closing returns at the time SETS was introduced. We show that for the FTSE-100 stocks traded on SETS, on the days following its introduction, there was a widespread increase in the volatility of both opening and closing returns. However, no synchronous volatility changes were found to be associated with the FTSE-100 index or FTSE-250 stocks. We therefore conclude that the introduction of the SETS trading mechanism caused an increase in noise at the time the system was introduced.
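The Inclan-Tiao test locates variance changes through the centered cumulative sum of squares, D_k = C_k/C_T - k/T, where C_k is the cumulative sum of squared returns over the first k observations; a large |D_k| flags a variance break near observation k. A sketch on simulated returns (not LSE data):

```python
# Centered cumulative-sum-of-squares statistic of Inclan and Tiao
# (1994): D_k = C_k / C_T - k / T. The index maximising |D_k| is
# the estimated variance change point. Series below is simulated.

import random

def icss_dk(returns):
    T = len(returns)
    c, total = [], 0.0
    for r in returns:
        total += r * r
        c.append(total)
    return [c[k] / total - (k + 1) / T for k in range(T)]

random.seed(1)
# Simulated returns: volatility doubles halfway through the sample.
series = [random.gauss(0, 1) for _ in range(500)] + \
         [random.gauss(0, 2) for _ in range(500)]

dk = icss_dk(series)
k_star = max(range(len(dk)), key=lambda k: abs(dk[k]))
print(k_star)  # estimated change point, near observation 500
```

In practice the full ICSS algorithm applies this statistic iteratively, re-testing the sub-samples on either side of each detected break until no further change points are found.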

Relevance: 100.00%

Abstract:

Recent research has suggested that the A and B share markets of China may be informationally segmented. In this paper volatility patterns in the A and B share markets are studied to establish whether volatility changes in the two markets are synchronous. A consequence of new information, when investors act upon it, is that volatility rises. This means that if the A and B markets are perfectly integrated, volatility changes in each market would be expected to occur at the same time; if they are segmented, there is no reason for volatility changes to occur on the same day. Volatility changes are therefore dated using the iterated cumulative sum of squares test across the different markets. Evidence is found of integration between the two A share markets but not between the A and B markets. © 2005 Taylor & Francis Group Ltd.

Relevance: 100.00%

Abstract:

Background: To investigate vergence adaptation during the incipient phase of presbyopia, when the amplitude of accommodation approaches the level at which the first reading addition is required. The study aimed to assess the ability of the vergence system to counteract changes in the component contributions to the overall vergence response as the amplitude of accommodation declines in presbyopia, although previous reports on the nature of changes in accommodative, tonic and proximal vergence are equivocal. Methods: Using a 'flashed' Maddox rod technique, vergence adaptation to 6Δ base-out and 6Δ base-in prism was assessed in 28 subjects (aged 35-45 years at the commencement of the study). The measurements were taken four times over a 2-year period. Results: A repeated-measures analysis of variance showed that, with the decline in the amplitude of accommodation, there is a statistically significant reduction in the magnitude of vergence adaptation to both base-out (p < 0.05) and base-in prism (p < 0.01). Conclusions: This study shows that with ageing there is a decrease in the ability of the slow vergence mechanism to overcome a change in fusional vergence demand, suggesting either that the fast component of fusional vergence must cope with any change in fusional vergence demand or that the sum of the accommodative, tonic and proximal vergence responses is virtually stable with age. © 2003 The College of Optometrists.

Relevance: 100.00%

Abstract:

This paper contributes to the literature on the intra-firm diffusion of innovations by investigating the factors that affect a firm's decision to adopt and use sets of complementary innovations. We define as complementary those innovations whose joint use generates superadditive gains, i.e. the gain from joint adoption is higher than the sum of the gains from adopting each innovation in isolation. From a theoretical perspective, we present a simple decision model whereby the firm decides 'whether' and 'how much' to invest in each of the innovations under investigation, based upon the expected profit gain from each possible combination of adoption and use. The model shows how the extent of complementarity among the innovations can affect the firm's profit gains and therefore the likelihood that the firm will adopt these innovations jointly rather than individually. From an empirical perspective, we focus on four sets of management practices, namely operating (OMP), monitoring (MMP), targets (TMP) and incentives (IMP) management practices. We show that these sets of practices, although to differing extents, are complementary to each other. We then construct a synthetic indicator of the depth of their use. The resulting intra-firm index reflects not only the number of practices adopted but also the depth of their individual use and the extent of their complementarity. The decision model is tested empirically using evidence on the adoption behaviour of a sample of 1,238 UK establishments in the 2004 Workplace Employment Relations Survey (WERS). Our empirical results show that the intra-firm profitability-based model explains more of the variability of joint adoption than models based upon the adoption and use of individual practices.
We also investigate whether a number of firm-specific and market characteristics, by affecting the size of the gains that the joint adoption of innovations can generate, drive the intensity of use of the four innovations. We find that establishment size, foreign ownership, exposure to an international market and the degree of homogeneity of the final product are important determinants of the intensity of joint adoption of the four innovations. Most importantly, our results show that the factors that the economics of innovation literature has identified as affecting the intensity of use of a technological innovation also affect the intensity of use of sets of innovative management practices. However, these factors explain only a small part of the diversity of joint adoption and use by the firms in the sample.
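The superadditivity condition defined above (the gain from joint adoption exceeding the sum of the stand-alone gains) can be stated in a few lines of Python; all payoff figures here are hypothetical:

```python
# Complementarity as superadditivity: practices are complementary
# when the gain from joint adoption exceeds the sum of the gains
# from adopting each practice in isolation. Payoffs are hypothetical.

def is_complementary(gain_joint, gains_alone):
    """True if joint adoption generates superadditive gains."""
    return gain_joint > sum(gains_alone)

# Hypothetical profit gains (arbitrary units) for two practices,
# e.g. monitoring (MMP) and incentives (IMP) management practices.
gain_mmp_alone = 3.0
gain_imp_alone = 2.0
gain_joint = 7.5

print(is_complementary(gain_joint, [gain_mmp_alone, gain_imp_alone]))
```

Under the paper's decision model, the firm would then pick the combination of adoption and use with the highest expected profit gain, so a larger superadditive margin raises the likelihood of joint rather than individual adoption.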

Relevance: 100.00%

Abstract:

The topography of the visual evoked magnetic response (VEMR) to pattern-reversal stimulation was studied in four normal subjects using a single-channel BTI magnetometer. VEMRs were recorded from 20 locations over the occipital scalp and the topographic distribution of the most consistent component (P100M) was studied. A single dipole in a sphere model was fitted to the data. Topographic maps were similar when recorded two months apart on the same subject with the same stimulus. Half-field (HF) stimulation elicited responses from sources on the medial surface of the calcarine fissure, mainly in the contralateral hemisphere, as predicted by the cruciform model. The full-field (FF) responses to large checks were approximately the sum of the HF responses. However, with small checks, FF stimulation appeared to activate a different combination of sources from the two HFs. In addition, HF topography was more consistent between subjects than FF topography for small check sizes. Topographic studies of the VEMR may help to explain the analogous visual evoked electrical response and will be essential for defining optimal recording positions for clinical applications.

Relevance: 100.00%

Abstract:

The purpose of this study is to increase our knowledge of the nature of the surface properties of polymeric materials and to improve our understanding of how these factors influence the deposition of proteins to form a reactive biological/synthetic interface. A number of surface-analytical techniques were identified as being of potential benefit to this investigation and were included in a multidisciplinary research program. Cell adhesion in culture was the primary biological sensor of surface properties, and it showed that the cell response to different materials can be modified by adhesion-promoting protein layers: cell adhesion is a protein-mediated event. A range of surface rugosity can be produced on polystyrene, and the results presented here show that surface rugosity does not play a major role in determining a material's cell adhesiveness. Contact angle measurements showed that surface energy (specifically the polar fraction) is important in promoting cell spreading on surfaces. The immunogold labelling technique indicated that there were small but noticeable differences between the distributions of proteins on a range of surfaces. This study has shown that surface analysis techniques have different sensitivities in terms of detection limits and depth probed, and these are important in determining the usefulness of the information obtained. The techniques provide information on differing aspects of the biological/synthetic interface, and the consequence of this is that a range of techniques is needed in any full study of so complex a field as biomaterials.

Relevance: 100.00%

Abstract:

This thesis concerns cell adhesion to polymer surfaces, with an experimental emphasis on hydrogels. The thesis begins with a review of the literature and a synthesis of recent evidence to describe the process of cell adhesion in a given situation. The importance of understanding integrin-adhesion protein interactions and adhesion protein-surface interactions is emphasised. The experimental chapters describe three areas of investigation. Firstly, in vitro cell culture techniques are used to explore a variety of surfaces, including polyethylene glycol methacrylate (PEGMA) substituted hydrogels, sequence-distribution-modified hydrogels and worn contact lenses. Cell adhesion to PEGMA-substituted gels is found to decrease with increasing polyethylene oxide chain length, and correlations are made between sequence distribution and adhesion. Worn contact lenses are investigated for their cell adhesion properties in the presence of antibodies to specific adhesion proteins, demonstrating the presence of vitronectin and fibronectin on the lenses. The second experimental chapter addresses divalent cation regulation of integrin-mediated cell adhesion. Several cell types and various cations are used. Zinc, previously not regarded as an important cation in the process, is found to inhibit 3T3 cell adhesion to vitronectin that is promoted by other divalent cations. The final experimental chapter concerns cell adhesion and growth on macroporous hydrogels. A variety of freeze-thaw-formed porous gels are investigated and found generally to enhance the cell growth rate. Interpenetrating network-based gels (IPNs) are made porous by elution of dextrin particles of varying size and loading density. These materials provide the basis for synthetic cartilage. Cartilage cells (chondrocytes) plated onto the surface of the porous IPN materials maintain a rounded shape, and hence phenotypic function, when a critical pore size and density is achieved.
In this way a prospective implant, made porous at the perpendicular edges contacting natural cartilage, can be both mechanically stabilised and able to encourage the maintenance of normal matrix production at the tissue interface.

Relevance: 100.00%

Abstract:

The literature relating to sieve plate liquid extraction columns and the relevant hydrodynamic phenomena has been surveyed. Mass transfer characteristics during drop formation, rise and coalescence, and related models, were also reviewed. Important design parameters, i.e. flooding, dispersed-phase hold-up, drop size distribution, mean drop size, the height of the coalescence/flocculation zone beneath a plate and jetting phenomena, were investigated under non-mass-transfer and mass-transfer conditions in a 0.45 m diameter, 2.3 m high sieve plate column. This column had provision for four different plate designs, with variable plate spacing and downcomer heights, and the system used was Clairsol '350' (dispersed) - acetone - deionised water (continuous), with either direction of mass transfer. Drop size distributions were best described by the functions proposed by Gal-or and then by Mugele-Evans. Using data from this study and the literature, correlations were developed for dispersed-phase hold-up, mean drop size in the preferred jetting regime and in the non-jetting regime, and coalescence zone height. A method for calculating the theoretical overall mass transfer coefficient that allows for the range of drop sizes encountered in the column gave the best fit to the experimental data. This method uses the drop size distribution diagram to estimate the volume percentages of stagnant, circulating and oscillating drops in the drop population. The overall coefficient Kcal was then calculated as the fractional sum of the predicted individual single-drop coefficients weighted by their proportions in the drop population. In a comparison between the experimental and calculated overall mass transfer coefficients for cases in which all the drops were in the oscillating regime (i.e. the 6.35 mm hole size plate), and for transfer from the dispersed (d) to the continuous (c) phase, the film coefficient kd predicted from the Rose-Kintner correlation, together with kc from that of Garner-Tayeban, gave the best representation.
Droplets from the 3.175 mm hole size plate were of a size to be mainly circulating and oscillating; a combination of kd from the Kronig-Brink (circulating) and Rose-Kintner (oscillating) correlations with the respective kc gave the best agreement. The optimum operating conditions for the sieve plate column were identified and a procedure proposed for design from basic single-drop data.
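The fractional-sum calculation described above reduces to a volume-weighted average of the single-drop coefficients. A minimal sketch; the fractions and coefficient values are illustrative, not those measured in the column:

```python
# Overall mass-transfer coefficient as the fractional sum of
# single-drop coefficients, weighted by the volume fraction of
# stagnant, circulating and oscillating drops in the population.
# All numbers below are illustrative.

def overall_coefficient(fractions, coefficients):
    """K = sum_i f_i * k_i, with the volume fractions summing to 1."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return sum(f * k for f, k in zip(fractions, coefficients))

# Hypothetical volume fractions and single-drop coefficients (m/s)
# for the stagnant, circulating and oscillating regimes, e.g. from
# the Kronig-Brink and Rose-Kintner correlations respectively.
fractions    = [0.10, 0.35, 0.55]
coefficients = [2e-6, 8e-6, 2.5e-5]

K = overall_coefficient(fractions, coefficients)
print(K)
```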

Relevance: 100.00%

Abstract:

This thesis addresses the economic impacts of construction safety in Greece. The research involved the development of a methodology for determining the overall cost of safety, namely the sum of the costs of accidents and the costs of safety management failures (with or without accident), including image cost. Hitherto, very little work has been published on the cost of accidents in practical case studies and, to the author's knowledge, no research has been published that seeks to determine the costs of prevention in real cases. The methodology developed is new, transparent, and capable of being replicated and adapted to other employment sectors and other countries. It was applied to three construction projects in Greece to test the safety costing methodology and to offer some preliminary evidence on the business case for safety. The survey work took place between 1999 and 2001 and involved 27 months of costing work on site. The study focuses on the overall costs of safety that apply to the main (principal) contractor. The methodology is supported by 120 discrete cost categories and by systematic criteria for determining which costs are included (counted) in the overall cost of safety. A quality system (in compliance with the ISO 9000 series) was developed to support the work and ensure the accuracy of data gathering. The results offer some support for the business case for safety, and good support for the economics of safety, as they demonstrate the need for cost-effectiveness. Subject to important caveats, the projects that appeared to manage safety more cost-effectively achieved the lowest overall safety cost. Nevertheless, the results are significantly lower than those of other published works, for two main reasons: first, costs due to damage with no potential for injury were not included, and second, only costs to the main contractor were considered. The study's results are discussed and compared with other published works.
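The overall-cost definition above is a straightforward aggregation over discrete cost categories. A minimal sketch; the category names and figures are hypothetical, not taken from the three Greek projects:

```python
# Overall cost of safety per project = sum of accident costs plus
# costs of safety management failures, aggregated over discrete
# cost categories. Category names and figures are hypothetical.

def overall_safety_cost(accident_costs, failure_costs):
    """Sum all counted cost categories for one project."""
    return sum(accident_costs.values()) + sum(failure_costs.values())

accidents = {"first aid": 1200.0, "lost time": 8400.0}
failures  = {"rework": 2600.0, "delays": 1900.0, "image": 700.0}

print(overall_safety_cost(accidents, failures))
```

In the thesis's methodology this aggregation would run over the full set of 120 cost categories, with the inclusion criteria deciding which observed costs are counted.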

Relevance: 100.00%

Abstract:

We report observations of the diffraction pattern produced when a nematic liquid crystal is illuminated with two equal-power, high-intensity beams of light from an Ar+ laser. The time evolution of the pattern is followed from the initial production of higher diffraction orders to a final striking display arising from the self-diffraction of the two incident beams. The experimental results are described to a good approximation by a model assuming a phase distribution at the output plane of the liquid crystal in the form of the sum of a gaussian and a sinusoid.
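A model of this form can be sketched numerically: the far-field diffraction pattern is the squared magnitude of the Fourier transform of exp(iφ(x)), with the phase φ the sum of a gaussian and a sinusoid. The parameter values below are illustrative, not fitted to the experiment:

```python
# Far-field diffraction from a phase screen phi(x) modelled as a
# gaussian plus a sinusoid. The pattern is |FT{exp(i*phi)}|^2,
# computed here with a direct DFT. Parameters are illustrative.

import cmath, math

N = 256
A_g, w = 2.0, 0.15   # gaussian amplitude (rad) and width
A_s, Kg = 1.5, 8     # sinusoid amplitude (rad) and grating order

# Phase at the output plane of the liquid crystal, x in [-0.5, 0.5)
field = []
for n in range(N):
    x = n / N - 0.5
    phi = (A_g * math.exp(-x * x / (2 * w * w))
           + A_s * math.sin(2 * math.pi * Kg * x))
    field.append(cmath.exp(1j * phi))

def intensity(m):
    """Normalised intensity at spatial frequency m (direct DFT)."""
    s = sum(f * cmath.exp(-2j * math.pi * m * n / N)
            for n, f in enumerate(field))
    return abs(s) ** 2 / N ** 2

# Strength of the zeroth and first few diffraction orders of the grating
orders = [round(intensity(m * Kg), 4) for m in range(4)]
print(orders)
```

With only the sinusoidal term, the order strengths would follow the squared Bessel functions J_m(A_s)^2 of a pure phase grating; the gaussian term redistributes energy around each order, qualitatively reproducing the evolution toward self-diffraction.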

Relevance: 100.00%

Abstract:

This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG encompasses both the methods for measuring the minute magnetic flux variations at the scalp that result from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. Because of its unique mode of action, directly measuring neuronal activity via the resulting magnetic field fluctuations, MEG possesses a number of useful qualities that could make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands, the so-called alpha, beta, delta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics generating the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way, as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio obtained. Because the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, averaging has a number of notable drawbacks: in particular, it is difficult to synchronise high-frequency activity that might be of interest, and such signals are often cancelled out by the averaging process. Further problems are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the procurement and maintenance of these machines. In this project we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas, from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.

Relevance: 100.00%

Abstract:

Objective. To determine whether copper incorporated into hospital ward furnishings and equipment can reduce their surface microbial load. Design. A crossover study. Setting. An acute care medical ward with 19 beds at a large university hospital. Methods. Fourteen types of frequent-touch items made of copper alloy were installed in various locations on an acute care medical ward. These included door handles and push plates, toilet seats and flush handles, grab rails, light switches and pull-cord toggles, sockets, overbed tables, dressing trolleys, commodes, taps, and sink fittings. Their surfaces and those of equivalent standard items on the same ward were sampled once weekly for 24 weeks. The copper and standard items were switched over after 12 weeks of sampling to reduce bias in usage patterns. Total aerobic microbial counts and the presence of indicator microorganisms were determined. Results. Eight of the 14 copper item types had microbial counts on their surfaces that were significantly lower than the counts on standard materials. The other 6 copper item types also had reduced microbial numbers on their surfaces compared with standard items, but the reduction did not reach statistical significance. Indicator microorganisms were recovered from both types of surfaces; however, significantly fewer copper surfaces were contaminated with vancomycin-resistant enterococci, methicillin-susceptible Staphylococcus aureus, and coliforms than standard surfaces. Conclusions. Copper alloys (greater than or equal to 58% copper), when incorporated into various hospital furnishings and fittings, reduce the surface microbial load. The use of copper in combination with optimal infection-prevention strategies may therefore further reduce the risk that patients will acquire infection in healthcare environments.

Relevance: 100.00%

Abstract:

The principal theme of this thesis is the advancement and expansion of ophthalmic research through collaboration between professional engineers and professional optometrists. The aim has been to develop new and novel approaches and solutions to contemporary problems in the field. The work is subdivided into three areas of investigation: 1) high-technology systems, 2) modification of current systems to increase functionality, and 3) development of smaller, more portable and cost-effective systems. High-technology systems: A novel high-speed optical coherence tomography (OCT) system with integrated simultaneous high-speed photography was developed, achieving better operational speed than is currently available commercially. The mechanical design of the system featured a novel 8-axis alignment system. A full set of capture, analysis and post-processing software was developed, providing custom analysis systems for ophthalmic OCT imaging and expanding the current capabilities of the technology. A large clinical trial was undertaken to test the dynamics of contact lens edge interaction with the cornea in vivo. The interaction between lens edge design, lens base curvature, post-insertion times and edge positions was investigated. A novel method for the correction of optical distortion when assessing lens indentation was also demonstrated. Modification of current systems: A commercial autorefractor, the WAM-5500, was modified with additional hardware and a custom software and firmware solution to produce a system capable of measuring the dynamic accommodative response to various stimuli in real time. A novel software package to control the data capture process was developed, allowing real-time monitoring of data by the practitioner and adding considerable functionality to the instrument beyond the standard system.
The device was used to assess the differences in accommodative response between subjects who had worn UV-blocking contact lenses for 5 years and a control group who had not. While the standard static measurement of accommodation showed no differences between the two groups, the UV-blocking group did show better (faster) accommodative rise and fall times, demonstrating the benefits of modifying this commercially available instrumentation. Portable and cost-effective systems: A new instrument was developed to expand the capability of the now defunct Keeler Tearscope. The device provides a similar capability, allowing observation of the mires reflected from the tear film surface, but with the added advantage of being able to record the observations. It was tested against the Tearscope and other tear film break-up techniques, demonstrating its potential. In conclusion: This work has successfully demonstrated the advantages of interdisciplinary research between engineering and ophthalmics, providing new and novel instrumented solutions as well as adding to the sum of scientific understanding in the ophthalmic field.