580 results for Placido disks
Abstract:
Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural Network modelling was used to substantiate the results. Topographical AFM scans of disk and slider were exported numerically to file and explored extensively. Techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans, allowing contact analysis of many types of surface to be performed. Conclusions were drawn about which disk characteristics most affected contacts and hence friction, stiction and wear.
Abstract:
A wire drive pulse echo method of measuring the spectrum of solid bodies is described. Using an 's' plane representation, a general analysis of the transient response of such solids has been carried out. This was used for the study of the stepped amplitude transient of high order modes of disks and for the case where there are two adjacent resonant frequencies. The techniques developed have been applied to the measurement of the elasticities of refractory materials at high temperatures. In the experimental study of the high order in-plane resonances of thin disks it was found that the energy travelled at the edge of the disk, and this initiated the work on one dimensional Rayleigh waves. Their properties were established for the straight edge condition by following an analysis similar to that of the two dimensional case. Experiments were then carried out on the velocity dispersion of various circuits including the disk and a hole in a large plate - the negative curvature condition. Theoretical analysis established the phase and group velocities for these cases, and experimental tests on aluminium and glass gave good agreement with theory. At high frequencies all velocities approach that of the one dimensional Rayleigh waves. When applied to crack detection it was observed that a signal burst travelling round a disk showed an anomalous amplitude effect. In certain cases the signal which travelled the greater distance had the greater amplitude. An experiment was designed to investigate the phenomenon and it was established that the energy travelled in two modes with different velocities. It was found by analysis that as well as the Rayleigh surface wave on the edge, a second mode travelling at about the shear velocity was excited, and the calculated results gave reasonable agreement with the experiments.
Abstract:
The origin of hydrodynamic turbulence in rotating shear flows is investigated, with particular emphasis on flows whose angular velocities decrease but whose specific angular momenta increase with increasing radial coordinate. Such flows are Rayleigh stable but must be turbulent in order to explain observed data. This mismatch between linear theory and observations/experiments is more severe when any hydromagnetic/magnetohydrodynamic instability, and the corresponding turbulence therein, is ruled out. The present work explores the effect of stochastic noise on such hydrodynamic flows. We focus on a small section of such a flow, which is essentially a plane shear flow supplemented by the Coriolis effect; this also mimics a small section of an astrophysical accretion disk. It is found that such stochastically driven flows exhibit large temporal and spatial correlations of perturbation velocities, and hence large energy dissipations, that presumably generate instability. A range of angular velocity profiles (for the steady flow), from constant angular momentum to constant circular velocity, is explored. It is shown that the growth and roughness exponents calculated from the contour (envelope) of the perturbed flows are all identical, revealing a unique universality class for the stochastically forced hydrodynamics of rotating shear flows. This work, to the best of our knowledge, is the first attempt to understand the origin of instability and turbulence in three-dimensional Rayleigh stable rotating shear flows by introducing additive stochastic noise into the underlying linearized governing equations. This has important implications in resolving the turbulence problem in astrophysical hydrodynamic flows such as accretion disks.
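The driving mechanism described above, additive noise sustaining perturbation energy in a linearly stable (decaying) system, can be illustrated with a toy one-mode model. This is a sketch under assumed parameter values, not the paper's actual three-dimensional linearized equations:

```python
import math
import random

def noise_sustained_energy(steps=20000, dt=1e-3, damping=1.0, noise_amp=1.0, seed=0):
    """Euler-Maruyama integration of dx = -damping * x dt + noise_amp dW.
    Without noise the mode decays to zero; with additive white noise the
    mean energy settles near the stationary variance
    noise_amp**2 / (2 * damping). All parameter values are illustrative."""
    rng = random.Random(seed)
    x = 0.0
    second_moment = 0.0
    for _ in range(steps):
        # one Euler-Maruyama step: deterministic decay plus a noise kick
        x += -damping * x * dt + noise_amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        second_moment += x * x
    return second_moment / steps
```

For the defaults above the stationary variance is 0.5, so a long run returns a value in that vicinity, illustrating how stochastic forcing keeps perturbation energy finite in an otherwise decaying linear system.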
Abstract:
The origin of linear instability resulting in rotating sheared accretion flows has remained a controversial subject for a long time. While some explanations of such non-normal transient growth of disturbances in the Rayleigh stable limit were available for magnetized accretion flows, similar instabilities in the absence of magnetic perturbations remained unexplained. This dichotomy was resolved in two recent publications by Chattopadhyay and co-workers [Mukhopadhyay and Chattopadhyay, J. Phys. A 46, 035501 (2013), 10.1088/1751-8113/46/3/035501; Nath, Phys. Rev. E 88, 013010 (2013), 10.1103/PhysRevE.88.013010] where it was shown that such instabilities, especially for nonmagnetized accretion flows, were introduced through interaction of the inherent stochastic noise in the system (even a "cold" accretion flow at 3000 K is too "hot" in the statistical parlance and is capable of inducing strong thermal modes) with the underlying Taylor-Couette flow profiles. Both studies, however, excluded the additional energy influx (or efflux) that could result from nonzero cross correlation of a noise perturbing the velocity flow, say, with the noise that is driving the vorticity flow (or equivalently the magnetic field and magnetic vorticity flow dynamics). Through the introduction of such a time symmetry violating effect, in this article we show that nonzero noise cross correlations essentially renormalize the strength of temporal correlations. Apart from an overall boost in the energy rate (both for spatial and temporal correlations, and hence in the ensemble averaged energy spectra), this results in mutual competition in growth rates of affected variables, often resulting in suppression of oscillating Alfvén waves at small times while leading to faster saturations at relatively longer time scales. The effects are seen to be more pronounced with magnetic field fluxes, where the noise cross correlation magnifies the strength of the field concerned.
Another remarkable feature, noted specifically for the autocorrelation functions, is the removal of energy degeneracy in the temporal profiles of fast-growing non-normal modes, leading to faster saturation with minimal oscillations. These results, including those presented in the previous two publications, now convincingly explain subcritical transition to turbulence in the linear limit for all possible situations, and could serve as the benchmark for nonlinear stability studies in Keplerian accretion disks.
Abstract:
Objective: Development and validation of a selective and sensitive LC-MS method for the determination of methotrexate polyglutamates in dried blood spots (DBS). Methods: DBS samples [spiked or patient samples] were prepared by applying blood to Guthrie cards, which were then dried at room temperature. The method utilised 6-mm disks punched from the DBS samples (equivalent to approximately 12 μl of whole blood). The simple treatment procedure was based on protein precipitation using perchloric acid followed by solid phase extraction using MAX cartridges. The extracted sample was chromatographed using a reversed phase system involving an Atlantis T3-C18 column (3 μm, 2.1 × 150 mm) preceded by an Atlantis guard column of matching chemistry. Analytes were subjected to LC-MS analysis using positive electrospray ionization. Key Results: The method was linear over the range 5-400 nmol/L. The limits of detection and quantification were 1.6 and 5 nmol/L for individual polyglutamates and 1.5 and 4.5 nmol/L for total polyglutamates, respectively. The method was applied successfully to DBS finger-prick samples from 47 paediatric patients, and the results were confirmed against concentrations measured in matched RBC samples using a conventional HPLC-UV technique. Conclusions and Clinical Relevance: The methodology has potential for application in a range of clinical studies (e.g. pharmacokinetic evaluations or medication adherence assessment) since it is minimally invasive and easy to perform, potentially allowing parents to take blood samples at home. The feasibility of DBS sampling can be of major value for future clinical trials or clinical care in paediatric rheumatology. © 2014 Hawwa et al.
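As background on how figures like the 5-400 nmol/L linear range and the LOD/LOQ values are typically derived, here is a minimal sketch using an ordinary least-squares calibration line and the common ICH-style formulas (LOD = 3.3σ/S, LOQ = 10σ/S). The calibration numbers below are invented for illustration, not the study's data:

```python
def linear_fit(x, y):
    # ordinary least-squares slope and intercept
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def lod_loq(sigma_blank, slope):
    # ICH-style estimates from blank noise and calibration slope
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# hypothetical calibration standards spanning a 5-400 nmol/L range
conc = [5, 25, 50, 100, 200, 400]
area = [2.0 * c + 1.0 for c in conc]      # idealized detector response for the sketch
slope, intercept = linear_fit(conc, area)
lod, loq = lod_loq(sigma_blank=1.0, slope=slope)
```

In a real validation the response factors, blank noise, and acceptance criteria would of course come from measured replicates rather than an assumed line.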
Abstract:
A sizeable amount of the testing in eye care requires either the identification of targets, such as letters, to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. Meanwhile, the contrast sensitivity app made use of a bit stealing technique and a swept frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone device to allow free space amplitude of accommodation measurement. A new geometrical model of the tear film and a ray tracing simulation of a Placido disc topographer were produced to provide insights on the effect of tear film breakdown on ophthalmic images.
Furthermore, a new computer vision system, which used a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. There still remain questions over the validity of using a swept frequency sine-wave target to assess patients' contrast sensitivity functions, as no clinical test provides the range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than those of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
Abstract:
Two of the greatest crises that civilisation faces in the 21st century are the predicted rapid increases in the ageing population and in levels of metabolic disorders such as obesity and type 2 diabetes. A growing amount of evidence now supports the notion that energy balance is a key determinant not only in metabolism but also in the process of cellular ageing. Much of the genetic evidence for a metabolic activity-driven ageing process has come from model organisms such as worms and flies, where inactivation of the insulin receptor signalling cascade prolongs lifespan. At its most simplistic, this poses a conundrum for ageing in humans: can reduced insulin receptor signalling really promote lifespan, and does this relate to the insulin resistance seen in ageing? In higher animals, caloric restriction studies have confirmed a longer lifespan when daily calorie intake is reduced to 60% of normal energy requirement. This suggests that for humans it is energy excess which is the likely driver of metabolic ageing. Interventions that interfere with the metabolic fate of nutrients offer a potentially important target for delaying biological ageing.
Abstract:
Storage is a central part of computing. Driven by an exponentially increasing content generation rate and a widening performance gap between memory and secondary storage, researchers are on a perennial quest for further innovation. This has resulted in novel ways to "squeeze" more capacity and performance out of current and emerging storage technology. Adding intelligence and leveraging new types of storage devices has opened the door to a whole new class of optimizations to save cost, improve performance, and reduce energy consumption. In this dissertation, we first develop, analyze, and evaluate three storage extensions. Our first extension tracks application access patterns and writes data in the way individual applications most commonly access it, to benefit from the sequential throughput of disks. Our second extension uses a lower power flash device as a cache to save energy and turn off the disk during idle periods. Our third extension is designed to leverage the characteristics of both disks and solid state devices by placing data in the most appropriate device to improve performance and save power. In developing these systems, we learned that extending the storage stack is a complex process. Implementing new ideas incurs a prolonged and cumbersome development process and requires developers to have advanced knowledge of the entire system to ensure that extensions accomplish their goal without compromising data recoverability. Furthermore, storage administrators are often reluctant to deploy specific storage extensions without understanding how they interact with other extensions and whether the extension ultimately achieves the intended goal. We address these challenges by using a combination of approaches. First, we simplify the storage extension development process with system-level infrastructure that implements core functionality commonly needed for storage extension development.
Second, we develop a formal theory to assist administrators in deploying storage extensions while guaranteeing that the given high level goals are satisfied. There are, however, some cases for which our theory is inconclusive. For such scenarios we present an experimental methodology that allows administrators to pick the extension that performs best for a given workload. Our evaluation demonstrates the benefits of both the infrastructure and the formal theory.
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade. Furthermore, energy production is also currently one of the primary sources of pollution. These concerns are becoming more important in data-centers. As more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it is also important in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and re-charge less frequently. Saving energy in computer systems is a challenging problem. Improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance.
First, we propose a new technique to achieve fine grained energy proportionality in multi-disk systems; second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy; finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy efficient systems.
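The second contribution, a flash cache that increases disk idleness, can be sketched as follows. This is a drastically simplified toy model (the class name and the size-triggered flush policy are assumptions, not the thesis design), meant only to show how a low-power cache lets many requests complete before the disk must spin up:

```python
class FlashCachedStorage:
    """Toy model: a low-power flash cache absorbs writes so the disk can
    stay spun down; dirty blocks are flushed in one batch, so one spin-up
    services many writes."""

    def __init__(self, flush_threshold=4):
        self.flash = {}           # low-power write-back cache
        self.disk = {}            # high-power backing store
        self.disk_spinups = 0     # each flush or read miss forces a spin-up
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.flash[block] = data                  # absorbed by flash; disk stays idle
        if len(self.flash) >= self.flush_threshold:
            self.flush()

    def read(self, block):
        if block in self.flash:                   # served from flash: no spin-up
            return self.flash[block]
        self.disk_spinups += 1                    # cache miss: disk must spin up
        return self.disk.get(block)

    def flush(self):
        self.disk_spinups += 1                    # one spin-up covers the whole batch
        self.disk.update(self.flash)
        self.flash.clear()
```

With a threshold of four, three writes cost zero spin-ups; the fourth triggers a single batched flush, which is the idleness-extending effect the abstract describes.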
Changes in mass and nutrient content of wood during decomposition in a south Florida mangrove forest
Abstract:
1. Large pools of dead wood in mangrove forests following disturbances such as hurricanes may influence nutrient fluxes. We hypothesized that decomposition of wood of mangroves from Florida, USA (Avicennia germinans, Laguncularia racemosa and Rhizophora mangle), and the consequent nutrient dynamics, would depend on species, location in the forest relative to freshwater and marine influences and whether the wood was standing, lying on the sediment surface or buried. 2. Wood disks (8–10 cm diameter, 1 cm thick) from each species were set to decompose at sites along the Shark River, either buried in the sediment, on the soil surface or in the air (above both the soil surface and high tide elevation). 3. A simple exponential model described the decay of wood in the air, and neither species nor site had any effect on the decay coefficient during the first 13 months of decomposition. 4. Over 28 months of decomposition, buried and surface disks decomposed following a two-component model, with labile and refractory components. Avicennia germinans had the largest labile component (18 ± 2% of dry weight), while Laguncularia racemosa had the lowest (10 ± 2%). Labile components decayed at rates of 0.37–23.71% month−1, while refractory components decayed at rates of 0.001–0.033% month−1. Disks decomposing on the soil surface had higher decay rates than buried disks, but both were higher than disks in the air. All species had similar decay rates of the labile and refractory components, but A. germinans exhibited faster overall decay because of a higher proportion of labile components. 5. Nitrogen content generally increased in buried and surface disks, but there was little change in N content of disks in the air over the 2-year study. Between 17% and 68% of total phosphorus in wood leached out during the first 2 months of decomposition, with buried disks having the greater losses, P remaining constant or increasing slightly thereafter. 6. 
Newly deposited wood from living trees was a short-term source of N for the ecosystem but, by the end of 2 years, had become a net sink. Wood, however, remained a source of P for the ecosystem. 7. As in other forested ecosystems, coarse woody debris can have a significant impact on carbon and nutrient dynamics in mangrove forests. The prevalence of disturbances, such as hurricanes, that can deposit large amounts of wood on the forest floor accentuates the importance of downed wood in these forests.
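The two-component model in point 4 can be written as a small function; the rate constants below are placeholders for illustration, not the fitted values from the study:

```python
import math

def two_component_decay(t, labile_frac, k_labile, k_refractory):
    """Fraction of initial dry mass remaining after t months under a
    two-component model: a fast-decaying labile pool plus a slowly
    decaying refractory pool."""
    refractory_frac = 1.0 - labile_frac
    return (labile_frac * math.exp(-k_labile * t)
            + refractory_frac * math.exp(-k_refractory * t))

# illustrative parameters only: an 18% labile pool (as for A. germinans),
# with assumed rate constants per month
mass_start = two_component_decay(0, labile_frac=0.18, k_labile=0.2, k_refractory=0.0003)
mass_end = two_component_decay(28, labile_frac=0.18, k_labile=0.2, k_refractory=0.0003)
```

At t = 0 the two pool fractions sum to 1; mass then declines with a fast early phase (labile pool) followed by a long slow tail (refractory pool), which is why a species with a larger labile fraction shows faster overall decay even when per-pool rates are similar.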
Abstract:
Unequal improvements in processor and I/O speeds have caused many applications, such as databases and operating systems, to become increasingly I/O bound. Many schemes, such as disk caching and disk mirroring, have been proposed to address the problem. In this thesis we focus only on disk mirroring. In disk mirroring, a logical disk image is maintained on two physical disks, allowing a single disk failure to be transparent to application programs. Although disk mirroring improves data availability and reliability, it has two major drawbacks. First, writes are expensive because both disks must be updated. Second, load balancing during failure mode operation is poor because all requests are serviced by the surviving disk. Distorted mirrors was proposed to address the write problem and interleaved declustering to address the load balancing problem. In this thesis we perform a comparative study of these two schemes under various operating modes. In addition we also study traditional mirroring to provide a common basis for comparison.
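The mirroring trade-offs described above, duplicated writes, reads servable by either disk in normal mode but only by the survivor in failure mode, can be sketched in a few lines. This is a toy in-memory model of plain mirroring, not of the distorted mirrors or interleaved declustering schemes studied:

```python
class MirroredDisk:
    """Toy mirrored volume: one logical image kept on two replicas."""

    def __init__(self):
        self.disks = [{}, {}]         # two physical replicas of the logical image
        self.failed = [False, False]

    def write(self, block, data):
        # writes are expensive: every live replica must be updated
        for i, disk in enumerate(self.disks):
            if not self.failed[i]:
                disk[block] = data

    def read(self, block, hint=0):
        # normal mode: reads can be balanced across disks (via `hint`);
        # failure mode: all reads fall on the surviving disk
        for i in (hint, 1 - hint):
            if not self.failed[i]:
                return self.disks[i].get(block)
        raise RuntimeError("both replicas failed")
```

A single failure is transparent (the read path silently falls through to the survivor), but note how all load then concentrates on one disk, which is exactly the imbalance interleaved declustering targets.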
Abstract:
This study aimed to analyze the biological response of titanium surfaces modified by Ar + N2 + H2 plasma. Grade II titanium disks received different Ar + N2 + H2 plasma surface treatments, constituting seven groups, including one of only polished samples used as the standard. Before and after treatment the samples were evaluated in terms of topography, crystal structure and wettability, using atomic force microscopy, X-ray diffraction, Raman spectroscopy and the sessile drop test, respectively. Platelet-rich plasma (PRP) was applied to the modified surfaces in culture plates. Images obtained by scanning electron microscopy of the adhered platelets were analyzed to verify the behavior of platelets under the different experimental conditions. We verified that the addition of H2 to the plasma atmosphere resulted in rougher surfaces, with round tops. These surfaces, in contrast to surfaces treated with a high concentration of N2, are less prone to platelet aggregation and, consequently, to the formation of thrombi when applied in biomedical devices.
Abstract:
The discovery of giant stars in the spectral regions G and K showing moderate to rapid rotation and single behavior, namely with constant radial velocity, represents an important topic of study in stellar astrophysics. Indeed, such anomalous rotation clearly violates the theoretical predictions on the evolution of stellar rotation, since in evolved evolutionary stages single stars are expected to have low rotation due to evolutionary expansion. This property is well established from the observational point of view, with different studies showing that for single giant stars of spectral types G and K values of the rotation are typically smaller than 5 km s−1. This Thesis seeks an effective contribution to solving the paradigm described above, aiming to search for single stars of spectral types G and K with anomalous rotation, typically moderate to rapid, in other luminosity classes. In this context, we analyzed a large stellar sample consisting of 2010 apparently single stars of luminosity classes IV, III, II and Ib with spectral types G and K, with rotational velocity v sin i and radial velocity measurements obtained from observations made by CORAVEL spectrometers. As a first result of impact, we discovered the presence of anomalous rotators also among subgiant, bright giant and supergiant stars, namely stars of luminosity classes IV, II and Ib, in contrast to previous studies, which reported anomalous rotators only among the classical giants of luminosity class III. Such a finding is of great significance because it allows us to analyze the presence of anomalous rotation at different intervals of mass, since the luminosity classes considered here cover a mass range between approximately 0.80 and 20 M⊙. In the present survey we discovered 1 subgiant, 9 giants, 2 bright giants and 5 Ib supergiants, in spectral regions G and K, with values of v sin i ≥ 10 km s−1 and single behavior.
This amount of 17 stars corresponds to a frequency of 0.8% of G and K single evolved stars with anomalous rotation in the mentioned luminosity classes, listed in the Bright Star Catalog, which is complete to visual magnitude 6.3. Given these new findings, based on a stellar sample complete in visual magnitude, as that of the Bright Star Catalog, we conducted a comparative statistical analysis using the Kolmogorov-Smirnov test, from which we conclude that the distributions of rotational velocity, v sin i, for single evolved stars with anomalous rotation in luminosity classes III and II are similar to the distributions of v sin i for spectroscopic binary systems with evolved components of the same spectral type and luminosity class. This result indicates that the process of coalescence between stars of a binary system might be a possible mechanism to explain the abnormal rotation observed in these rotators, at least among the giants and bright giants, where the excess rotation would be associated with the transfer of angular momentum to the star resulting from the merger. Another important result of this Thesis concerns the behavior of the infrared emission in most of the stars with anomalous rotation studied here, where 14 stars of the sample tend to have an IR excess compared with single stars with low rotation within their luminosity class. This property represents an additional link in the search for the physical mechanisms responsible for the observed abnormal rotation, since recent theoretical studies show that the accretion of objects of sub-stellar mass, such as brown dwarfs and giant planets, by the host star can significantly raise its rotation, also producing a circumstellar dust disk. This last result seems to point in that direction, since dust disks occurring during the stage of star formation are not expected to survive until the subgiant, giant and supergiant Ib stages.
In summary, in this Thesis, besides the discovery of single G and K evolved stars of luminosity classes IV, II and Ib with rotation anomalously high compared to what is predicted by stellar evolution theory, we also present the frequency of these abnormal rotators in a stellar sample complete to visual magnitude 6.3. We also present solid evidence that coalescence processes in stellar binary systems and the accretion of brown dwarfs or giant planets by the host stars can act as mechanisms responsible for the puzzling phenomenon of anomalous rotation in single evolved stars.
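The comparative analysis above relies on the two-sample Kolmogorov-Smirnov test. The statistic itself, the maximum distance between the two empirical distribution functions of v sin i, can be computed with a short stdlib-only sketch (the p-value machinery of a full statistics package is omitted):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    d = 0.0
    for v in a + b:
        # empirical CDF value of each sample at observation v
        cdf_a = bisect.bisect_right(a, v) / na
        cdf_b = bisect.bisect_right(b, v) / nb
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

Identical samples give 0 and fully separated samples give 1; in practice the statistic is compared against the KS critical value for the two sample sizes to decide whether the v sin i distributions are consistent with each other.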
Abstract:
Component-based Software Engineering (CBSE) and Service-Oriented Architecture (SOA) have become popular ways to develop software in recent years. During the life-cycle of a software system, several components and services can be developed, evolved and replaced. In production environments, the replacement of core components, such as databases, is often a risky and delicate operation in which several factors and stakeholders must be considered. A Service Level Agreement (SLA), according to ITILv3's official glossary, is "an agreement between an IT service provider and a customer", consisting of a set of measurable constraints that the service provider must guarantee to its customers. In practical terms, an SLA is a document that a service provider delivers to its consumers with minimum quality of service (QoS) metrics. This work is intended to assess and improve the use of SLAs to guide the transitioning process of databases in production environments. In particular, we propose SLA-based guidelines and a process to support migrations from a relational database management system (RDBMS) to a NoSQL one. Our study is validated by case studies.