879 results for Lead based paint
Abstract:
Purpose: To analyze and define the possible errors that may be introduced in keratoconus classification when the keratometric corneal power is used in such classification. Materials and methods: Retrospective study including a total of 44 keratoconus eyes. A comprehensive ophthalmologic examination was performed in all cases, which included a corneal analysis with the Pentacam system (Oculus). Classical keratometric corneal power (Pk), Gaussian corneal power (Pc Gauss), True Net Power (TNP) (Gaussian power neglecting the corneal thickness effect), and an adjusted keratometric corneal power (Pkadj) (keratometric power considering a variable keratometric index) were calculated. All cases included in the study were classified according to five different classification systems: Alió-Shabayek, Amsler-Krumeich, Rabinowitz-McDonnell, Collaborative Longitudinal Evaluation of Keratoconus (CLEK), and McMahon. Results: When Pk and Pkadj were compared, differences in the grading of keratoconus cases were found in 13.6% of eyes when the Alió-Shabayek or the Amsler-Krumeich systems were used. Likewise, grading differences were observed in 22.7% of eyes with the Rabinowitz-McDonnell and McMahon classification systems and in 31.8% of eyes with the CLEK classification system. All cases reclassified using Pkadj were assigned a less severe stage, indicating that the use of Pk may lead to a cornea being classified as keratoconic when it is actually normal. In general, the results obtained using Pkadj, Pc Gauss or the TNP were equivalent. Differences between Pkadj and Pc Gauss were within ±0.7D. Conclusion: The use of classical keratometric corneal power may lead to incorrect grading of the severity of keratoconus, with a trend toward more severe grading.
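For orientation, the sketch below restates the standard paraxial corneal-power definitions referred to in this abstract (classical keratometric power, Gaussian power, and True Net Power). The refractive indices and example radii are generic textbook assumptions, and the authors' variable keratometric index behind Pkadj is not reproduced here.

```python
# Minimal sketch of the standard corneal power definitions referenced above.
# Refractive indices and example radii are illustrative assumptions, and the
# authors' adjusted keratometric index (for Pkadj) is not reproduced here.

N_AIR = 1.000            # air
N_KERATOMETRIC = 1.3375  # classical keratometric index
N_CORNEA = 1.376         # corneal stroma
N_AQUEOUS = 1.336        # aqueous humour

def keratometric_power(r_ant_mm: float) -> float:
    """Classical keratometric power Pk from the anterior radius alone (D)."""
    return (N_KERATOMETRIC - N_AIR) / (r_ant_mm / 1000.0)

def surface_powers(r_ant_mm: float, r_post_mm: float) -> tuple[float, float]:
    """Anterior and posterior surface powers (D) from their radii (mm)."""
    p_ant = (N_CORNEA - N_AIR) / (r_ant_mm / 1000.0)
    p_post = (N_AQUEOUS - N_CORNEA) / (r_post_mm / 1000.0)
    return p_ant, p_post

def gaussian_power(r_ant_mm: float, r_post_mm: float, cct_um: float) -> float:
    """Gaussian (thick-lens) corneal power Pc Gauss, including pachymetry."""
    p_ant, p_post = surface_powers(r_ant_mm, r_post_mm)
    d = cct_um * 1e-6  # central corneal thickness in metres
    return p_ant + p_post - (d / N_CORNEA) * p_ant * p_post

def true_net_power(r_ant_mm: float, r_post_mm: float) -> float:
    """True Net Power: Gaussian power with the thickness term neglected."""
    p_ant, p_post = surface_powers(r_ant_mm, r_post_mm)
    return p_ant + p_post

if __name__ == "__main__":
    # Illustrative keratoconic-like geometry (hypothetical values).
    print(keratometric_power(6.5))          # Pk
    print(gaussian_power(6.5, 5.2, 480.0))  # Pc Gauss
    print(true_net_power(6.5, 5.2))         # TNP
```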
Abstract:
Hypertrophic cardiomyopathy (HCM) is a cardiovascular disease where the heart muscle is partially thickened and blood flow is - potentially fatally - obstructed. It is one of the leading causes of sudden cardiac death in young people. Electrocardiography (ECG) and Echocardiography (Echo) are the standard tests for identifying HCM and other cardiac abnormalities. The American Heart Association has recommended using a pre-participation questionnaire for young athletes instead of ECG or Echo tests due to considerations of cost and time involved in interpreting the results of these tests by an expert cardiologist. Initially we set out to develop a classifier for automated prediction of young athletes’ heart conditions based on the answers to the questionnaire. Classification results and further in-depth analysis using computational and statistical methods indicated significant shortcomings of the questionnaire in predicting cardiac abnormalities. Automated methods for analyzing ECG signals can help reduce cost and save time in the pre-participation screening process by detecting HCM and other cardiac abnormalities. Therefore, the main goal of this dissertation work is to identify HCM through computational analysis of 12-lead ECG. ECG signals recorded on one or two leads have been analyzed in the past for classifying individual heartbeats into different types of arrhythmia as annotated primarily in the MIT-BIH database. In contrast, we classify complete sequences of 12-lead ECGs to assign patients into two groups: HCM vs. non-HCM. The challenges and issues we address include missing ECG waves in one or more leads and the dimensionality of a large feature-set. We address these by proposing imputation and feature-selection methods. We develop heartbeat-classifiers by employing Random Forests and Support Vector Machines, and propose a method to classify full 12-lead ECGs based on the proportion of heartbeats classified as HCM. The results from our experiments show that the classifiers developed using our methods perform well in identifying HCM. Thus the two contributions of this thesis are the utilization of computational and statistical methods for discovering shortcomings in a current screening procedure and the development of methods to identify HCM through computational analysis of 12-lead ECG signals.
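A minimal sketch of the two-stage scheme described above: classify individual heartbeats, then label the whole 12-lead recording by the proportion of beats called HCM. The per-beat features, the Random Forest settings, and the 0.5 proportion threshold are assumptions, not the dissertation's actual pipeline.

```python
# Sketch of the two-stage idea described above: classify individual heartbeats,
# then label the whole 12-lead recording by the proportion of beats called HCM.
# Feature extraction, the 0.5 threshold, and the data layout are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_beat_classifier(beat_features: np.ndarray, beat_labels: np.ndarray):
    """Train a per-heartbeat classifier (1 = HCM beat, 0 = non-HCM beat)."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(beat_features, beat_labels)
    return clf

def classify_recording(clf, recording_beats: np.ndarray, threshold: float = 0.5) -> int:
    """Label a full ECG as HCM (1) if the fraction of HCM beats exceeds threshold."""
    beat_preds = clf.predict(recording_beats)
    return int(beat_preds.mean() > threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 40))        # hypothetical per-beat features
    y_train = rng.integers(0, 2, size=1000)      # hypothetical beat labels
    clf = train_beat_classifier(X_train, y_train)
    new_recording = rng.normal(size=(120, 40))   # beats from one 12-lead ECG
    print("HCM" if classify_recording(clf, new_recording) else "non-HCM")
```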
Abstract:
A comprehensive approach to sport expertise should consider the entire situation that is comprised of the person, the task, the environment, and the complex interplay of these components (Hackfort, 1986). Accordingly, the Developmental Model of Sport Participation (Côté, Baker, & Abernethy, 2007; Côté & Fraser-Thomas, 2007) provides a comprehensive framework for sport expertise that outlines different pathways of involvement in sport. In pathways one and two, early sampling serves as the foundation for both elite and recreational sport participation. Early sampling is based on two main elements of childhood sport participation: 1) involvement in various sports and 2) participation in deliberate play. In contrast, pathway three shows the course to elite performance through early specialization in one sport. Early specialization implies a focused involvement on one sport and a large number of deliberate practice activities with the goal of improving sport skills and performance during childhood. This paper proposes seven postulates regarding the role that sampling and deliberate play, as opposed to specialization and deliberate practice, can have during childhood in promoting continued participation and elite performance in sport.
Abstract:
Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. The first is to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction on the two longest individual KINARM tasks. Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. Overall, the findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.
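The hierarchical task-selection strategy can be illustrated with a small gating sketch: an early screening task decides whether later tasks are administered. The task names, scores, and cut-off below are hypothetical stand-ins, not the thesis's actual hierarchy.

```python
# Illustrative sketch of the hierarchical task-selection strategy: an early
# screening task decides whether later tasks are administered. Task names,
# scores, and cut-offs are hypothetical, not the thesis's actual hierarchy.
from typing import Callable

def run_assessment(run_task: Callable[[str], float],
                   screening_cutoff: float = 1.5) -> dict[str, float]:
    """Run the screening task; run the remaining tasks only if it flags impairment."""
    results = {"visually_guided_reaching": run_task("visually_guided_reaching")}
    if results["visually_guided_reaching"] < screening_cutoff:
        return results  # screening task within normal range: stop early, save time
    for task in ("position_matching", "object_hit"):
        results[task] = run_task(task)
    return results

if __name__ == "__main__":
    # Stand-in for the robot: returns a hypothetical impairment score per task.
    fake_scores = {"visually_guided_reaching": 2.1,
                   "position_matching": 1.8,
                   "object_hit": 0.9}
    print(run_assessment(lambda task: fake_scores[task]))
```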
Abstract:
Population balances of polymer species in terms of discrete transforms with respect to counts of groups lead to tractable first-order partial differential equations when all rate constants are independent of chain length and loop formation is negligible [1]. It has long been known that average molecular weights in the absence of gelation can readily be found through integration of an initial value problem. The extension to size distribution prediction is also feasible, but its performance is often inferior to that of methods based on the real chain-length domain [2]. Moreover, the absence of a good starting procedure and a higher numerical sensitivity have decisively impaired its application to non-linear reversibly deactivated polymerizations, namely NMRP [3].
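For context, the standard method-of-moments relations below show how average molecular weights follow from integrating an initial value problem for the leading moments of the chain-length distribution; these are generic textbook definitions, not the authors' transform-domain equations.

```latex
% Standard moment definitions (not the authors' transform equations): the k-th
% moment of the chain-length distribution and the resulting average molecular
% weights, obtained by integrating an initial value problem for the moments.
\[
  \lambda_k(t) \;=\; \sum_{n\ge 1} n^{k}\,[P_n](t), \qquad k = 0,1,2,\dots
\]
\[
  \bar{M}_n \;=\; M_0\,\frac{\lambda_1}{\lambda_0}, \qquad
  \bar{M}_w \;=\; M_0\,\frac{\lambda_2}{\lambda_1}, \qquad
  \text{PDI} \;=\; \frac{\bar{M}_w}{\bar{M}_n},
\]
% where $M_0$ is the molar mass of the repeating unit and $[P_n]$ the
% concentration of $n$-mers; the generating function
% $G(s,t)=\sum_n s^{\,n}[P_n](t)$ recovers these moments through
% $\lambda_k = \left.(s\,\partial_s)^{k} G\right|_{s=1}$.
```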
Abstract:
Electrical energy storage is a very important issue nowadays. As electricity cannot easily be stored directly, it can be stored in other forms and converted back to electricity when needed. As a consequence, storage technologies for electricity can be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid ones, in the two main types known as flooded and valve-regulated. Batteries are needed in many important applications, such as renewable energy systems and motor vehicles. Consequently, in order to simulate these complex electrical systems, reliable battery models are needed. Although some models developed by chemistry experts exist, they are too complex and are not expressed in terms of electrical networks. Thus, they are not convenient for practical use by electrical engineers, who need to interface these models with other electrical system models, usually described by means of electrical circuits. There are many techniques available in the literature by which a battery can be modeled. Starting from the Thevenin-based electrical model, it can be adapted to better represent the lead-acid battery type by adding a parasitic reaction branch and a parallel network. The third-order formulation of this model is chosen, as it is a trustworthy general-purpose model characterized by a good trade-off between accuracy and complexity. Considering the equivalent circuit network, all the relevant equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model has finally been validated and then used to simulate the battery behaviour in different typical conditions.
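A minimal sketch of the equivalent-circuit idea, reduced to a first-order Thevenin-style model (emf, series resistance, one RC branch) discretized with forward Euler; it is not the third-order lead-acid model with a parasitic branch described above, and all parameter values are hypothetical.

```python
# Minimal first-order Thevenin-style battery sketch (emf, series resistance,
# one RC branch), discretized with forward Euler. This is a simplified analog
# of the idea, not the third-order lead-acid model with a parasitic reaction
# branch described above; all parameter values are hypothetical.
import numpy as np

def simulate_battery(i_load, dt=1.0, capacity_ah=60.0,
                     e0=12.6, k_e=0.8, r0=0.02, r1=0.015, c1=2000.0):
    """Simulate terminal voltage for a load-current profile i_load (A, discharge > 0)."""
    soc = 1.0            # state of charge, 0..1
    v_rc = 0.0           # voltage across the RC (diffusion) branch
    v_out = []
    for i in i_load:
        soc -= i * dt / (capacity_ah * 3600.0)          # coulomb counting
        soc = min(max(soc, 0.0), 1.0)
        v_rc += dt * (i / c1 - v_rc / (r1 * c1))        # RC branch dynamics
        emf = e0 - k_e * (1.0 - soc)                    # simple SOC-dependent emf
        v_out.append(emf - r0 * i - v_rc)               # terminal voltage
    return np.array(v_out), soc

if __name__ == "__main__":
    current = np.full(3600, 10.0)                       # 1 h discharge at 10 A
    voltage, soc_end = simulate_battery(current)
    print(voltage[0], voltage[-1], soc_end)
```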
Abstract:
Two cores, Site 1089 (ODP Leg 177) and PS2821-1, recovered from the same location (40°56'S; 9°54'E) at the Subtropical Front (STF) in the Atlantic Sector of the Southern Ocean, provide a high-resolution climatic record, with an average temporal resolution of less than 600 yr. A multi-proxy approach was used to produce an age model for Core PS2821-1, and to correlate the two cores. Both cores document the last climatic cycle, from Marine Isotopic Stage 6 (MIS 6, ca. 160 kyr BP, ka) to present. Summer sea-surface temperatures (SSSTs) have been estimated, with a standard error of ca. ±1.16°C, for the down-core record by using Q-mode factor analysis (Imbrie and Kipp method). The paleotemperatures show a 7°C warming at Termination II (last interglacial, transition from MIS 6 to MIS 5). This transition from glacial to interglacial paleotemperatures (with maximum temperatures ca. 3°C warmer than present at the core location) occurs earlier than the corresponding shift in δ18O values for benthic foraminifera from the same core; this suggests a lead of Southern Ocean paleotemperature changes compared to the global ice-volume changes, as indicated by the benthic isotopic record. The climatic evolution of the record continues with a progressive temperature deterioration towards MIS 2. High-frequency, millennial-scale climatic instability has been documented for MIS 3 and part of MIS 4, with sudden temperature variations of almost the same magnitude as those observed at the transitions between glacial and interglacial times. These changes occur during the same time interval as the Dansgaard-Oeschger cycles recognized in the δ18O_ice record of the GRIP and GISP ice cores from Greenland, and seem to be connected to rapid changes in the STF position in relation to the core location. Sudden cooling episodes ('Younger Dryas (YD)-type' and 'Antarctic Cold Reversal (ACR)-type' of events) have been recognized for both Termination I (ACR-I and YD-I events) and II (ACR-II and YD-II events), and imply that our core is located in an optimal position in order to record events triggered by phenomena occurring in both hemispheres. Spectral analysis of our SSST record displays strong analogies, particularly for high, sub-orbital frequencies, to equivalent records from Vostok (Antarctica) and from the Subtropical North Atlantic ocean. This implies that the climatic variability of widely separated areas (the Antarctic continent, the Subtropical North Atlantic, and the Subantarctic South Atlantic) can be strongly coupled and co-varying at millennial time scales (a few to 10-ka periods), and eventually induced by the same triggering mechanisms. Climatic variability has also been documented for supposedly warm and stable interglacial intervals (MIS 1 and 5), with several cold events which can be correlated to other Southern Ocean and North Atlantic sediment records.
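A highly simplified sketch of the transfer-function idea behind the Imbrie and Kipp method used above: decompose row-normalized faunal assemblages into a few factors, regress modern SSTs on the calibration factor loadings, then apply the fitted regression down-core. Varimax rotation and the full Q-mode treatment of the actual method are omitted, and all data here are synthetic.

```python
# Highly simplified sketch of the transfer-function idea behind the Imbrie and
# Kipp method: decompose row-normalized faunal assemblages into a few factors,
# regress modern SSTs on the calibration factor loadings, then apply the fitted
# regression down-core. Varimax rotation and the full Q-mode treatment used in
# the actual method are omitted; all data here are synthetic.
import numpy as np

def row_normalize(counts: np.ndarray) -> np.ndarray:
    """Scale each sample (row) to unit length, as in Q-mode analysis."""
    return counts / np.linalg.norm(counts, axis=1, keepdims=True)

def fit_transfer_function(modern_counts, modern_sst, n_factors=4):
    """Return (species_factors, regression_coefficients) from the calibration set."""
    u, s, vt = np.linalg.svd(row_normalize(modern_counts), full_matrices=False)
    loadings = u[:, :n_factors] * s[:n_factors]          # sample factor loadings
    design = np.column_stack([loadings, np.ones(len(modern_sst))])
    coeffs, *_ = np.linalg.lstsq(design, modern_sst, rcond=None)
    return vt[:n_factors], coeffs

def estimate_sst(core_counts, species_factors, coeffs):
    """Project down-core samples onto the factors and apply the regression."""
    loadings = row_normalize(core_counts) @ species_factors.T
    design = np.column_stack([loadings, np.ones(len(core_counts))])
    return design @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    modern = rng.dirichlet(np.ones(25), size=60) * 100   # synthetic assemblage %
    sst = rng.uniform(2.0, 22.0, size=60)                # synthetic modern SSTs
    factors, coeffs = fit_transfer_function(modern, sst)
    core = rng.dirichlet(np.ones(25), size=10) * 100     # synthetic down-core samples
    print(estimate_sst(core, factors, coeffs))
```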
Abstract:
Description based on: 1994; title from cover.
Abstract:
"April 1992."
Abstract:
Based on this new data, the Illinois EPA has requested that the Illinois Attorney General initiate legal action against H. Kramer relative to its contribution to a violation of the lead National Ambient Air Quality Standard.
Abstract:
Traditional vaccines, consisting of whole attenuated micro-organisms or microbial components administered with adjuvant, have been demonstrated to be one of the most cost-effective and successful public health interventions. Their use in large-scale immunisation programs has led to the eradication of smallpox, reduced morbidity and mortality from many once common diseases, and reduced strain on health services. However, problems associated with these vaccines, including risk of infection, adverse effects, and the requirement for refrigerated transport and storage, have led to the investigation of alternative vaccine technologies. Peptide vaccines, consisting of either whole proteins or individual peptide epitopes, have attracted much interest, as they may be synthesised to high purity and induce highly specific immune responses. However, problems including difficulty stimulating long-lasting immunity, and population MHC diversity necessitating multiepitopic vaccines and/or HLA tissue typing of patients, complicate their development. Furthermore, toxic adjuvants are necessary to render them immunogenic, and as such non-toxic human-compatible adjuvants need to be developed. Lipidation has been demonstrated as a human-compatible adjuvant for peptide vaccines. The lipid-core-peptide (LCP) system, incorporating lipid adjuvant, carrier, and peptide epitopes, exhibits promise as a lipid-based peptide vaccine adjuvant. The studies reviewed herein investigate the use of the LCP system for developing vaccines to protect against group A streptococcal (GAS) infection. The studies demonstrate that LCP-based GAS vaccines are capable of inducing high titres of antigen-specific IgG antibodies. Furthermore, mice immunised with an LCP-based GAS vaccine were protected against challenge with 8830 strain GAS.
Abstract:
Lead compounds are known genotoxicants, principally affecting the integrity of chromosomes. Lead chloride and lead acetate induced concentration-dependent increases in micronucleus frequency in V79 cells, starting at 1.1 μM lead chloride and 0.05 μM lead acetate. The difference between the lead salts, which was expected based on their relative abilities to form complex acetato-cations, was confirmed in an independent experiment. CREST analyses of the micronuclei verified that lead chloride and acetate were predominantly aneugenic (CREST-positive response), which was consistent with the morphology of the micronuclei (larger micronuclei, compared with micronuclei induced by a clastogenic mechanism). The effects of high concentrations of lead salts on the microtubule network of V79 cells were also examined using immunofluorescence staining. The dose effects of these responses were consistent with the cytotoxicity of lead(II), as visualized in the neutral-red uptake assay. In a cell-free system, 20-60 μM lead salts inhibited tubulin assembly dose-dependently. The no-observed-effect concentration of lead(II) in this assay was 10 μM. This inhibitory effect was interpreted as a shift of the assembly/disassembly steady-state toward disassembly, e.g., by reducing the concentration of assembly-competent tubulin dimers. The effects of lead salts on microtubule-associated motor-protein functions were studied using a kinesin-gliding assay that mimics intracellular transport processes in vitro by quantifying the movement of paclitaxel-stabilized microtubules across a kinesin-coated glass surface. There was a dose-dependent effect of lead nitrate on microtubule motility. Lead nitrate affected the gliding velocities of microtubules starting at concentrations above 10 μM and reached half-maximal inhibition of motility at about 50 μM. The processes reported here point to relevant interactions of lead with tubulin and kinesin at low dose levels. Environ. Mol. Mutagen. 45:346-353, 2005. © 2005 Wiley-Liss, Inc.
Abstract:
Purpose – The objective of the present research is to examine the relationship between consumers' satisfaction with a retailer and the equity they associate with the retail brand. Design/methodology/approach – Retail brand equity is conceptualized as a four-dimensional construct comprising: retailer awareness, retailer associations, retailer perceived quality, and retailer loyalty. The associative network memory model from cognitive psychology is then applied to the specific context of the relationships between customer satisfaction and consumer-based retailer equity. A survey was undertaken using a convenience sample of shopping mall consumers in an Australian state capital city. The questionnaire used to collect data incorporated an experimental design such that two categories of retailers were included in the study: department stores and specialty stores, with three retailers representing each category. The relationship between consumer-based retailer equity and customer satisfaction was examined using multivariate analysis of variance. Findings – Results indicate that retail brand equity varies with customer satisfaction. For department stores, each consumer-based retailer equity dimension varied according to customer satisfaction with the retailer. However, for specialty stores, only three of the consumer-based retailer equity dimensions, namely retailer awareness, retailer associations and retailer perceived quality, varied according to customer satisfaction level with the retailer. Originality/value – The principal contribution of the present research is that it demonstrates empirically a positive relationship between customer satisfaction and an intangible asset such as retailer equity.
Abstract:
An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to the understanding of the temporal behaviour of faults and single slip events. Numerical simulation of fault slip can provide insights into fault processes by allowing exploration of the parameter spaces that influence the microscopic and macroscopic physics of these processes, and may thus help answer these questions. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in the power of computers and the ability to use the power of parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law without any rate dependency and no spatial heterogeneity in the intrinsic coefficient of friction is used in the model. To simulate earthquake dynamics the model is sheared in a direction parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of event sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between the events. In some of the larger events, highly complex slip patterns are observed.
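As a much simpler analog of the stick-slip behaviour described above, the sketch below drives a single spring-slider at constant velocity with static/dynamic friction; it is not the 3D particle-based Lattice Solid Model, and the parameters are arbitrary illustrative values.

```python
# Single spring-slider sketch of stick-slip under constant driving velocity:
# a block pulled through a spring sticks until the spring force exceeds static
# friction, then slides against a lower dynamic friction. This is a far simpler
# analog of the 3D particle-based fault model described above; parameters are
# arbitrary illustrative values.
import numpy as np

def stick_slip(v_drive=0.01, k=50.0, m=1.0, f_static=10.0, f_dynamic=6.0,
               dt=1e-3, n_steps=100_000):
    """Return arrays of time and block slip; events appear as sudden jumps in slip."""
    x_block, v_block = 0.0, 0.0
    slip = np.empty(n_steps)
    for step in range(n_steps):
        x_drive = v_drive * dt * step
        spring_force = k * (x_drive - x_block)
        if v_block == 0.0 and abs(spring_force) <= f_static:
            pass                                   # stuck: friction balances the spring
        else:
            friction = f_dynamic * np.sign(v_block if v_block != 0.0 else spring_force)
            a = (spring_force - friction) / m
            v_block += a * dt
            x_block += v_block * dt
            if v_block < 0.0:                      # forbid back-slip; re-stick
                v_block = 0.0
        slip[step] = x_block
    return np.arange(n_steps) * dt, slip

if __name__ == "__main__":
    t, slip = stick_slip()
    slipping_steps = np.sum(np.diff(slip) > 1e-4)  # crude count of time steps spent slipping
    print(f"time steps spent slipping: {slipping_steps}")
```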
Abstract:
We have determined the three-dimensional structure of the protein complex between latexin and carboxypeptidase A using a combination of chemical cross-linking, mass spectrometry and molecular docking. The locations of three intermolecular cross-links were identified using mass spectrometry and these constraints were used in combination with a speed-optimised docking algorithm allowing us to evaluate more than 3 × 10^11 possible conformations. While cross-links represent only limited structural constraints, the combination of only three experimental cross-links with very basic molecular docking was sufficient to determine the complex structure. The crystal structure of the complex between latexin and carboxypeptidase A4 determined recently allowed us to assess the success of this structure determination approach. Our structure was shown to be within 4 Å r.m.s. deviation of Cα atoms of the crystal structure. The study demonstrates that cross-linking in combination with mass spectrometry can lead to efficient and accurate structural modelling of protein complexes.
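The "within 4 Å" comparison rests on a standard Cα RMSD after optimal superposition; a minimal Kabsch-algorithm sketch is given below, with placeholder coordinate arrays rather than the actual latexin-carboxypeptidase models.

```python
# Standard C-alpha RMSD after optimal superposition (Kabsch algorithm), the kind
# of comparison behind the "within 4 angstrom" statement above. The coordinate
# arrays are placeholders; extracting C-alpha coordinates from the two models is
# left out.
import numpy as np

def kabsch_rmsd(p: np.ndarray, q: np.ndarray) -> float:
    """RMSD between two N x 3 coordinate sets after optimal rotation/translation."""
    p_c = p - p.mean(axis=0)                 # centre both structures
    q_c = q - q.mean(axis=0)
    h = p_c.T @ q_c                          # 3 x 3 covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # avoid improper rotation (reflection)
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # optimal rotation mapping p onto q
    p_rot = p_c @ r.T
    return float(np.sqrt(np.mean(np.sum((p_rot - q_c) ** 2, axis=1))))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = rng.normal(size=(300, 3))                    # placeholder C-alpha set
    q_mat, _ = np.linalg.qr(rng.normal(size=(3, 3)))     # random orthogonal matrix
    q_mat *= np.sign(np.linalg.det(q_mat))               # ensure a proper rotation (det = +1)
    crystal = model @ q_mat + np.array([5.0, -3.0, 1.0]) \
              + rng.normal(scale=0.5, size=(300, 3))     # rotated, shifted, noisy copy
    print(f"C-alpha RMSD: {kabsch_rmsd(model, crystal):.2f} A")
```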