952 results for inborn errors of metabolism
Abstract:
Nowadays it is common to study global biodiversity patterns through the predictions generated by different ecological niche models. These models are usually calibrated with data from open-access databases (e.g. GBIF). However, despite how easy these data are to download and access, the stored information on the localities where species occur often contains biases and errors. Such problems in the calibration data can drastically alter model predictions and thereby mask the real macroecological patterns. The aim of this work is to investigate which methods produce the most accurate results when the calibration data are biased, and which perform best when the calibration data contain errors in addition to biases. To this end, we created a virtual species, projected its distribution onto the Iberian Peninsula, sampled its distribution in a biased way, and calibrated two types of distribution model (Bioclim and Maxent) with samples of different sizes. Our results indicate that when the data are only biased, Bioclim outperforms Maxent. However, Bioclim is extremely sensitive to the presence of errors in the calibration data. In those situations, Maxent behaves much more robustly and its predictions are more accurate.
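The workflow described above (define a virtual species, sample it with bias, fit an envelope model) can be sketched in a few lines. The climate layers, niche limits, and bias scheme below are invented for illustration; the min-max envelope rule stands in for Bioclim's approach and is not the authors' actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical climate layers and a virtual species that is present
# wherever both variables fall inside a known rectangular niche.
temp = rng.uniform(0, 30, 5000)       # mean temperature per grid cell
prec = rng.uniform(0, 2000, 5000)     # annual precipitation per grid cell
true_presence = (temp > 10) & (temp < 20) & (prec > 500) & (prec < 1500)

# Biased sample: presences from warmer cells are more likely to be recorded.
presence_idx = np.flatnonzero(true_presence)
weights = temp[presence_idx] / temp[presence_idx].sum()
sample = rng.choice(presence_idx, size=100, replace=False, p=weights)

# Bioclim-style envelope: the min-max box of each variable over the sample.
t_lo, t_hi = temp[sample].min(), temp[sample].max()
p_lo, p_hi = prec[sample].min(), prec[sample].max()
predicted = (temp >= t_lo) & (temp <= t_hi) & (prec >= p_lo) & (prec <= p_hi)

# Jaccard-style agreement between predicted and true distributions.
agreement = (predicted & true_presence).sum() / (predicted | true_presence).sum()
print(round(float(agreement), 2))
```

Because the sample is drawn only from true presences, the fitted envelope sits inside the true niche, so the biased sample shrinks the predicted range rather than shifting it.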
Abstract:
This raster layer represents surface elevation and bathymetry data for the Boston Region, Massachusetts. It was created by merging portions of MassGIS Digital Elevation Model 1:5,000 (2005) data with NOAA Estuarine Bathymetric Digital Elevation Models (30 m) (1998). The DEM data were derived from the digital terrain models produced as part of the MassGIS 1:5,000 Black and White Digital Orthophoto imagery project. The cell size is 5 meters by 5 meters. Each cell has a floating-point value, in meters, representing its elevation above or below sea level.
Abstract:
Any drug administered orally must be absorbed without being metabolized by the intestine and the liver in order to reach the systemic circulation. Despite its major impact on the first-pass effect of many drugs, intestinal metabolism is often neglected compared with hepatic metabolism. The objective of this master's work is therefore to use, characterize, and develop different in vitro and in vivo tools to better understand and predict the impact of intestinal metabolism on the first-pass effect of drugs, compared with hepatic metabolism. To this end, different substrates of drug-metabolizing enzymes were incubated in intestinal and hepatic microsomes, and differences in the rate of metabolism and in the metabolites produced were demonstrated. To better understand the impact of these differences in vivo, mechanistic studies in cannulated animals treated with enzyme inhibitors were conducted with the substrate metoprolol. These studies demonstrated the impact of intestinal metabolism on the first pass of metoprolol. In addition, they revealed the effect of 1-aminobenzotriazole, a cytochrome P450 inhibitor, on gastric emptying, thereby preventing misuse of this tool in the future. This master's work improved knowledge of the different in vitro and in vivo tools for studying intestinal metabolism, while providing a better understanding of the differing roles of the intestine and the liver in the first-pass effect.
Abstract:
A collection of miscellaneous pamphlets on finance.
Abstract:
"Compendium of awards of the Compensation Commissioners for the years ... and for the months of ..., together with decisions of the Superior Court on appeal and references to decisions of the Supreme Court of Errors on appeal."
Skeletal muscle and nuclear hormone receptors: Implications for cardiovascular and metabolic disease
Abstract:
Skeletal muscle is a major peripheral tissue by mass, accounting for approximately 40% of total body mass, and a major player in energy balance. It accounts for more than 30% of energy expenditure and is the primary tissue of insulin-stimulated glucose uptake, disposal, and storage. Furthermore, it influences metabolism via modulation of circulating and stored lipid (and cholesterol) flux. Lipid catabolism supplies up to 70% of the energy requirements for resting muscle. However, initial aerobic exercise utilizes stored muscle glycogen, but as exercise continues, glucose and stored muscle triglycerides become important energy substrates. Endurance exercise increasingly depends on fatty acid oxidation (and lipid mobilization from other tissues). This underscores the importance of lipid and glucose utilization as an energy source in muscle. Consequently, skeletal muscle has a significant role in insulin sensitivity, the blood lipid profile, and obesity. Moreover, caloric excess, obesity, and physical inactivity lead to skeletal muscle insulin resistance, a risk factor for the development of type II diabetes. In this context, skeletal muscle is an important therapeutic target in the battle against cardiovascular disease, the world's most serious public health threat. Major risk factors for cardiovascular disease include dyslipidemia, hypertension, obesity, sedentary lifestyle, and diabetes. These risk factors are directly influenced by diet, metabolism, and physical activity. Metabolism is largely regulated by nuclear hormone receptors, which function as hormone-regulated transcription factors that bind DNA and mediate the pathophysiological regulation of gene expression. Metabolism and activity, which directly influence cardiovascular disease risk factors, are primarily driven by skeletal muscle. Recently, many nuclear receptors expressed in skeletal muscle have been shown to improve glucose tolerance, insulin resistance, and dyslipidemia.
Skeletal muscle and nuclear receptors are rapidly emerging as critical targets in the battle against cardiovascular disease risk factors. Understanding the function of nuclear receptors in skeletal muscle has enormous pharmacological utility for the treatment of cardiovascular disease. This review focuses on the molecular regulation of metabolism by nuclear receptors in skeletal muscle in the context of dyslipidemia and cardiovascular disease. (c) 2005 Published by Elsevier Ltd.
Abstract:
Purpose: This study was conducted to devise a new individual calibration method to enhance MTI accelerometer estimation of free-living level walking speed. Method: Five female and five male middle-aged adults walked 400 m at 3.5, 4.5, and 5.5 km·h⁻¹, and 800 m at 6.5 km·h⁻¹, on an outdoor track, following a continuous protocol. Lap speed was controlled by a global positioning system (GPS) monitor. MTI counts-to-speed calibration equations were derived for each trial, for each subject for four such trials with each of four MTIs, for each subject for the average MTI, and for the pooled data. Standard errors of the estimate (SEE) with and without individual calibration were compared. To assess accuracy of prediction of free-living walking speed, subjects also completed a self-paced, brisk 3-km walk wearing one of the four MTIs, and differences between actual and predicted walking speed with and without individual calibration were examined. Results: Correlations between MTI counts and walking speed were 0.90 without individual calibration, 0.98 with individual calibration for the average MTI, and 0.99 with individual calibration for a specific MTI. The SEE (mean ± SD) was 0.58 ± 0.30 km·h⁻¹ without individual calibration, 0.19 ± 0.09 km·h⁻¹ with individual calibration for the average MTI monitor, and 0.16 ± 0.08 km·h⁻¹ with individual calibration for a specific MTI monitor. The difference between actual and predicted walking speed on the brisk 3-km walk was 0.06 ± 0.25 km·h⁻¹ with individual calibration and 0.28 ± 0.63 km·h⁻¹ without individual calibration (for specific accelerometers). Conclusion: MTI accuracy in predicting walking speed without individual calibration might be sufficient for population-based studies but not for intervention trials. This individual calibration method will substantially increase the precision of walking speed predicted from MTI counts.
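The pooled-versus-individual calibration comparison can be illustrated with synthetic data. The counts-per-speed slopes and noise levels below are made up; the point is only that fitting one least-squares line per subject cuts the prediction error when subjects differ systematically, which is the pattern the abstract reports.

```python
import numpy as np

rng = np.random.default_rng(1)
speeds = np.array([3.5, 4.5, 5.5, 6.5])   # protocol walking speeds, km/h

# Made-up data: each subject has a slightly different counts-per-speed
# slope, mimicking between-subject accelerometer variation.
n_subjects = 10
slopes = rng.normal(1000.0, 100.0, n_subjects)
counts = slopes[:, None] * speeds[None, :] + rng.normal(0.0, 50.0, (n_subjects, 4))

# Pooled calibration: one counts-to-speed line for everyone.
a_p, b_p = np.polyfit(counts.ravel(), np.tile(speeds, n_subjects), 1)
pooled_err = np.sqrt(np.mean((a_p * counts + b_p - speeds) ** 2))

# Individual calibration: one line per subject.
sq_err = 0.0
for i in range(n_subjects):
    a, b = np.polyfit(counts[i], speeds, 1)
    sq_err += np.mean((a * counts[i] + b - speeds) ** 2)
ind_err = np.sqrt(sq_err / n_subjects)

print(round(pooled_err, 2), round(ind_err, 2))   # km/h; individual is smaller
```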
Abstract:
The study examines whether error exposure training can enhance adaptive performance. Fifty-nine experienced fire-fighters undergoing training for incident command participated in the study. War stories were developed based on real events to illustrate successful and unsuccessful incident command decisions. Two training methodologies were compared and evaluated. One group was trained using case studies that depicted incidents containing errors of management with severe consequences in fire-fighting outcomes (error-story training) while a second group was exposed to the same set of case studies except that the case studies depicted the incidents being managed without errors and their consequences (errorless-story training). The results provide some support for the hypothesis that it is better to learn from other people's errors than from their successes. Implications for training are discussed.
Abstract:
This article demonstrates that a commonly-made assumption in quantum yield calculations may produce errors of up to 25% in extreme cases and can be corrected by a simple modification to the analysis.
Abstract:
The objective of this work is the development of non-invasive and low-cost systems for monitoring and automatically diagnosing specific neonatal diseases by means of the analysis of suitable video signals. We focus on monitoring infants potentially at risk of diseases characterized by the presence or absence of rhythmic movements of one or more body parts. Seizures and respiratory diseases are specifically considered, but the approach is general. Seizures are defined as sudden neurological and behavioural alterations. They are age-dependent phenomena and the most common sign of central nervous system dysfunction. Neonatal seizures have onset within the 28th day of life in newborns at term and within the 44th week of conceptional age in preterm infants. Their main causes are hypoxic-ischaemic encephalopathy, intracranial haemorrhage, and sepsis. Studies indicate an incidence rate of neonatal seizures of 0.2% of live births, 1.1% for preterm neonates, and 1.3% for infants weighing less than 2500 g at birth. Neonatal seizures can be classified into four main categories: clonic, tonic, myoclonic, and subtle. Seizures in newborns have to be promptly and accurately recognized in order to establish timely treatments that could avoid an increase of the underlying brain damage. Respiratory diseases related to the occurrence of apnoea episodes may be caused by cerebrovascular events. Among the wide range of causes of apnoea, besides seizures, a relevant one is Congenital Central Hypoventilation Syndrome (CCHS). With a reported prevalence of 1 in 200,000 live births, CCHS, formerly known as Ondine's curse, is a rare life-threatening disorder characterized by a failure of the automatic control of breathing, caused by mutations in a gene classified as PHOX2B. CCHS manifests itself, in the neonatal period, with episodes of cyanosis or apnoea, especially during quiet sleep. The reported mortality rates range from 8% to 38% of newborns with genetically confirmed CCHS.
Nowadays, CCHS is considered a disorder of autonomic regulation, with a related risk of sudden infant death syndrome (SIDS). Currently, the standard method of diagnosis for both diseases is based on polysomnography, which uses a set of sensors: ElectroEncephaloGram (EEG) sensors, ElectroMyoGraphy (EMG) sensors, ElectroCardioGraphy (ECG) sensors, elastic belt sensors, a pulse-oximeter, and nasal flow-meters. This monitoring system is very expensive, time-consuming, moderately invasive, and requires particularly skilled medical personnel, not always available in a Neonatal Intensive Care Unit (NICU). Therefore, automatic, real-time, and non-invasive monitoring equipment able to reliably recognize these diseases would be of significant value in the NICU. A very appealing monitoring tool to automatically detect neonatal seizures or breathing disorders may be based on acquiring, through a network of sensors, e.g., a set of video cameras, the movements of the newborn's body (e.g., limbs, chest) and properly processing the relevant signals. An automatic multi-sensor system could be used to permanently monitor every patient in the NICU, or specific patients at home. Furthermore, a wire-free technique may be more user-friendly and highly desirable when used with infants, in particular with newborns. This work has focused on a reliable method to estimate the periodicity in pathological movements based on the use of the Maximum Likelihood (ML) criterion. In particular, average differential luminance signals from multiple Red, Green and Blue (RGB) cameras or depth-sensor devices are extracted, and the presence or absence of a significant periodicity is analysed in order to detect possible pathological conditions. The efficacy of this monitoring system has been measured on the basis of video recordings provided by the Department of Neurosciences of the University of Parma.
Concerning clonic seizures, a kinematic analysis was performed to establish a relationship between neonatal seizures and the human inborn pattern of quadrupedal locomotion. Moreover, we decided to build simulators able to replicate the symptomatic movements characteristic of the diseases under consideration. The reason is, essentially, the opportunity to have, at any time, a 'subject' on which to test the continuously evolving detection algorithms. Finally, we developed a smartphone app, called 'Smartphone based contactless epilepsy detector' (SmartCED), able to detect neonatal clonic seizures and warn the user of their occurrence in real time.
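A minimal sketch of the periodicity test described above, using a synthetic luminance signal rather than real video data. The frame rate, seizure frequency, and peak-to-mean statistic are illustrative assumptions; the periodogram maximum used here coincides with the ML frequency estimate for a sinusoid in white noise, which is the spirit of the ML criterion the work employs.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 25.0                        # assumed camera frame rate, Hz
t = np.arange(0, 20, 1 / fs)     # 20 s of average differential luminance

# Synthetic signals: clonic-like rhythmic motion at 2.5 Hz vs. random motion.
rhythmic = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.standard_normal(t.size)
arrhythmic = rng.standard_normal(t.size)

def dominant_peak_ratio(x):
    """Largest periodogram peak over mean spectral power (DC excluded).

    For a sinusoid in white noise, the periodogram maximum is the ML
    estimate of the unknown frequency; a large ratio flags periodicity."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    psd = psd[1:]                # drop the DC bin
    return psd.max() / psd.mean()

r_rhythmic = dominant_peak_ratio(rhythmic)
r_random = dominant_peak_ratio(arrhythmic)
print(round(r_rhythmic, 1), round(r_random, 1))
```

Thresholding this ratio gives a simple detector: the rhythmic signal's spectral energy concentrates in one bin, while random motion spreads across the spectrum.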
Abstract:
This empirical study examines the extent of non-linearity in a multivariate model of monthly financial series. To capture the conditional heteroscedasticity in the series, both the GARCH(1,1) and GARCH(1,1)-in-mean models are employed. The conditional errors are assumed to follow the normal and Student-t distributions. The non-linearity in the residuals of a standard OLS regression is also assessed. It is found that the OLS residuals, as well as the conditional errors of the GARCH models, exhibit strong non-linearity. Under the Student-t density, the extent of non-linearity in the GARCH conditional errors was generally similar to that of the standard OLS. The GARCH-in-mean regression generated the worst out-of-sample forecasts.
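The GARCH(1,1) conditional-variance recursion at the heart of the models above can be simulated directly. The parameters below are hypothetical (not estimates from the study's data); the simulation just shows the volatility clustering these models are meant to capture.

```python
import numpy as np

rng = np.random.default_rng(3)

# GARCH(1,1): sigma_t^2 = omega + alpha * e_{t-1}^2 + beta * sigma_{t-1}^2
omega, alpha, beta = 0.1, 0.1, 0.8    # hypothetical, with alpha + beta < 1
n = 2000
e = np.empty(n)
sig2 = np.empty(n)
sig2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance
e[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, n):
    sig2[t] = omega + alpha * e[t - 1] ** 2 + beta * sig2[t - 1]
    e[t] = np.sqrt(sig2[t]) * rng.standard_normal()

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return (x[:-1] * x[1:]).mean() / (x * x).mean()

# Squared errors cluster (positive autocorrelation); raw errors do not.
print(round(lag1_autocorr(e ** 2), 2), round(lag1_autocorr(e), 2))
```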
Abstract:
Purpose: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. Methods: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia greater than +3.50DS, and astigmatism greater than 1.50DC, whether it occurred in isolation or in association with myopia or hyperopia. Results: Results are presented from 661 white 12-13-year-old and 392 white 6-7-year-old school children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and a specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children, a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. Conclusions: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not effectively deliver it.
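The sensitivity and specificity figures above follow from simple 2×2 screening counts. The counts below are invented to reproduce the 6-7-year-old results (50% sensitivity, 92% specificity) and only illustrate the arithmetic, not the NICER data.

```python
# Illustrative 2x2 screening counts (not the NICER data): 46 of 92 cases
# flagged by the acuity cut-off, and 552 of 600 non-cases correctly passed.
def screen_stats(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # flagged cases / all true cases
    specificity = tn / (tn + fp)   # passed non-cases / all non-cases
    return sensitivity, specificity

sens, spec = screen_stats(tp=46, fn=46, fp=48, tn=552)
print(sens, spec)  # → 0.5 0.92
```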
Abstract:
As a new medium for questionnaire delivery, the internet has the potential to revolutionise the survey process. Online (web-based) questionnaires provide several advantages over traditional survey methods in terms of cost, speed, appearance, flexibility, functionality, and usability [1, 2]. For instance, delivery is faster, responses are received more quickly, and data collection can be automated or accelerated [1-3]. Online questionnaires can also provide many capabilities not found in traditional paper-based questionnaires: they can include pop-up instructions and error messages; they can incorporate links; and it is possible to encode difficult skip patterns, making such patterns virtually invisible to respondents. Like many new technologies, however, online questionnaires face criticism despite their advantages. Typically, such criticisms focus on the vulnerability of online questionnaires to the four standard survey error types: namely, coverage, non-response, sampling, and measurement errors. Although, like all survey errors, coverage error (“the result of not allowing all members of the survey population to have an equal or nonzero chance of being sampled for participation in a survey” [2, pg. 9]) also affects traditional survey methods, it is currently exacerbated in online questionnaires as a result of the digital divide. That said, many developed countries have reported substantial increases in computer and internet access and/or are targeting this as part of their immediate infrastructural development [4, 5]. These trends indicate that familiarity with information technologies is increasing, and they suggest that coverage error will rapidly diminish to an acceptable level (for the developed world at least) in the near future, and in so doing positively reinforce the advantages of online-questionnaire delivery.
The second error type, the non-response error, occurs when individuals fail to respond to the invitation to participate in a survey or abandon a questionnaire before it is completed. Given today's societal trend towards self-administration [2], the former is inevitable, irrespective of delivery mechanism. Conversely, non-response as a consequence of questionnaire abandonment can be relatively easily addressed. Unlike traditional questionnaires, the delivery mechanism for online questionnaires makes estimation of questionnaire length and of the time required for completion difficult, thus increasing the likelihood of abandonment. By incorporating a range of features into the design of an online questionnaire, it is possible to facilitate such estimation, and indeed to provide respondents with context-sensitive assistance during the response process, and thereby reduce abandonment while eliciting feelings of accomplishment [6]. For online questionnaires, sampling error (“the result of attempting to survey only some, and not all, of the units in the survey population” [2, pg. 9]) can arise when all but a small portion of the anticipated respondent set is alienated (and so fails to respond) as a result of, for example, disregard for varying connection speeds, bandwidth limitations, browser configurations, monitors, hardware, and user requirements during the questionnaire design process. Similarly, measurement errors (“the result of poor question wording or questions being presented in such a way that inaccurate or uninterpretable answers are obtained” [2, pg. 11]) will lead to respondents becoming confused and frustrated. Sampling, measurement, and non-response errors are likely to occur when an online questionnaire is poorly designed. Individuals will answer questions incorrectly, abandon questionnaires, and may ultimately refuse to participate in future surveys; thus, the benefit of online questionnaire delivery will not be fully realized.
To prevent errors of this kind, and their consequences, it is extremely important that practical, comprehensive guidelines exist for the design of online questionnaires. Many design guidelines exist for paper-based questionnaires (e.g. [7-14]); the same is not true for the design of online questionnaires [2, 15, 16]. The research presented in this paper is a first attempt to address this discrepancy. Section 2 describes the derivation of a comprehensive set of guidelines for the design of online questionnaires and briefly (given space restrictions) outlines the essence of the guidelines themselves. Although online questionnaires reduce traditional delivery costs (e.g. paper, mail-out, and data entry), set-up costs can be high given the need either to adopt and acquire training in questionnaire development software or to secure the services of a web developer. Neither approach, however, guarantees a good questionnaire (often because the person designing the questionnaire lacks relevant knowledge of questionnaire design). Drawing on existing software evaluation techniques [17, 18], we assessed the extent to which current questionnaire development applications support our guidelines; Section 3 describes the framework used for the evaluation, and Section 4 discusses our findings. Finally, Section 5 concludes with a discussion of further work.
Abstract:
The application of neural network algorithms to increase the accuracy of navigation systems is presented. In navigation systems where a pair of sensors is used in the same device in different positions, with disturbances acting equally on both sensors, a trained neural network can be advantageous for increasing system accuracy. The neural algorithm was used to determine the interconnection between the sensor errors in the two channels, avoiding the unobservability of the navigation system. Representing the thermal error of two-component navigation sensors by a time model, whose coefficients depend only on the parameters of the device and its orientation relative to the disturbance vector, makes it possible to predict changes in thermal error by measuring the current temperature, once the model parameters have been identified for the set position. These properties of the thermal model are used to train the neural network and to compensate the errors of the navigation system in non-stationary thermal fields.
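The compensation idea, learning one channel's thermal error from the other channel and the measured temperature, can be sketched with an ordinary least-squares model as a stand-in for the trained network. All coefficients and noise levels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-channel sensor pair: each channel's thermal error is a
# device-dependent function of temperature, and the channels are coupled
# because the same disturbance acts on both.
T = rng.uniform(10.0, 50.0, 500)                         # temperature, °C
err1 = 0.02 * T + 0.001 * T ** 2 + 0.05 * rng.standard_normal(500)
err2 = 1.5 * err1 + 0.05 * rng.standard_normal(500)      # inter-channel link

# Stand-in for the trained network: predict channel-2 error from the
# measured temperature and the channel-1 error, then subtract it.
X = np.column_stack([np.ones_like(T), T, err1])
coef, *_ = np.linalg.lstsq(X, err2, rcond=None)
compensated = err2 - X @ coef

print(round(float(err2.std()), 3), round(float(compensated.std()), 3))
```

A neural network would replace the linear predictor when the error-temperature relationship is strongly non-linear or non-stationary, but the structure (predict, then subtract) is the same.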
Abstract:
P. E. Parvanov - The uniform weighted approximation errors of the Goodman–Sharma operators are characterized for functions.