923 results for MEASURING METHODS
Abstract:
Using Tinto's (1987) social integration theory as a framework, this study measured student satisfaction in six transformative areas: educational experience, skills development, faculty interaction, personal growth, sense of community, and overall expectations. Emerging as a strategic planning priority, this project sought to identify those areas where students succeeded or were at risk. Employing a three-phase mixed-methods approach, this descriptive, longitudinal study was conducted from 1990 to 2004 at a highly selective specialized college and assisted college administrators in developing or modifying programs that would enhance student satisfaction and support degree completion.
Abstract:
Measuring the level of an economy's potential output and output gap is essential in identifying sustainable non-inflationary growth and assessing appropriate macroeconomic policies. The estimation of potential output helps to determine the pace of sustainable growth, while output gap estimates provide a key benchmark against which to assess inflationary or disinflationary pressures, suggesting when to tighten or ease monetary policy. These measures also provide a gauge for determining the structural fiscal position of the government. This paper attempts to measure Kenya's potential output and output gap using alternative statistical techniques and structural methods. Estimation of potential output and output gap using these techniques shows varied results. The estimated potential output growth using different methods gave a range of -2.9 to 2.4 percent for 2000 and a range of -0.8 to 4.6 percent for 2001. Although the various methods produce varied results, they nevertheless provide a broad consensus on the overall trend and performance of the Kenyan economy. This study found, firstly, that potential output growth has been declining and, secondly, that the Kenyan economy has been contracting in recent years.
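To make one of the statistical techniques concrete, the sketch below estimates potential output and the output gap with a Hodrick-Prescott filter, a standard detrending method of the kind such papers compare. The quarterly log-GDP series is simulated, not Kenyan data, and the paper's structural methods are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated quarterly log real GDP (stand-in for the Kenyan series).
rng = np.random.default_rng(0)
periods = pd.period_range("1990Q1", "2001Q4", freq="Q")
log_gdp = pd.Series(
    0.005 * np.arange(len(periods))                       # trend growth
    + 0.01 * rng.standard_normal(len(periods)).cumsum(),  # persistent shocks
    index=periods,
)

# lamb=1600 is the conventional smoothing parameter for quarterly data.
cycle, trend = sm.tsa.filters.hpfilter(log_gdp, lamb=1600)

# The trend proxies (log) potential output; the cycle, scaled by 100,
# is the output gap in percent of potential.
output_gap_pct = 100 * cycle
print(output_gap_pct.tail())
```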
Abstract:
Two studies among college students were conducted to evaluate appropriate measurement methods for etiological research on computing-related upper extremity musculoskeletal disorders (UEMSDs). A cross-sectional study among 100 graduate students evaluated the utility of symptom surveys (a VAS scale and a 5-point Likert scale) compared with two UEMSD clinical classification systems (the Gerr and Moore protocols). The two symptom measures were highly concordant (Lin's rho = 0.54; Spearman's r = 0.72); the two clinical protocols were moderately concordant (Cohen's kappa = 0.50). Sensitivity and specificity, summarized by Youden's J statistic, did not reveal much agreement between the symptom surveys and the clinical examinations. It cannot be concluded that self-report symptom surveys can be used as surrogates for clinical examinations. A pilot repeated-measures study conducted among 30 undergraduate students evaluated computing exposure measurement methods. Key findings were temporal variation in symptoms and increased odds of experiencing symptoms with every hour of computer use (adjOR = 1.1, p < .10) and with every stretch break taken (adjOR = 1.3, p < .10). When posture was measured using the Computer Use Checklist, a positive association with symptoms was observed (adjOR = 1.3, p < 0.10), while measuring posture using a modified Rapid Upper Limb Assessment produced unexpected and inconsistent associations. The findings were inconclusive in identifying an appropriate posture assessment or a superior conceptualization of computer use exposure. A cross-sectional study of 166 graduate students evaluated the comparability of graduate students' responses to the College Computing & Health surveys administered to undergraduate students. Fifty-five percent reported computing-related pain and functional limitations. Years of computer use in graduate school and the number of years in school with weekly computer use of ≥ 10 hours were associated with pain within an hour of computing in logistic regression analyses. The findings are consistent with the current literature on both undergraduate and graduate students.
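For readers unfamiliar with the agreement statistics cited above, the sketch below computes Lin's concordance, Spearman's r, Cohen's kappa, and Youden's J on simulated data. The variable names (vas, likert, gerr_case, moore_case) are hypothetical stand-ins for the study's measures.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient for two continuous scales."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(1)
vas = rng.uniform(0, 100, 100)                    # VAS symptom scores
likert = np.clip(np.round(vas / 20 + rng.normal(0, 0.8, 100)), 1, 5)

print("Lin's CCC:", lins_ccc(vas, 20 * likert))
print("Spearman's r:", spearmanr(vas, likert).correlation)

gerr_case = rng.integers(0, 2, 100)               # case status, Gerr protocol
moore_case = np.where(rng.random(100) < 0.8, gerr_case, 1 - gerr_case)
print("Cohen's kappa:", cohen_kappa_score(gerr_case, moore_case))

# Youden's J for a survey cut-off against the clinical classification.
survey_pos = (likert >= 3).astype(int)
sens = (survey_pos & gerr_case).sum() / gerr_case.sum()
spec = ((1 - survey_pos) & (1 - gerr_case)).sum() / (1 - gerr_case).sum()
print("Youden's J:", sens + spec - 1)
```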
Abstract:
Context: Black women are reported to have a higher prevalence of uterine fibroids and a threefold higher incidence rate and relative risk for clinical uterine fibroid development compared to women of other races. Uterine fibroid research has reported that black women experience greater uterine fibroid morbidity and a disproportionate uterine fibroid disease burden. With increased interest in understanding uterine fibroid development, and with race a critical component of uterine fibroid assessment, it is imperative that the methods used to determine the race of research participants are defined and that the operational definition of race as a variable is reported, both for methodological guidance and to enable the research community to compare statistical data and replicate studies. Objectives: To systematically review and evaluate the methods used to assess race and racial disparities in uterine fibroid research. Data Sources: Databases searched for this review include OVID Medline, NLM PubMed, EBSCOhost Cumulative Index to Nursing and Allied Health Plus with Full Text, and Elsevier Scopus. Review Methods: Articles published in English were retrieved from the data sources between January 2011 and March 2011. Broad search terms, uterine fibroids and race, were employed to retrieve a comprehensive list of citations for review screening. The initial database yield included 947 articles; after duplicate extraction, 485 articles remained. In addition, 771 bibliographic citations were reviewed to identify articles not found through the primary database search, of which 17 new articles were included. In the first screening, 502 titles and abstracts were screened against eligibility questions to determine citations for exclusion and to retrieve full-text articles for review. In the second screening, 197 full-text articles were screened against eligibility questions to determine whether they met the inclusion/exclusion criteria. Results: 100 articles met the inclusion criteria and were used in the results of this systematic review. The evidence suggested that black women have a higher prevalence of uterine fibroids than white women. None of the 14 studies reporting data on prevalence reported an operational definition or conceptual framework for the use of race. A limited number of studies reported on the prevalence of risk factors among racial subgroups: of the 3 such studies, 2 reported a lower prevalence of risk factors for black women than for other races, contrary to hypothesis, and none reported a conceptual framework for the use of race. Conclusion: Of the 100 uterine fibroid studies included in this review, over half (66%) reported a specific objective to assess and recruit study participants based upon their race and/or ethnicity, but most (51%) failed to report a method of determining the actual race of the participants, and far fewer, 4% (four South American studies), reported a conceptual framework and/or operational definition of race as a variable. Nevertheless, 95% of all studies reported race-based health outcomes.
The inadequate methodological guidance on the use of race in uterine fibroid studies purporting to assess race and racial disparities may be a primary reason that uterine fibroid research continues to report racial disparities while failing to explain the high prevalence and increased exposures among African-American women. A standardized method of assessing race throughout uterine fibroid research would help elucidate what race is actually measuring, and the risk of exposures for that measurement.
Abstract:
While most professionals do not dispute that evaluation is necessary to determine whether agencies and practitioners are truly providing services that meet clients' needs, information regarding consistent measures of service effectiveness in human service organizations is sparse. A national survey of 250 not-for-profit family service organizations in the United States (52.8% return rate) yielded results relevant to client-identified needs and agency effectiveness measures in serving today's families. On an open-ended survey item, 52.3% of agencies indicated that poverty represented the most pressing problem among today's families, taking precedence over other psychological needs. Over two thirds of these agencies used multiple methods to evaluate their services; client feedback and outcome measures were the most popular. The findings reveal agencies' difficulties in determining what, or who, decides whether the most appropriate services are being provided for the target population. The limited data collected on outcomes and impact may impose additional difficulties in program design and planning.
Abstract:
Firms' expanding cross-border activities, such as vertical specialization trade, outsourcing, and production fragmentation, have brought dramatic changes to the global economy during the last two decades. In an attempt to understand the evolution of the interaction among countries or country groups, many trade-statistics-based indicators have been developed. However, most of these statistics focus on showing the direct trade-specific relationship among countries, rather than considering the roles that intercountry and interindustrial production networks play in the global economy. This paper uses the concept of trade in value added, as measured by the input–output tables of the OECD and IDE-JETRO, to provide alternative indicators that show the evolution of regional economic integration and global value chains for more than 50 economies. In addition, this paper provides thoughts on how to evaluate comparative advantage on the basis of value added using an international input–output model.
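The value-added accounting such indicators build on can be illustrated with a toy input–output computation: value added embodied in final demand via the Leontief inverse. All numbers below are invented; the OECD/IDE-JETRO tables themselves are far larger.

```python
import numpy as np

# Toy 4-industry interindustry coefficient matrix A (column j gives inputs
# per unit of industry j's output), e.g. two countries x two industries.
A = np.array([
    [0.10, 0.05, 0.02, 0.00],
    [0.08, 0.15, 0.01, 0.03],
    [0.03, 0.02, 0.12, 0.06],
    [0.00, 0.04, 0.07, 0.10],
])
v = 1.0 - A.sum(axis=0)                   # value-added coefficients
f = np.array([100.0, 80.0, 120.0, 90.0])  # final demand by industry

L = np.linalg.inv(np.eye(4) - A)          # Leontief inverse
va_embodied = np.diag(v) @ L @ f          # value added induced by final demand

# Total embodied value added equals total final demand, a standard check.
print(va_embodied, va_embodied.sum(), f.sum())
```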
Abstract:
Training and assessment paradigms for laparoscopic surgical skills are evolving from traditional mentor–trainee tutorship towards structured, more objective and safer programs. Accreditation of surgeons requires reaching a consensus on the metrics and tasks used to assess surgeons' psychomotor skills. Ongoing development of tracking systems and software solutions has allowed the expansion of novel training and assessment means in laparoscopy. The current challenge is to adapt and include these systems within training programs, and to exploit their possibilities for evaluation purposes. This paper describes the state of the art in research on measuring and assessing psychomotor laparoscopic skills. It gives an overview of tracking systems as well as of the metrics and the advanced statistical and machine learning techniques employed for evaluation purposes. The latter have the potential to be used as an aid in deciding on a surgeon's level of competence, an important aspect where the accreditation of surgeons in particular, and patient safety in general, are concerned. The prospects of these methods and tools make them complementary means for the assessment of surgical motor skills, especially in the early stages of training. Successful examples such as the Fundamentals of Laparoscopic Surgery should help drive a paradigm change towards structured curricula based on objective parameters. These may improve the accreditation of new surgeons, as well as optimize their already overloaded training schedules.
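As a flavour of the machine-learning side of such assessment, the sketch below cross-validates a classifier that separates novice from expert trials using typical motion-analysis metrics. The metrics, data, and labels are all simulated; no particular tracking system or published model is implied.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 60
labels = rng.integers(0, 2, n)            # 0 = novice, 1 = expert
features = np.column_stack([
    rng.normal(120 - 40 * labels, 15),    # task time (s): experts faster
    rng.normal(6.0 - 2.0 * labels, 0.8),  # instrument path length (m)
    rng.normal(0.5 + 0.3 * labels, 0.1),  # motion smoothness index
])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, features, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```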
Abstract:
Laminated glass is composed of two glass layers and a thin intermediate PVB layer whose viscoelastic behaviour strongly influences the dynamic response. While natural frequencies are relatively easily identified, even with simplified FE models, damping ratios are not identified with such ease. In order to determine to what extent external factors influence damping identification, different tests have been carried out. The external factors considered, apart from temperature, are the accelerometers, their connection cables, and the effect of the glass layers. To analyse the influence of the accelerometers and their connection cables, a laser measuring device was employed, considering three configurations: the sample without instrumentation, the sample with the accelerometers fixed, and the sample completely instrumented. When the sample is completely instrumented, the accelerometer readings are also analysed. To take into consideration the effect of the glass layers, tests were carried out on both laminated glass and monolithic samples. This paper presents an in-depth analysis of the data from the different configurations and establishes criteria for data acquisition when testing laminated glass.
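One common way to extract a damping ratio from vibration tests of this kind is the half-power bandwidth method; the sketch below applies it to a synthetic single-mode frequency response. This is an assumed, generic identification method, not necessarily the one used in the paper.

```python
import numpy as np

fn, zeta = 18.0, 0.02                  # "true" modal frequency (Hz) and damping
f = np.linspace(5, 40, 5000)
r = f / fn
H = 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)  # FRF magnitude

peak = H.argmax()
inside = np.where(H >= H[peak] / np.sqrt(2))[0]  # half-power bandwidth indices
f1, f2 = f[inside[0]], f[inside[-1]]

zeta_est = (f2 - f1) / (2 * f[peak])   # half-power estimate of the damping ratio
print(f"estimated damping ratio: {zeta_est:.4f} (true value {zeta})")
```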
Abstract:
Quantitative descriptive analysis (QDA) is used to describe the nature and intensity of sensory properties from a single evaluation of a product, whereas temporal dominance of sensations (TDS) is primarily used to identify dominant sensory properties over time. Previous studies with TDS have focused on model systems; this is the first study to use a sequential approach, i.e. QDA then TDS, in measuring the sensory properties of a commercial product category, using the same set of trained assessors (n = 11). The main objectives of this study were (1) to investigate the benefits of using a sequential approach of QDA and TDS and (2) to explore the impact of sample composition on taste and flavour perceptions in blackcurrant squashes. The present study proposes an alternative way of determining the choice of attributes for TDS measurement, based on data obtained from previous QDA studies where available. Both methods indicated that the flavour profile was primarily influenced by the level of dilution and the complexity of the sample composition combined with the blackcurrant juice content. In addition, artificial sweeteners were found to modify the quality of sweetness and could also contribute bitter notes. Using QDA and TDS in tandem was shown to be more beneficial than using either on its own, enabling a more complete sensory profile of the products.
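TDS results are typically summarized as dominance curves: for each time point, the proportion of evaluations in which each attribute was cited as dominant. The sketch below computes such curves from a long-format table of dominance citations; the attribute names and records are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
attributes = ["sweet", "blackcurrant", "bitter", "artificial"]
times = np.arange(0, 31)              # seconds after sip
n_evals = 33                          # e.g. 11 assessors x 3 replicates

records = pd.DataFrame({
    "time": np.tile(times, n_evals),
    "dominant": rng.choice(attributes, size=n_evals * len(times),
                           p=[0.4, 0.3, 0.2, 0.1]),
})

# Dominance rate of each attribute at each time point.
dominance = (records.groupby("time")["dominant"]
             .value_counts(normalize=True)
             .unstack(fill_value=0.0))
print(dominance.head())
```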
Abstract:
Measuring skin temperature (TSK) provides important information about the complex thermal control system and is of interest in studies of thermoregulation. The most common method of recording TSK involves thermocouples at specific locations; however, the use of infrared thermal imaging (IRT) has increased. The two methods use different physical processes to measure TSK, and each has advantages and disadvantages. Therefore, the objective of this study was to compare mean skin temperature (MTSK) measurements using thermocouples and IRT in three different situations: pre-exercise, exercise and post-exercise. Analysis of the residual scores in Bland-Altman plots showed poor agreement between the MTSK obtained using thermocouples and that obtained using IRT. The average error was -0.75 °C pre-exercise, 1.22 °C during exercise and -1.16 °C post-exercise, and the reliability between the methods was low pre-exercise (ICC = 0.75 [0.12 to 0.93]), during exercise (ICC = 0.49 [-0.80 to 0.85]) and post-exercise (ICC = 0.35 [-1.22 to 0.81]). Thus, there is poor correlation between the values of MTSK measured by thermocouples and by IRT pre-, during and post-exercise, and low reliability between the two forms of measurement.
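The Bland-Altman analysis used above reduces to a bias and 95% limits of agreement on paired differences; a minimal sketch on simulated paired MTSK readings (not the study data) follows.

```python
import numpy as np

rng = np.random.default_rng(4)
mtsk_thermocouple = rng.normal(33.0, 0.8, 30)
mtsk_irt = mtsk_thermocouple + rng.normal(1.2, 0.6, 30)  # simulated bias

diff = mtsk_irt - mtsk_thermocouple
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)      # half-width of the 95% limits of agreement

print(f"bias: {bias:.2f} degC")
print(f"limits of agreement: {bias - loa:.2f} to {bias + loa:.2f} degC")
# Plotting the pairwise means against `diff`, with horizontal lines at the
# bias and at the two limits, gives the Bland-Altman plot.
```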
Abstract:
Volcanic activity intervenes in many facets of human activity, and not always negatively. Nonetheless, research into volcanic activity is more often motivated by its danger and risk. There are safety reasons that call for the continued monitoring of volcanic activity in order to guarantee the lives and safety of human settlements near volcanic edifices. This thesis defines and implements a system for monitoring movements of the Earth's crust in the islands of Tenerife and La Palma, where the social impact of an increase (or variation) in volcanic activity would be very severe. Aside from the high population density of the archipelago, the population increases significantly at different periods throughout the year due to tourism, which represents the islands' major source of revenue. The population and the tourist centres are spread mainly along the coasts and also along the flanks of the volcanic edifices. Perhaps the preservation of these social and socio-economic structures is the most important reason justifying the monitoring of volcanic activity in the Canary Islands. Recently, more and more work has been done on attempting to predict volcanic activity using new geodetic monitoring systems, since volcanic activity manifests itself beforehand through deformation of the Earth's crust and changes in the force of gravity in the zone where volcanic events are later recorded. The new devices and sensors developed in recent years in fields such as geodesy, Earth observation from space, and satellite positioning have made it possible to observe and measure both the deformation produced in the terrain and the changes in the force of gravity before, during, and after volcanic events. These new devices and sensors have changed the geodetic techniques and methodologies used before their appearance, renewing classic methods and developing new ones that are now established as proven and recognised methodologies for volcanic monitoring. Since the end of the 1990s, various projects have been developed in the Canary Islands whose principal aims have been the development of new observation and monitoring techniques on the one hand, and the design of an appropriate volcanic monitoring methodology on the other. Presented here are the study and development of GNSS techniques for monitoring crustal deformations and their velocity field for the islands of Tenerife and La Palma. In their implementation, the geodetic and monitoring infrastructure existing in the archipelago was used in order to optimise costs, complemented with new stations to give total coverage of the two islands. The results obtained in the projects described in this report have yielded new perspectives on the geodetic monitoring of volcanic activity and new zones of interest that were previously unknown in the environment of the Canary Islands. 
Special care has been taken with the treatment and propagation of errors during the entire process of observing, measuring, and processing the recorded data, in order to quantify the degree of reliability of the results obtained. In the same vein, the results have been verified against others from satellite radar observation systems, and this study also incorporates the implications that the joint use of radar and GNSS technologies will have for the future monitoring of deformations of the Earth's crust.
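A basic building block of the GNSS monitoring described here is the estimation of a station velocity, with its formal uncertainty, from a position time series. The sketch below fits an offset-plus-rate model by least squares to a simulated daily series; it is a generic illustration, not the thesis's processing chain.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 5, 1 / 365.25)            # 5 years of daily solutions (yr)
true_vel = 3.5                             # mm/yr, e.g. an up component
up = true_vel * t + rng.normal(0, 2.0, t.size)   # 2 mm daily scatter

G = np.column_stack([np.ones_like(t), t])  # design matrix: offset + rate
coef, res, *_ = np.linalg.lstsq(G, up, rcond=None)
sigma2 = res[0] / (t.size - 2)             # a posteriori variance of unit weight
cov = sigma2 * np.linalg.inv(G.T @ G)      # formal covariance of the estimates

print(f"velocity: {coef[1]:.2f} +/- {np.sqrt(cov[1, 1]):.2f} mm/yr")
```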
Abstract:
Accurate and automated methods for measuring the thickness of human cerebral cortex could provide powerful tools for diagnosing and studying a variety of neurodegenerative and psychiatric disorders. Manual methods for estimating cortical thickness from neuroimaging data are labor intensive, requiring several days of effort by a trained anatomist. Furthermore, the highly folded nature of the cortex is problematic for manual techniques, frequently resulting in measurement errors in regions in which the cortical surface is not perpendicular to any of the cardinal axes. As a consequence, it has been impractical to obtain accurate thickness estimates for the entire cortex in individual subjects, or group statistics for patient or control populations. Here, we present an automated method for accurately measuring the thickness of the cerebral cortex across the entire brain and for generating cross-subject statistics in a coordinate system based on cortical anatomy. The intersubject standard deviation of the thickness measures is shown to be less than 0.5 mm, implying the ability to detect focal atrophy in small populations or even individual subjects. The reliability and accuracy of this new method are assessed by within-subject test–retest studies, as well as by comparison of cross-subject regional thickness measures with published values.
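As a toy illustration of surface-based thickness measurement (not the paper's actual algorithm, which handles the folded cortical geometry far more carefully), the sketch below symmetrizes nearest-vertex distances between two point clouds standing in for the white and pial surfaces.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
white = rng.normal(size=(1000, 3))                    # stand-in vertex coords
pial = white + 2.5 * rng.dirichlet([1, 1, 1], 1000)   # ~2.5 mm outward offsets

d_wp = cKDTree(pial).query(white)[0]   # white -> nearest pial vertex (mm)
d_pw = cKDTree(white).query(pial)[0]   # pial -> nearest white vertex (mm)
thickness = (d_wp.mean() + d_pw.mean()) / 2           # symmetrized average

print(f"mean thickness: {thickness:.2f} mm (toy data)")
```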
Abstract:
In this review, the status of measurements of the matter density (Ωm), the vacuum energy density or cosmological constant (ΩΛ), the Hubble constant (H0), and the ages of the oldest measured objects (t0) is summarized. Three independent types of methods for measuring the Hubble constant are considered: the measurement of time delays in multiply imaged quasars, the Sunyaev–Zel'dovich effect in clusters, and Cepheid-based extragalactic distances. Many recent independent dynamical measurements are yielding a low value for the matter density (Ωm ≈ 0.2–0.3), and a wide range of Hubble constant measurements appears to be converging on the range of 60–80 km/s per megaparsec. Areas where improvements are likely to be made soon are highlighted, in particular measurements of anisotropies in the cosmic microwave background. Particular attention is paid to sources of systematic error and the assumptions that underlie many of the measurement methods.
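A quick unit check connects two of the quantities reviewed: the Hubble time 1/H0 sets the rough age scale that must accommodate t0. The arithmetic below assumes nothing beyond t ~ 1/H0.

```python
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16     # seconds in one gigayear

for h0 in (60.0, 80.0):     # km/s per Mpc, the quoted range
    hubble_time_gyr = KM_PER_MPC / h0 / SEC_PER_GYR
    print(f"H0 = {h0:.0f} -> 1/H0 = {hubble_time_gyr:.1f} Gyr")
# H0 = 60 gives about 16.3 Gyr and H0 = 80 about 12.2 Gyr, which is why
# higher H0 values sit in tension with age estimates for the oldest objects.
```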
Abstract:
In recent years, VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sampling period, the set of endogenous variables, and deterministic terms). We use the Granger causality test within VAR models to verify the ability of one variable to predict another; in the case of cointegration, we use VECM models to jointly estimate the long-run and short-run coefficients; and in the case of small data sets and overfitting problems, we use Bayesian VAR models with impulse response functions and variance decomposition to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using specific time-series data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the money endogeneity hypothesis by evaluating the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); and third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced in Chapter 1, which outlines the context, motivation, and purpose of this research, while the structure and synthesis, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro-area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results highlight a causal relationship running from the deviation between the growth rates of nominal GDP and target GDP to changes in three-month market interest rates. The same analysis does not appear to confirm the existence of a significant inverse causal relationship from changes in the market interest rate to the deviation between the growth rates of nominal GDP and target GDP. Similar results were obtained by replacing the market interest rate with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and raises doubts, in more general terms, about the applicability of the Taylor rule and all conventional monetary policy rules to the case in question. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory, and more specifically the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015). 
These lines of research dispute the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP, or nominal income around a "natural equilibrium" level. Rather, they suggest that central banks actually pursue a more complex aim, namely the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the relative solvency of economic units. Chapter 3 analyses loan supply, considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate the endogeneity of money in the short and long run with a study of the United States during the two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, the effects of financial innovation on the lending channel are considered, using the loan series adjusted for securitization in order to verify whether the American banking system is driven to seek cheaper sources of funding, such as securitization, in the event of restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship between the variables in levels and evaluate the effects of the money supply by analysing how much monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and highlighting that economic agents are motivated to increase securitization as a pre-emptive hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the ageing index, and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases. The impulse response functions and variance decomposition highlight a positive relationship: from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely connected with ageing may be the main driver of health expenditure in the short-to-medium run. Good healthcare management helps to improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be necessary to lower the per capita demand for health and social services.
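The Granger-causality exercise of Chapter 2 can be sketched with statsmodels on simulated data. The variable names (gdp_gap for the nominal-GDP growth deviation, rate for the market interest rate) are stand-ins for the thesis's series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(7)
n = 120                                   # quarterly observations
gdp_gap = rng.normal(0, 1, n)
rate = np.zeros(n)
for t in range(1, n):                     # rate responds to the lagged gap
    rate[t] = 0.5 * rate[t - 1] + 0.3 * gdp_gap[t - 1] + rng.normal(0, 0.5)

data = pd.DataFrame({"gdp_gap": gdp_gap, "rate": rate}).diff().dropna()
res = VAR(data).fit(maxlags=4, ic="aic")  # VAR in first differences

# One direction of causality (gap -> rate), then the reverse.
print(res.test_causality("rate", ["gdp_gap"], kind="f").summary())
print(res.test_causality("gdp_gap", ["rate"], kind="f").summary())
```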
Abstract:
The objectives of this research dissertation were to develop and present novel analytical methods for the quantification of surface binding interactions between aqueous nanoparticles and water-soluble organic solutes. Nanoparticle surface interactions are quantified in this work as association constants describing how the solutes interact with the surface of the nanoparticles. By understanding these nanoparticle-solute interactions, in part through association constants, the scientific community will better understand how organic drugs and nanomaterials interact in the environment, as well as their eventual environmental fate. The biological community and the pharmaceutical and consumer product industries also have vested interests in nanoparticle-drug interactions, both for nanoparticle toxicity research and for the use of nanomaterials as drug delivery vesicles. The novel analytical methods presented, applied to nanoparticle surface association chemistry, may prove useful in helping the scientific community understand the risks, benefits, and opportunities of nanoparticles. The development of the analytical methods uses a model nanoparticle, Laponite-RD (LRD). LRD was chosen as the model nanoparticle because of its size, 25 nm in diameter. Caffeine, oxytetracycline (OTC), and quinine were selected as model solutes because of their environmental importance and because their chemical properties can be exploited in the system. All of these chemicals are found in the environment; thus, how they interact with nanoparticles and are transported through the environment is important. The analytical methods developed utilize wide-bore hydrodynamic chromatography to induce a partial hydrodynamic separation between nanoparticles and dissolved solutes. Then, using deconvolution techniques, separate elution profiles for the nanoparticle and the organic solute are obtained. A mass-balance approach then yields association constants between LRD, the model nanoparticle, and the organic solutes. These findings are the first of their kind for LRD and nanoclays in dilute dispersions.
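The mass-balance step can be reduced to a one-line association model once deconvolution has separated the free and total solute concentrations; the sketch below uses a simple 1:1 binding form with invented concentrations, not the dissertation's calibrated values.

```python
def association_constant(total_solute_M, free_solute_M, particle_M):
    """K = [bound] / ([free] [particle]) for a simple 1:1 association model."""
    bound = total_solute_M - free_solute_M
    return bound / (free_solute_M * particle_M)

# Hypothetical numbers: 10 uM caffeine total, 7 uM free, 0.5 uM LRD.
K = association_constant(10e-6, 7e-6, 0.5e-6)
print(f"K_assoc = {K:.2e} M^-1")
```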