884 results for Survival analysis (Biometry) Mathematical models


Relevância: 100.00%

Resumo:

Factors associated with survival were studied in 84 neuropathologically documented cases of the pre-senile dementia frontotemporal lobar degeneration (FTLD) with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP). Kaplan-Meier survival analysis estimated mean survival as 7.9 years (range: 1-19 years, SD = 4.64). Familial and sporadic cases exhibited similar survival, including progranulin (GRN) gene mutation cases. No significant differences in survival were associated with sex, disease onset, Braak disease stage, or disease subtype, but longer survival was associated with lower post-mortem brain weight. Survival was significantly reduced in cases with associated motor neuron disease (FTLD-MND) but increased with Alzheimer's disease (AD) or hippocampal sclerosis (HS) co-morbidity. Cox regression analysis suggested that reduced survival was associated with increased densities of neuronal cytoplasmic inclusions (NCI), while increased survival was associated with greater densities of enlarged neurons (EN) in the frontal and temporal lobes. The data suggest that: (1) survival in FTLD-TDP is longer than is typical in pre-senile dementia but shorter than in some clinical subtypes such as the semantic variant of primary progressive aphasia (svPPA), (2) MND co-morbidity predicts poor survival, and (3) NCI may develop early and EN later in the disease. The data have implications for both the neuropathological characterization and the subtyping of FTLD-TDP.
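The Kaplan-Meier estimator used above multiplies, at each observed death time, the fraction of at-risk cases that survive it. A minimal sketch in pure Python, using invented survival times rather than the study's data:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times:  follow-up time for each case
    events: 1 if the event (death) was observed, 0 if censored
    Returns a list of (time, survival probability) steps.
    """
    data = sorted(zip(times, events))
    n = len(data)
    s = 1.0                # running survival probability
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i
        deaths = 0
        # count deaths (and skip censorings) tied at this time
        j = i
        while j < n and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        i = j
    return curve

# Hypothetical survival times in years (1 = death observed, 0 = censored)
times  = [1, 3, 3, 5, 8, 12, 19]
events = [1, 1, 0, 1, 1, 0, 1]
curve = kaplan_meier(times, events)
```

The curve steps down only at observed deaths; censored cases simply leave the risk set, which is what distinguishes this estimator from a naive survival fraction.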

Relevância: 100.00%

Resumo:

The problem of ranking alternatives compared in pairs arises in social choice theory, statistics, scientometrics, psychology and sport. Based on the international literature, we review the available solution concepts in detail and present ways of handling the questions that arise in practical applications and of building a mathematical framework that fits real data. Particular attention is paid to the definition of the paired comparison matrix, to the main scoring procedures and to their relationships. The paper gives a theoretical analysis of the invariant, fair bets and PageRank methods, which are founded on the Perron-Frobenius theorem, as well as of the internal slackening and positional power procedures proposed for ranking the nodes of a directed graph. An axiomatic approach is recommended for choosing among them: we present characterizations of the invariant and fair bets methods, and discuss some disputable properties of the methods, i.e. their main weaknesses.
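The invariant method mentioned above rates alternatives by the Perron eigenvector of the paired comparison matrix, which plain power iteration recovers. A minimal sketch with an invented three-alternative results matrix:

```python
def rank_by_power_iteration(A, iters=200):
    """Rating vector from a paired-comparison matrix A, where A[i][j] is
    the number of wins of alternative i against j.  Repeatedly applying
    the matrix to a rating vector and normalizing converges to the
    Perron eigenvector (the idea behind the invariant method)."""
    n = len(A)
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        total = sum(new)
        r = [x / total for x in new]
    return r

# Hypothetical results among three alternatives (row beats column counts)
A = [[0, 2, 1],
     [1, 0, 2],
     [1, 0, 0]]
ratings = rank_by_power_iteration(A)
```

Convergence requires the matrix to be irreducible (every alternative connected to every other through results), which is one of the practical questions the paper discusses.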

Relevância: 100.00%

Resumo:

The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of a deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operational conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation was commonly not considered in these models; instead, an initial value of layer thickness and porosity was regularly assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. Drop-wise early condensation on a cold flat plate under natural convection, exposed to warm (room-temperature) humid air, was modeled. A nucleation rate was found, and the relation of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, which is the standard value usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top temperature and weight variation signals can now be used to control defrosting timing, and the developed early nucleation model can now be used to simulate the entire process of frost growth on any surface material.
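The Lewis number discussed above is the ratio of thermal diffusivity to mass diffusivity. As a sketch, approximate textbook properties of humid air near 0 °C give the near-unity value that most frosting models assume (the study's finding is that the effective value during early nucleation is much smaller):

```python
def lewis_number(k, rho, cp, d_ab):
    """Lewis number Le = alpha / D_AB, the ratio of thermal diffusivity
    alpha = k / (rho * cp) to the mass diffusivity D_AB."""
    alpha = k / (rho * cp)
    return alpha / d_ab

# Illustrative, approximate properties of air near 0 degC
k    = 0.024      # thermal conductivity, W/(m K)
rho  = 1.29       # density, kg/m^3
cp   = 1005.0     # specific heat, J/(kg K)
d_ab = 2.2e-5     # water vapor-in-air diffusivity, m^2/s

le = lewis_number(k, rho, cp, d_ab)
```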

Relevância: 100.00%

Resumo:

Wireless sensor networks are emerging as effective tools in the gathering and dissemination of data. They can be applied in many fields including health, environmental monitoring, home automation and the military. As with all other computing systems, it is necessary to include security features so that security-sensitive data traversing the network is protected. However, traditional security techniques cannot be applied to wireless sensor networks, owing to the constraints on battery power, memory and computational capacity of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. It presents a trust-based cluster head election mechanism used to elect new cluster heads. This solution prevents a major security breach against the routing protocol, namely the election of malicious or compromised cluster heads. The dissertation also describes a location-aware, trust-based mechanism for detecting and isolating compromised nodes. Both of these mechanisms rely on the ability of a node to monitor its neighbors. Using neighbor monitoring techniques, nodes are able to determine their neighbors' reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks, and the feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks, for which a scalable, lightweight cooperation enforcement algorithm using evolutionary game theory is designed. The effectiveness of this algorithm is validated through mathematical analysis and simulation.
This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models. In doing so, we have enabled others to build on our work towards the creation of highly trusted wireless sensor networks, facilitating their full utilization in fields ranging from civilian to military applications.
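The probabilistic neighbor-reputation idea described above can be sketched with a beta-reputation update, a standard technique in trust systems; the observation counts and the eligibility threshold below are illustrative assumptions, not the dissertation's actual parameters:

```python
def beta_trust(cooperative, misbehaving):
    """Trust level of a neighbor from counts of observed cooperative and
    misbehaving interactions: the mean of a Beta(a, b) posterior with a
    uniform Beta(1, 1) prior."""
    a = cooperative + 1
    b = misbehaving + 1
    return a / (a + b)

# A node rates two neighbors from its monitoring history (invented counts)
honest_trust = beta_trust(cooperative=18, misbehaving=2)   # mostly forwards packets
rogue_trust  = beta_trust(cooperative=3, misbehaving=17)   # mostly drops packets

TRUST_THRESHOLD = 0.5   # illustrative cutoff for cluster-head eligibility
eligible = honest_trust >= TRUST_THRESHOLD
```

A cluster-head election would then consider only neighbors whose trust exceeds the threshold, excluding the rogue node above.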

Relevância: 100.00%

Resumo:

Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received much attention in recent years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases, based on Petri net testing theory, to be used in the testing process at the design level.
Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (the SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
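SAM behavioral models are expressed as Petri nets, whose basic execution rule (a transition fires by consuming tokens from its input places and producing tokens in its output places) can be sketched directly; the toy connector below is an invented example, not a SAM model:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places (pre) and
    deposit tokens in output places (post), returning the new marking."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy connector: a request token and a free resource become a busy resource
marking = {"request": 1, "free": 1, "busy": 0}
acquire_pre  = {"request": 1, "free": 1}
acquire_post = {"busy": 1}

marking2 = fire(marking, acquire_pre, acquire_post)
```

Enumerating the markings reachable through such firings is exactly the state space that a model checker like Spin explores after translation.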


Relevância: 100.00%

Resumo:

The principal effluent of the oil industry is produced water, which commonly accompanies the produced oil. Its volume of production is pronounced, and it can affect the environment and society if its discharge is inappropriate; careful management of this effluent is therefore indispensable. The traditional treatment of produced water usually includes two techniques, flocculation and flotation. In flocculation processes, the traditional flocculant agents are not well characterized in technical data sheets and remain expensive. The flotation process is the step in which the suspended particles in the effluent are separated. Dissolved air flotation (DAF) is a technique that has been consolidating itself economically and environmentally, presenting great reliability when compared with other processes, and it is widely used in various fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the amount of oil and grease (TOG) in produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as for its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments followed an experimental design, and the overall efficiencies of all flocculants were treated through statistical calculation using the STATISTICA software, version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG (total oil and greases) removal efficiency. The design also allowed mathematical models to be obtained for calculating the response variable under the studied conditions.
The commercial flocculants showed similar behavior, with an average overall efficiency of 90% for oil removal; the economic analysis is therefore the decisive factor in choosing between them. The natural alternative flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, this flocculant causes less environmental impact and is less expensive.
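The removal efficiency used as the response variable above is the fractional reduction in TOG across treatment. A sketch with invented inlet and outlet concentrations chosen only to reproduce the reported efficiency levels:

```python
def removal_efficiency(tog_in, tog_out):
    """Percent removal of total oil and greases (TOG) across treatment."""
    return 100.0 * (tog_in - tog_out) / tog_in

# Invented inlet/outlet concentrations (mg/L) for the two flocculant classes
commercial = removal_efficiency(tog_in=250.0, tog_out=25.0)   # ~90 % removal
moringa    = removal_efficiency(tog_in=250.0, tog_out=75.0)   # ~70 % removal
```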

Relevância: 100.00%

Resumo:

In this work, I studied and found the exact solutions of a mathematical model applied to the cell receptors of the integrin family. In the model, integrins are treated as a two-state system, active and inactive. When integrins are in the inactive state they can diffuse in the membrane, whereas in the active state they are crystallized in the membrane, unable to diffuse. A change in the concentration, on the cell surface, of a substance called the activator triggers integrin activation. In addition, these heterodimers can bind an inhibitory molecule with control and regulation functions, which we call v; by binding to the receptor, it increases the production of the activating substance, which we call u. In this way a positive feedback mechanism is triggered. The inhibitor v regulates the production mechanism of u and therefore acts as a modulator: thanks to this fine regulation, the positive feedback mechanism is able to limit itself. A system of differential equations is then built starting from the simple chemical reactions involved. Once the system of equations is set up, the solutions for the concentrations of the inhibitor and the activator can be derived for a particular choice of the parameters. Finally, a test can be performed to see what the model predicts in terms of integrins. To do so, I used a step-function activation, inserted it into the system, and evaluated the receptor dynamics. The result agrees with the predictions: bound integrins are found mainly at the edges of the activated zone, while free integrins are depleted within the activated zone.
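The self-limiting positive feedback between activator u and modulator v can be sketched with a toy pair of rate equations integrated by the Euler method; the saturating rate laws and constants below are illustrative assumptions, not the thesis model:

```python
def simulate(u0, v0, alpha=1.0, beta=1.0, mu=0.5, nu=0.5, dt=0.01, steps=5000):
    """Euler integration of a toy activator (u) / modulator (v) pair.
    Each species promotes the other's production through a saturating
    term, so the positive feedback loop limits itself and the system
    settles onto a finite steady state."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha * v / (1.0 + v) - mu * u   # v promotes production of u
        dv = beta * u / (1.0 + u) - nu * v    # u promotes production of v
        u += dt * du
        v += dt * dv
    return u, v

u, v = simulate(0.1, 0.1)
```

With these symmetric constants the feedback settles at u = v = 1 rather than diverging, illustrating the modulating role of v.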

Relevância: 100.00%

Resumo:

Skeletal muscle consists of muscle fiber types that have different physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in the skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimuli to which it is subjected. To identify the different proportions of fiber types in the muscle composition, many studies use biopsy as the standard procedure. As surface electromyography (sEMG) allows information to be extracted about the recruitment of different motor units, this study is based on the assumption that it is possible to use the EMG to identify different proportions of fiber types in a muscle. The goal of this study was to identify the characteristics of the EMG signal that can distinguish, most precisely, different proportions of fiber types; the combination of characteristics through appropriate mathematical models was also investigated. To achieve this objective, signals were simulated with different proportions of recruited motor units and different signal-to-noise ratios. Thirteen characteristics in the time and frequency domains were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were applied to select the characteristics able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as able to distinguish, with most precision, the proportions of the different muscle fiber types.
Subsequently, the features considered most able were analyzed jointly through principal component analysis. Two principal components were found for the emulated signals without noise and two for the noisy signals; in each case, the first principal component was identified as being able to distinguish different proportions of muscle fiber types. The selected characteristics (median frequency, mean frequency and the first principal components) were then used to analyze real EMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results obtained for the different groups of volunteers show that the physically active people exhibited higher values of mean frequency, median frequency and principal components than the sedentary people. Moreover, these values decreased with increasing power level for both groups; however, the decline was more accentuated for the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. Finally, we can conclude that the selected characteristics were able to distinguish different proportions of muscle fiber types, for both the emulated and the real signals. These characteristics can be used in several types of study, for example to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, or to analyze the development of athletes seeking to improve their muscle capacity according to their sport. In both cases, extracting these characteristics from surface electromyography signals provides feedback to the physiotherapist or physical trainer, who can monitor the increase in the proportion of a given fiber type, as desired in each case.
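The two selected spectral features, mean frequency (MNF) and median frequency (MDF), are computed from the EMG power spectrum as a power-weighted mean and a half-power split point, respectively. A sketch over an invented spectrum:

```python
def mean_and_median_frequency(freqs, power):
    """Mean frequency (MNF) and median frequency (MDF) of an EMG power
    spectrum.  MNF is the power-weighted mean frequency; MDF is the
    frequency that splits the spectral power into two equal halves."""
    total = sum(power)
    mnf = sum(f * p for f, p in zip(freqs, power)) / total
    cum = 0.0
    for f, p in zip(freqs, power):
        cum += p
        if cum >= total / 2.0:
            return mnf, f
    return mnf, freqs[-1]

# Invented spectrum: power concentrated around 80-120 Hz
freqs = [40, 80, 120, 160, 200]
power = [1.0, 4.0, 3.0, 1.5, 0.5]
mnf, mdf = mean_and_median_frequency(freqs, power)
```

Higher proportions of fast type II fibers shift spectral power upward, raising both MNF and MDF, which is the effect the study exploits.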

Relevância: 100.00%

Resumo:

This study aims to evaluate the uncertainty associated with measurements made with an aneroid sphygmomanometer, a neonatal electronic balance and an electrocautery unit. To this end, repeatability tests were performed on all devices, followed by normality tests using the Shapiro-Wilk test; identification of the influencing factors that affect each measurement result; proposition of mathematical models to calculate the measurement uncertainty associated with the measurements evaluated for all the equipment, and the calibration uncertainty for the neonatal electronic balance; evaluation of the measurement uncertainty; and development of a computer program in the Java language to systematize the estimation of calibration and measurement uncertainties. A 2³ factorial design was proposed and carried out for the aneroid sphygmomanometer in order to investigate the effects of the factors temperature, patient and operator, and a 3² design for the electrocautery unit, in which the effects of temperature and electrical output power were investigated. The expanded uncertainty associated with the measurement of blood pressure significantly reduced the width of the patient classification ranges. In turn, the expanded uncertainty associated with mass measurement on the neonatal balance indicated a variation of about 1% in the dosage of medication for neonates. Analysis of variance (ANOVA) and the Tukey test indicated significant, inversely proportional effects of the temperature factor on the cutting and coagulation power values indicated by the electrocautery unit, and no significant effect of the factors investigated for the aneroid sphygmomanometer.
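The expanded uncertainties reported above follow the usual GUM recipe: combine the component standard uncertainties in quadrature, then multiply by a coverage factor. A sketch with an invented uncertainty budget:

```python
import math

def expanded_uncertainty(u_components, k=2.0):
    """GUM-style combined and expanded uncertainty: component standard
    uncertainties are combined in quadrature, then multiplied by a
    coverage factor k (k = 2 gives roughly 95 % coverage for a normal
    distribution)."""
    u_c = math.sqrt(sum(u * u for u in u_components))
    return u_c, k * u_c

# Illustrative budget for a sphygmomanometer reading (mmHg): repeatability,
# resolution and reference-calibration contributions, all invented values.
u_c, U = expanded_uncertainty([1.5, 0.6, 1.0])
```

Reporting a reading as, say, (120 ± U) mmHg is what narrows the usable width of the blood-pressure classification ranges mentioned above.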

Relevância: 100.00%

Resumo:

A landfill represents a complex and dynamically evolving structure that can be stochastically perturbed by exogenous factors. Both thermodynamic (equilibrium) and time-varying (non-steady state) properties of a landfill are affected by spatially heterogeneous and nonlinear subprocesses that combine with constraining initial and boundary conditions arising from the associated surroundings. While multiple attempts have been made to model landfill statistics by incorporating spatially dependent parameters on the one hand (the data-based approach) and continuum dynamical mass-balance equations on the other (equation-based modelling), practically no attempt has been made to amalgamate these two approaches while also incorporating the inherent stochastically induced fluctuations affecting the process overall. In this article, we implement a minimalist scheme for modelling the time evolution of a realistic three-dimensional landfill through a reaction-diffusion based approach, focusing on the coupled interactions of four key variables: solid mass density, hydrolysed mass density, acetogenic mass density and methanogenic mass density, which are themselves stochastically affected by fluctuations, coupled with diffusive relaxation of the individual densities, in ambient surroundings. Our results indicate that, close to the linearly stable limit, the large-time steady-state properties, arising out of a series of complex coupled interactions between the stochastically driven variables, are scarcely affected by the biochemical growth-decay statistics. Our results clearly show that an equilibrium landfill structure is primarily determined by the solid and hydrolysed mass densities only, rendering the other variables statistically "irrelevant" in this (large-time) asymptotic limit.
The other major implication of incorporating stochasticity in the landfill evolution dynamics is the greatly reduced production times of the plants, now approximately 20-30 years instead of the 50 years and above predicted by previous deterministic models. The predictions from this stochastic model are in conformity with available experimental observations.
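The stochastic perturbation of a single density can be sketched with an Euler-Maruyama step on a first-order decay; the equation and constants below are a minimal stand-in for illustration, not the article's coupled four-variable reaction-diffusion model:

```python
import random

def euler_maruyama_decay(m0, k=0.15, sigma=0.05, dt=0.1, steps=300, seed=42):
    """Euler-Maruyama integration of a stochastically perturbed
    first-order decay dm = -k m dt + sigma m dW, a minimal stand-in for
    one density (e.g. hydrolysed mass) under multiplicative noise."""
    rng = random.Random(seed)
    m = m0
    path = [m]
    for _ in range(steps):
        dw = rng.gauss(0.0, 1.0) * dt ** 0.5   # Wiener increment
        m += -k * m * dt + sigma * m * dw
        path.append(m)
    return path

path = euler_maruyama_decay(100.0)
```

Averaging many such noisy paths, rather than a single deterministic one, is the kind of calculation through which stochasticity can shift predicted time scales.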

Relevância: 100.00%

Resumo:

Dengue is an important vector-borne virus that infects on the order of 400 million individuals per year. Infection with one of the virus's four serotypes (denoted DENV-1 to 4) may be silent, result in symptomatic dengue 'breakbone' fever, or develop into the more severe dengue hemorrhagic fever/dengue shock syndrome (DHF/DSS). Extensive research has therefore focused on identifying factors that influence dengue infection outcomes. It has been well-documented through epidemiological studies that DHF is most likely to result from a secondary heterologous infection, and that individuals experiencing a DENV-2 or DENV-3 infection typically are more likely to present with more severe dengue disease than those individuals experiencing a DENV-1 or DENV-4 infection. However, a mechanistic understanding of how these risk factors affect disease outcomes, and further, how the virus's ability to evolve these mechanisms will affect disease severity patterns over time, is lacking. In the second chapter of my dissertation, I formulate mechanistic mathematical models of primary and secondary dengue infections that describe how the dengue virus interacts with the immune response and the results of this interaction on the risk of developing severe dengue disease. I show that only the innate immune response is needed to reproduce characteristic features of a primary infection whereas the adaptive immune response is needed to reproduce characteristic features of a secondary dengue infection. I then add to these models a quantitative measure of disease severity that assumes immunopathology, and analyze the effectiveness of virological indicators of disease severity. In the third chapter of my dissertation, I then statistically fit these mathematical models to viral load data of dengue patients to understand the mechanisms that drive variation in viral load. 
I specifically consider the roles that immune status, clinical disease manifestation, and serotype may play in explaining viral load variation observed across the patients. With this analysis, I show that there is statistical support for the theory of antibody dependent enhancement in the development of severe disease in secondary dengue infections and that there is statistical support for serotype-specific differences in viral infectivity rates, with infectivity rates of DENV-2 and DENV-3 exceeding those of DENV-1. In the fourth chapter of my dissertation, I integrate these within-host models with a vector-borne epidemiological model to understand the potential for virulence evolution in dengue. Critically, I show that dengue is expected to evolve towards intermediate virulence, and that the optimal virulence of the virus depends strongly on the number of serotypes that co-circulate. Together, these dissertation chapters show that dengue viral load dynamics provide insight into the within-host mechanisms driving differences in dengue disease patterns and that these mechanisms have important implications for dengue virulence evolution.
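The primary-infection models described above build on standard target-cell-limited viral dynamics. A minimal Euler-integrated sketch (with illustrative parameters, not fitted dengue estimates) reproduces the characteristic rise and fall of viral load:

```python
def viral_dynamics(T0=1e7, I0=0.0, V0=10.0, beta=3e-8, delta=1.0,
                   p=100.0, c=5.0, dt=0.001, days=12):
    """Euler integration of a basic target-cell-limited within-host model:
        dT/dt = -beta T V          (target cells infected)
        dI/dt =  beta T V - delta I  (infected cells die at rate delta)
        dV/dt =  p I - c V           (virions produced and cleared)
    Returns final target cells, final viral load and peak viral load."""
    T, I, V = T0, I0, V0
    peak = V
    steps = int(round(days / dt))
    for _ in range(steps):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
        peak = max(peak, V)
    return T, V, peak

T_end, V_end, V_peak = viral_dynamics()
```

The viral load grows exponentially while target cells are plentiful, peaks as they deplete, and then declines, which is the basic shape that the immune-response terms of the dissertation's models refine.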

Relevância: 100.00%

Resumo:

Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with a particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions but also from the size of the information involved. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.

Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 proposes a new criterion for robust parameter estimation, and several inference procedures are shown to satisfy it. Chapter 5 develops a new prior that satisfies further criteria and is therefore proposed for use in practice.

Relevância: 100.00%

Resumo:

Background: Delirium is highly prevalent, especially in older patients. It independently leads to adverse outcomes, but remains under-detected, particularly in its hypoactive forms. Although early identification and intervention are important, delirium prevention is key to improving outcomes. The concept of a delirium prodrome has been mooted for decades, but remains poorly characterised. Greater understanding of this prodrome would promote prompt identification of delirium-prone patients and facilitate improved strategies for delirium prevention and management. Methods: Medical inpatients aged ≥70 years were screened for prevalent delirium using the Revised Delirium Rating Scale (DRS-R98). Those without prevalent delirium were assessed daily for delirium development, prodromal features and motor subtype. Survival analysis models identified which prodromal features predicted the emergence of incident delirium in the cohort in the first week of admission. The Delirium Motor Subtype Scale-4 was used to ascertain motor subtype. Results: Of 555 patients approached, 191 were included in the prospective study. The median age was 80 (IQR 10) and 101 (52.9%) were male. Sixty-one patients developed incident delirium within a week of admission. Several prodromal features predicted delirium emergence in the cohort. Firstly, using a novel Prodromal Checklist based on the existing literature, and controlling for confounders, seven predictive behavioural features were identified in the prodromal period (for example, increasing confusion and being easily distractible). Additionally, using serial cognitive tests and the DRS-R98 daily, multiple cognitive and other core delirium features were detected in the prodrome (for example, inattention and sleep-wake cycle disturbance). Examining longitudinal motor subtypes in delirium cases, subtypes were found to be predominantly stable over time, the most prevalent being the hypoactive subtype (62.3%).
Discussion: This thesis explored multiple aspects of delirium in older medical inpatients, with particular focus on the characterisation of the delirium prodrome. These findings should help to inform future delirium educational programmes, as well as detection and prevention strategies.