974 results for Bivariate Exponential


Relevance:

10.00%

Publisher:

Abstract:

Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity—meaning that the likelihood a new node in a growing graph will connect to an existing node is proportional to the existing node’s degree. In this paper we ask whether a more general explanation than the B-A model, and absent the assumption of preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet, and in the number of ASes. We then show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS’s link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate such a model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
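
The core argument, that exponential growth in both the number of ASes and in the size of each AS yields a power-law size (and hence degree) tail, can be illustrated with a short simulation. The sketch below is not the paper's model fit; the growth rates are arbitrary placeholders rather than the empirically derived estimates mentioned above.

```python
# Minimal simulation sketch (not the authors' code): AS "births" arrive at an
# exponentially growing rate, each AS then grows exponentially until the
# observation time T, so size = exp(lambda_size * age) with ages roughly
# exponentially distributed, giving a Pareto (power-law) tail.
import numpy as np

rng = np.random.default_rng(0)

T = 10.0            # observation time (arbitrary units)
lambda_birth = 1.0  # growth rate of the number of ASes (placeholder)
lambda_size = 0.8   # growth rate of an individual AS's size (placeholder)
n_as = 50_000

# If the AS population grows like exp(lambda_birth * t), birth times have
# density proportional to exp(lambda_birth * t) on [0, T]; sample by inverse CDF.
u = rng.random(n_as)
birth = np.log(1 + u * (np.exp(lambda_birth * T) - 1)) / lambda_birth

age = T - birth
size = np.exp(lambda_size * age)

# Hill estimator on the largest sizes; the theory above predicts a tail
# exponent of roughly lambda_birth / lambda_size.
tail = np.sort(size)[-2000:]
alpha_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated tail exponent ~ {alpha_hat:.2f}, "
      f"theory ~ {lambda_birth / lambda_size:.2f}")
```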

Relevance:

10.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols to allow resource needs and availability information to be collected and disseminated so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
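
As a purely illustrative sketch of the kind of components named above (a Resource Registry that collects availability information and a Resource Management Interface that matches task needs against it), the Python below uses hypothetical class and method names, with a trivial first-fit policy standing in for the family of real-time scheduling algorithms; it is not the project's actual API.

```python
# Hypothetical sketch of a Resource Registry and a Resource Management
# Interface (RMI) of the kind described above; names and the first-fit
# policy are illustrative assumptions, not the project's actual design.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Resource:
    name: str
    capacity: float      # abstract units, e.g. CPU share or bandwidth
    available: float

@dataclass
class Task:
    name: str
    demand: float        # capacity the task needs
    deadline: float      # real-time constraint, seconds from submission

class ResourceRegistry:
    """Collects and disseminates resource availability information."""
    def __init__(self) -> None:
        self._resources: Dict[str, Resource] = {}

    def register(self, resource: Resource) -> None:
        self._resources[resource.name] = resource

    def candidates(self, demand: float) -> List[Resource]:
        return [r for r in self._resources.values() if r.available >= demand]

class ResourceManagementInterface:
    """Matches task needs against registered resources."""
    def __init__(self, registry: ResourceRegistry) -> None:
        self.registry = registry

    def schedule(self, task: Task) -> Optional[Resource]:
        # First fit only; a real system would pick from a family of
        # algorithms to meet real-time and reliability constraints.
        for resource in self.registry.candidates(task.demand):
            resource.available -= task.demand
            return resource
        return None

registry = ResourceRegistry()
registry.register(Resource("server-a", capacity=8.0, available=8.0))
rmi = ResourceManagementInterface(registry)
print(rmi.schedule(Task("render-job", demand=2.0, deadline=5.0)))
```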

Relevance:

10.00%

Publisher:

Abstract:

The isomorphisms holding in all models of the simply typed lambda calculus with surjective pairing and terminal object are well studied - these models are exactly the Cartesian closed categories. Isomorphism of two simple types in such a model is decidable by reduction to a normal form and comparison under a finite number of permutations (Bruce, Di Cosmo, and Longo 1992). Unfortunately, these normal forms may be exponentially larger than the original types, so this construction decides isomorphism in exponential time. We show how space-sharing/hash-consing techniques and memoization can be used to decide isomorphism in practical polynomial time (low degree, small hidden constant). Other researchers have investigated simple type isomorphism in relation to, among other potential applications, type-based retrieval of software modules from libraries and automatic generation of bridge code for multi-language systems. Our result makes such potential applications practically feasible.
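
The sketch below illustrates only the space-sharing/hash-consing and memoization machinery, not the full isomorphism decision procedure with its permutation and currying laws: structurally equal type expressions are interned so they share a single node, and traversals memoized on node identity are therefore sound and cheap. The constructor names are illustrative.

```python
# Hash-consing plus memoization sketch (not the paper's algorithm).
from functools import lru_cache

class Type:
    _table = {}                      # the hash-consing (interning) table

    def __new__(cls, head, *args):
        key = (head, args)           # args are already interned Type nodes
        node = cls._table.get(key)
        if node is None:
            node = super().__new__(cls)
            node.head, node.args = head, args
            cls._table[key] = node
        return node

def unit():           return Type("unit")
def base(name):       return Type(name)
def prod(a, b):       return Type("*", a, b)
def arrow(a, b):      return Type("->", a, b)

@lru_cache(maxsize=None)             # memoized on interned node identity
def size(t: Type) -> int:
    """Example of a memoized traversal over shared type nodes."""
    return 1 + sum(size(a) for a in t.args)

# Sharing in action: structurally equal expressions build the *same* object,
# so any memoized result for one is reused for the other.
t1 = arrow(prod(base("a"), base("b")), base("c"))
t2 = arrow(prod(base("a"), base("b")), base("c"))
assert t1 is t2
assert prod(base("a"), unit()) is prod(base("a"), unit())
print(size(t1))
```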

Relevance:

10.00%

Publisher:

Abstract:

The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code. We consider an Ambient Calculus where ambients transport and exchange programs rather than just inert data. We propose different senses in which such a calculus can be said to be polymorphically typed, and accordingly design a polymorphic type system for it. Our type system assigns types to embedded programs and what we call behaviors to processes; a denotational semantics of behaviors, here called trace semantics, is then proposed, underlying much of the remaining analysis. We state and prove a Subject Reduction property for our polymorphically typed calculus. Based on techniques borrowed from finite automata theory, type-checking of fully type-annotated processes is shown to be decidable; the time complexity of our decision procedure is exponential (this is a worst case in theory, arguably not encountered in practice). Our polymorphically typed calculus is a conservative extension of the typed Ambient Calculus originally proposed by Cardelli and Gordon.
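
To make the objects under discussion concrete, the sketch below renders the standard mobile-ambient syntax of Cardelli and Gordon as plain data structures, with an output form that carries a process (a program) rather than inert data, as in the calculus considered here. The constructor names are illustrative; the polymorphic type system and trace semantics themselves are not reproduced.

```python
# Data-structure sketch of mobile-ambient syntax; not the paper's type system.
from dataclasses import dataclass

class Process: ...
class Capability: ...

@dataclass(frozen=True)
class In(Capability):   ambient: str                    # "in n": enter ambient n
@dataclass(frozen=True)
class Out(Capability):  ambient: str                    # "out n": exit ambient n
@dataclass(frozen=True)
class Open(Capability): ambient: str                    # "open n": dissolve ambient n

@dataclass(frozen=True)
class Nil(Process): pass                                # inactive process 0
@dataclass(frozen=True)
class Par(Process):     left: Process; right: Process   # P | Q
@dataclass(frozen=True)
class Ambient(Process): name: str; body: Process        # n[P]
@dataclass(frozen=True)
class Prefix(Process):  cap: Capability; cont: Process  # M.P
@dataclass(frozen=True)
class Output(Process):  message: Process                # <P>: exchange a program
@dataclass(frozen=True)
class Input(Process):   var: str; cont: Process         # (x).P

# Example: an ambient "a" that moves into "b" and then offers a program.
example = Ambient("a", Prefix(In("b"), Output(Nil())))
print(example)
```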

Relevance:

10.00%

Publisher:

Abstract:

The response of Lactococcus lactis subsp. cremoris NCDO 712 to low water activity (aw) was investigated, both in relation to growth following moderate reductions in the aw and in terms of survival following substantial reduction of the aw with NaCl. Lc. lactis NCDO 712 was capable of growth in the presence of ≤ 4% w/v NaCl, and concentrations in excess of 4% w/v were lethal to the cells. The presence of magnesium ions significantly increased the resistance of NCDO 712 to challenge with NaCl and also to challenge with high temperature or low pH. Survival of Lc. lactis NCDO 712 exposed to high NaCl concentrations was growth-phase dependent and cells were most sensitive in the early exponential phase of growth. Pre-exposure to 3% w/v NaCl induced limited protection against subsequent challenge with higher NaCl concentrations. The induction was inhibited by chloramphenicol and, even when induced, the response did not protect against NaCl concentrations > 10% w/v. When growing at low aw, Lc. lactis NCDO 712 accumulated potassium if the aw was reduced by glucose or fructose, but not by NaCl. Reducing the potassium concentration of chemically defined medium from 20 to 0.5 mM produced a substantial reduction in the growth rate if the aw was reduced with NaCl, but not with glucose or fructose. The reduction of the growth rate correlated strongly with a reduction in the cytoplasmic potassium concentration and in cell volume. Addition of the compatible solute glycine betaine partially reversed the inhibition of growth rate and partially restored the cell volume. The potassium transport system was characterised in cells grown in medium at both high and low aw. It appeared that a single system was present, which was induced approximately two-fold by growth at low aw. Potassium transport was assayed in vitro using cells depleted of potassium; the assay was competitively inhibited by Na+ and by the other monovalent cations NH4+, Li+, and Cs+. There was a strong correlation between the ability of strains of Lc. lactis subsp. lactis and subsp. cremoris to grow at low aw and their ability to accumulate the compatible solute glycine betaine. The Lc. lactis subsp. cremoris strains incapable of growth at NaCl concentrations > 2% w/v did not accumulate glycine betaine when growing at low aw, whereas strains capable of growth at NaCl concentrations up to 4% w/v did. A mutant extremely sensitive to low aw was isolated from the parent strain Lc. lactis subsp. cremoris MG 1363, a plasmid-free derivative of NCDO 712. The parent strain tolerated up to 4% w/v NaCl and actively accumulated glycine betaine when challenged at low aw. The mutant had lost the ability to accumulate glycine betaine and was incapable of growth at NaCl concentrations > 2% w/v or the equivalent concentration of glucose. As no other compatible solute seemed capable of substituting for glycine betaine, the data suggest that the traditional phenotypic speciation of strains on the basis of tolerance to 4% w/v NaCl can be explained as possession or lack of a glycine betaine transport system.

Relevance:

10.00%

Publisher:

Abstract:

The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d’Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c exp(Ax) b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is only unique up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 for a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European Options with specific times to maturity. Formulae for discretely monitored Lookback Options and 2-Period Bermudan Options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
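
As a small numerical illustration of the (A, b, c) representation (a Python/SciPy sketch rather than the MATLAB scripts mentioned above), the triple below is an arbitrary choice that realises f(x) = c exp(Ax) b = x e^(-x), a valid density on [0, ∞); it is not one of the fitted 2-EPT models.

```python
# Evaluate an EPT function from a minimal realization (A, b, c).
import numpy as np
from scipy.integrate import trapezoid
from scipy.linalg import expm

# Illustrative minimal triple; here f(x) = c exp(Ax) b = x * exp(-x).
A = np.array([[-1.0, 1.0],
              [ 0.0, -1.0]])
b = np.array([[0.0],
              [1.0]])
c = np.array([[1.0, 0.0]])

def ept(x: float) -> float:
    """Evaluate the EPT function f(x) = c e^{Ax} b at a single point x."""
    return (c @ expm(A * x) @ b).item()

xs = np.linspace(0.0, 20.0, 401)
vals = np.array([ept(x) for x in xs])
print(np.allclose(vals, xs * np.exp(-xs)))      # matches the closed form
print(trapezoid(vals, xs))                      # ~1.0: (nearly) a density
```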

Relevance:

10.00%

Publisher:

Abstract:

Soft X-ray transients (SXTs) are a subgroup of low-mass X-ray binaries consisting of a neutron star or a black hole and a companion low-mass star. SXTs exhibit sudden outbursts, increasing in luminosity from ~10^33 to ~10^36-38 erg s^-1. After spending a few months in outburst, SXTs switch back to quiescence. Optical study of the binary system during the quiescent state of SXTs provides an opportunity to discriminate between black-hole binaries and neutron-star binaries. The first part of this research comprises results from a 10-year joint project between the Hubble Space Telescope and Chandra to study SXTs in M31. The other part of this thesis focuses on the light curves of bright SXTs in M31. Disc irradiation is thought to be capable of explaining the global behaviour of the light curves of SXTs. Depending on the strength of the central X-ray emission in irradiating the disc, the light curve may exhibit an exponential or a linear decay. The model predicts that in brighter transients a transition from an exponential decline to a linear one may be detectable. In this study, having excluded super-soft sources and hard X-ray transients, a sample of bright SXTs in M31 (L_peak > 10^38 erg s^-1) has been studied. The expected change in the shape of the decay function is observed in only two of the six light curves in the sample. Also, a systematic correlation between the shape of the light curve and the X-ray luminosity has not been seen.
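
The exponential-then-linear decay expected from the irradiated-disc picture can be sketched with assumed functional forms; the parameter values and the smooth matching at the break below are illustrative choices, not the thesis's light-curve fits.

```python
# Schematic exponential-then-linear decay profile (assumed forms only).
import numpy as np

def sxt_decay(t, L0=1e38, tau=30.0, t_break=60.0):
    """Luminosity (erg/s) vs time (days): exponential, then linear decline.

    The linear branch is matched in value and slope at t_break so the curve
    is smooth there; L0, tau and t_break are placeholder parameters.
    """
    t = np.asarray(t, dtype=float)
    L_break = L0 * np.exp(-t_break / tau)
    slope = L_break / tau                        # matches dL/dt at the break
    linear = L_break - slope * (t - t_break)
    L = np.where(t < t_break, L0 * np.exp(-t / tau), linear)
    return np.clip(L, 0.0, None)                 # luminosity cannot go negative

t = np.linspace(0.0, 150.0, 301)
L = sxt_decay(t)
print(L[0], L[-1])
```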

Relevance:

10.00%

Publisher:

Abstract:

The landscape of late medieval Ireland, like most places in Europe, was characterized by intensified agricultural exploitation, the growth and founding of towns and cities and the construction of large stone edifices, such as castles and monasteries. None of these could have taken place without iron. Axes were needed for clearing woodland, ploughs for turning the soil, saws for wooden buildings and hammers and chisels for the stone ones, none of which could realistically have been made from any other material. The many battles, waged with ever more sophisticated weaponry, needed a steady supply of iron and steel. During the same period, the European iron industry itself underwent its most fundamental transformation since its inception; at the beginning of the period it was almost exclusively based on small furnaces producing solid blooms, and by the turn of the seventeenth century it was largely based on liquid-iron production in blast furnaces the size of a house. One of the great advantages of studying the archaeology of ironworking is that its main residue, slag, is often produced in copious amounts both during smelting and smithing, is virtually indestructible and has very little secondary use. This means that most sites where ironworking was carried out are readily recognizable as such by the occurrence of this slag. Moreover, visual examination can distinguish between various types of slag, which are often characteristic of the activity from which they derive. The ubiquity of ironworking in the period under study further means that we have large amounts of residues available for study, allowing us to distinguish patterns both inside assemblages and between sites. Disadvantages of the nature of the remains related to ironworking include the poor preservation of the installations used, especially the furnaces, which were often built out of clay and located above ground. Added to this are the many parameters contributing to the formation of the above-mentioned slag, making its composition difficult to connect to a certain technology or activity. Ironworking technology in late medieval Ireland has thus far not been studied in detail. Much of the archaeological literature on the subject is still tainted by the erroneous attribution of the main type of slag, bun-shaped cakes, to smelting activities. The large-scale infrastructure works of the first decade of the twenty-first century have led to an exponential increase in the number of sites available for study. At the same time, much of the material related to metalworking recovered during these boom years was subjected to specialist analysis. This has led to a near-complete overhaul of our knowledge of early ironworking in Ireland. Although many of these new insights are quickly seeping into the general literature, no concise overviews of the current understanding of early Irish ironworking technology have been published to date. The above then presented a unique opportunity to apply these new insights to the extensive body of archaeological data we now possess. The resulting archaeological information was supplemented with, and compared to, that contained in the historical sources relating to Ireland for the same period. This added insights into aspects of the industry often difficult to grasp solely through the archaeological sources, such as the people involved and the trade in iron.
Additionally, overviews on several other topics, such as a new distribution map of Irish iron ores and a first analysis of the information on iron smelting and smithing in late medieval western Europe, were compiled to allow this new knowledge of late medieval Irish ironworking to be put into a wider context. Contrary to current views, it appears that it is not smelting technology which differentiates Irish ironworking from the rest of Europe in the late medieval period, but its smithing technology and organisation. The Irish iron-smelting furnaces are generally of the slag-tapping variety, like their other European counterparts. Smithing, on the other hand, was carried out at ground level until at least the sixteenth century in Ireland, whereas waist-level hearths became the norm further afield from the fourteenth century onwards. Ceramic tuyeres continued to be used as bellows protectors, whereas these are unknown elsewhere on the continent. Moreover, the lack of market centres at different times in late medieval Ireland led to the appearance of isolated rural forges, a type of site not encountered in other European countries during that period. When these market centres were present, they appear to have been the settings where bloom smithing was carried out. In summary, the research presented here not only offered us the opportunity to give late medieval ironworking the place it deserves in the broader knowledge of Ireland's past, but it also provided both a base for future research within the discipline and a research model applicable to different time periods, geographical areas and, perhaps, different industries.

Relevance:

10.00%

Publisher:

Abstract:

The wonder of the last century has been the rapid development of technology. One of the sectors it has touched immensely is the electronics industry. There has been exponential development in the field and scientists are pushing new horizons. Individuals from all strata of society have become increasingly dependent on technology. Atomic Layer Deposition (ALD) is a unique technique for growing thin films. It is widely used in the semiconductor industry. Films as thin as a few nanometers can be deposited using this technique. Although this process has been explored for a variety of oxides, sulphides and nitrides, a proper method for the deposition of many metals is missing. Metals are often used in the semiconductor industry and hence are of significant importance. A deficiency in understanding the basic chemistry of possible reactions at the nanoscale has delayed improvements in metal ALD. In this thesis, we study the intrinsic chemistry involved in Cu ALD. This work reports a computational study using Density Functional Theory as implemented in the TURBOMOLE program. Both the gas-phase and surface reactions are studied in most of the cases. The merits and demerits of a promising transmetallation reaction have been evaluated at the beginning of the study. Further improvements in the structure of precursors and co-reagents have been proposed. This has led to the proposal of metallocenes as co-reagents and Cu(I) carbene compounds as a new set of precursors. A three-step process for Cu ALD that generates a ligand-free Cu layer after every ALD pulse has also been studied. Although the chemistry has been studied under the umbrella of Cu ALD, the basic principles hold true for the ALD of other metals (e.g. Co, Ni, Fe) and also for other branches of science, such as thin-film deposition techniques other than ALD, electrochemical reactions, etc.

Relevance:

10.00%

Publisher:

Abstract:

Electron microscopy (EM) has advanced in an exponential way since the first transmission electron microscope (TEM) was built in the 1930s. The urge to 'see' things is an essential part of human nature ('seeing is believing') and, apart from scanning tunnelling microscopes, which give information about the surface, EM is the only imaging technology capable of really visualising atomic structures in depth, down to single atoms. With the development of nanotechnology the demand to image and analyse small things has become even greater, and electron microscopes have found their way from highly delicate and sophisticated research-grade instruments to turn-key and even bench-top instruments for everyday use in every materials research lab on the planet. The semiconductor industry is as dependent on the use of EM as the life sciences and the pharmaceutical industry. With this generalisation of use for imaging, the need to deploy advanced uses of EM has become more and more apparent. The combination of several coinciding beams (electron, ion and even light) to create DualBeam or TripleBeam instruments, for instance, extends their usefulness from pure imaging to manipulation on the nanoscale. As for the analytic power of EM, the many ways in which the highly energetic electrons and ions interact with the matter in the specimen have given rise to a plethora of niches over the last two decades, specialising in every kind of analysis that can be thought of and combined with EM. In the course of this study the emphasis was placed on the application of these advanced analytical EM techniques in the context of multiscale and multimodal microscopy – multiscale meaning across length scales from micrometres or larger down to nanometres, multimodal meaning numerous techniques applied to the same sample volume in a correlative manner. In order to demonstrate the breadth and potential of the multiscale and multimodal concept, an integration of it was attempted in two areas: I) biocompatible materials, using polycrystalline stainless steel, and II) semiconductors, using thin multiferroic films. I) The motivation to use stainless steel (316L medical grade) comes from the potential modulation of endothelial cell growth, which can have a big impact on the improvement of cardiovascular stents – which are mainly made of 316L – through nano-texturing of the stent surface by focused ion beam (FIB) lithography. Patterning with FIB has never been reported before in connection with stents and cell growth, and in order to gain a better understanding of the beam-substrate interaction during patterning, a correlative microscopy approach was used to illuminate the patterning process from many possible angles. Electron backscatter diffraction (EBSD) was used to analyse the crystallographic structure, FIB was used for the patterning and simultaneously for visualising the crystal structure as part of the monitoring process, scanning electron microscopy (SEM) and atomic force microscopy (AFM) were employed to analyse the topography, and the final step was 3D visualisation through serial FIB/SEM sectioning. II) The motivation for the use of thin multiferroic films stems from the ever-growing demand for increased data storage at lower and lower energy consumption. The Aurivillius phase material used in this study has a high potential in this area. Yet it is necessary to show clearly that the film is really multiferroic and that no second-phase inclusions are present even at very low concentrations – ~0.1 vol% could already be problematic. Thus, in this study a technique was developed to analyse ultra-low-density inclusions in thin multiferroic films down to concentrations of 0.01%. The goal achieved was a complete structural and compositional analysis of the films, which required identifying second-phase inclusions (through EDX (Energy Dispersive X-ray) elemental analysis), localising them (employing 72-hour EDX mapping in the SEM), isolating them for the TEM (using FIB) and giving an upper confidence limit of 99.5% on the influence of the inclusions on the magnetic behaviour of the main phase (statistical analysis).
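
As an indication of how an upper confidence limit of the kind quoted above can be attached to a very low inclusion density, the sketch below uses an exact (Clopper-Pearson) binomial bound; this is an assumption about the general type of statistics involved, not a reproduction of the thesis's analysis, and the sampling numbers are placeholders.

```python
# Clopper-Pearson upper bound on a proportion after k hits in n samples.
from scipy.stats import beta

def upper_confidence_limit(k: int, n: int, confidence: float = 0.995) -> float:
    """Exact one-sided upper bound on the underlying fraction."""
    if k >= n:
        return 1.0
    return beta.ppf(confidence, k + 1, n - k)

# Example: zero second-phase inclusions found in 50,000 examined regions
# still only bounds the inclusion fraction at roughly 0.01%.
print(f"{upper_confidence_limit(0, 50_000):.6f}")   # ~0.000106, i.e. ~0.01%
```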

Relevance:

10.00%

Publisher:

Abstract:

The past few years have witnessed an exponential increase in studies trying to identify molecular markers in patients with breast tumours that might predict the success or failure of hormonal therapy or chemotherapy. HER2, a tyrosine kinase membrane receptor of the epidermal growth factor receptor family, has been the most widely studied marker in this respect. This paper attempts to critically review to what extent HER2 may improve 'treatment individualisation' for the breast cancer patient.

Relevance:

10.00%

Publisher:

Abstract:

Aging African-American women are disproportionately affected by negative health outcomes and mortality. Life stress has strong associations with these health outcomes. The purpose of this research was to understand how aging African-American women manage stress. Specifically, the effects of coping, optimism, resilience, and religiousness as they relate to quality of life were examined. This cross-sectional exploratory study used a self-administered questionnaire and examined quality of life in 182 African-American women who were 65 years of age or older living in senior residential centers in Baltimore, using convenience sampling. The age range for these women was 65 to 94 years with a mean of 71.8 years (SD = 5.6). The majority (53.1%) of participants completed high school, with 23 percent (N = 42) obtaining college degrees and 19 percent (N = 35) holding advanced degrees. Nearly 58 percent of participants were widowed and 81 percent were retired. In addition to demographics, the questionnaire included the following reliable and valid survey instruments: The Brief Cope Scale (Carver, Scheier, & Weintraub, 1989), Optimism Questionnaire (Scheier, Carver, & Bridges, 1994), Resilience Survey (Wagnild & Young, 1987), Religiousness Assessment (Koenig, 1997), and Quality of Life Questionnaire (Cummins, 1996). Results revealed that the positive psychological factors examined were positively associated with and significant predictors of quality of life. The bivariate correlations indicated that of the six coping dimensions measured in this study, planning (r = .68) was the most positively associated with quality of life. Optimism (r = .33), resilience (r = .48), and religiousness (r = .30) were also significantly correlated with quality of life. In the linear regression model, the coping dimension of planning was again the best predictor of quality of life (beta = .75, p < .001). Optimism (beta = .31, p < .001), resilience (beta = .34, p < .001), and religiousness (beta = .17, p < .01) were also significant predictors of quality of life. It appears that these positive psychological factors play an important role in improving quality of life among aging African-American women.
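
For readers unfamiliar with the analysis, a regression of the kind reported here can be fitted along the following lines; the column names are hypothetical placeholders and this is not the study's analysis code.

```python
# Sketch of an OLS model predicting quality of life from the four
# psychological factors; the data frame and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

def fit_qol_model(df: pd.DataFrame):
    """OLS of quality of life on the four psychological predictors.

    Z-score the columns beforehand if standardized betas are wanted.
    """
    predictors = ["planning", "optimism", "resilience", "religiousness"]
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["quality_of_life"], X).fit()

# Usage (with a suitably named data frame):
#     results = fit_qol_model(survey_df)
#     print(results.summary())
```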

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations; and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset.
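
A much-simplified sketch of the filtering/hedging loop described above is given below; it uses a product-Bernoulli (exponential-family) model with exponential forgetting and a plain gradient step on the threshold, standing in for the paper's universal-prediction and primal-dual machinery, and is not the authors' algorithm.

```python
# Simplified filtering + hedging loop for binary vectors.
import numpy as np

class OnlineAnomalyDetector:
    def __init__(self, dim, forget=0.01, thresh=None, thresh_lr=0.1):
        self.p = np.full(dim, 0.5)        # Bernoulli means (the "filter")
        self.forget = forget              # exponential forgetting rate
        self.thresh = thresh if thresh is not None else dim * np.log(2.0)
        self.thresh_lr = thresh_lr        # step size for the "hedging" stage

    def score(self, x):
        """Negative log-likelihood of x under the current product model."""
        p = np.clip(self.p, 1e-6, 1 - 1e-6)
        return float(-(x * np.log(p) + (1 - x) * np.log(1 - p)).sum())

    def step(self, x, feedback=None):
        """Process one observation; return (score, flagged_as_anomaly)."""
        s = self.score(x)
        flagged = s > self.thresh
        # Filtering update: exponentially weighted estimate of the marginals.
        self.p = (1 - self.forget) * self.p + self.forget * x
        # Hedging update: nudge the threshold when the user says we erred
        # (feedback=+1: it really was an anomaly, -1: it was not).
        if feedback is not None and (feedback > 0) != flagged:
            self.thresh += self.thresh_lr * (-feedback)
        return s, flagged

det = OnlineAnomalyDetector(dim=20)
rng = np.random.default_rng(1)
for _ in range(100):
    det.step(rng.integers(0, 2, size=20).astype(float))
```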

Relevance:

10.00%

Publisher:

Abstract:

Childhood sexual abuse is prevalent among people living with HIV, and the experience of shame is a common consequence of childhood sexual abuse and HIV infection. This study examined the role of shame in health-related quality of life among HIV-positive adults who have experienced childhood sexual abuse. Data from 247 HIV-infected adults with a history of childhood sexual abuse were analyzed. Hierarchical linear regression was conducted to assess the impact of shame regarding both sexual abuse and HIV infection, while controlling for demographic, clinical, and psychosocial factors. In bivariate analyses, shame regarding sexual abuse and HIV infection were each negatively associated with health-related quality of life and its components (physical well-being, function and global well-being, emotional and social well-being, and cognitive functioning). After controlling for demographic, clinical, and psychosocial factors, HIV-related, but not sexual abuse-related, shame remained a significant predictor of reduced health-related quality of life, explaining up to 10% of the variance in multivariable models for overall health-related quality of life, emotional, function and global, and social well-being and cognitive functioning over and above that of other variables entered into the model. Additionally, HIV symptoms, perceived stress, and perceived availability of social support were associated with health-related quality of life in multivariable models. Shame is an important and modifiable predictor of health-related quality of life in HIV-positive populations, and medical and mental health providers serving HIV-infected populations should be aware of the importance of shame and its impact on the well-being of their patients.

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: Neurodegenerative diseases (NDD) are characterized by progressive decline and loss of function, requiring considerable third-party care. NDD caregivers report low quality of life and high caregiver burden. Despite this, little information is available about the unmet needs of NDD caregivers. METHODS: Data from a cross-sectional, whole-of-population study conducted in South Australia were analyzed to determine the profile and unmet care needs of people who identify as having provided care for a person who died an expected death from NDDs, including motor neurone disease and multiple sclerosis. Bivariate analyses using chi-square tests were complemented with a regression analysis. RESULTS: Two hundred and thirty respondents had a person close to them die from an NDD in the 5 years before responding. NDD caregivers were more likely to have provided care for more than 2 years and were more able to move on after the death than caregivers of people with other disorders such as cancer. The NDD caregivers accessed palliative care services at the same rate as other caregivers at the end of life; however, people with an NDD were almost twice as likely to die in the community (odds ratio [OR] 1.97; 95% confidence interval [CI] 1.30 to 3.01), controlling for relevant caregiver factors. NDD caregivers reported significantly more unmet needs in emotional, spiritual, and bereavement support. CONCLUSION: This study is the first step in better understanding, across the whole population, the consequences of an expected death from an NDD. Assessments need to occur both while people are in the caregiving role and in the subsequent bereavement phase.
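
An adjusted odds ratio with a 95% confidence interval, like the one quoted in the results, is typically obtained from a logistic regression along the lines sketched below; the variable names are hypothetical and this is not the study's analysis code.

```python
# Sketch: adjusted odds ratio and 95% CI from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratio(df: pd.DataFrame, outcome: str, exposure: str,
                        covariates: list[str]):
    """Fit logit(outcome ~ exposure + covariates); return OR and 95% CI."""
    X = sm.add_constant(df[[exposure] + covariates])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    odds_ratio = np.exp(fit.params[exposure])
    lo, hi = np.exp(fit.conf_int().loc[exposure])
    return odds_ratio, (lo, hi)

# Usage with hypothetical column names:
#     adjusted_odds_ratio(data, "died_in_community", "ndd_death",
#                         ["caregiver_age", "care_duration_years"])
```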