915 results for General - statistics and numerical data
Abstract:
Progressive supranuclear palsy (PSP) is a rare, degenerative disorder of the brain believed to affect between 1.39 and 6.6 individuals per 100,000 population. The disorder is likely to be more common than these data suggest owing to difficulties in diagnosis, especially in distinguishing PSP from other conditions with similar symptoms such as multiple system atrophy (MSA), corticobasal degeneration (CBD), and Parkinson’s disease (PD). PSP was first described in 1964 by Steele, Richardson and Olszewski and was originally called Steele-Richardson-Olszewski syndrome. The disorder is the second most common syndrome in which the patient exhibits ‘parkinsonism’, viz., a range of movement problems most typically manifest in PD itself but also seen in PSP, MSA and CBD. Although primarily a brain disorder, patients with PSP exhibit a range of visual clinical signs and symptoms that may be useful in differential diagnosis. Hence, the present article describes the general clinical and pathological features of PSP and its specific visual signs and symptoms, discusses the usefulness of these signs in differential diagnosis, and considers the various treatment options.
Abstract:
Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. Such imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary within intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
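To see why interval-valued inputs, outputs, and prices yield an interval rather than a point index, a minimal interval-arithmetic sketch may help; this illustrates only the bounding idea, not the paper's DEA formulation, and all numbers are hypothetical (prices and quantities are assumed non-negative):

```python
import numpy as np

def interval_profit(p_lo, p_hi, y_lo, y_hi, c_lo, c_hi, x_lo, x_hi):
    """Bounds on profit p.y - c.x when output prices p, outputs y,
    input prices c, and inputs x all vary within intervals; with
    non-negative data the extremes occur at interval endpoints."""
    lo = p_lo @ y_lo - c_hi @ x_hi   # lowest revenue, highest cost
    hi = p_hi @ y_hi - c_lo @ x_lo   # highest revenue, lowest cost
    return lo, hi

# Hypothetical DMU observed in two periods.
prof1 = interval_profit(np.array([4.0]), np.array([5.0]),
                        np.array([10.0]), np.array([12.0]),
                        np.array([2.0]), np.array([2.5]),
                        np.array([6.0]), np.array([7.0]))
prof2 = interval_profit(np.array([4.5]), np.array([5.5]),
                        np.array([11.0]), np.array([13.0]),
                        np.array([2.0]), np.array([2.4]),
                        np.array([6.0]), np.array([6.5]))

# A profit index comparing the two periods then lies in an interval
# (assuming strictly positive profit bounds).
mpi_lo = prof2[0] / prof1[1]
mpi_hi = prof2[1] / prof1[0]
```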
Abstract:
The software architecture and development considerations for an open metadata extraction and processing framework are outlined. Special attention is paid to reliability and fault tolerance. Grid infrastructure is shown to be a useful backend for general-purpose tasks.
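The abstract does not detail the framework's fault-tolerance mechanisms; purely as a generic illustration of one common reliability measure for batch extraction tasks, here is a minimal retry-with-backoff wrapper (all names are hypothetical and not part of the framework):

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0):
    """Run an extraction task, retrying on failure with exponential
    backoff, and re-raise the error after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```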
Abstract:
Since wind at the earth's surface has an intrinsically complex and stochastic nature, accurate wind power forecasts are necessary for the safe and economic use of wind energy. In this paper, we investigated a combination of numeric and probabilistic models: a Gaussian process (GP) combined with a numerical weather prediction (NWP) model was applied to wind-power forecasting up to one day ahead. First, the wind-speed data from the NWP model were corrected by a GP; then, because the power generated by a wind turbine is limited by its control strategy, wind power forecasts were obtained by modeling the relationship between the corrected wind speed and the power output using a censored GP. To validate the proposed approach, three real-world datasets were used for model training and testing. The empirical results were compared with several classical wind forecast models; based on the mean absolute error (MAE), the proposed model provides around 9% to 14% improvement in forecasting accuracy over an artificial neural network (ANN) model, and nearly 17% improvement on a third dataset from a newly built wind farm with a limited amount of training data. © 2013 IEEE.
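A minimal sketch of the two-stage idea, using scikit-learn's GaussianProcessRegressor on synthetic data, follows; the paper's censored-GP likelihood is approximated here by clipping predictions at rated power, and all data and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Illustrative data: NWP-forecast wind speed, measured speed, and power.
nwp_speed = rng.uniform(0.0, 20.0, size=(200, 1))
true_speed = 0.9 * nwp_speed.ravel() + 1.0 + rng.normal(0, 0.8, 200)
rated_power = 2.0                                   # MW, assumed rating
power = np.clip(0.002 * true_speed ** 3, 0.0, rated_power)

# Stage 1: GP correction of the NWP wind speed.
gp_speed = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp_speed.fit(nwp_speed, true_speed)
corrected = gp_speed.predict(nwp_speed).reshape(-1, 1)

# Stage 2: GP power curve on corrected speed; clipping at rated power
# stands in for the censored-GP likelihood used in the paper.
gp_power = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp_power.fit(corrected, power)
forecast = np.clip(gp_power.predict(corrected), 0.0, rated_power)
```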
Abstract:
With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted, and incomplete data streams gathered wirelessly in dynamically varying conditions. Yet, existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the complexity of the validation process, the developed solution maps the requirements of the application onto a geometrical space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating the data adaptation and prediction processes augments the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast-convergent adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation, and homeland security.
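The abstract leaves the validation scheme unspecified; the following minimal sketch shows one generic form of spatio-temporal validation for sensor streams, flagging a reading that deviates from the recent readings of its spatial neighbors (the radius, threshold, and data layout are assumptions for illustration):

```python
import numpy as np

def validate_reading(value, node_xy, positions, history, radius=10.0, k=3.0):
    """Flag `value` from the sensor at coordinates `node_xy` as suspect when
    it deviates from recent readings of spatial neighbors by more than `k`
    robust standard deviations. `positions` is an (n, 2) array of node
    coordinates; `history` maps node index -> list of recent readings."""
    dists = np.linalg.norm(positions - node_xy, axis=1)
    neighbors = [i for i in range(len(positions)) if 0 < dists[i] <= radius]
    if not neighbors:
        return False  # no spatial context available; accept the reading
    ref = np.concatenate([np.asarray(history[i], dtype=float)
                          for i in neighbors])
    med = np.median(ref)
    mad = np.median(np.abs(ref - med)) + 1e-9           # robust spread
    return abs(value - med) > k * 1.4826 * mad          # True => suspect
```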
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of the nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time.
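As a rough sketch of how a main-spectrum computation proceeds (not the authors' implementation), the code below assembles the monodromy matrix of the Zakharov-Shabat problem over one period using an Ablowitz-Ladik-type one-step matrix, whose exact sign and normalization conventions vary across the literature and are therefore an assumption here, and evaluates the Floquet discriminant whose values ±1 locate the main spectrum:

```python
import numpy as np

def floquet_discriminant(q, h, lam):
    """Floquet discriminant Delta(lambda) = trace(M)/2, where M is the
    monodromy matrix over one period of samples q with spacing h, built
    from an Ablowitz-Ladik-type one-step transfer matrix (one common
    convention; signs and normalization differ across the literature)."""
    z = np.exp(-1j * lam * h)
    M = np.eye(2, dtype=complex)
    for qn in q:
        Q = qn * h
        T = np.array([[z, Q], [-np.conj(Q), 1.0 / z]])
        T /= np.sqrt(1.0 + abs(Q) ** 2)
        M = T @ M
    return M.trace() / 2.0

# Scan Delta along the real axis of the spectral parameter; values where
# Delta(lambda) = +/-1 mark main-spectrum points (in general the search
# must be carried out in the complex lambda plane).
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
q = 0.5 * np.exp(1j * t)                  # illustrative periodic waveform
h = t[1] - t[0]
lams = np.linspace(-2.0, 2.0, 401)
delta = np.array([floquet_discriminant(q, h, lm) for lm in lams])
```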
Abstract:
Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values over cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method is for limiting the disclosure risk of the continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. The illustration on a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
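One probabilistic fact that makes total-preserving synthesis of this kind possible is that independent Poisson counts, conditioned on their sum, follow a multinomial distribution. A minimal sketch with assumed rates (ignoring the mixture layer of the thesis, which would first draw a mixture component) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed Poisson rates for the cells of one row of a magnitude table.
rates = np.array([4.0, 10.0, 6.0])
fixed_total = 37  # the published marginal total that must be preserved

# Independent Poisson counts given their sum are multinomial with
# probabilities proportional to the rates, so sampling this way
# guarantees the synthetic cells sum exactly to the original total.
synthetic = rng.multinomial(fixed_total, rates / rates.sum())
assert synthetic.sum() == fixed_total
```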
The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.
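To illustrate estimating parameters from protective intervals rather than from the confidential values themselves, here is a minimal sketch of interval-censored maximum likelihood for a normal model; the thesis's actual model is not specified in the abstract, and the data and interval widths below are assumed:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Confidential values; below they are used only to build the intervals.
x = rng.normal(50.0, 10.0, size=200)
lo, hi = x - 5.0, x + 5.0          # assumed protective intervals

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Each observation contributes the probability mass the model puts
    # on its protective interval, not the density at the hidden value.
    p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

mid = (lo + hi) / 2.0
res = minimize(neg_log_lik, x0=[mid.mean(), np.log(mid.std())],
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Synthetic values are then drawn from the interval-fitted model.
synthetic = rng.normal(mu_hat, sigma_hat, size=x.size)
```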
The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., mostly missing) ones. The sub-model structure for the focused variables is more complex than that for the non-focused ones. At the same time, their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
Abstract:
Over 150 million cubic meters of sand-sized sediment have disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers, including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.
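As a generic illustration of how an isotopic tracer such as 87Sr/86Sr constrains provenance (not the study's actual multi-tracer procedure), a two-endmember mixing model weights each source's isotope ratio by its Sr concentration; the sketch below solves for the fraction of source A in a sample, with hypothetical endmember values:

```python
def mixing_fraction(r_mix, r_a, c_a, r_b, c_b):
    """Fraction of source A in a two-endmember mixture, where r_* are
    87Sr/86Sr ratios and c_* are Sr concentrations (ppm); isotope ratios
    mix in proportion to each source's Sr contribution."""
    num = c_b * (r_b - r_mix)
    den = c_a * (r_mix - r_a) + c_b * (r_b - r_mix)
    return num / den

# Hypothetical endmembers: riverine sand (A) vs. offshore sand (B).
f_a = mixing_fraction(r_mix=0.7085, r_a=0.7070, c_a=300.0,
                      r_b=0.7092, c_b=250.0)   # ~0.28 for these values
```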
Abstract:
The recently proposed global monsoon hypothesis interprets monsoon systems as part of one global-scale atmospheric overturning circulation, implying a connection between the regional monsoon systems and an in-phase behaviour of all northern hemispheric monsoons on annual timescales (Trenberth et al., 2000). Whether this concept can be applied to past climates and to variability on longer timescales is still under debate, because the monsoon systems exhibit different regional characteristics, such as different seasonality (i.e. onset, peak, and withdrawal). To investigate the interconnection of the different monsoon systems during the pre-industrial Holocene, five transient global climate model simulations have been analysed with respect to the rainfall trend and variability in different sub-domains of the Afro-Asian monsoon region. Our analysis suggests that on millennial timescales with varying orbital forcing, the monsoons do not behave as a tightly connected global system. According to the models, the Indian and North African monsoons are coupled, showing a similar rainfall trend and moderate correlation in rainfall variability in all models. The East Asian monsoon changes independently during the Holocene. The dissimilarities in the seasonality of the monsoon sub-systems lead to a stronger response of the North African and Indian monsoon systems to the Holocene insolation forcing than of the East Asian monsoon, and affect the seasonal distribution of Holocene rainfall variations. Within the Indian and North African monsoon domains, precipitation changes solely during the summer months, showing a decreasing Holocene precipitation trend. In the East Asian monsoon region, the precipitation signal is determined by an increasing precipitation trend during spring and a decreasing precipitation change during summer, partly balancing each other. A synthesis of the reconstructions and the model results does not reveal an impact of the different seasonality on the timing of the Holocene rainfall optimum in the different sub-monsoon systems. It rather indicates locally inhomogeneous rainfall changes and shows that single palaeo-records should not be used to characterise the rainfall change and monsoon evolution for entire monsoon sub-systems.
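The kind of trend-and-variability comparison described above can be sketched in a few lines: fit a linear trend to each sub-domain's rainfall series and correlate the detrended residuals; the series below are synthetic stand-ins, not the model output:

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(9000, 0, -10)          # illustrative time axis (yr BP)

# Synthetic stand-ins for area-averaged monsoon rainfall (mm/day).
indian = 6.0 + 0.0002 * years + rng.normal(0, 0.3, years.size)
east_asian = 5.5 + 0.00005 * years + rng.normal(0, 0.3, years.size)

def trend_and_residuals(t, y):
    """Linear (millennial-scale) trend plus detrended variability."""
    slope, intercept = np.polyfit(t, y, 1)
    return slope, y - (slope * t + intercept)

s1, r1 = trend_and_residuals(years, indian)
s2, r2 = trend_and_residuals(years, east_asian)
corr = np.corrcoef(r1, r2)[0, 1]   # correlation of rainfall variability
```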
Abstract:
Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework – the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, particularly the work on the development of social accounts that took place mainly in the 60s and 70s. It explores the reasons why these early initiatives failed, but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics – and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin to shift away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of ‘what works best for whom’. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost. However, it also supports a quite different social policy world, one where individual citizens and families (not governments) are seen as the central players – a more empirically-driven world that we have referred to as the ‘enabling society’.
Abstract:
This article draws attention to the importance of routinely collected administrative data as an important source for understanding the characteristics of the Northern Ireland child welfare system as it has developed since the Children (Northern Ireland) Order 1995 became its legislative base. The article argues that the availability of such data is a strength of the Northern Ireland child welfare system and urges local politicians, lobbyists, researchers, policy-makers, operational managers, practitioners and service user groups to make more use of them. The main sources of administrative data are identified. Illustration of how these can be used to understand and to ask questions about the system is provided by considering some of the trends since the Children Order was enacted. The “protection” principle of the Children Order provides the focus for the illustration. The statistical trends considered relate to child protection referrals, investigations and registrations and to children and young people looked after under a range of court orders available to ensure their protection and well-being.
Abstract:
Objective: This qualitative study, set in the West Midlands region of the United Kingdom, aimed to examine the role of the general practitioner (GP) in children's oncology palliative care from the perspective of GPs who had cared for a child with cancer receiving palliative care at home and of bereaved parents. Methods: One-to-one semi-structured interviews were undertaken with 18 GPs and 11 bereaved parents following the death. A grounded theory data analysis was undertaken, identifying generated themes through chronological comparative data analysis. Results: Similarity in GP and parent viewpoints was found, with the GP's role seen as one of providing medication and support. The time pressures GPs faced influenced their level of engagement with the family during palliative and bereavement care and their ability to address their identified learning deficits. Lack of familiarity with the family, coupled with an acknowledgement that it was a rare and potentially frightening experience, also influenced their level of interaction. There was no consistency in GP practice, nor evidence of practice being guided by local or national policies. Parents' lack of clarity about their GP's role resulted in missed opportunities for support. Conclusions: Time pressures influence GP working practices. Enhanced communication and collaboration between the GP and the regional childhood cancer centre may help address identified GP challenges, such as learning deficits, and promote more time-efficient working practices through role clarity. Parents need greater awareness of their GP's wide-ranging role, one that transcends palliative care, incorporating bereavement support and ongoing medical care for family members.
Abstract:
Purpose: To identify predictors and normative data for quality of life (QOL) in a sample of Portuguese adults from the general population. Methods: A cross-sectional correlational study was undertaken with two hundred and fifty-five (N=255) individuals from the Portuguese general population (mean age 43 yrs, range 25-84 yrs; 148 females, 107 males). Participants completed the European Portuguese version of the World Health Organization Quality of Life short-form instrument (WHOQOL-Bref) and the European Portuguese version of the Center for Epidemiologic Studies Depression Scale (CES-D). Demographic information was also collected. Results: Portuguese adults reported their QOL as good. The physical, psychological and environmental domains predicted 44% of the variance in QOL. The strongest predictor was the physical domain and the weakest was social relationships. Age, educational level, socioeconomic status and emotional status were significantly correlated with QOL and explained 25% of the variance in QOL. Among these, the strongest predictor of QOL was emotional status, followed by education and age. QOL differed significantly according to marital status, living place (mainland or islands), type of cohabitants, occupation, and health. Conclusions: The sample of adults from the general Portuguese population reported high levels of QOL. The life domain that best explained QOL was the physical domain. Among the other variables, emotional status best predicted QOL. Further variables influenced overall QOL. These findings inform our understanding of QOL in adults from the general Portuguese population.
Abstract:
This thesis focuses on experimental and numerical studies of the hydrodynamic interaction between two vessels in close proximity in waves. In the model tests, two identical box-like models with rounded corners were used. Regular waves with the same wave steepness and different wave frequencies were generated. Six-degrees-of-freedom body motions and the wave elevations between the bodies were measured in a head sea condition. Three initial gap widths were examined. In the numerical computations, a seakeeping program based on a panel-free method, MAPS0, and a program based on the panel method, WAMIT, were used to predict the body motions and wave elevations. The computed body motions and wave elevations were compared with the experimental data.