994 results for deterministic fractals


Relevance: 10.00%

Abstract:

This paper applies dimensional analysis to propose an alternative model for estimating the effective density of flocs (Δρf). The model accounts for the effective density of the primary particles, in addition to the sizes of the floc and primary particles, and does not rely on the concept of self-similarity. It contains three dimensionless products and two empirical parameters (αf and βf), which were calibrated using data available in the literature, yielding αf = 0.7 and βf = 0.8. The primary particle size (Dp) inferred from the new model for the data used in the analysis varied from 0.05 μm to 100 μm, with a mean value of 2.5 μm. Floc-settling velocities estimated on the basis of the proposed effective-density model compared well with measured values.
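
Editorial note: the abstract does not spell out the closed form of the model. Purely for illustration, if the calibrated relation between the dimensionless products takes a power-law shape (a common outcome of dimensional analysis, assumed here, not taken from the paper) and the flocs settle in the Stokes regime, a settling-velocity estimate of the kind referred to above would follow as:

```latex
% Illustrative (assumed) power-law form of the dimensionless relation, plus Stokes settling:
\frac{\Delta\rho_f}{\Delta\rho_p} \;=\; \alpha_f \left(\frac{D_p}{D_f}\right)^{\beta_f},
\qquad
w_s \;=\; \frac{g\,\Delta\rho_f\,D_f^{\,2}}{18\,\mu},
```

where Δρp is the effective density of the primary particles, Df and Dp are the floc and primary-particle sizes, and μ is the dynamic viscosity of the fluid.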

Relevance: 10.00%

Abstract:

This paper presents a summary of the evidence review group (ERG) report into the clinical and cost-effectiveness of entecavir for the treatment of chronic hepatitis B (CHB) in adults, based upon a review of the manufacturer's submission to the National Institute for Health and Clinical Excellence (NICE) as part of the single technology appraisal (STA) process. The submission's evidence came from five randomised controlled trials (RCTs) of good methodological quality, measuring a range of clinically relevant outcomes and comparing entecavir with lamivudine. After 1 year of treatment entecavir was statistically superior to lamivudine in terms of the proportion of patients achieving hepatitis B virus (HBV) DNA suppression, alanine aminotransferase (ALT) normalisation and histological improvement, but not in terms of the proportion of patients achieving hepatitis B e antigen (HBeAg) seroconversion. The incidence of adverse or serious adverse events was similar for both treatments. The results of the manufacturer's mixed treatment comparison (MTC) model, used to compare entecavir with the comparator drugs in nucleoside-naive patients, were considered uncertain because of concerns over its conduct and reporting. For the economic evaluation the manufacturer constructed two Markov state transition models, one in HBeAg-positive and one in HBeAg-negative patients. The modelling approach was considered reasonable, subject to some uncertainties and concerns over some of the structural assumptions. In HBeAg-positive patients the base-case incremental cost-effectiveness ratios (ICERs) for entecavir compared with lamivudine and pegylated interferon alpha-2a were £14,329 and £8,403 per quality-adjusted life-year (QALY) respectively. Entecavir was dominated by telbivudine. In HBeAg-negative patients the base-case ICERs for entecavir compared with lamivudine, pegylated interferon alpha-2a and telbivudine were £13,208, £7,511 and £6,907 per QALY respectively. In HBeAg-positive lamivudine-refractory patients entecavir dominated adefovir added to lamivudine. In one-way deterministic sensitivity analysis on all key input parameters for entecavir compared with lamivudine in nucleoside-naive patients, ICERs generally remained under £30,000 per QALY. In probabilistic sensitivity analysis in nucleoside-naive HBeAg-positive patients, the probability of the ICER for entecavir being below £20,000 per QALY was 57%, 82% and 45% compared with lamivudine, pegylated interferon alpha-2a and telbivudine respectively; in nucleoside-naive HBeAg-negative patients the probabilities were 90%, 100% and 96% respectively. The manufacturer's lifetime treatment scenario for HBeAg-negative patients and the ERG's 20-year treatment scenario for HBeAg-positive patients increased the ICERs, particularly in the latter case. Amending the HBeAg-negative model so that patients with compensated cirrhosis would also receive lifetime treatment gave probabilities of entecavir being cost-effective of 4% and 40% at willingness-to-pay thresholds of £20,000 and £30,000 respectively. The NICE guidance issued in August 2008 as a result of the STA states that entecavir is recommended as an option for the treatment of people with chronic HBeAg-positive or HBeAg-negative hepatitis B in whom antiviral treatment is indicated.
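
Editorial note: for readers unfamiliar with the decision metrics quoted above, the following standard health-economics definitions (not specific to this appraisal; λ denotes the willingness-to-pay threshold per QALY) show how the ICERs and the probabilistic results are computed:

```latex
% Incremental cost-effectiveness ratio and net monetary benefit at threshold \lambda:
\mathrm{ICER} \;=\; \frac{\Delta \text{Cost}}{\Delta \text{QALY}}
  \;=\; \frac{C_{\text{entecavir}} - C_{\text{comparator}}}{Q_{\text{entecavir}} - Q_{\text{comparator}}},
\qquad
\mathrm{NMB}(\lambda) \;=\; \lambda\, Q \;-\; C .
```

In the probabilistic sensitivity analysis, the quoted probabilities are the share of simulation draws in which the ICER against the stated comparator falls below λ, equivalently the share in which entecavir's net monetary benefit at that threshold exceeds the comparator's.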

Relevance: 10.00%

Abstract:

Social scientists and Indigenous people have voiced concerns that media messages about genetics and race may increase the public's belief in genetic determinism and even increase levels of racism. The degree of genetic determinism in media messages has been examined as a determining factor. This study is the first to consider the implications of this area of scholarship for the Indigenous minority in Australia. A search of the last two decades of major Australian newspapers was undertaken for articles that discussed Indigenous Australians and genetics. The review found 212 articles, of which 58 concerned traits or conditions presented in a genetically deterministic or anti-deterministic fashion. These 58 articles were analysed by topic, slant and time period. Overall, 23 articles were anti-deterministic, 18 were deterministic, 14 presented both sides and three were ambiguous. There was a spike in anti-deterministic articles in the years after the Human Genome Diversity Project, and a parallel increase in deterministic articles since the completion of the Human Genome Project in 2000. Potential implications of the nature of media coverage of genetics for Indigenous Australians are discussed. Further research is required to test directly the impact of these messages on Australians.

Relevance: 10.00%

Abstract:

Stochastic search techniques such as evolutionary algorithms (EAs) are known to be better explorers of the search space than conventional techniques, including deterministic methods. However, in the era of big data, the suitability of evolutionary algorithms, like that of most other search methods and learning algorithms, is naturally questioned. Big data pose new computational challenges, including very high dimensionality and sparseness of data. The superior exploration skills of evolutionary algorithms should make them promising candidates for handling optimization problems involving big data. High-dimensional problems introduce added complexity to the search space, so EAs need to be enhanced to ensure that the majority of potentially winning solutions get the chance to survive and mature. In this paper we present an evolutionary algorithm with an enhanced ability to deal with the problems of high dimensionality and sparseness of data. In addition to an informed exploration of the solution space, the technique balances exploration and exploitation using a hierarchical multi-population approach. The proposed model uses informed genetic operators to introduce diversity by expanding the scope of the search process at the expense of redundant, less promising members of the population. The next phase of the algorithm addresses high dimensionality by ensuring a broader, more exhaustive search and preventing the premature death of potential solutions. To achieve this, in addition to the exploration-controlling mechanism above, a multi-tier hierarchical architecture is employed in which less fit, isolated individuals evolve in dynamic sub-populations, in separate layers, that coexist alongside the original (main) population. Evaluation of the proposed technique on well-known benchmark problems confirms its superior performance. The algorithm has also been successfully applied to a real-world problem of financial portfolio management. Although the proposed method cannot be considered big-data-ready, it is certainly a move in the right direction.
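
Editorial note: the abstract gives no pseudocode. The sketch below illustrates only the general multi-population idea it describes, a main population plus a secondary layer where demoted, less fit individuals keep evolving and occasionally migrate back, on a toy continuous minimisation problem. All names, operators and parameter values are illustrative assumptions, not the authors' algorithm.

```python
import random

DIM, POP, LAYER, GENS = 20, 40, 20, 200
random.seed(1)

def fitness(x):
    # Toy objective: sphere function (lower is better).
    return sum(v * v for v in x)

def new_individual():
    return [random.uniform(-5, 5) for _ in range(DIM)]

def mutate(x, sigma=0.3):
    return [v + random.gauss(0, sigma) for v in x]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def evolve(pop, elite_frac=0.5):
    # Rank-based step: keep the better half, refill by crossover + mutation of random survivors.
    pop = sorted(pop, key=fitness)
    survivors = pop[: max(2, int(len(pop) * elite_frac))]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(len(pop) - len(survivors))]
    return survivors + children

main = [new_individual() for _ in range(POP)]
layer = [new_individual() for _ in range(LAYER)]   # secondary tier for "less fit" individuals

for gen in range(GENS):
    main = evolve(main)
    layer = evolve(layer)
    # Demote the weakest main-population members to the lower tier instead of discarding them ...
    main_sorted = sorted(main, key=fitness)
    demoted = main_sorted[-2:]
    main = main_sorted[:-2] + [new_individual(), new_individual()]  # restart slots (random here, "informed" in the paper)
    layer = sorted(layer + demoted, key=fitness)[:LAYER]
    # ... and periodically promote the best lower-tier individual back into the main population.
    if gen % 10 == 0:
        main[-1] = layer[0]

print("best fitness:", fitness(min(main, key=fitness)))
```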

Relevance: 10.00%

Abstract:

In the early 2000s, Information Systems researchers in Australia began to emphasise socio-technical approaches to the adoption of technological innovations. 'Essentialist' approaches to adoption (for example, Innovation Diffusion or TAM) suggest that an essence is largely responsible for the rate of adoption (Tatnall, 2011), or that a newly introduced technology may spark innovation. The socio-technical factors in implementing an innovation are, however, largely overlooked by researchers and hospitals. Innovation Translation is an approach which holds that any innovation needs to be customised and translated into its context before it can be adopted. Equally, Actor-Network Theory (ANT) is an approach that embraces the differences between technical and human factors and socio-professional aspects in a non-deterministic manner. The research reported in this paper attempts to combine the two approaches effectively in order to visualise the socio-technical factors in RFID technology adoption in an Australian hospital. The investigation demonstrates RFID technology translation in an Australian hospital using a case approach (Yin, 2009). Data were collected through focus groups and interviews, and analysed with document analysis and concept-mapping techniques. The data were then reconstructed in a 'movie script' format, with Acts and Scenes funnelled into an ANT-informed abstraction at the end of each Act. The information visualisation at the end of each Act, using an ANT-informed lens, reveals the re-negotiation and improvement of network relationships between the human participants (nurses, patient care orderlies and management staff) and non-human participants such as equipment and technology. The paper addresses current gaps in the literature regarding socio-technical approaches to technology adoption within the Australian healthcare context, which is transitioning from the non-integrated, nearly technophobic hospitals of the last decade to a tech-savvy, integrated era. More importantly, the ANT visualisation addresses one of the criticisms of ANT, i.e. its insufficiency in explaining how relationships form between participants and how relationship networks change over time (Greenhalgh & Stones, 2010).

Relevance: 10.00%

Abstract:

The penetration of intermittent renewable energy sources (IRESs) into power grids has increased over the last decade. Integration of wind farms and solar systems, the major IRESs, has significantly raised the level of uncertainty in the operation of power systems. This paper proposes a comprehensive computational framework for the quantification and integration of uncertainties in distributed power systems (DPSs) with IRESs. Different sources of uncertainty in DPSs, such as electrical load, wind and solar power forecasts and generator outages, are covered by the proposed framework. Load forecast uncertainty is assumed to follow a normal distribution. Wind and solar forecast uncertainties are represented by a list of prediction intervals (PIs) ranging from 5% to 95%, which are further converted into scenarios using a scenario generation method. Generator outage uncertainty is modeled as discrete scenarios. The integrated uncertainties are then incorporated into a stochastic security-constrained unit commitment (SCUC) problem, and a heuristic genetic algorithm is utilized to solve it. To demonstrate the effectiveness of the proposed method, five deterministic and four stochastic case studies are implemented. Generation costs as well as different reserve strategies are discussed from the perspectives of system economics and reliability. Comparative results indicate that the planned generation costs and reserves differ from the realized ones, that the stochastic models are more robust than the deterministic ones, and that power systems run a higher level of risk during peak load hours.
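
Editorial note: the abstract leaves the scenario generation method unspecified. The sketch below shows one straightforward way to turn a ladder of central prediction intervals (5% to 95%) for wind power into discrete, probability-weighted scenarios by reading the interval bounds as quantiles of the forecast distribution. The megawatt values, the equal-probability binning and the tail handling are illustrative assumptions, not the paper's method.

```python
# Illustrative wind forecast for one hour: central prediction intervals (MW),
# keyed by nominal coverage (e.g. the 90% PI is the pair [lower, upper]).
pis = {0.10: (48, 52), 0.30: (45, 55), 0.50: (42, 58), 0.70: (38, 62), 0.90: (30, 70)}

# Treat PI bounds as quantiles: a central interval with coverage c gives the
# (1 - c)/2 and 1 - (1 - c)/2 quantiles of the forecast distribution.
quantiles = {}
for cov, (lo, hi) in pis.items():
    quantiles[(1 - cov) / 2] = lo
    quantiles[1 - (1 - cov) / 2] = hi

# One scenario per adjacent pair of quantile levels, valued at the interval midpoint
# and weighted by the probability mass between the two levels.
levels = sorted(quantiles)
scenarios = []
for p0, p1 in zip(levels[:-1], levels[1:]):
    value = 0.5 * (quantiles[p0] + quantiles[p1])
    scenarios.append((value, p1 - p0))

# Represent the two tails by the outermost quantile values.
scenarios = [(quantiles[levels[0]], levels[0])] + scenarios + [(quantiles[levels[-1]], 1 - levels[-1])]

print("scenario (MW, probability):", [(round(v, 1), round(w, 3)) for v, w in scenarios])
print("probabilities sum to", round(sum(w for _, w in scenarios), 3))
```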

Relevance: 10.00%

Abstract:

This paper analyzes the properties of panel unit root tests based on recursively detrended data. The analysis is conducted while allowing for a (potentially) non-linear trend function, which is more general than the current state of affairs with (at most) a linear trend. A new test statistic is proposed whose asymptotic behavior under the unit root null hypothesis, and under the simplifying assumptions of a polynomial trend and iid errors, is shown to be surprisingly simple. Indeed, the test statistic is not only asymptotically independent of the true trend polynomial, but is also, uniquely, independent of the degree of the fitted polynomial. This invariance property does not carry over to the local alternative, however, under which local power is shown to be a decreasing function of the trend degree. But while power does decrease, the rate at which the local alternative shrinks is generally constant in the trend degree, which goes against the common belief that this rate should be decreasing in the trend degree. The above results rest on simplifying assumptions. To compensate for this lack of generality, a second, robust test statistic is proposed, whose validity requires neither that the trend function be a polynomial nor that the errors be iid.
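
Editorial note: the abstract does not reproduce the detrending step. The following is a common formulation of recursive (one-sided) detrending for a polynomial trend of degree p, given only to fix ideas; the notation is assumed, not taken from the paper.

```latex
% Recursive detrending with d_t = (1, t, \dots, t^p)': only observations up to
% time t are used when detrending y_{it}, which preserves the one-sided structure.
\tilde{y}_{it} \;=\; y_{it} \;-\; d_t' \Bigl(\sum_{s=1}^{t} d_s d_s'\Bigr)^{-1} \sum_{s=1}^{t} d_s\, y_{is},
\qquad t = p+1, \dots, T .
```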

Relevance: 10.00%

Abstract:

This paper develops a very simple test for the null hypothesis of no cointegration in panel data. The test is general enough to allow for heteroskedastic and serially correlated errors, unit-specific time trends, cross-sectional dependence and unknown structural breaks in both the intercept and slope of the cointegrating regression, which may be located at different dates for different units. The limiting distribution of the test is derived, and is found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small-sample properties of the test. In our empirical application, we provide new evidence concerning the purchasing power parity hypothesis.
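
Editorial note: as a point of reference for the breaks allowed for, a cointegrating regression of the kind described, with unit-specific intercept and slope that may shift at an unknown, unit-specific break date T_i, can be written as follows; the notation is illustrative, not the paper's.

```latex
% Regression for unit i with a single break at date T_i (regime j = 1 before, j = 2 after);
% under the null of no cointegration the error e_{it} contains a unit root.
y_{it} \;=\; \alpha_{ij} \;+\; \delta_{ij}\, t \;+\; x_{it}'\beta_{ij} \;+\; e_{it},
\qquad
j \;=\; \begin{cases} 1, & t \le T_i \\ 2, & t > T_i \end{cases}
```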

Relevance: 10.00%

Abstract:

This article proposes Lagrange multiplier-based tests for the null hypothesis of no cointegration. The tests are general enough to allow for heteroskedastic and serially correlated errors, deterministic trends, and a structural break of unknown timing in both the intercept and slope. The limiting distributions of the test statistics are derived, and are found to be invariant not only with respect to the trend and structural break, but also with respect to the regressors. A small Monte Carlo study is also conducted to investigate the small-sample properties of the tests. The results reveal that the tests have small size distortions and good power relative to other tests.
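
Editorial note: for orientation only, residual-based tests of the no-cointegration null check whether the estimated regression errors contain a unit root; a generic form of that check is shown below. The LM construction used in this article differs in detail, so the display is a reference point, not the article's statistic.

```latex
% Generic residual-based check behind no-cointegration tests:
\Delta \hat{e}_{it} \;=\; \phi_i\, \hat{e}_{i,t-1} \;+\; v_{it},
\qquad
H_0:\ \phi_i = 0 \ \text{for all } i
\quad\text{vs.}\quad
H_1:\ \phi_i < 0 \ \text{for at least some } i .
```

Rejection indicates that the errors are mean-reverting around the (possibly broken) deterministic component, i.e. that the variables cointegrate.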

Relevance: 10.00%

Abstract:

In this article, an exponential stability analysis of Markovian jumping stochastic bidirectional associative memory (BAM) neural networks with mode-dependent probabilistic time-varying delays and impulsive control is investigated. By introducing a stochastic variable with a Bernoulli distribution, the information on the probabilistic time-varying delay is taken into account and the system is transformed into one with a deterministic time-varying delay and stochastic parameters. By fully taking the inherent characteristics of this kind of stochastic BAM neural network into account, a novel Lyapunov-Krasovskii functional is constructed with as many mode-dependent positive definite matrices as possible, and a triple-integral term is introduced for deriving the delay-dependent stability conditions. Furthermore, mode-dependent mean-square exponential stability criteria are derived by constructing a new Lyapunov-Krasovskii functional with modes in the integral terms and by using some stochastic analysis techniques. The criteria are formulated in terms of a set of linear matrix inequalities, which can be checked efficiently by standard numerical packages. Finally, numerical examples and their simulations are given to demonstrate the usefulness and effectiveness of the proposed results.
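
Editorial note: the Bernoulli device mentioned above is standard in the probabilistic-delay literature. A minimal sketch (notation assumed, not the article's): let the delay τ(t) fall in [0, τ1] with probability β0 and in (τ1, τ2] with probability 1 − β0, and let α(t) be a Bernoulli variable with E[α(t)] = β0. A delayed nonlinearity can then be rewritten with a deterministic delay in each branch:

```latex
% Probabilistic delay rewritten with a Bernoulli indicator \alpha(t), \Pr\{\alpha(t)=1\} = \beta_0:
f\bigl(x(t-\tau(t))\bigr) \;=\; \alpha(t)\, f\bigl(x(t-\tau_1(t))\bigr)
  \;+\; \bigl(1-\alpha(t)\bigr)\, f\bigl(x(t-\tau_2(t))\bigr),
```

where τ1(t) ∈ [0, τ1] and τ2(t) ∈ (τ1, τ2] are deterministic time-varying delays and α(t) carries the stochastic parameter.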

Relevance: 10.00%

Abstract:

Data deduplication is a technique for eliminating duplicate copies of data and has been widely used in cloud storage to reduce storage space and upload bandwidth. However, only one copy of each file is stored in the cloud even if such a file is owned by a huge number of users, so a deduplication system improves storage utilization at the cost of reliability. Furthermore, the challenge of privacy for sensitive data also arises when it is outsourced by users to the cloud. Aiming to address these security challenges, this paper makes the first attempt to formalize the notion of a distributed reliable deduplication system. We propose new distributed deduplication systems with higher reliability, in which the data chunks are distributed across multiple cloud servers. The security requirements of data confidentiality and tag consistency are achieved by introducing a deterministic secret sharing scheme in distributed storage systems, instead of using convergent encryption as in previous deduplication systems. Security analysis demonstrates that our deduplication systems are secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement the proposed systems and demonstrate that the incurred overhead is very limited in realistic environments.
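
Editorial note: the paper's construction is a threshold (ramp) secret sharing scheme. The toy sketch below illustrates only two of the ingredients named in the abstract, namely shares derived deterministically from the chunk itself (so identical chunks always yield identical shares, which is what makes cross-user deduplication possible) and a tag used for consistency checks, using a simple (n, n) XOR split. Unlike the paper's scheme, this toy split tolerates no server failures; all names are illustrative.

```python
import hashlib

N_SERVERS = 4

def _prf(seed: bytes, index: int, length: int) -> bytes:
    """Deterministic byte stream derived from the chunk digest (illustrative hash-based PRF)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(seed + index.to_bytes(4, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def split(chunk: bytes):
    """Derive n shares deterministically from the chunk, plus a tag for consistency checks."""
    digest = hashlib.sha256(chunk).digest()
    shares = [_prf(digest, i, len(chunk)) for i in range(N_SERVERS - 1)]
    last = chunk
    for s in shares:                      # last share = chunk XOR all PRF shares
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    tag = hashlib.sha256(digest).hexdigest()   # tag derived from the digest, not the plaintext
    return shares, tag

def combine(shares):
    chunk = shares[0]
    for s in shares[1:]:
        chunk = bytes(a ^ b for a, b in zip(chunk, s))
    return chunk

data = b"example chunk uploaded by many users"
shares, tag = split(data)
assert combine(shares) == data
# Determinism: a second user with the same chunk produces byte-identical shares and tag,
# so each server can deduplicate the share it holds without seeing the whole chunk.
shares2, tag2 = split(data)
assert shares == shares2 and tag == tag2
print("tag:", tag)
```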

Relevance: 10.00%

Abstract:

The uncertainties of renewable energy have brought great challenges to power system commitment, dispatch and reserve requirements. This paper presents a comparative study on the integration of renewable generation uncertainties into stochastic security-constrained unit commitment (SCUC), considering reserve and risk. Renewable forecast uncertainties are captured by a list of prediction intervals (PIs), and a new scenario generation method is proposed to generate scenarios from these PIs. Different system uncertainties are represented as scenarios in the stochastic SCUC problem formulation. Two comparative simulations are investigated, one with a single source of uncertainty (E1: wind only) and one with multiple sources (E2: load, wind, solar and generation outages). Five deterministic and four stochastic case studies are performed, and generation costs, reserve strategies and the associated risks are compared under various scenarios. The results indicate that the overall cost of E2 is lower than that of E1, owing to the penetration of solar power, while the associated risk in the deterministic cases of E2 is higher than in E1, implying a superimposed effect of the uncertainties when they are integrated together. The results also demonstrate that power systems run a higher level of risk during peak load hours, and that stochastic models are more robust than deterministic ones.

Relevance: 10.00%

Abstract:

This study presents an approach to combining the uncertainty estimates of hydrological model outputs predicted by a number of machine learning models. Machine learning based uncertainty prediction is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach the hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output of the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to the new input data. We used three machine learning models, namely artificial neural networks, model trees and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically, and we propose an approach that forms a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model's output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine and R. Price, Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
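
Editorial note: a compact sketch of the workflow described above, using scikit-learn stand-ins (a decision tree in place of a model tree, k-nearest neighbours as a crude stand-in for locally weighted regression) and synthetic data in place of the Monte Carlo ensemble. The equal-weight averaging used to merge the committee and all parameter values are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the Monte Carlo ensemble: for each time step we have
# antecedent precipitation / flow inputs and many model realizations of flow.
n_steps, n_realizations = 500, 200
X = rng.uniform(0, 1, size=(n_steps, 3))               # antecedent precipitation and flows
ensemble = (X @ np.array([5.0, 3.0, 1.0]))[:, None] + rng.normal(0, 1 + X[:, :1], (n_steps, n_realizations))

# Target for the uncertainty models: a chosen quantile (here the 90th) of the ensemble.
y_q90 = np.quantile(ensemble, 0.9, axis=1)

train, test = slice(0, 400), slice(400, None)
committee = [
    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    DecisionTreeRegressor(max_depth=6, random_state=0),    # stand-in for a model tree
    KNeighborsRegressor(n_neighbors=15),                   # stand-in for locally weighted regression
]
preds = []
for model in committee:
    model.fit(X[train], y_q90[train])
    preds.append(model.predict(X[test]))

merged = np.mean(preds, axis=0)   # simplest committee: equal-weight average of the three outputs
for name, p in zip(["ANN", "tree", "kNN", "merged"], preds + [merged]):
    rmse = float(np.sqrt(np.mean((p - y_q90[test]) ** 2)))
    print(f"{name:7s} RMSE on held-out steps: {rmse:.3f}")
```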

Relevance: 10.00%

Abstract:

The central theme of this work is production planning, scheduling and control in industry, with the aid of a computational tool of the Finite Capacity Schedule (FCS) type. In Brazil, this category of software is generically known as 'Planejamento Fino de Produção' (detailed, finite-capacity production scheduling) systems. In line with worldwide trends and the advantage of lower hardware investment, the chosen system is compatible with operation on microcomputers. In the first part of the work the subject is treated in general terms, with the aim of broadly characterising the production scheduling problem, the difficulties in carrying it out, the existing solutions and their limitations. The second part discusses, in detail, the traditional methods of material and capacity planning. The literature review closes with a presentation of FCS systems and their classification. The third part covers the description, trials and evaluation of the schedules generated by a deterministic finite-capacity scheduling package based on computational simulation logic with decision rules. Although the evaluation is limited to the software used, the analysis also seeks to identify the fundamental differences between the results of finite-capacity scheduling and conventional scheduling, represented by systems of the MRPII (Manufacturing Resources Planning) category. The logic of MRPII and finite-capacity systems is discussed in the literature review, while a specific chapter is devoted to the software employed in this work, covering its description, fundamentals, software house, required hardware and other relevant information. The trials are designed to analyse the FCS system as a production planning and scheduling tool: a fraction of a production process is modelled in the system, which is used to generate production plans that are compared with the usual schedule and with the actual behaviour of the resources involved. The trials are carried out at one of the units of a large transnational company operating in the tyre industry. Finally, general conclusions, recommendations on the application of the system studied and suggestions for future research on the subject are presented.
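
Editorial note: the kind of rule-based, finite-capacity simulation logic the evaluated package relies on can be illustrated with a toy single-machine dispatcher, in which jobs are loaded one at a time according to a priority rule (here, shortest processing time) and each job waits for the machine to be free, instead of being booked against infinite capacity as in MRPII-style backward scheduling. The job data and the rule are illustrative, not taken from the evaluated software.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_hours: float
    due_hour: float

jobs = [Job("A", 8, 24), Job("B", 3, 10), Job("C", 5, 16), Job("D", 2, 6)]

def finite_capacity_schedule(jobs, rule=lambda j: j.processing_hours):
    """Simulate a single machine: one job at a time, sequenced by the given priority rule (SPT by default)."""
    clock, plan = 0.0, []
    for job in sorted(jobs, key=rule):
        start = clock                          # the job must wait until the machine is free
        clock = start + job.processing_hours
        plan.append((job.name, start, clock, clock - job.due_hour))
    return plan

print(f"{'job':>3} {'start':>6} {'end':>6} {'lateness':>9}")
for name, start, end, lateness in finite_capacity_schedule(jobs):
    print(f"{name:>3} {start:6.1f} {end:6.1f} {lateness:9.1f}")
# An MRPII-style infinite-capacity plan would simply back-schedule every job from its due date,
# allowing several jobs to occupy the machine at once; the finite-capacity plan above cannot.
```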

Relevance: 10.00%

Abstract:

The presence of a deterministic or a stochastic trend in U.S. GDP has been a continuing debate in the macroeconomics literature. Ben-David and Papell (1995) found evidence in favor of trend stationarity using the secular sample of Maddison (1995). More recently, Murray and Nelson (2000) correctly criticized this finding, arguing that the Maddison data are plagued with additive outliers (AOs), which bias inference towards stationarity. Hence, they propose to set the secular sample aside and conduct inference using a more homogeneous but shorter time-span post-WWII sample. In this paper we revisit the Maddison data by employing a test that is robust against AOs. Our results suggest that U.S. GDP can be modeled as a trend-stationary process.
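
Editorial note: to fix terminology (notation assumed, not the paper's), the trend-stationary (TS) model, the stochastic-trend or difference-stationary (DS) model, and an additively outlier-contaminated observation can be written as

```latex
% Trend stationarity vs. stochastic trend, and additive-outlier contamination at date T_{AO}:
\text{TS: } y_t = \mu + \beta t + u_t,\quad u_t = \rho u_{t-1} + \varepsilon_t,\ |\rho| < 1;
\qquad
\text{DS: } \Delta y_t = \beta + \varepsilon_t;
\qquad
y_t^{obs} = y_t + \delta\,\mathbb{1}\{t = T_{AO}\}.
```

An additive outlier shifts the level at a single date only, which distorts the dynamics of the differenced series and is why, as noted above, such outliers bias standard unit root tests towards stationarity.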