907 results for General Linear Methods
Abstract:
Background: Sodium hypochlorite is used commonly as an endodontic irrigant, but there are no published reports that provide details of its use. This survey sought to determine the percentage of Australian dentists who practiced endodontics, whether they used sodium hypochlorite for irrigation, and the manner of dilution, storage and dispensing of sodium hypochlorite used by both dentists and endodontists. Methods: All Australian endodontists and a stratified random sample of 200 general dentists in Australia were surveyed to address the issues identified above. Results: Almost 98 per cent of dentists surveyed performed endodontic treatment. Among endodontists, nearly 94 per cent used sodium hypochlorite for irrigation, compared with just under 75 per cent of general dentists. Sodium hypochlorite use by general dentists was more common in Victoria and South Australia than in other States. An infant sanitizer (Milton or Johnson's Antibacterial Solution) was used by just over 92 per cent of general practitioners and by more than 67 per cent of endodontists. All other respondents used domestic bleach. One hundred and sixty-four of the respondents (80 per cent of endodontists and over 90 per cent of general dentists) used a 1 per cent w/v solution. Ten practitioners used a 4 per cent w/v solution, five used a 2 per cent w/v solution and four used a 1.5 per cent w/v solution. Eighty per cent of the practitioners who diluted their sodium hypochlorite before use used demineralized water for this purpose. The remainder used tap water. Only four practitioners stored sodium hypochlorite in a manner which risked light exposure and loss of available chlorine content. Conclusions: Sodium hypochlorite is commonly used as an endodontic irrigant, and Australian dentists generally store the material correctly.
Abstract:
Objectives: The aim of the present study was to determine the effect of unsupervised, long-term use of a 0.3% triclosan/2% copolymer dentifrice on the progression of periodontal disease in a general adult population. Methods: Five hundred and four volunteers were enrolled in a double-blind, controlled clinical trial. Participants were matched for disease status, plaque index, age and gender. At the baseline examination, probing pocket depths and relative attachment levels were recorded and participants were assigned to either the test or control group. Re-examinations took place after 6, 12, 24, 36, 48 and 60 months. Subgingival plaque samples were collected at each examination and assayed for Porphyromonas gingivalis, Actinobacillus actinomycetemcomitans and Prevotella intermedia. A generalised linear model was used to analyse the data, with a number of covariates thought to influence the responses included as possible confounding effects. Results: The triclosan/copolymer dentifrice had a significant effect in subjects with interproximal probing depths greater than or equal to 3.5 mm, where it significantly reduced the number of sites with probing depths greater than or equal to 3.5 mm at the following examination, when compared with the control group (p
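As a rough illustration of the kind of analysis described above, the sketch below fits a Poisson generalised linear model for the count of sites with probing depth ≥ 3.5 mm on a treatment indicator plus covariates, using statsmodels. The data frame, column names and simulated effect sizes are purely hypothetical placeholders, not the study's variables.

```python
# Minimal sketch of a generalised linear model with covariates, assuming a
# Poisson family for the count of deep sites; all data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),        # 1 = triclosan/copolymer, 0 = control
    "age": rng.normal(45, 10, n),
    "baseline_sites": rng.poisson(4, n),   # sites >= 3.5 mm at baseline
})
# Simulate a follow-up count that depends on group and baseline status.
mu = np.exp(0.2 + 0.8 * np.log1p(df["baseline_sites"]) - 0.3 * df["group"])
df["followup_sites"] = rng.poisson(mu)

model = smf.glm("followup_sites ~ group + age + baseline_sites",
                data=df, family=sm.families.Poisson())
result = model.fit()
print(result.summary())   # the 'group' coefficient reflects the treatment effect
```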
Abstract:
Admission controls, such as trunk reservation, are often used in loss networks to optimise their performance. Since the numerical evaluation of performance measures is complex, much attention has been given to finding approximation methods. The Erlang Fixed-Point (EFP) approximation, which is based on an independent blocking assumption, has been used for networks both with and without controls. Several more elaborate approximation methods which account for dependencies in blocking behaviour have been developed for the uncontrolled setting. This paper is an exploratory investigation of extensions and synthesis of these methods to systems with controls, in particular, trunk reservation. In order to isolate the dependency factor, we restrict our attention to a highly linear network. We will compare the performance of the resulting approximations against the benchmark of the EFP approximation extended to the trunk reservation setting. By doing this, we seek to gain insight into the critical factors in constructing an effective approximation. (C) 2003 Elsevier Ltd. All rights reserved.
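A minimal sketch of the uncontrolled Erlang Fixed-Point iteration may help fix ideas: each link's blocking probability is computed from the Erlang B formula applied to the route traffic thinned by blocking on the other links, and the system is iterated to a fixed point. The small "linear" network below (routes, offered loads, capacities) is hypothetical, and trunk reservation would require replacing the Erlang B link model with a reservation-aware one.

```python
# Sketch of the classical EFP approximation under the independent-blocking assumption.
import numpy as np

def erlang_b(load, servers):
    """Erlang B blocking probability via the standard recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = load * b / (k + load * b)
    return b

def efp(routes, offered, capacity, tol=1e-10, max_iter=1000):
    """routes[r] = list of link indices used by route r; offered[r] = traffic on r."""
    n_links = len(capacity)
    loss = np.zeros(n_links)                      # per-link blocking probabilities
    for _ in range(max_iter):
        reduced = np.zeros(n_links)
        for r, links in enumerate(routes):
            for j in links:
                # reduced load: traffic surviving blocking on the route's other links
                others = [k for k in links if k != j]
                reduced[j] += offered[r] * np.prod([1 - loss[k] for k in others])
        new_loss = np.array([erlang_b(reduced[j], capacity[j]) for j in range(n_links)])
        if np.max(np.abs(new_loss - loss)) < tol:
            return new_loss
        loss = new_loss
    return loss

# A small linear network: three links in a row, single-link and two-link routes.
routes = [[0], [1], [2], [0, 1], [1, 2]]
offered = [5.0, 4.0, 5.0, 2.0, 2.0]
capacity = [10, 10, 10]
print(efp(routes, offered, capacity))
```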
Abstract:
In the energy management of the isolated operation of a small power system, the economic scheduling of the generation units is a crucial problem. Applying the right timing can maximize the performance of the supply. The optimal operation of a wind turbine, a solar unit, a fuel cell and a storage battery is sought by a mixed-integer linear programming model implemented in the General Algebraic Modeling System (GAMS). A Virtual Power Producer (VPP) can optimally operate the generation units, ensuring the proper functioning of the equipment, including maintenance, operation cost and generation measurement and control. A central system control allows a VPP to manage the optimal generation and its load control. The application of the methodology to a real case study at Budapest Tech demonstrates the effectiveness of this method in solving the optimal isolated dispatch of the DC micro-grid renewable energy park. The problem converged in 0.09 s and 30 iterations.
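The sketch below shows, in miniature, how such an isolated dispatch can be posed as a mixed-integer linear program; it uses SciPy's `milp` rather than GAMS, and all unit data (load, renewable forecast, fuel-cell cost and limits, battery parameters) are hypothetical.

```python
# Minimal MILP dispatch sketch: fuel cell (with on/off binary) and battery cover
# the residual load not met by the wind/PV forecast over a few periods.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

T = 4                                           # scheduling periods (dt = 1 h)
load = np.array([6.0, 8.0, 7.0, 5.0])           # kW demand per period
renew = np.array([3.0, 2.0, 4.0, 5.0])          # wind + PV forecast per period
fc_cost, fc_min, fc_max = 0.4, 1.0, 5.0         # fuel-cell cost and operating limits
bat_max, soc0, soc_cap = 3.0, 4.0, 8.0          # battery power limit, initial and max energy

# Decision vector x = [p_fc(0..T-1), p_bat(0..T-1), u(0..T-1)], u = fuel-cell on/off.
nvar = 3 * T
c = np.concatenate([np.full(T, fc_cost), np.zeros(T), np.zeros(T)])

cons = []
# Power balance per period: p_fc + p_bat = load - renewables.
A = np.zeros((T, nvar)); A[np.arange(T), np.arange(T)] = 1; A[np.arange(T), T + np.arange(T)] = 1
cons.append(LinearConstraint(A, load - renew, load - renew))
# Fuel-cell range linked to its binary: u*fc_min <= p_fc <= u*fc_max.
A = np.zeros((T, nvar)); A[np.arange(T), np.arange(T)] = 1; A[np.arange(T), 2*T + np.arange(T)] = -fc_max
cons.append(LinearConstraint(A, -np.inf, 0))
A = np.zeros((T, nvar)); A[np.arange(T), np.arange(T)] = 1; A[np.arange(T), 2*T + np.arange(T)] = -fc_min
cons.append(LinearConstraint(A, 0, np.inf))
# Battery state of charge stays within [0, soc_cap]: soc0 - cumsum(p_bat) in range.
A = np.zeros((T, nvar))
for t in range(T):
    A[t, T:T + t + 1] = -1.0
cons.append(LinearConstraint(A, -soc0, soc_cap - soc0))

bounds = Bounds(np.concatenate([np.zeros(T), np.full(T, -bat_max), np.zeros(T)]),
                np.concatenate([np.full(T, fc_max), np.full(T, bat_max), np.ones(T)]))
integrality = np.concatenate([np.zeros(2 * T), np.ones(T)])   # only u is integer

res = milp(c, constraints=cons, integrality=integrality, bounds=bounds)
print("cost:", res.fun)
print("fuel cell:", res.x[:T], "battery:", res.x[T:2*T], "on/off:", res.x[2*T:])
```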
Abstract:
This paper proposes two meta-heuristics (Genetic Algorithm and Evolutionary Particle Swarm Optimization) for solving a bid-based Ancillary Services Dispatch case with 15 bids in an Electricity Market. A Linear Programming approach is also included for comparison purposes. A test case based on the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is used to demonstrate that meta-heuristics are suitable for solving this kind of optimization problem. Faster execution times and lower computational resource requirements are the most relevant advantages of the meta-heuristics when compared with the Linear Programming approach.
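For reference, the Linear Programming benchmark for a bid-based dispatch of this type can be stated very compactly: one decision variable per bid, bounded by the offered capacity, with one equality constraint per service requirement. The bid table and requirements below are hypothetical.

```python
# Minimal LP sketch of a bid-based ancillary services dispatch at minimum cost.
import numpy as np
from scipy.optimize import linprog

services = ["RegDown", "RegUp", "Spin", "NonSpin"]
required = {"RegDown": 30.0, "RegUp": 40.0, "Spin": 50.0, "NonSpin": 20.0}   # MW
# Each bid: (service, price per MW, maximum quantity in MW).
bids = [("RegDown", 10, 20), ("RegDown", 12, 25), ("RegUp", 15, 30),
        ("RegUp", 18, 30), ("Spin", 8, 40), ("Spin", 11, 30),
        ("NonSpin", 5, 15), ("NonSpin", 7, 15)]

prices = np.array([b[1] for b in bids], dtype=float)
caps = [(0, b[2]) for b in bids]
# One equality row per service: accepted quantities of its bids must sum to the requirement.
A_eq = np.array([[1.0 if b[0] == s else 0.0 for b in bids] for s in services])
b_eq = np.array([required[s] for s in services])

res = linprog(c=prices, A_eq=A_eq, b_eq=b_eq, bounds=caps)
for (svc, price, cap), q in zip(bids, res.x):
    print(f"{svc:8s} price={price:4.1f}  accepted={q:5.1f}/{cap} MW")
print("total cost:", res.fun)
```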
Abstract:
Electricity market players operating in a liberalized environment require access to an adequate decision support tool, allowing them to consider all the business opportunities and make strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players. For this reason, decision support tools must include ancillary services market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is included in this paper.
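A Genetic Algorithm for the same kind of dispatch can be sketched as follows: candidate solutions are vectors of accepted quantities per bid, and unmet service requirements are handled with a penalty term in the fitness. The bid data, GA settings and penalty weight below are hypothetical and only illustrate the approach, not the MASCEM implementation.

```python
# Minimal GA sketch for bid-based ancillary services dispatch with a penalty fitness.
import numpy as np

rng = np.random.default_rng(1)
# Each bid: (service index, price, maximum quantity); one requirement per service.
bids = [(0, 10, 20), (0, 12, 25), (1, 15, 30), (1, 18, 30),
        (2, 8, 40), (2, 11, 30), (3, 5, 15), (3, 7, 15)]
required = np.array([30.0, 40.0, 50.0, 20.0])
svc = np.array([b[0] for b in bids])
price = np.array([b[1] for b in bids], dtype=float)
cap = np.array([b[2] for b in bids], dtype=float)

def fitness(x):
    """Dispatch cost plus a heavy penalty for any unmet service requirement."""
    supplied = np.array([x[svc == s].sum() for s in range(len(required))])
    shortfall = np.maximum(required - supplied, 0.0)
    return price @ x + 1e4 * shortfall.sum()

pop_size, n_gen, mut_sigma = 60, 300, 2.0
pop = rng.uniform(0, cap, size=(pop_size, len(bids)))     # initial random dispatches

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection of parents.
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # Uniform crossover between consecutive parents, then Gaussian mutation.
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0, mut_sigma, size=pop.shape)
    children = np.clip(children, 0, cap)
    # Elitism: keep the best individual found so far.
    children[0] = pop[np.argmin(scores)]
    pop = children

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("dispatch:", np.round(best, 1), "cost:", round(price @ best, 1))
```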
Abstract:
In real optimization problems, the analytical expression of the objective function and of its derivatives is usually unknown, or too complex to use. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: Direct Search Methods, or Derivative-free Methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem; in this problem a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behaviour of our algorithm through some examples. The proposed methods were implemented in Java.
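The core of the filter idea can be sketched in a few lines: each point is summarised by its objective value f and aggregated constraint violation h, and a trial point is accepted only if no stored filter entry dominates it in both quantities. The sketch below uses plain dominance; practical filter methods add small envelope margins to guarantee convergence, and the values shown are purely illustrative, not the dissertation's algorithm.

```python
# Minimal sketch of filter acceptance in (objective f, constraint violation h) space.
def dominates(entry, candidate):
    """(f_i, h_i) dominates (f, h) when it is no worse in both objective and violation."""
    return entry[0] <= candidate[0] and entry[1] <= candidate[1]

def filter_accepts(candidate, filt):
    """A trial point is acceptable if no stored entry dominates it."""
    return not any(dominates(entry, candidate) for entry in filt)

def filter_add(candidate, filt):
    """Insert an accepted point and drop entries it now dominates."""
    filt[:] = [e for e in filt if not dominates(candidate, e)]
    filt.append(candidate)

# Toy usage: the filter starts with the initial point; dominated trial steps are rejected.
filt = [(5.0, 2.0)]
for trial in [(4.0, 1.0), (6.0, 3.0), (3.5, 0.0)]:
    if filter_accepts(trial, filt):
        filter_add(trial, filt)
print(filt)   # only non-dominated (f, h) pairs remain
```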
Abstract:
Background: Paranoid ideation has been regarded as a cognitive and social process used as a defence against perceived threats. According to this perspective, paranoid ideation can be understood as a process extending across the normal-pathological continuum. Methods: In order to refine the construct of paranoid ideation and to validate a measure of paranoia, 906 Portuguese participants from the general population and 91 patients were administered the General Paranoia Scale (GPS), and two conceptual models (one- and three-dimensional) were compared through confirmatory factor analysis (CFA). Results: Results from the CFA of the GPS supported a different model from the one-dimensional model proposed by Fenigstein and Vanable, comprising three dimensions (mistrust thoughts, persecutory ideas, and self-deprecation). This alternative model presented a better fit and increased sensitivity when compared with the one-dimensional model. Further data analysis of the scale revealed that the GPS is an adequate assessment tool for adults, with good psychometric characteristics and high internal consistency. Conclusion: The model proposed in the current work leads to further refinement and enrichment of the construct of paranoia in different populations, allowing the assessment of three dimensions of paranoia and the risk of clinical paranoia in a single measure for the general population.
Abstract:
Final Master's project for obtaining the degree of Master in Mechanical Engineering
Abstract:
Microarrays allow thousands of genes to be monitored simultaneously, quantifying the abundance of their transcripts under the same experimental condition at the same time. Among the various available array technologies, double-channel cDNA microarray experiments have arisen in numerous technical protocols associated with genomic studies, and are the focus of this work. Microarray experiments involve many steps, and each one can affect the quality of the raw data. Background correction and normalization are preprocessing techniques used to clean and correct the raw data when undesirable fluctuations arise from technical factors. Several recent studies have shown that no preprocessing strategy outperforms the others in all circumstances, and thus it seems difficult to provide general recommendations. In this work, we propose to use exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of two-channel cancer microarray data sets in which the cancer types (classes) are known. The arrow plot was used to select differentially expressed genes, and the graph of profiles resulting from correspondence analysis was used to visualize the results. Six background-correction methods and six normalization methods were used, yielding 36 preprocessing combinations, which were analysed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
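As an illustration of the preprocessing step, the sketch below applies one simple combination, background subtraction followed by global median centring of the log-ratios, to synthetic two-channel intensities. The study itself compared 36 such combinations in R, whereas this sketch uses NumPy and hypothetical data.

```python
# Minimal sketch of background correction and within-array normalization for
# two-channel (red/green) cDNA data, via the usual M (log-ratio) and A values.
import numpy as np

rng = np.random.default_rng(0)
n_spots = 1000
red_fg, red_bg = rng.gamma(4, 200, n_spots), rng.gamma(2, 30, n_spots)
green_fg, green_bg = rng.gamma(4, 180, n_spots), rng.gamma(2, 30, n_spots)

# Background correction: subtract local background, floored to avoid log of <= 0.
red = np.maximum(red_fg - red_bg, 1.0)
green = np.maximum(green_fg - green_bg, 1.0)

# MA transformation of the two channels.
M = np.log2(red) - np.log2(green)          # log-ratio per spot
A = 0.5 * (np.log2(red) + np.log2(green))  # average log-intensity per spot

# Global median normalization: centre the log-ratios so their median is zero.
M_norm = M - np.median(M)
print("median M before:", round(np.median(M), 3), "after:", round(np.median(M_norm), 3))
```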
Abstract:
OBJECTIVE: To examine the association between tooth loss and general and central obesity among adults. METHODS: Population-based cross-sectional study with 1,720 adults aged 20 to 59 years from Florianópolis, Southern Brazil. Home interviews were performed and anthropometric measures were taken. Information on sociodemographic data, self-reported diabetes, self-reported number of teeth, central obesity (waist circumference [WC] > 88 cm in women and > 102 cm in men) and general obesity (body mass index [BMI] ≥ 30 kg/m²) was collected. We used multivariable Poisson regression models to assess the association between general and central obesity and tooth loss after controlling for confounders. We also performed simple and multiple linear regressions by using BMI and WC as continuous variables. Interaction between age and tooth loss was also assessed. RESULTS: The mean BMI was 25.9 kg/m² (95%CI 25.6;26.2) in men and 25.4 kg/m² (95%CI 25.0;25.7) in women. The mean WC was 79.3 cm (95%CI 78.4;80.1) in men and 88.4 cm (95%CI 87.6;89.2) in women. A positive association was found between the presence of less than 10 teeth in at least one arch and increased mean BMI and WC after adjusting for education level, self-reported diabetes, gender and monthly per capita income. However, this association was lost when the variable age was included in the model. The prevalence of general obesity was 50% higher in those with less than 10 teeth in at least one arch when compared with those with 10 or more teeth in both arches after adjusting for education level, self-reported diabetes and monthly per capita family income. However, the statistical significance was lost after controlling for age. CONCLUSIONS: Obesity was associated with number of teeth, though it depended on the participants' age groups.
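The regression strategy described above is often implemented as a Poisson model with robust standard errors, so that exponentiated coefficients can be read as prevalence ratios for the binary obesity outcome. The sketch below shows that pattern on synthetic data with hypothetical variable names, not the study data.

```python
# Minimal sketch of multivariable Poisson regression with robust (HC) errors,
# yielding prevalence ratios for a binary outcome; all data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1720
df = pd.DataFrame({
    "tooth_loss": rng.integers(0, 2, n),       # 1 = fewer than 10 teeth in an arch
    "age": rng.uniform(20, 59, n),
    "female": rng.integers(0, 2, n),
    "diabetes": rng.binomial(1, 0.08, n),
})
p = 1 / (1 + np.exp(-(-2.0 + 0.4 * df["tooth_loss"] + 0.03 * (df["age"] - 40))))
df["obese"] = rng.binomial(1, p)

model = smf.glm("obese ~ tooth_loss + age + female + diabetes",
                data=df, family=sm.families.Poisson())
result = model.fit(cov_type="HC1")             # robust variance for the binary outcome
print(np.exp(result.params))                   # exponentiated coefficients = prevalence ratios
```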
Abstract:
There is no single definition of a long-memory process. Such a process is usually defined as a series whose correlogram decays slowly or whose spectrum is infinite at frequency zero. It is also said that a series with this property is characterised by long-range dependence and non-periodic long cycles, or that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of the power-law decay of the autocovariance function. The growing international research interest in this topic is justified by the search for a better understanding of the dynamic nature of financial asset price time series. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and asset pricing models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may thus produce abnormal returns. The aim of the dissertation's research is twofold. On the one hand, it intends to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it intends to contribute to the improvement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes do not have independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of alternatively using long-maturity treasury bonds (OTs) in the calculation of market returns, since their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of governments and measures how investors value the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) claims to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in the markets, rescaled-range (R/S) analysis and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the full data series and to compute the "local" Hurst exponent H_t over rolling windows. In addition, statistical hypothesis tests are carried out using the rescaled-range (R/S) test, the modified rescaled-range (M-R/S) test and the GPH fractional differencing test.
In terms of a single conclusion from all the methods about the nature of dependence for the stock market in general, the empirical results are inconclusive. That is, the degree of long memory, and hence any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect") but also to trends that may be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies it refutes the random walk hypothesis with i.i.d. increments, which is the basis of the weak form of the EMH. In view of this, contributions to improving the CAPM are proposed, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long-horizon lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by gBm (in which H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by fBm (in which H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and FSML is that the long-memory measure defined by H is the appropriate reference for expressing risk in models that can be applied to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations therefore include the EMH as a possible particular case.
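As an illustration of the rescaled-range approach mentioned above, the sketch below estimates the Hurst exponent H as the slope of log(R/S) against the log of the window size; H close to 0.5 is consistent with i.i.d. (gBm-like) increments, H > 0.5 with persistence and H < 0.5 with anti-persistence. The synthetic series and window scheme are illustrative only and do not reproduce the dissertation's estimators or tests.

```python
# Minimal sketch of a classical rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations over the std."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_rs(returns, min_window=16):
    """Estimate H as the slope of log(R/S) against log(window size)."""
    n = len(returns)
    sizes, rs_means = [], []
    w = min_window
    while w <= n // 2:
        chunks = [returns[i:i + w] for i in range(0, n - w + 1, w)]
        rs_means.append(np.nanmean([rescaled_range(c) for c in chunks]))
        sizes.append(w)
        w *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(0)
iid_returns = rng.normal(0, 0.01, 4096)          # gBm-like i.i.d. increments
print("H (i.i.d. series):", round(hurst_rs(iid_returns), 3))   # classical R/S is biased
                                                 # slightly above 0.5 in finite samples
```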
Abstract:
Final Master's project for obtaining the degree of Master in Civil Engineering
Abstract:
In this work we present a classification of some of the existing penalty methods (the so-called Exact Penalty Methods) and describe some of their limitations and estimates. With these methods we can solve optimization problems with continuous, discrete and mixed constraints, without requiring continuity, differentiability or convexity. The approach consists of transforming the original problem into a sequence of unconstrained problems derived from the initial one, making it possible to solve them with the methods known for this type of problem. Thus, penalty methods can be used as a first step towards solving constrained problems with methods typically used for unconstrained problems. The work finishes by discussing a new class of penalty methods, for nonlinear optimization, that adjust the penalty parameter dynamically.
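To make the transformation concrete, the sketch below uses a classical quadratic (hence non-exact) penalty: the constrained problem is replaced by a sequence of unconstrained problems whose penalty parameter grows at each outer iteration. The test problem, solver choice and update rule are illustrative only, not the methods classified in the work.

```python
# Minimal quadratic-penalty sketch: a sequence of unconstrained problems with
# an increasing penalty parameter approximates the constrained solution.
import numpy as np
from scipy.optimize import minimize

def f(x):                               # objective: distance to the point (2, 2)
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

def g(x):                               # single inequality constraint: x0 + x1 <= 1
    return x[0] + x[1] - 1

def penalized(x, mu):
    return f(x) + mu * max(0.0, g(x)) ** 2

x, mu = np.zeros(2), 1.0
for _ in range(8):                      # solve a sequence of unconstrained problems
    res = minimize(lambda v: penalized(v, mu), x, method="Nelder-Mead")
    x = res.x
    mu *= 10                            # tighten the penalty each outer iteration
print("solution:", np.round(x, 4), "constraint value:", round(g(x), 6))
```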
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering