948 results for best estimate method
Abstract:
The present study identifies quantitative trait loci (QTLs) for the response to an experimental infection with Bonamia ostreae, the parasite responsible for bonamiosis, in two segregating families of the European flat oyster, Ostrea edulis. We first constructed a genetic linkage map for each studied family and improved the existing genetic linkage map for the European flat oyster with a set of SNP markers. This updated map combines the best accuracy and the best estimate of genome coverage available for an oyster species. Second, by comparing the QTLs detected in this study with those previously published for O. edulis under similar experimental conditions, we identified several potential QTLs that were shared between the different families, as well as new family-specific QTLs. Within the confidence intervals of several QTL regions, we also detected previously predicted candidate genes that are differentially expressed during infection with B. ostreae, providing new candidate genome regions that should now be studied more specifically.
Abstract:
Increasingly, the main objectives in industry are low-cost production, maximum quality, and the shortest possible manufacturing time. To reach this goal, industry frequently turns to computer numerical control (CNC) machines, since this technology makes it possible to achieve high precision and shorter processing times. CNC machine tools can be applied to different machining processes, such as turning, milling, and drilling, among others. Of all these processes, milling is the most widely used because of its versatility. This process is normally used to machine metallic materials such as steel and cast iron. In this work, the effects of varying four milling parameters (cutting speed, feed rate, radial depth of cut, and axial depth of cut), individually and through the interaction between some of them, on the surface roughness of a hardened steel (12738 steel) are analysed. Two optimization methods are used in this analysis: the Taguchi method and the response surface method. The first, known as the Taguchi method, was used to reduce the number of possible combinations and, consequently, the number of trials to perform. The response surface method (RSM) was used in order to compare its results with those of the Taguchi method; according to some works in the specialized literature, RSM converges more quickly to an optimal value. The Taguchi method is well known in the industrial sector, where it is used for quality control. It introduces useful concepts such as robustness and quality loss, and is very helpful for identifying variations in the production system during the industrial process, quantifying that variation and allowing undesirable factors to be eliminated. With this method an L16 orthogonal array was constructed; two levels were defined for each parameter and sixteen trials were carried out. After each trial, the surface roughness of the workpiece is measured. Based on the roughness measurements, the data are treated statistically through analysis of variance (ANOVA) in order to determine the influence of each parameter on surface roughness. The minimum roughness measured was 1.05 μm. This study also determined the contribution of each machining parameter and of their interactions. Analysis of the F-ratio values (ANOVA) reveals that the most important factors for minimizing surface roughness are the radial depth of cut and the interaction between the radial and axial depths of cut, with contributions of about 30% and 24%, respectively. In a second stage, the same study was carried out using the response surface method, in order to compare the results of the two methods and to determine which is the better optimization method for minimizing roughness. Response surface methodology is based on a set of mathematical and statistical techniques useful for modelling and analysing problems in which the response of interest is influenced by several variables, the objective being to optimize that response. For this method only five trials were carried out, unlike Taguchi, since in just five trials it achieves roughness values lower than the average roughness obtained with the Taguchi method.
The lowest value obtained with this method was 1.03 μm. It is therefore concluded that, for the trials performed, RSM is a more suitable optimization method than Taguchi: better results were obtained with a smaller number of trials, which implies less tool wear, shorter processing time, and a significant reduction in the material used.
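To make the factor-contribution analysis above concrete, here is a minimal Python sketch of a 2^4 design (16 runs, matching an L16 array for four two-level factors) with an ANOVA-style percent-contribution breakdown; the roughness values, and hence the printed contributions, are hypothetical placeholders rather than the thesis data.

```python
import itertools
import numpy as np

factors = ["cutting_speed", "feed_rate", "radial_depth", "axial_depth"]
# Full 2^4 factorial: each row is one trial, each column a factor level (0/1).
design = np.array(list(itertools.product([0, 1], repeat=4)))

# Hypothetical surface-roughness measurements Ra (um), one per trial.
ra = np.array([1.40, 1.32, 1.21, 1.18, 1.35, 1.28, 1.12, 1.05,
               1.45, 1.38, 1.25, 1.19, 1.41, 1.30, 1.16, 1.08])

total_ss = ((ra - ra.mean()) ** 2).sum()
coded = 2 * design - 1  # factor levels recoded as -1/+1

def contrast_ss(x):
    # Sum of squares of a +/-1 contrast in a balanced two-level design.
    return (x @ ra) ** 2 / len(ra)

for j, name in enumerate(factors):
    pct = 100 * contrast_ss(coded[:, j]) / total_ss
    print(f"{name:14s} contribution: {pct:5.1f}%")

# Interaction of radial and axial depth of cut: the elementwise product
# of the two coded columns gives the interaction contrast.
pct_int = 100 * contrast_ss(coded[:, 2] * coded[:, 3]) / total_ss
print(f"radial x axial contribution: {pct_int:5.1f}%")
```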
Abstract:
The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated following DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of such deleterious compounds, some challenges remain. Inhibitors can be components of the samples themselves, of the substrate where samples were deposited, or of the chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation with the initial inhibitor input in the sample, and its overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be unsuitable for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested; this included generating data associated with microbial peak signatures to identify the locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts. The extraction methods tested were able to remove >90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.
Abstract:
Laser ablation ICP-MS U–Pb analyses were conducted on detrital zircons from Triassic sandstone and conglomerate of the Lusitanian basin in order to: i) document the age spectra of the detrital zircon; ii) compare the U–Pb detrital zircon ages with previously published data obtained from Upper Carboniferous, Ordovician, Cambrian and Ediacaran sedimentary rocks of the pre-Mesozoic basement of western Iberia; iii) discuss potential sources; and iv) test the hypothesis of sedimentary recycling. U–Pb dating of zircons established a maximum depositional age for this deposit of Permian (ca. 296 Ma), about sixty million years older than the age indicated by the fossil content recognized in previous studies (Upper Triassic). The distribution of detrital zircon ages points to common source areas: the Ossa–Morena and Central Iberian zones, which outcrop in and close to the Porto–Tomar fault zone. The high degree of immaturity and the evidence of little transport of the Triassic sediment suggest that granite may constitute a primary crystalline source. The Carboniferous age of ca. 330 Ma for the best estimate of crystallization of a granite pebble in a Triassic conglomerate, and the Permian–Carboniferous ages (ca. 315 Ma) found in detrital zircons, provide evidence of the denudation of Variscan and Cimmerian granites during the infilling of continental rift basins in western Iberia. The zircon age spectra found in the Triassic strata are also the result of recycling from the Upper Carboniferous Buçaco basin, which probably acted as an intermediate sediment repository. The U–Pb data in this study suggest that the detritus of the Triassic sandstone and conglomerate of the Lusitanian basin is derived from local source areas with features typical of Gondwana, with no sediment input from external sources in Laurussia or southwestern Iberia.
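As an illustration of how a "best estimate of crystallization" is typically obtained from a set of single-grain U–Pb dates, the following sketch computes an inverse-variance weighted mean age with its standard error and MSWD; the ages and uncertainties are hypothetical placeholders, not the study's analyses.

```python
import numpy as np

ages = np.array([331.2, 329.5, 330.8, 328.9, 330.1])   # Ma
sigmas = np.array([1.8, 2.1, 1.6, 2.4, 1.9])           # 1-sigma, Ma

w = 1.0 / sigmas**2
wmean = np.sum(w * ages) / np.sum(w)                   # weighted mean age
wmean_err = np.sqrt(1.0 / np.sum(w))                   # its standard error
# MSWD: scatter of the dates relative to their stated uncertainties.
mswd = np.sum(w * (ages - wmean) ** 2) / (len(ages) - 1)

print(f"weighted mean = {wmean:.1f} +/- {wmean_err:.1f} Ma (1-sigma), "
      f"MSWD = {mswd:.2f}")
```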
Abstract:
Accurately gauging consumers' maximum willingness to pay (WTP) for a product is a critical success factor that determines not only market performance but also financial results. A number of approaches have therefore been developed to estimate consumers' willingness to pay accurately. Here, four commonly used measurement approaches are compared using real purchase data as a benchmark. The relative strengths of each method are analyzed on the basis of statistical criteria and, more importantly, of their potential to predict managerially relevant criteria such as optimal price, quantity, and profit. The results show a slight advantage for incentive-aligned approaches, though the market setting needs to be considered when choosing the best-fitting procedure.
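As a sketch of the managerial criteria mentioned above, the following Python snippet takes a sample of elicited WTP values and finds the price that maximizes expected profit; the WTP distribution and unit cost are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
wtp = rng.lognormal(mean=3.0, sigma=0.5, size=1_000)  # elicited WTP values
unit_cost = 10.0                                      # hypothetical

# For each candidate price, demand is the share of consumers whose WTP
# is at least that price; expected profit is margin times demand.
prices = np.sort(wtp)
demand = np.array([(wtp >= p).mean() for p in prices])
profit = (prices - unit_cost) * demand

best = profit.argmax()
print(f"optimal price ~ {prices[best]:.2f}, "
      f"share sold {demand[best]:.1%}, "
      f"profit per potential buyer {profit[best]:.2f}")
```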
Abstract:
A combination of the trajectory sensitivity method and master-slave synchronization was proposed for the parameter estimation of nonlinear systems. It was shown that master-slave coupling increases the robustness of the trajectory sensitivity algorithm with respect to the initial guess of the parameters. Since synchronization does not guarantee that the estimation process converges to the correct parameters, a conditional test that guarantees that the new combined methodology estimates the true values of the parameters was proposed. This conditional test was successfully applied to Lorenz's and Chua's systems, and for these examples the proposed parameter estimation algorithm proved to be very robust with respect to parameter initial guesses and measurement noise.
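The sketch below illustrates the underlying idea rather than the authors' exact algorithm: a slave Lorenz system is diffusively coupled to the master's measured x-variable, and the synchronization error, swept over candidate values of sigma, is smallest near the true parameter. The coupling gain and sweep range are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# "Measured" master trajectory, generated with the true value sigma = 10.
t_eval = np.linspace(0.0, 20.0, 4000)
master_x = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], args=(10.0,),
                     t_eval=t_eval, rtol=1e-8).y[0]

def sync_error(sigma_hat, k=20.0):
    # Slave copy of the system, driven by the master's x through a
    # proportional (diffusive) coupling term of gain k.
    def slave(t, s):
        x, y, z = s
        xm = np.interp(t, t_eval, master_x)
        return [sigma_hat * (y - x) + k * (xm - x),
                x * (28.0 - z) - y,
                x * y - (8.0 / 3.0) * z]
    sol = solve_ivp(slave, (0.0, 20.0), [5.0, 5.0, 5.0],
                    t_eval=t_eval, rtol=1e-6)
    return np.mean((sol.y[0] - master_x) ** 2)

# Sweep the candidate parameter: the synchronization error should be
# smallest near the true sigma = 10.
for s in (8.0, 9.0, 10.0, 11.0, 12.0):
    print(f"sigma_hat = {s:4.1f}   sync error = {sync_error(s):.3e}")
```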
Abstract:
The crosstalk phenomenon consists of recording the volume-conducted electromyographic activity of muscles other than the one under study. This interference may impair the correct interpretation of results in a variety of experiments. A new protocol is presented here for assessing crosstalk between two muscles, based on changes in their electrical activity following a reflex discharge in one of the muscles in response to nerve stimulation. A reflex compound muscle action potential (H-reflex) was used to induce a silent period in the muscle that causes the crosstalk, called here the remote muscle. The rationale is that if the activity recorded in the target muscle is influenced by a distant source (the remote muscle), a silent period observed in the electromyogram (EMG) of the remote muscle will coincide with a decrease in the EMG activity of the target muscle. The new crosstalk index is evaluated from the root mean square (RMS) values of the EMGs obtained in two distinct periods (background EMG and silent period) of both the remote and the target muscles. In the present work, the application focused on estimating the degree of crosstalk from the soleus muscle to the tibialis anterior muscle during quiet stance. However, the technique may be extended to other pairs of muscles, provided a silent period can be evoked in one of them.
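A minimal sketch of the RMS-based computation follows. The specific index definition used here (the drop in target-muscle RMS during the remote muscle's silent period, normalized by the remote muscle's own drop) is a plausible reading of the protocol rather than the paper's exact formula, and the EMG signals are synthetic placeholders.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

fs = 2000  # sampling rate, Hz
rng = np.random.default_rng(1)

# Synthetic EMG: background activity, then a 100-ms silent period in the
# remote muscle; 20% of the remote signal leaks into the target channel.
remote = rng.normal(0.0, 1.0, 2 * fs)
remote[fs:fs + fs // 10] *= 0.05            # silent period
target = rng.normal(0.0, 0.5, 2 * fs) + 0.2 * remote

bg = slice(0, fs)                           # background window
sp = slice(fs, fs + fs // 10)               # silent-period window

drop_target = 1 - rms(target[sp]) / rms(target[bg])
drop_remote = 1 - rms(remote[sp]) / rms(remote[bg])
crosstalk_index = drop_target / drop_remote
print(f"crosstalk index ~ {crosstalk_index:.2f}")
```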
Abstract:
Axial vertebral rotation, an important parameter in the assessment of scoliosis, may be identified on X-ray images. In line with advances in the field of digital radiography, hospitals have increasingly been using this technique. The objective of the present study was to evaluate the reliability of computer-processed rotation measurements obtained from digital radiographs. A software program was therefore developed that digitally reproduces the methods of Perdriolle and Raimondi and semi-automatically calculates the degree of vertebral rotation on digital radiographs. Three independent observers estimated vertebral rotation employing both the digital and the traditional manual methods. Compared with the traditional method, the digital assessment showed a 43% smaller error and a stronger correlation. In conclusion, the digital method seems to be reliable and to enhance the accuracy and precision of vertebral rotation measurements.
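The following sketch illustrates the semi-automatic idea with a simple sine-law stand-in: landmarks digitized on the radiograph (the vertebral body edges and the pedicle shadow) yield a normalized pedicle offset that is mapped to a rotation angle. The published Perdriolle and Raimondi methods use calibrated templates and tables, which this simplification does not reproduce.

```python
import math

def rotation_deg(left_edge_px, right_edge_px, pedicle_px):
    """Estimate axial rotation from x-coordinates (pixels) digitized on a
    radiograph: the two lateral edges of the vertebral body and the
    pedicle shadow. Sine-law stand-in, not the Perdriolle/Raimondi tables."""
    width = right_edge_px - left_edge_px
    midline = (left_edge_px + right_edge_px) / 2
    # Pedicle offset from the midline, normalized by the half-width.
    offset_ratio = (pedicle_px - midline) / (width / 2)
    return math.degrees(math.asin(max(-1.0, min(1.0, offset_ratio))))

# Hypothetical landmark coordinates for one vertebra.
print(f"estimated rotation ~ {rotation_deg(102, 298, 230):.1f} deg")
```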
Abstract:
Objectives: The methods currently available for the measurement of energy expenditure in patients, such as indirect calorimetry and doubly labelled water, are expensive and are limited in Brazil to research projects. Thus, equations for the prediction of resting metabolic rate appear to be a viable alternative for clinical practice. However, there are no specific equations for the Brazilian population, and few studies have applied the existing, commonly used equations to Brazilian women in the climacteric period. On this basis, the objective of the present study was to investigate the concordance between the most frequently used predictive equations and indirect calorimetry for the measurement of resting metabolic rate. Methods: We calculated the St. Laurent concordance correlation coefficient between the equations and the resting metabolic rate measured by indirect calorimetry in 46 climacteric women. Results: The equation showing the best concordance was the FAO/WHO/UNU formula (0.63), which proved better than the Harris & Benedict equation (0.55) for the sample studied. Conclusions: On the basis of these results, we conclude that the FAO/WHO/UNU formula is the better predictor of resting metabolic rate in climacteric women. Further studies with larger and more homogeneous samples are needed before the FAO/WHO/UNU formula can be applied to this population group with greater accuracy.
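For concreteness, the sketch below computes predictions from the FAO/WHO/UNU (women 30-60 years) and original Harris & Benedict equations and scores agreement against measured values using Lin's concordance correlation coefficient as a stand-in for the St. Laurent coefficient used in the study; the anthropometric and calorimetry values are hypothetical.

```python
import numpy as np

def fao_who_unu_female_30_60(weight_kg):
    # FAO/WHO/UNU (1985), women 30-60 years, kcal/day.
    return 8.7 * weight_kg + 829

def harris_benedict_female(weight_kg, height_cm, age_y):
    # Original Harris & Benedict (1919) equation for women, kcal/day.
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_y

def lin_ccc(x, y):
    # Lin's concordance correlation coefficient (stand-in for St. Laurent's).
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical sample: weight (kg), height (cm), age (y), measured RMR (kcal/day).
weight = np.array([62, 70, 58, 80, 66])
height = np.array([158, 162, 155, 165, 160])
age = np.array([48, 52, 50, 55, 47])
measured = np.array([1290, 1410, 1250, 1520, 1340])

for name, pred in [("FAO/WHO/UNU", fao_who_unu_female_30_60(weight)),
                   ("Harris & Benedict", harris_benedict_female(weight, height, age))]:
    print(f"{name}: CCC = {lin_ccc(pred, measured):.2f}")
```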
Abstract:
OBJECTIVES: 1. To critically evaluate a variety of mathematical methods of calculating effective population size (Ne) by conducting comprehensive computer simulations and by analysis of empirical data collected from the Moreton Bay population of tiger prawns. 2. To lay the groundwork for the application of the technology in the NPF. 3. To produce software for the calculation of Ne, and to make it widely available.
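As an example of one of the mathematical methods under evaluation, the sketch below implements the temporal (F-statistic) approach: allele frequencies sampled t generations apart give a standardized variance Fc (Nei & Tajima), from which Ne is estimated with Waples' sampling correction. The allele frequencies and sample sizes are hypothetical placeholders.

```python
import numpy as np

def ne_temporal(p0, pt, s0, st, t):
    """Ne from temporal allele-frequency change: Nei & Tajima's Fc with
    Waples' correction for finite sample sizes s0 and st (individuals)."""
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    z = (p0 + pt) / 2
    fc = np.mean((p0 - pt) ** 2 / (z - p0 * pt))      # standardized variance
    return t / (2 * (fc - 1 / (2 * s0) - 1 / (2 * st)))

# Hypothetical frequencies of five alleles at generations 0 and t = 4,
# with 100 individuals sampled at each time point.
p0 = [0.30, 0.55, 0.15, 0.62, 0.40]
pt = [0.22, 0.64, 0.21, 0.50, 0.49]
print(f"estimated Ne ~ {ne_temporal(p0, pt, 100, 100, t=4):.0f}")
```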
Abstract:
Introduction / Aims: Making important decisions is a specific task of the manager. An efficient manager takes these decisions through a systematic process with well-defined elements arranged in a precise order. In pharmaceutical practice and business, in the supply process of pharmacies, there are situations in which medicine distributors offer a certain discount but require payment within a shorter period of time. In these cases, the offer can be analysed with the help of the decision tree method, which permits identifying the decision that offers the best possible result in a given situation. The aims of the research were to analyse the product offers of several different suppliers and to establish the most advantageous ways of supplying a pharmacy. Material / Methods: The general product offers of the following medical stores were studied: A&G Med, Farmanord, Farmexim, Mediplus, Montero and Relad. For medicine offers that included a discount, the decision tree method was applied in order to select the most advantageous offers. The decision tree is a management method used to support correct decisions, generally when the decisions to be evaluated involve a series of stages. The tree diagram is used to look for the most efficient means of attaining a specific goal. Decision trees are probabilistic methods, useful when making decisions under risk. Results: The analysis of the tree diagrams indicated that purchasing medicines with a discount (1%, 10%, 15%) and payment within a shorter time interval (120 days) is more profitable than purchasing without a discount and payment within a longer time interval (160 days). Discussion / Conclusion: Depending on the results of the tree diagram analysis, pharmacies would purchase from the selected suppliers. The research has shown that the decision tree method is a valuable working instrument for choosing the best ways of supplying pharmacies, and that it is very useful to specialists in the pharmaceutical field and pharmaceutical management, to medicine suppliers, to pharmacy practitioners in community pharmacies, and especially to pharmacy managers and chief pharmacists.
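A minimal numerical rendering of such a tree comparison: each branch's payoff is the effective (present-value) cost of the purchase under a given discount and payment term, and the cheapest branch is chosen. The order value and the buyer's cost of capital are hypothetical assumptions, not figures from the study.

```python
order_value = 10_000.0          # hypothetical order value
cost_of_capital = 0.08          # hypothetical annual financing rate

def effective_cost(discount, payment_days):
    # Discounted price, expressed in present-value terms: paying later
    # is cheaper once the buyer's cost of capital is taken into account.
    price = order_value * (1 - discount)
    return price / (1 + cost_of_capital) ** (payment_days / 365)

offers = {
    "no discount, 160 days": (0.00, 160),
    "1% discount, 120 days": (0.01, 120),
    "10% discount, 120 days": (0.10, 120),
    "15% discount, 120 days": (0.15, 120),
}

for label, (d, days) in offers.items():
    print(f"{label:24s} effective cost: {effective_cost(d, days):9,.0f}")

# With these assumptions every discount branch has a lower effective
# cost than full price at 160 days, matching the study's conclusion;
# a higher cost of capital could flip the marginal 1% case.
```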
Abstract:
In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in the simulation results, an error estimate based on the residual of the MSFV approximate pressure field is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error control criteria are investigated numerically. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
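The control logic can be sketched as follows, with a plain Jacobi solver and a 1D Laplacian standing in for the MSFV pressure solve; the point here is the residual-based control, not the multiscale operator itself.

```python
import numpy as np

# Stand-in "pressure system": a 1D Laplacian.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def iterate_until(p, b, tol, max_iters=10_000):
    """Jacobi iterations until the pressure residual norm drops below tol."""
    d = np.diag(A)
    for it in range(max_iters):
        r = b - A @ p
        if np.linalg.norm(r) <= tol:
            return p, it
        p = p + r / d
    return p, max_iters

# Initial time step: iterate to a tight tolerance.
b = np.ones(n)
p, its0 = iterate_until(np.zeros(n), b, tol=1e-8)
print(f"initial step: {its0} iterations")

# Later time step: the right-hand side changes slightly (e.g. updated
# saturations); reuse the previous pressure as the starting guess and
# iterate only if the residual exceeds the looser threshold.
b = b + 0.05 * np.sin(np.linspace(0.0, np.pi, n))
threshold = 1e-3
if np.linalg.norm(b - A @ p) > threshold:
    p, its1 = iterate_until(p, b, tol=threshold)
    print(f"later step: {its1} additional iterations")
```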
Abstract:
BACKGROUND: Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. OBJECTIVE: To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. MATERIALS AND METHODS: Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI(vol) 4.8-7.9 mGy, DLP 37.1-178.9 mGy·cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. RESULTS: The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. CONCLUSION: Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone.
Abstract:
We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
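Written as a display, the guarantee described above takes the standard oracle-inequality form (a schematic rendering, with $\hat h, \hat K$ the selected smoothing factor and kernel and $c$ a constant depending only on the complexity of the kernel family):
\[
\int \left| f_{n,\hat h,\hat K} - f \right| \;\le\; 3\,\inf_{h,K} \int \left| f_{n,h,K} - f \right| \;+\; c\,\sqrt{\frac{\log n}{n}} \qquad \text{for all densities } f.
\]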