934 results for Direct modified method


Relevance:

30.00%

Publisher:

Abstract:

In this work, alternative desalination processes were evaluated with the aim of recovering and reusing the water contained in concentrated brines, with the membrane distillation crystallization (MDC) process investigated in depth. A differential model was developed for the direct contact membrane distillation (DCMD) process, incorporating rigorous thermodynamic methods for aqueous systems of strong electrolytes, as well as the heat and mass transfer mechanisms and the temperature and concentration polarization effects characteristic of this separation process. Based on simulations performed with the resulting mathematical model, the main parameters influencing the design of a DCMD membrane module were investigated. The model was subsequently extended with additional mass and energy balance equations to include the crystallization operation and thus represent the MDC process. Using the simulation results and the extended model, a hierarchical method for the design of MDC processes was developed, with the goal of making this activity traceable and repeatable. Also based on the MDC model, important aspects of MDC were discussed, such as the possibility of crystal nucleation and growth on the membrane surface, as well as the behaviour of the process with salts of different solubility characteristics and metastable zone widths. It was found that for salts whose solubility varies very little with temperature and which have a narrow metastable zone, as is the case of NaCl, operation with cooling in the crystallizer is not viable because it increases the energy consumption of the process excessively; in such cases it is preferable to operate "isothermally" (without cooling in the crystallizer) and accept the possibility of nucleation inside the module. At the opposite extreme, it was observed that for salts whose solubility varies strongly with temperature, a small amount of cooling in the crystallizer is sufficient to guarantee undersaturation conditions inside the module without a significant energy penalty for the process. For salts whose solubility varies little with temperature but which have a wide metastable zone, there is some energy penalty for operating with crystallizer cooling, although not as pronounced as for salts with a narrow metastable zone. An alternative flowsheet was proposed for the MDC process, in which a feed pre-concentration loop was introduced before the crystallization loop for the case of very dilute feed solutions. This scheme increased the overall permeate flux of the process and consequently reduced the total membrane area required. It was found that, by pre-concentrating the feed from 5% to 10% by mass (for the desalination of a NaCl solution), the total membrane area could be reduced by 27.1% and the specific energy consumption of the process by 10.6%, compared with the process without pre-concentration. Useful tools were developed for the design of industrial-scale MDC desalination processes.
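To give a concrete sense of the driving force such a DCMD model describes, the sketch below (Python, illustrative only) computes the permeate flux as a membrane distillation coefficient times the vapour-pressure difference between the membrane surfaces, with the surface temperatures derived from the bulk temperatures through an assumed temperature polarization coefficient; the values of `b_m` and `tpc` are placeholder assumptions, not parameters from the work summarized above.

```python
def p_sat_water(t_c):
    """Saturation vapour pressure of pure water in Pa (Antoine equation, ~1-100 degC)."""
    a, b, c = 8.07131, 1730.63, 233.426      # Antoine constants, pressure in mmHg
    return 10 ** (a - b / (c + t_c)) * 133.322

def dcmd_flux(t_feed, t_perm, b_m=3.0e-7, tpc=0.85):
    """Permeate flux (kg m^-2 s^-1) across an idealized DCMD element.

    b_m : membrane distillation coefficient, kg m^-2 s^-1 Pa^-1 (assumed value)
    tpc : temperature polarization coefficient (T_fm - T_pm)/(T_f - T_p), assumed
    """
    half_loss = 0.5 * (1.0 - tpc) * (t_feed - t_perm)
    t_fm = t_feed - half_loss                # feed-side membrane surface temperature
    t_pm = t_perm + half_loss                # permeate-side membrane surface temperature
    return b_m * (p_sat_water(t_fm) - p_sat_water(t_pm))

# Example: 60 degC feed against 25 degC permeate
print(dcmd_flux(60.0, 25.0))
```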

Relevance:

30.00%

Publisher:

Abstract:

Characterization of sound absorbing materials is essential to predict their acoustic behaviour. The most commonly used models consider the flow resistivity, porosity and average fibre diameter as the parameters from which the acoustic impedance and the sound absorption coefficient are determined. Besides direct experimental techniques, numerical approaches are an alternative for estimating a material's parameters. In this work, an inverse numerical method to obtain some parameters of a fibrous material is presented. Starting from measurements of the normal-incidence sound absorption coefficient and the model proposed by Voronina, basic minimization techniques are applied to obtain the porosity, average fibre diameter and density of a sound absorbing material. The numerical results agree fairly well with the experimental data.
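A minimal sketch of such an inverse fit is given below: a least-squares misfit between the measured and modelled absorption curves is minimized over the three material parameters. The `absorption_model` function is only a stand-in with a plausible curve shape, not Voronina's actual empirical model, and the bounds and starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def absorption_model(freqs, porosity, fibre_diam_um, density):
    """Stand-in for an empirical absorption model such as Voronina's.

    This placeholder only mimics the shape of a typical absorption curve;
    the real model relates the parameters to a characteristic impedance
    and a propagation constant.
    """
    f0 = 2000.0 * fibre_diam_um / (porosity * density + 1e-9)   # arbitrary scaling
    return porosity * (1.0 - np.exp(-freqs / f0))

def inverse_fit(freqs, alpha_measured, x0=(0.9, 10.0, 30.0)):
    """Estimate (porosity, fibre diameter, density) from measured alpha(f)."""
    def misfit(x):
        alpha_model = absorption_model(freqs, *x)
        return np.sum((alpha_model - alpha_measured) ** 2)      # least-squares misfit
    bounds = [(0.5, 0.999), (1.0, 50.0), (5.0, 200.0)]          # assumed plausible ranges
    result = minimize(misfit, x0, bounds=bounds, method="L-BFGS-B")
    return result.x
```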

Relevance:

30.00%

Publisher:

Abstract:

Array measurements have become a valuable tool for non-invasive site response characterization. The array design, i.e. its size, geometry and number of stations, has a great influence on the quality of the results obtained. Of these parameters, the number of available stations is usually the main limitation for field experiments because of the economic and logistical constraints it involves. Sometimes one or more stations of the initially planned layout, carefully designed before the fieldwork campaign, do not work properly, modifying the prearranged geometry; at other times it is not possible to set up the desired layout at all because of a lack of stations. Therefore, for a planned array layout, the number of operative stations and their arrangement in the array become a crucial point in the acquisition stage and subsequently in the dispersion curve estimation. In this paper we carry out an experimental study to determine the minimum number of stations that provides reliable dispersion curves for three prearranged array configurations (triangular, circular with a central station, and polygonal geometries). For the optimization study, we jointly analyze the theoretical array responses and the experimental dispersion curves obtained through the f-k method. In the case of the f-k method, we compare the dispersion curves obtained for the original, prearranged arrays with those obtained for the modified arrays, i.e. the dispersion curves obtained when a certain number of stations n is removed, each time, from the original layout of X geophones. The comparison is evaluated by means of a misfit function, which helps us determine how strongly the studied geometries are constrained by station removal and which station, or combination of stations, degrades the array capability most when unavailable. All this information may be crucial for improving future array designs, by determining when it is possible to optimize the number of deployed stations without losing the reliability of the results.
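As a sketch of how such a misfit-based comparison can be organized, the following fragment ranks every possible removal of n stations by the RMS relative difference between the dispersion curve of the reduced array and that of the full layout. The `dispersion_curve` callable standing in for the f-k processing, and the RMS form of the misfit, are assumptions for illustration rather than the exact definitions used in the paper.

```python
import numpy as np
from itertools import combinations

def dispersion_misfit(c_full, c_reduced):
    """RMS relative misfit between two dispersion curves sampled at the same frequencies."""
    return np.sqrt(np.mean(((c_reduced - c_full) / c_full) ** 2))

def rank_station_removals(stations, n_removed, dispersion_curve):
    """Evaluate every way of removing n_removed stations from the full layout.

    dispersion_curve(subset) is assumed to return the phase-velocity curve
    (one value per frequency, as a NumPy array) estimated with the given
    subset of stations, e.g. via an f-k analysis of the recorded noise.
    """
    c_full = dispersion_curve(stations)
    results = []
    for removed in combinations(stations, n_removed):
        subset = [s for s in stations if s not in removed]
        misfit = dispersion_misfit(c_full, dispersion_curve(subset))
        results.append((removed, misfit))
    return sorted(results, key=lambda r: r[1])    # most harmful removals last
```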

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new framework based on optimal control for defining dynamic visual controllers that guide any serial-link structure. The proposed general method employs optimal control to obtain the desired behaviour in the joint space from a specified cost function that determines how the control effort is distributed over the joints. The approach allows the development of new direct visual controllers for any mechanical joint system with redundancy. Finally, the authors present experimental results and verification on a real robotic system for several controllers derived from the framework.
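For illustration, the fragment below shows a generic way in which a joint-space cost weighting distributes control effort over a redundant arm: joint velocities are chosen to realize a desired image-feature velocity while minimizing a weighted quadratic effort, via the weighted least-norm solution. This is a standard resolution used only to illustrate the role of the weighting matrix; it is not the optimal-control law developed in the paper.

```python
import numpy as np

def weighted_joint_velocities(jacobian, feature_error, weights, gain=1.0):
    """Joint velocities that realize a desired image-feature velocity while
    minimizing the weighted effort 0.5 * qdot^T W qdot.

    Generic weighted least-norm resolution for a redundant arm, shown only to
    illustrate how the weighting matrix W spreads effort over the joints.
    """
    w_inv = np.linalg.inv(weights)
    jwjt = jacobian @ w_inv @ jacobian.T
    # qdot = W^-1 J^T (J W^-1 J^T)^-1 * (gain * error)
    return w_inv @ jacobian.T @ np.linalg.solve(jwjt, gain * feature_error)

# Toy example: 2 image features, 3 joints; the last joint is penalized more heavily.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])
W = np.diag([1.0, 1.0, 10.0])
print(weighted_joint_velocities(J, np.array([0.05, -0.02]), W))
```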

Relevance:

30.00%

Publisher:

Abstract:

Frequently, the population ecology of marine organisms takes a descriptive approach in which sizes and densities are plotted over time. This approach is of limited use for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data from which the size structure and the density of each size class can be calculated in order to determine the matrix parameters. This approach is known as a "demographic inverse problem" and is based on quadratic programming methods, but it has rarely been used on aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two management strategies for lowering the population to pre-2008 levels, when there was no significant interaction with bathers: direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected the prey of all medusa sizes. Our model fitted the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or prey reduction. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
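A minimal sketch of the kind of size-structured projection and removal scenario described above is given below; the matrix entries and the removal fraction are illustrative toy values, not the rates estimated for C. marsupialis.

```python
import numpy as np

# Toy size-structured projection matrix (classes: small, medium, large).
# Sub-diagonal entries are growth to the next size class, diagonal entries are
# stasis, and the top-right entry is reproduction; values are illustrative only.
A = np.array([[0.20, 0.00, 4.50],
              [0.35, 0.40, 0.00],
              [0.00, 0.30, 0.55]])

def project(n0, years, removal_fraction=0.0):
    """Project the size-structured population, removing a fraction of every
    size class each time step (one candidate management scenario)."""
    n = np.array(n0, dtype=float)
    trajectory = [n.copy()]
    for _ in range(years):
        n = A @ n
        n *= (1.0 - removal_fraction)
        trajectory.append(n.copy())
    return np.array(trajectory)

baseline = project([100, 50, 20], years=10)
managed = project([100, 50, 20], years=10, removal_fraction=0.2)
print(baseline[-1].sum(), managed[-1].sum())
```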

Relevance:

30.00%

Publisher:

Abstract:

In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the output of an edge detector. After that, the position of the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented. The calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out with our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
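The following sketch illustrates one plausible form of a similarity function mixing the texture (Gabor jets) and the geometry (node positions) of two facial graphs, as described above; the specific geometry term and the weighting `alpha` are assumptions for illustration, not the function used in the paper.

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Normalized dot product between two Gabor jets (magnitude-only similarity)."""
    return float(np.dot(jet_a, jet_b) /
                 (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-12))

def graph_similarity(nodes_a, jets_a, nodes_b, jets_b, alpha=0.5):
    """Similarity between two facial graphs mixing texture and geometry.

    nodes_*: (N, 2) arrays of fiducial-point coordinates
    jets_* : (N, K) arrays of Gabor-jet magnitudes per node
    alpha  : assumed weighting between the texture and geometry terms
    """
    texture = np.mean([jet_similarity(a, b) for a, b in zip(jets_a, jets_b)])
    # Geometry term: smaller average node displacement -> value closer to 1
    geometry = 1.0 / (1.0 + np.mean(np.linalg.norm(nodes_a - nodes_b, axis=1)))
    return alpha * texture + (1.0 - alpha) * geometry
```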

Relevance:

30.00%

Publisher:

Abstract:

From the Introduction. The aim of the present "letter" is to provoke rather than to prove. It is intended to further stimulate the already well-engaged scientific dialogue on the open method of coordination (OMC).1 This explains why some of the arguments put forward are not entirely new, while others are overstretched. This contribution, belated as it is in entering the debate, has the benefit of some hindsight. This hindsight is based on three factors (in chronological order): a) the author has himself participated, as a member of a national delegation, in one of the OMC-induced benchmarking exercises (only to see the final evaluation report get lost in the labyrinth of the national bureaucracy, despite the fact that it contained an overall favorable assessment), as well as in an OECD-led exercise of coordination concerning regulatory reform; b) the extremely rich and knowledgeable academic input, offering a very promising theoretical background for the OMC; and c) some recent empirical research on the efficiency of the OMC, the accounts of which are, to say the least, ambiguous. This recent empirical research grounds the basic assumption of the present paper: that the OMC has only restricted, if not negligible, direct effects in the short term, while it may have some indirect effects in the medium to long term (2). On the basis of this assumption, a series of arguments against the current "spread" of the OMC will be put forward (3). Some proposals on how to neutralize some of the shortfalls of the OMC will follow (4).

Relevance:

30.00%

Publisher:

Abstract:

The Gini index is the most common method for estimating the level of income inequality across countries. In this paper we suggest a simple modification that takes into account the moderating effect of in-kind government benefits. Unlike other studies, which rely on micro-level data that are rarely available for many countries or over long periods of time, the proposed modified Gini index can be calculated using only regularly available data for each country: the original Gini coefficient, government consumption expenditure, GDP, and total tax revenue as a percentage of GDP. This modified version of the Gini index allows the level of inequality to be calculated more precisely and enables better comparisons between countries and over time.
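One simple way to see how in-kind benefits can moderate measured inequality is sketched below: treating the benefits as an equal lump-sum addition to every income rescales the Gini coefficient by mu/(mu + b), where mu is mean income and b the per-capita benefit. The sketch proxies b/mu with government consumption expenditure as a share of GDP; this is an illustration of the idea only, not necessarily the exact adjustment proposed in the paper.

```python
def modified_gini(gini, gov_consumption_share):
    """Illustrative in-kind-benefit adjustment to the Gini coefficient.

    Adding a constant b to all incomes with mean mu rescales the Gini to
    G * mu / (mu + b); here b/mu is proxied by government consumption
    expenditure as a share of GDP.  A sketch of the idea, not necessarily
    the exact formula proposed in the paper.
    """
    return gini / (1.0 + gov_consumption_share)

# Example: market-income Gini of 0.45 and in-kind benefits worth 15% of GDP
print(modified_gini(0.45, 0.15))   # ~0.391
```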

Relevance:

30.00%

Publisher:

Abstract:

Many multifactorial biologic effects, particularly in the context of complex human diseases, are still poorly understood. At the same time, the systematic acquisition of multivariate data has become increasingly easy. The use of such data to analyze and model complex phenotypes, however, remains a challenge. Here, a new analytic approach, termed coreferentiality, is described together with an appropriate statistical test. Coreferentiality is the indirect relation of two variables of functional interest with respect to whether they parallel each other in their respective relatedness to multivariate reference data, which can be informative for a complex effect or phenotype. It is shown that the power of coreferentiality testing is comparable to that of multiple regression analysis, sufficient even when the reference data are informative only to a relatively small extent of 2.5%, and clearly exceeds the power of simple bivariate correlation testing. Thus, coreferentiality testing exploits the increased power of multivariate analysis while addressing a more straightforwardly interpretable bivariate relatedness. Systematic application of this approach could substantially improve the analysis and modeling of complex phenotypes, particularly in human studies, where addressing functional hypotheses by direct experimentation is often difficult.
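A sketch of how such a coreferentiality statistic might be computed and tested is given below: each variable's profile of correlations with the reference variables is computed, the two profiles are correlated, and significance is assessed by permutation. This follows the verbal description above and is not necessarily the statistic or test actually published.

```python
import numpy as np

def coreferentiality(x, y, reference, n_perm=1000, rng=None):
    """Sketch of a coreferentiality-style statistic with a permutation test.

    x, y      : 1-D arrays, the two variables of functional interest
    reference : (n_samples, n_reference_vars) array of multivariate reference data
    """
    rng = np.random.default_rng(rng)

    def profile(v):
        # Correlation of v with each reference variable
        return np.array([np.corrcoef(v, reference[:, j])[0, 1]
                         for j in range(reference.shape[1])])

    stat = np.corrcoef(profile(x), profile(y))[0, 1]
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.corrcoef(profile(rng.permutation(x)), profile(y))[0, 1]
    p_value = np.mean(np.abs(null) >= abs(stat))
    return stat, p_value
```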

Relevance:

30.00%

Publisher:

Abstract:

All structures designed by engineers are vulnerable to natural disasters, including floods and earthquakes. The energy released during strong ground motions must be dissipated by structural elements. Before the 1990s, this energy was expected to be dissipated through the beams and columns, which at the same time were part of the gravity-load-resisting system. The main disadvantage of this idea was that the gravity-resisting frame was not repairable. Hence, during the 1990s, the idea of designing passive energy dissipation systems, including dampers, emerged. At first, the main problem was the lack of guidelines for passive energy dissipation systems. Although many guidelines and procedures had been published by 2000, most of them were based on complicated analyses that were not convenient for engineers and practitioners. To address this problem, several alternative design methods have recently been proposed, including: (1) the simple procedure of Lopez Garcia (2001) for optimal damper configuration in MDOF structures; (2) the trial-and-error procedure of Christopoulos and Filiatrault (2006); (3) the Five-Step Method of Silvestri et al. (2010); (4) the Direct Five-Step Method of Palermo et al. (2015); and (5) the Simplified Equivalent Static Analysis (ESA) of Palermo et al. (2016). In this study, the effectiveness of and the differences between the last three methods are evaluated.

Relevance:

30.00%

Publisher:

Abstract:

Tricyclo-DNA (tcDNA) is a sugar-modified analogue of DNA currently tested for the treatment of Duchenne muscular dystrophy in an antisense approach. Tandem mass spectrometry plays a key role in modern medical diagnostics and has become a widespread technique for the structure elucidation and quantification of antisense oligonucleotides. Herein, mechanistic aspects of the fragmentation of tcDNA are discussed, which lay the basis for reliable sequencing and quantification of the antisense oligonucleotide. Excellent selectivity of tcDNA for complementary RNA is demonstrated in direct competition experiments. Moreover, the kinetic stability and fragmentation pattern of matched and mismatched tcDNA heteroduplexes were investigated and compared with non-modified DNA and RNA duplexes. Although the separation of the constituting strands is the entropy-favored fragmentation pathway of all nucleic acid duplexes, it was found to be only a minor pathway of tcDNA duplexes. The modified hybrid duplexes preferentially undergo neutral base loss and backbone cleavage. This difference is due to the low activation entropy for the strand dissociation of modified duplexes that arises from the conformational constraint of the tc-sugar-moiety. The low activation entropy results in a relatively high free activation enthalpy for the dissociation comparable to the free activation enthalpy of the alternative reaction pathway, the release of a nucleobase. The gas-phase behavior of tcDNA duplexes illustrates the impact of the activation entropy on the fragmentation kinetics and suggests that tandem mass spectrometric experiments are not suited to determine the relative stability of different types of nucleic acid duplexes.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F-0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F-0 immigrants was improved by using large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (D-LR) appeared to be an effective way to predict whether F-0 immigrants could be identified for a particular pair of populations using a given set of markers.
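A much simplified sketch of the Monte Carlo thresholding idea is given below: resident multilocus genotypes are simulated from the local allele frequencies and the lower quantile of their log-likelihoods is used as the critical value for flagging putative immigrants. This naive resampling is for illustration only; unlike the novel method described above, it does not preserve linkage disequilibrium or the sampling variance of the analysed data set.

```python
import numpy as np

def genotype_log_likelihood(genotype, allele_freqs):
    """Log-likelihood of a multilocus genotype given per-locus allele frequencies.

    genotype     : list of (allele1, allele2) index pairs, one per locus
    allele_freqs : list of 1-D arrays of allele frequencies, one per locus
    """
    ll = 0.0
    for (a1, a2), freqs in zip(genotype, allele_freqs):
        p, q = freqs[a1], freqs[a2]
        ll += np.log(p * q * (2.0 if a1 != a2 else 1.0))   # Hardy-Weinberg genotype probability
    return ll

def assignment_threshold(allele_freqs, n_sim=10000, alpha=0.01, rng=None):
    """Monte Carlo critical value for flagging immigrants.

    Simulates resident genotypes by drawing alleles independently from the local
    allele frequencies and returns the lower alpha-quantile of their log-likelihoods.
    """
    rng = np.random.default_rng(rng)
    sims = np.empty(n_sim)
    for i in range(n_sim):
        genotype = [tuple(rng.choice(len(f), size=2, p=f)) for f in allele_freqs]
        sims[i] = genotype_log_likelihood(genotype, allele_freqs)
    return np.quantile(sims, alpha)
```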

Relevance:

30.00%

Publisher:

Abstract:

Background: This study aimed to determine the reasons for dentists' choice of materials, in particular amalgam and resin composite, in Australia. Method: A questionnaire was developed to elicit this information. The names and addresses of 1000 dentists in Australia were selected at random. The questionnaire was mailed to these dentists with an explanatory letter and reply-paid envelope. Results: A total of 560 replies were received. Regarding choice of material, 99 per cent of respondents cited clinical indication as an influencing factor, although patients' aesthetic demands (99 per cent), patients' financial situation (82 per cent), and lecturers' suggestions (72 per cent) were also reported to influence respondents' choice of materials. Twelve per cent of respondents used composite 'always', 29 per cent 'often', 32 per cent 'sometimes', 23 per cent 'seldom' and 4 per cent 'never' in extensive load-bearing cavities in molar teeth. For composite restorations in posterior teeth, 84 per cent 'always', 'often' or 'sometimes' used the total etch technique, 84 per cent used a thick glass-ionomer layer and 36 per cent never used rubber dam. Fifty-nine per cent of respondents reported a decreased use of amalgam over the previous five years. Sixty-eight per cent of respondents agreed with the statement 'discontinuation of amalgam restricts a dentist's ability to adequately treat patients'. Seventy-five per cent considered that the growth in the use of composites increased the total cost of oral health care. Conclusions: Of the respondents from Australia 73 per cent place large composite restorations in molar teeth and their choice of material is influenced greatly by clinical indications, and patients' aesthetic demands.