937 results for rank-based procedure


Relevance: 30.00%

Abstract:

Wave energy conversion differs essentially from other renewable energies in that the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any Wave Energy Converter (WEC) design project. Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve good performance. The paper proposes an automatic dimensioning methodology to be applied at the initial design stages, and the following elements are described to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. After that, a parametric analysis is included considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for the WEC design stage are given.
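
The energy-production evaluation step can be illustrated with the standard power-matrix approach: device output per sea state weighted by the occurrence of that state at the chosen location. The matrices below are made-up placeholders, not the paper's data, and the paper's actual PTO-coupled procedure is more elaborate.

```python
# Sketch: annual energy production of a WEC estimated from a power
# matrix and a wave scatter diagram. All numbers are hypothetical
# placeholders, not data from the paper.
HOURS_PER_YEAR = 8766  # average year length in hours

# Power matrix: mean electrical output (kW) per sea state (Hs row, Te column).
power_matrix = [
    [10.0, 25.0],  # low significant wave height
    [40.0, 90.0],  # higher significant wave height
]

# Scatter diagram: fraction of the year each sea state occurs at the site.
scatter = [
    [0.30, 0.20],
    [0.25, 0.10],
]

def annual_energy_kwh(power, occurrence):
    """Occurrence-weighted sum of output power over all sea states, in kWh."""
    total = 0.0
    for p_row, o_row in zip(power, occurrence):
        for p, o in zip(p_row, o_row):
            total += p * o * HOURS_PER_YEAR
    return total
```

Dividing this figure by the energy an always-at-rated-power device would produce gives a capacity factor, one common objective in dimensioning studies.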

Relevance: 30.00%

Abstract:

This work presents a systematic process for building a Fault Diagnoser (FD) based on Petri Nets (PNs), which has been applied to a small helicopter. This novel tool is able to detect both intermittent and permanent faults. The work carried out is discussed from both theoretical and practical points of view. The procedure begins with a division of the whole system into subsystems, which are the devices that have to be modeled by using PNs, considering both normal and faulty operation. Subsequently, the models are integrated into a global Petri Net Diagnoser (PND) that is able to monitor a whole helicopter and show critical variables to the operator in order to determine the UAV's health, preventing accidents in this manner. A Data Acquisition System (DAQ) has been designed to collect data during the flights and feed the PN diagnoser with them. Several real flights (nominal or under failure) have been carried out to set up the diagnoser and verify its performance. A summary of the validation results obtained during real flight tests is also included. Extensive use of this tool will improve preventive maintenance protocols for UAVs (especially helicopters) and allow recommendations to be established in regulations.
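
A minimal sketch of the Petri net mechanics such a diagnoser builds on: tokens mark places, and a transition fires when all of its input places are marked. The places and the fault scenario below are hypothetical illustrations, not the models of this work.

```python
# Minimal Petri net sketch: places hold tokens, a transition fires when
# every input place is marked, and a diagnoser can flag a fault when a
# "fault" place becomes marked. (Illustrative only, not the PND models.)

def enabled(marking, inputs):
    """A transition is enabled if all its input places hold a token."""
    return all(marking[p] > 0 for p in inputs)

def fire(marking, inputs, outputs):
    """Fire: consume one token from each input, add one to each output."""
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

# Normal operation marked in 'ok'; a missed sensor heartbeat enables the
# fault transition, moving the system into a monitored 'fault' place.
marking = {"ok": 1, "heartbeat_missed": 1, "fault": 0}
if enabled(marking, ["ok", "heartbeat_missed"]):
    marking = fire(marking, ["ok", "heartbeat_missed"], ["fault"])
```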

Relevance: 30.00%

Abstract:

Proteins can be very tolerant to amino acid substitution, even within their core. Understanding the factors responsible for this behavior is of critical importance for protein engineering and design. Mutations in proteins have been quantified in terms of the changes in stability they induce. For example, guest residues in specific secondary structures have been used as probes of conformational preferences of amino acids, yielding propensity scales. Predicting these amino acid propensities would be a good test of any new potential energy functions used to mimic protein stability. We have recently developed a protein design procedure that optimizes whole sequences for a given target conformation based on the knowledge of the template backbone and on a semiempirical potential energy function. This energy function is purely physical, including steric interactions based on a Lennard-Jones potential, electrostatics based on a Coulomb potential, and hydrophobicity in the form of an environment free energy based on accessible surface area and interatomic contact areas. Sequences designed by this procedure for 10 different proteins were analyzed to extract conformational preferences for amino acids. The resulting structure-based propensity scales show significant agreement with experimental propensity scales, both for α-helices and β-sheets. These results indicate that amino acid conformational preferences are a natural consequence of the potential energy function we use. This confirms the accuracy of our potential and indicates that such preferences need not be added as a design criterion.
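
The steric and electrostatic terms named above can be sketched directly as pair potentials. The parameter values (epsilon, sigma, the charges, the Coulomb constant) are illustrative placeholders, not the parameters of the published energy function.

```python
# The two physical pairwise terms of the abstract, sketched: a 12-6
# Lennard-Jones steric term and a Coulomb electrostatic term.
# Parameter values are illustrative placeholders.

def lennard_jones(r, epsilon=0.2, sigma=3.4):
    """12-6 Lennard-Jones term (kcal/mol); minimum -epsilon at r = 2**(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def coulomb(r, qi, qj, k=332.06):
    """Coulomb term (kcal/mol) for charges in units of e and r in angstroms."""
    return k * qi * qj / r

def pair_energy(r, qi, qj):
    """Sum of the two pairwise terms for one atom pair."""
    return lennard_jones(r) + coulomb(r, qi, qj)
```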

Relevance: 30.00%

Abstract:

As the number of protein folds is quite limited, a mode of analysis that will be increasingly common in the future, especially with the advent of structural genomics, is to survey and re-survey the finite parts list of folds from an expanding number of perspectives. We have developed a new resource, called PartsList, that lets one dynamically perform these comparative fold surveys. It is available on the web at http://bioinfo.mbb.yale.edu/partslist and http://www.partslist.org. The system is based on the existing fold classifications and functions as a form of companion annotation for them, providing ‘global views’ of many already completed fold surveys. The central idea in the system is that of comparison through ranking; PartsList will rank the approximately 420 folds based on more than 180 attributes. These include: (i) occurrence in a number of completely sequenced genomes (e.g. it will show the most common folds in the worm versus yeast); (ii) occurrence in the structure databank (e.g. most common folds in the PDB); (iii) both absolute and relative gene expression information (e.g. most changing folds in expression over the cell cycle); (iv) protein–protein interactions, based on experimental data in yeast and comprehensive PDB surveys (e.g. most interacting fold); (v) sensitivity to inserted transposons; (vi) the number of functions associated with the fold (e.g. most multi-functional folds); (vii) amino acid composition (e.g. most Cys-rich folds); (viii) protein motions (e.g. most mobile folds); and (ix) the level of similarity based on a comprehensive set of structural alignments (e.g. most structurally variable folds). The integration of whole-genome expression and protein–protein interaction data with structural information is a particularly novel feature of our system. 
We provide three ways of visualizing the rankings: a profiler emphasizing the progression of high and low ranks across many pre-selected attributes, a dynamic comparer for custom comparisons and a numerical rankings correlator. These allow one to directly compare very different attributes of a fold (e.g. expression level, genome occurrence and maximum motion) in the uniform numerical format of ranks. This uniform framework, in turn, highlights the way that the frequency of many of the attributes falls off with approximate power-law behavior (i.e. according to V^-b, for attribute value V and constant exponent b), with a few folds having large values and most having small values.
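
The approximate power-law falloff f ~ V^-b can be checked by fitting the exponent on log-transformed data. The sketch below uses synthetic, exactly-power-law data, not PartsList rankings, where the fit would only be approximate.

```python
import math

# Estimate the exponent b of an approximate power law f ~ V**(-b) by
# ordinary least squares on log-transformed data (illustrative sketch,
# not PartsList code).

def fit_exponent(values, freqs):
    """Return b, i.e. minus the slope of log(freq) versus log(value)."""
    xs = [math.log(v) for v in values]
    ys = [math.log(f) for f in freqs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

values = [1.0, 2.0, 4.0, 8.0]
freqs = [v ** -2.0 for v in values]  # exact power-law data with b = 2
```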

Relevance: 30.00%

Abstract:

Establishment of loss-of-function phenotypes is often a key step in determining the biological function of a gene. We describe a procedure to obtain mutant petunia plants in which a specific gene with known sequence is inactivated by the transposable element dTph1. Leaves are collected from batches of 1000 plants with highly active dTph1 elements, pooled according to a three-dimensional matrix, and screened by PCR using a transposon- and a gene-specific primer. In this way individual plants with a dTph1 insertion can be identified by analysis of about 30 PCRs. We found insertion alleles for various genes at a frequency of about 1 in 1000 plants. The plant population can be preserved by selfing all the plants, so that it can be screened for insertions in many genes over a prolonged period.
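
The arithmetic of the three-dimensional pooling can be sketched as follows: with 1000 plants arranged in a 10 × 10 × 10 cube, each plant contributes to one pool per axis, which is why about 30 PCRs (10 + 10 + 10 pools) locate an insertion. The index convention below is an assumption for illustration.

```python
# Sketch of the 3-D pooling matrix: 1000 plants in a 10 x 10 x 10 cube;
# a plant is uniquely identified by the three pools (one per axis) in
# which its DNA gives a positive PCR.

def pools_for_plant(plant_id, side=10):
    """Map a plant index (0 .. side**3 - 1) to its three pool coordinates."""
    x = plant_id % side
    y = (plant_id // side) % side
    z = plant_id // (side * side)
    return x, y, z

def plant_from_pools(x, y, z, side=10):
    """Invert: the unique plant shared by positive pools x, y and z."""
    return z * side * side + y * side + x
```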

Relevance: 30.00%

Abstract:

Biplot analyses based on the additive main effects and multiplicative interaction (AMMI) models require complete data matrices, but multi-environment trials frequently present missing data. This thesis proposes new single and multiple imputation methodologies that can be used to analyse unbalanced data in experiments with genotype-by-environment (G×E) interaction. The first is a new extension of the eigenvector cross-validation method (Bro et al., 2008). The second corresponds to a new non-parametric algorithm obtained through modifications of the single imputation method developed by Yan (2013). Also included is a study that considers imputation systems recently reported in the literature and compares them with the classical procedure recommended for imputation in G×E trials, namely the combination of the Expectation-Maximization algorithm with the AMMI models (EM-AMMI). Finally, generalizations are provided of the single imputation described by Arciniegas-Alarcón et al. (2010), which combines regression with lower-rank approximation of a matrix. All the methodologies are based on the singular value decomposition (SVD) and are therefore free of distributional or structural assumptions. To determine the performance of the new imputation schemes, simulations were carried out based on real data sets from different species, with values removed at random in different percentages, and the quality of the imputations was evaluated with different statistics. It was concluded that the SVD constitutes a useful and flexible tool for building efficient techniques that overcome the problem of information loss in experimental matrices.
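
A generic sketch of the SVD-based imputation idea underlying these methods is an EM-like loop: fill missing cells with column means, fit a low-rank approximation, replace the missing cells with fitted values, and repeat. The pure-Python version below uses a rank-1 fit by power iteration; it is the common skeleton, not any of the specific algorithms proposed in the thesis.

```python
# EM-like skeleton of SVD-based imputation for a G x E table, with the
# leading singular triplet computed by power iteration. Generic sketch,
# not the thesis algorithms.

def rank1_approx(m, iters=50):
    """Best rank-1 approximation of matrix m via power iteration."""
    rows, cols = len(m), len(m[0])
    v = [1.0] * cols
    for _ in range(iters):
        u = [sum(m[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(m[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        nv = sum(x * x for x in v) ** 0.5  # converges to sigma_1
        v = [x / nv for x in v]
    return [[nv * u[i] * v[j] for j in range(cols)] for i in range(rows)]

def impute(data, missing, sweeps=30):
    """data: genotype-by-environment table; missing: set of (i, j) cells."""
    m = [row[:] for row in data]
    for i, j in missing:  # start from the observed column means
        col = [m[r][j] for r in range(len(m)) if (r, j) not in missing]
        m[i][j] = sum(col) / len(col)
    for _ in range(sweeps):
        fit = rank1_approx(m)
        for i, j in missing:
            m[i][j] = fit[i][j]  # only the missing cells are updated
    return m

# Exactly rank-1 table; cell (0, 0), true value 1.0, treated as missing.
table = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]
filled = impute(table, {(0, 0)})
```

Because the example table is exactly rank 1, the loop converges to the value consistent with the low-rank structure; on real unbalanced trials convergence is only approximate.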

Relevance: 30.00%

Abstract:

We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
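
The evolutionary loop described (selection, crossover, mutation over successive generations) can be sketched generically. Here the individual is a bit string scored against a target pattern, a stand-in for a parametrized source geometry scored against observed anomaly data; the population size, rates and generation count are illustrative.

```python
import random

# Generic genetic-algorithm sketch with the three operators named in the
# abstract: selection, one-point crossover and point mutation.
# (Illustrative toy problem, not the magnetic inversion itself.)

def evolve(target, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    n = len(target)

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: fitter half survives
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)            # occasional point mutation
            child[i] ^= rng.random() < 0.1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = evolve(target)
```

Keeping the fitter half unchanged (elitism) guarantees the best fitness never decreases across generations, a property the paper's smoothing operator would refine further toward compact, plausible sources.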

Relevance: 30.00%

Abstract:

The relative popularity of acceptance and commitment therapy (ACT) has grown in recent years and inspired the development of contemporary acceptance-based treatment approaches. Acceptance-based therapies differ from traditional cognitive-behavior therapy (CBT) on pragmatic grounds concerning the purpose of therapy: CBT utilizes exposure and cognitive change techniques primarily in service of symptom-change outcomes, whereas ACT utilizes exposure and acceptance to promote psychological flexibility in the pursuit of personal values. The purpose of this meta-analytic study was to determine and quantify the relative efficacy of acceptance-based versus symptom-change behavioral approaches for anxiety disorders. A comprehensive literature search turned up 18 studies that met inclusion criteria for this analysis. An effect size was calculated using the standardized mean gain procedure for both the acceptance-based and symptom-change approaches, along with the waitlist control groups. The results demonstrate a large effect size for the acceptance-based approach (weighted mean ES = .83), a medium effect size for the symptom-change approach (weighted mean ES = .60), and a small effect size for the waitlist control groups (weighted mean ES = .24). Based on this review, it is suggested that graduate and internship programs in Clinical Psychology should promote evidence-based training in the use of acceptance-inspired behavioral therapies.
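
The standardized mean gain is (post-test mean − pre-test mean) / pre-test SD for each study, with study effect sizes then pooled by a weighted mean. The sketch below weights by sample size, a simplification of the inverse-variance weighting usual in meta-analysis; all numbers are hypothetical, not the 18 reviewed studies.

```python
# Standardized mean gain per study and a pooled mean across studies.
# Sample-size weighting and all data values are illustrative assumptions.

def standardized_mean_gain(pre_mean, post_mean, pre_sd):
    """ES = (post-test mean - pre-test mean) / pre-test standard deviation."""
    return (post_mean - pre_mean) / pre_sd

def weighted_mean_es(es_list, n_list):
    """Sample-size-weighted mean effect size across studies."""
    return sum(es * n for es, n in zip(es_list, n_list)) / sum(n_list)

# Three hypothetical studies measured on the same improvement scale.
es = [
    standardized_mean_gain(10.0, 18.0, 10.0),
    standardized_mean_gain(10.0, 16.0, 10.0),
    standardized_mean_gain(10.0, 20.0, 10.0),
]
pooled = weighted_mean_es(es, [20, 30, 50])
```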

Relevance: 30.00%

Abstract:

Many translation quality standards have been implemented to regulate the provision and procurement of language services. However, in the absence of a standardized procedure to certify U.S. language service providers (LSPs), the industry lacks consensus with regard to requirements, procedures, and expectations. This project establishes the need for such a procedure and proposes an LSP Certification Procedure based on existing quality standards. Through a review and analysis of existing translation quality standards, an interview with a key stakeholder, and the presentation of an LSP Certification Procedure, this project concludes that the U.S. language services industry requires a procedure to certify LSPs and that such a procedure may be designed and implemented based on existing standards.

Relevance: 30.00%

Abstract:

This paper presents a new approach to the delineation of local labor markets based on evolutionary computation. The aim of the exercise is the division of a given territory into functional regions based on travel-to-work flows. Such regions are defined so as to guarantee a high degree of inter-regional separation and of intra-regional integration, in both cases in terms of commuting flows. Additional requirements include the absence of overlap between delineated regions and the exhaustive coverage of the whole territory. The procedure is based on the maximization of a fitness function that measures aggregate intra-region interaction under constraints of inter-region separation and minimum size. In the experimentation stage, two variations of the fitness function are used, and the process is also applied as a final stage for the optimization of the results from one of the most successful existing methods, which is used by the British authorities for the delineation of travel-to-work areas (TTWAs). The empirical exercise is conducted using real data for a sufficiently large territory that is considered representative given the density and variety of the travel-to-work patterns it embraces. The paper includes a quantitative comparison with alternative traditional methods, an assessment of the performance of the set of operators specifically designed to handle the regionalization problem, and an evaluation of the convergence process. The robustness of the solutions, something crucial in a research and policy-making context, is also discussed.
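
The shape of such a fitness function can be sketched: sum the commuting flows internal to each candidate region, scoring as infeasible any partition that violates a minimum-size constraint. The flow matrix and threshold below are invented for illustration; the paper's actual function and constraints differ.

```python
# Sketch of a regionalization fitness: aggregate intra-region commuting,
# with a minimum-size constraint. Data and threshold are hypothetical.

def fitness(flows, regions, min_size=2):
    """flows[i][j]: commuters from zone i to zone j; regions: disjoint zone sets."""
    score = 0
    for region in regions:
        if len(region) < min_size:
            return 0  # constraint violated: infeasible partition
        for i in region:
            for j in region:
                score += flows[i][j]
    return score

# Four zones with two clear commuting clusters {0, 1} and {2, 3}.
flows = [
    [0, 5, 1, 0],
    [4, 0, 0, 1],
    [1, 0, 0, 6],
    [0, 2, 5, 0],
]
good = fitness(flows, [{0, 1}, {2, 3}])  # captures the strong internal flows
```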

Relevance: 30.00%

Abstract:

This paper analyses how the productivity of the SUMA tax offices located in Spain evolved between 2004 and 2006, using the Malmquist index based on Data Envelopment Analysis (DEA) models. It goes a step further by applying a smoothed bootstrap procedure, which improves the quality of the results by generalising the samples, so that the conclusions obtained from them can be applied in order to increase productivity levels. Additionally, the productivity effect is divided into two different components, efficiency change and technological change, with the objective of helping to clarify the role played by either the managers or the level of technology in the final performance figures.
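
The decomposition into efficiency change and technological change follows the usual geometric-mean Malmquist formulation: given the four distance-function values for one unit across two periods, it is a few lines of arithmetic. The input values below are hypothetical, not SUMA results.

```python
# Malmquist productivity index for one office, decomposed into
# efficiency change and technological change (standard geometric-mean
# formulation; input values are hypothetical).

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """d_a_b: distance function of period-b data against the period-a frontier."""
    efficiency_change = d_t1_t1 / d_t_t
    tech_change = ((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t)) ** 0.5
    return efficiency_change * tech_change, efficiency_change, tech_change

m, ec, tc = malmquist(0.8, 1.1, 0.7, 0.9)  # m > 1 indicates productivity growth
```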

Relevance: 30.00%

Abstract:

A novel approach is presented whereby gold nanostructured screen-printed carbon electrodes (SPCnAuEs) are combined with in-situ ionic liquid formation dispersive liquid–liquid microextraction (in-situ IL-DLLME) and microvolume back-extraction for the determination of mercury in water samples. In-situ IL-DLLME is based on a simple metathesis reaction between a water-miscible IL and a salt to form a water-immiscible IL in the sample solution. The mercury complex with ammonium pyrrolidinedithiocarbamate is extracted from the sample solution into the water-immiscible IL formed in-situ. Then, an ultrasound-assisted procedure is employed to back-extract the mercury into 10 µL of a 4 M HCl aqueous solution, which is finally analyzed using SPCnAuEs. The sample preparation methodology was optimized using a multivariate optimization strategy. Under optimized conditions, a linear range between 0.5 and 10 µg L−1 was obtained with a correlation coefficient of 0.997 for six calibration points. The limit of detection obtained was 0.2 µg L−1, which is lower than the threshold values established by the Environmental Protection Agency and the European Union (i.e., 2 µg L−1 and 1 µg L−1, respectively). The repeatability of the proposed method was evaluated at two different spiking levels (3 and 10 µg L−1), and a coefficient of variation of 13% was obtained in both cases. The performance of the proposed methodology was evaluated in real-world water samples including tap water, bottled water, river water and industrial wastewater. Relative recoveries between 95% and 108% were obtained.

Relevance: 30.00%

Abstract:

A novel procedure for the preparation of solid Pd(II)-based catalysts, consisting of the anchorage of designed Pd(II)-complexes on an activated carbon (AC) surface, is reported. Two molecules of the Ar–S–F type (where Ar is a plane-pyrimidine moiety, F a Pd(II)-ligand and S an aliphatic linker), differing in F, were grafted on AC by π–π stacking of the Ar moiety and the graphene planes of the AC, thus favouring retention of the metal-complexing ability of F. Adsorption of Pd(II) by the AC/Ar–S–F hybrids occurs via Pd(II)-complexation by F. After thorough characterization, the catalytic activity of the AC/Ar–S–F/Pd(II) hybrids was evaluated using the hydrogenation of 1-octene in methanol as a catalytic test. 100% conversion to n-octane at T = 323.1 K and P = 15 bar was obtained with both catalysts, and most of the Pd(II) was reduced to Pd(0) nanoparticles, which remained on the AC surface. Reusing the catalysts in three additional cycles reveals that the catalyst bearing the F ligand with the larger Pd-complexing ability showed no loss of activity (100% conversion to n-octane), which is assigned to its larger structural stability. The catalyst with the weaker F ligand underwent a progressive loss of activity (from 100% to 79% in four cycles), due to the progressive aggregation of the Pd(0) nanoparticles. Milder conditions, T = 303.1 K and P = 1.5 bar, prevent the aggregation of the Pd(0) nanoparticles in this catalyst, allowing the retention of the high catalytic efficiency (100% conversion) over four reaction cycles.

Relevance: 30.00%

Abstract:

Stroke is a leading cause of death and permanent disability worldwide, affecting millions of individuals. Traditional clinical scores for assessment of stroke-related impairments are inherently subjective and limited by inter-rater and intra-rater reliability, as well as floor and ceiling effects. In contrast, robotic technologies provide objective, highly repeatable tools for quantification of neurological impairments following stroke. KINARM is an exoskeleton robotic device that provides objective, reliable tools for assessment of sensorimotor, proprioceptive and cognitive brain function by means of a battery of behavioral tasks. As such, KINARM is particularly useful for assessment of neurological impairments following stroke. This thesis introduces a computational framework for assessment of neurological impairments using the data provided by KINARM. This is done by achieving two main objectives. First, to investigate how robotic measurements can be used to estimate current and future abilities to perform daily activities for subjects with stroke. We are able to predict clinical scores related to activities of daily living at present and future time points using a set of robotic biomarkers. The findings of this analysis provide a proof of principle that robotic evaluation can be an effective tool for clinical decision support and target-based rehabilitation therapy. The second main objective of this thesis is to address the emerging problem of long assessment time, which can potentially lead to fatigue when assessing subjects with stroke. To address this issue, we examine two time reduction strategies. The first strategy focuses on task selection, whereby KINARM tasks are arranged in a hierarchical structure so that an earlier task in the assessment procedure can be used to decide whether or not subsequent tasks should be performed. The second strategy focuses on time reduction on the longest two individual KINARM tasks. 
Both reduction strategies are shown to provide significant time savings, ranging from 30% to 90% using task selection and 50% using individual task reductions, thereby establishing a framework for reduction of assessment time on a broader set of KINARM tasks. All in all, findings of this thesis establish an improved platform for diagnosis and prognosis of stroke using robot-based biomarkers.

Relevance: 30.00%

Abstract:

Population balances of polymer species in terms of discrete transforms with respect to counts of groups lead to tractable first-order partial differential equations when all rate constants are independent of chain length and loop formation is negligible [1]. Average molecular weights in the absence of gelation have long been known to be readily found through integration of an initial value problem. The extension to size distribution prediction is also feasible, but its performance is often lower than that of methods based on the real chain-length domain [2]. Moreover, the absence of a good starting procedure and a higher numerical sensitivity have decisively impaired its application to non-linear reversibly deactivated polymerizations, namely NMRP [3].