969 results for Supplier selection problem
Abstract:
The knowledge of the atomic structure of clusters composed of a few atoms is a basic prerequisite for obtaining insight into the mechanisms that determine their chemical and physical properties as a function of diameter, shape, and surface termination, as well as for understanding the mechanism of bulk formation. Due to the wide use of metal systems in modern life, the accurate determination of the properties of 3d, 4d, and 5d metal clusters poses a major problem for nanoscience. In this work, we report a density functional theory study of the atomic structure, binding energies, effective coordination numbers, average bond lengths, and magnetic properties of the 3d, 4d, and 5d metal (30 elements) clusters containing 13 atoms, M(13). First, a set of lowest-energy local minimum structures (as supported by vibrational analysis) was obtained by combining high-temperature first-principles molecular-dynamics simulation, structure crossover, and the selection of five well-known M(13) structures. Several new lower-energy configurations were identified, e.g., for Pd(13), W(13), and Pt(13), and previously known structures were confirmed by our calculations. Furthermore, the following trends were identified: (i) compact icosahedral-like forms occur at the beginning of each metal series, more open structures such as hexagonal-bilayer-like and double simple-cubic layers at the middle of each series, and structures with an increasing effective coordination number for large d-state occupation. (ii) For Au(13), we found that spin-orbit coupling favors three-dimensional (3D) structures, i.e., a 3D structure is about 0.10 eV lower in energy than the lowest-energy known two-dimensional configuration. (iii) Magnetic exchange interactions play an important role for particular systems such as Fe, Cr, and Mn.
(iv) The analysis of the binding energies and average bond lengths shows a parabolic-like shape as a function of the occupation of the d states; hence, most of the properties can be explained by the chemical picture of occupation of the bonding and antibonding states.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies, and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are examined in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical (GG) networks.
The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the inference method to achieve better accuracy in network identification, with very good results for small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks, identifying some properties of the evaluated method, and it can be extended to other inference methods.
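The feature-selection step described in aspect (2) above, fixing a target gene and searching for the predictor subset that best explains it, can be sketched generically. The toy Boolean network, gene indices, and error score below are invented for illustration; this is a minimal sketch of the idea, not the authors' implementation.

```python
import itertools, random

random.seed(0)

# Toy "artificial gene network": the target gene g2 is a Boolean
# function (AND) of g0 and g1; g3 is an irrelevant noise gene.
T = 200
g0 = [random.randint(0, 1) for _ in range(T)]
g1 = [random.randint(0, 1) for _ in range(T)]
g3 = [random.randint(0, 1) for _ in range(T)]
g2 = [a & b for a, b in zip(g0, g1)]        # target gene

genes = {0: g0, 1: g1, 3: g3}               # candidate predictors

def prediction_error(subset):
    """Error of the best deterministic predictor of g2 from `subset`:
    for each observed input pattern, predict the majority value of g2."""
    table = {}
    for t in range(T):
        key = tuple(genes[i][t] for i in subset)
        table.setdefault(key, []).append(g2[t])
    wrong = sum(len(v) - max(v.count(0), v.count(1)) for v in table.values())
    return wrong / T

# Exhaustive search over predictor subsets of size <= 2:
# lowest error wins, and smaller subsets break ties.
candidates = [s for r in (1, 2) for s in itertools.combinations(sorted(genes), r)]
best = min(candidates, key=lambda s: (prediction_error(s), len(s)))
```

With noise-free data the truly interacting pair (g0, g1) predicts g2 exactly, so the search recovers it; real expression data would require a noise-robust criterion.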
Abstract:
Context tree models were introduced by Rissanen in [25] as a parsimonious generalization of Markov models. Since then, they have been widely used in applied probability and statistics. The present paper investigates non-asymptotic properties of two popular procedures of context tree estimation: Rissanen's algorithm Context and penalized maximum likelihood. After first showing how they are related, we prove finite-horizon bounds for the probabilities of over- and under-estimation. Concerning overestimation, no boundedness or loss-of-memory conditions are required: the proof relies on new deviation inequalities for empirical probabilities that are of independent interest. The under-estimation properties rely on classical hypotheses for processes of infinite memory. These results improve on and generalize the bounds obtained in Duarte et al. (2006) [12], Galves et al. (2008) [18], Galves and Leonardi (2008) [17], and Leonardi (2010) [22], refining the asymptotic results of Buhlmann and Wyner (1999) [4] and Csiszar and Talata (2006) [9]. (C) 2011 Elsevier B.V. All rights reserved.
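To make the penalized maximum likelihood idea concrete in the simplest case: for a binary chain, full context trees of depth d coincide with Markov models of order d, so one can select the order by maximizing the log-likelihood minus a BIC-type penalty. The simulated data and penalty constant below are illustrative assumptions; the paper's procedures prune general context trees.

```python
import math, random

random.seed(1)

# Simulate a binary order-1 chain with strong memory: P(stay) = 0.9.
n = 2000
x = [0]
for _ in range(n - 1):
    x.append(x[-1] if random.random() < 0.9 else 1 - x[-1])

def log_likelihood(order):
    """Maximized log-likelihood of a Markov model of the given order."""
    counts = {}
    for t in range(order, n):
        ctx = tuple(x[t - order:t])
        counts.setdefault(ctx, [0, 0])[x[t]] += 1
    ll = 0.0
    for c0, c1 in counts.values():
        for c in (c0, c1):
            if c:
                ll += c * math.log(c / (c0 + c1))
    return ll

def penalized(order):
    # BIC-type penalty: (free parameters) * log(n) / 2,
    # one free transition probability per binary context.
    return log_likelihood(order) - (2 ** order) * math.log(n) / 2

best_order = max((0, 1, 2), key=penalized)
```

Because the simulated chain has genuine order-1 memory, the likelihood gain of order 1 over order 0 is of size n, while the penalty grows only like log n, so the memoryless model is rejected.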
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
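The polynomial-time step above rests on max-flow. As a generic illustration (the toy network below is made up, not the tree-derived graph of the paper), a breadth-first Ford-Fulkerson (Edmonds-Karp) implementation looks like this:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push flow along it.
        bottleneck, v = float("inf"), sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual capacity for undoing flow
            v = u
        total += bottleneck

# Toy network: node 0 = source, node 3 = sink.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
```

Using shortest augmenting paths bounds the number of augmentations by O(VE), giving the polynomial running time invoked in the abstract.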
Abstract:
The width of a closed convex subset of n-dimensional Euclidean space is the distance between two parallel supporting hyperplanes. The Blaschke-Lebesgue problem consists of minimizing the volume in the class of convex sets of fixed constant width and is still open in dimension n >= 3. In this paper we describe a necessary condition that the minimizer of the Blaschke-Lebesgue problem must satisfy in dimension n = 3: we prove that the smooth components of the boundary of the minimizer have constant smaller principal curvature and are therefore either spherical caps or pieces of tubes (canal surfaces).
Abstract:
We consider the problem of interaction neighborhood estimation from the partial observation of a finite number of realizations of a random field. We introduce a model selection rule to choose estimators of conditional probabilities among natural candidates. Our main result is an oracle inequality satisfied by the resulting estimator. We then use this selection rule in a two-step procedure to evaluate the interaction neighborhoods: the selection rule selects a small prior set of possible interacting points, and a cutting step removes the irrelevant points from this prior set. We also prove that Ising models satisfy the assumptions of the main theorems, without restrictions on the temperature, on the structure of the interaction graph, or on the range of the interactions; this provides a large class of applications for our results. We give a computationally efficient procedure for these models and finally show the practical efficiency of our approach in a simulation study.
Abstract:
A simultaneous optimization strategy based on a neuro-genetic approach is proposed for the selection of laser-induced breakdown spectroscopy operational conditions for the simultaneous determination of macronutrients (Ca, Mg, and P), micronutrients (B, Cu, Fe, Mn, and Zn), Al, and Si in plant samples. A laser-induced breakdown spectroscopy system equipped with a 10 Hz Q-switched Nd:YAG laser (12 ns, 532 nm, 140 mJ) and an Echelle spectrometer with an intensified charge-coupled device was used. Integration time gate, delay time, amplification gain, and number of pulses were optimized. Pellets of spinach leaves (NIST 1570a) were employed as laboratory samples. In order to find a model that could correlate laser-induced breakdown spectroscopy operational conditions with compromise high peak areas for all elements simultaneously, a Bayesian Regularized Artificial Neural Network approach was employed. Subsequently, a genetic algorithm was applied to find optimal conditions for the neural network model, in an approach called neuro-genetic. A single laser-induced breakdown spectroscopy working condition that maximizes the peak areas of all elements simultaneously was obtained with the following optimized parameters: 9.0 μs integration time gate, 1.1 μs delay time, 225 (a.u.) amplification gain, and 30 accumulated laser pulses. The proposed approach is a useful and suitable tool for the optimization of such a complex analytical problem. (C) 2009 Elsevier B.V. All rights reserved.
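The neuro-genetic scheme, a genetic algorithm searching the operating-condition space of a fitted surrogate model, can be sketched generically. The surrogate below is a made-up smooth function standing in for the trained neural network, and the population size, mutation rate, and parameter bounds are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(42)

# Hypothetical surrogate for the trained network: the combined
# peak-area response peaks at delay=1.0, gate=9.0, gain=200, pulses=30.
OPT = (1.0, 9.0, 200.0, 30.0)
BOUNDS = [(0.1, 5.0), (1.0, 20.0), (50.0, 400.0), (5.0, 100.0)]

def surrogate(p):
    # Negative normalized squared distance to the optimum (max = 0).
    return -sum(((x - o) / (hi - lo)) ** 2
                for x, o, (lo, hi) in zip(p, OPT, BOUNDS))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(p, rate=0.2):
    # Gaussian perturbation, clamped to the parameter bounds.
    return [min(hi, max(lo, x + random.gauss(0, 0.1 * (hi - lo))))
            if random.random() < rate else x
            for x, (lo, hi) in zip(p, BOUNDS)]

def crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

pop = [random_individual() for _ in range(40)]
initial_best = max(surrogate(p) for p in pop)
for _ in range(80):
    pop.sort(key=surrogate, reverse=True)
    elite = pop[:8]                      # elitism: keep the best unchanged
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children
final_best = max(surrogate(p) for p in pop)
```

Because of elitism the best fitness never degrades, so the search converges toward the surrogate's optimum, the analogue of the single compromise working condition reported above.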
Abstract:
The first problem of the Seleucid mathematical cuneiform tablet BM 34 568 calculates the diagonal of a rectangle from its sides without resorting to the Pythagorean rule. For this reason, it has been a source of discussion among specialists ever since its first publication, but so far no consensus on its mathematical meaning has been attained. This paper presents two new interpretations of the scribe's procedure, based on the assumption that he was able to reduce the problem to a standard Mesopotamian question about reciprocal numbers. These new interpretations are then linked to interpretations of the Old Babylonian tablet Plimpton 322 and to the presence of Pythagorean triples in the contexts of Old Babylonian and Hellenistic mathematics. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increased number of biomechanical gait variables reported and, consequently, the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower-extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that, with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information to distinguish the age groups. The results of the present work demonstrate the potential of this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
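The forward feature selection loop used above can be sketched in a generic, dependency-free form. For portability this sketch substitutes a nearest-centroid classifier for the SVM, and the data are synthetic (feature 0 carries the group signal, the rest is noise), so only the greedy selection logic mirrors the paper's procedure.

```python
import random

random.seed(3)

# Synthetic "kinematic" data: 30 samples, 5 features.
# Feature 0 separates the two groups exactly; features 1-4 are noise.
labels = [i % 2 for i in range(30)]
data = [[(1.0 if y else -1.0) if j == 0 else random.gauss(0, 1)
         for j in range(5)] for y in labels]

def accuracy(feature_subset):
    """Training accuracy of a nearest-centroid classifier on the subset."""
    def project(row):
        return [row[j] for j in feature_subset]
    cent = {}
    for y in (0, 1):
        rows = [project(r) for r, l in zip(data, labels) if l == y]
        cent[y] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for r, l in zip(data, labels):
        p = project(r)
        d = {y: sum((a - b) ** 2 for a, b in zip(p, c))
             for y, c in cent.items()}
        correct += (min(d, key=d.get) == l)
    return correct / len(data)

# Greedy forward selection: repeatedly add the feature whose inclusion
# improves accuracy most; stop when no addition helps.
selected, remaining = [], list(range(5))
while remaining:
    best_f = max(remaining, key=lambda f: accuracy(selected + [f]))
    if selected and accuracy(selected + [best_f]) <= accuracy(selected):
        break
    selected.append(best_f)
    remaining.remove(best_f)
```

On this toy data the loop picks the single informative feature and stops, the same behavior as the paper's six-feature result on real kinematic variables.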
Anthropometric characteristics and motor skills in talent selection and development in indoor soccer
Abstract:
Kick performance, anthropometric characteristics, slalom, and linear running were assessed in 49 (24 elite, 25 nonelite) postpubertal indoor soccer players in order to (a) verify whether anthropometric characteristics and physical and technical capacities can distinguish players of different competitive levels, (b) compare the kicking kinematics of these groups, with and without a defined target, and (c) compare results on the assessments with coaches' subjective rankings of the players. Thigh circumference and specific technical capacities differentiated the players by level of play; cluster analysis correctly classified 77.5% of the players. The correlation between players' standardized measures and the coaches' rankings was 0.29. Anthropometric characteristics and physical capacities do not necessarily differentiate players at post-pubertal stages and should not be overvalued during early development. Considering the coaches' rankings, performance measures outside specific game conditions may not be useful in the identification of talented players.
Abstract:
To evaluate the potential for fermentation of raspberry pulp, sixteen yeast strains (S. cerevisiae and S. bayanus) were studied. Volatile compounds were determined by GC-MS, GC-FID, and GC-PFPD. Ethanol, glycerol, and organic acids were determined by HPLC. HPLC-DAD was used to analyse phenolic acids. Sensory analysis was performed by trained panellists. After a screening step, CAT-1, UFLA FW 15, and S. bayanus CBS 1505 were selected based on their fermentative characteristics and the profile of the metabolites identified. The beverage produced with CAT-1 showed the highest volatile fatty acid concentration (1542.6 μg/L), whereas the beverage produced with UFLA FW 15 showed the highest concentration of acetates (2211.1 μg/L) and total volatile compounds (5835 μg/L). For volatile sulphur compounds, 566.5 μg/L were found in the beverage produced with S. bayanus CBS 1505. The lowest concentration of volatile sulphur compounds (151.9 μg/L) was found for the beverage produced with UFLA FW 15. In the sensory analysis, the beverage produced with UFLA FW 15 was characterised by the descriptors raspberry, cherry, sweet, strawberry, floral, and violet. In conclusion, strain UFLA FW 15 produced a raspberry wine of good chemical and sensory quality. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This work presents a thermoeconomic optimization methodology for the analysis and design of energy systems. The methodology involves economic aspects related to the exergy concept, in order to develop a tool to assist in equipment selection and operation-mode choice, as well as to optimize thermal plant design. It also presents the concepts related to exergy in a general scope and in thermoeconomics, which combines the principles of the thermal sciences (thermodynamics, heat transfer, and fluid mechanics) with engineering economics in order to rationalize investment decisions and the development and operation of energy systems. The paper then develops a thermoeconomic methodology through a simple mathematical model involving thermodynamic parameters and cost evaluation, defining the objective function as the exergetic production cost. The optimization problem is evaluated for two energy systems: it is first applied to a vapor-compression refrigeration system and then to a cogeneration system using a backpressure steam turbine. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A combination of an extension of the topological instability "lambda criterion" and a thermodynamic criterion was applied to the Al-La system, indicating the best range of compositions for glass formation. Alloy compositions in this range were prepared by melt-spinning and by casting in an arc-melting furnace with a wedge-section copper mold. The glass-forming ability (GFA) of these samples was evaluated by X-ray diffraction, differential scanning calorimetry, and scanning electron microscopy. The results indicated that the gamma* parameter of compositions with high GFA is higher, corresponding to a range in which the lambda parameter is greater than 0.1, i.e., compositions far from the Al solid solution. A new alloy was identified with the best GFA reported so far for this system, showing a maximum thickness of 286 μm in a wedge-section copper mold. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved.
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
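The interior/exterior penalty idea can be illustrated on a one-dimensional toy problem: minimize (x - 2)^2 subject to x <= 1, whose constrained minimizer is x = 1. The functions, penalty schedule, and solver below are illustrative assumptions, not the paper's finite element formulation.

```python
import math

def minimize_1d(f, lo, hi, iters=200):
    """Golden-section search for the minimum of a unimodal f on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def objective(x):
    return (x - 2.0) ** 2

# Exterior penalty: constraint violation is penalized, mu -> infinity;
# the minimizers approach the feasible set from outside.
def exterior(x, mu):
    return objective(x) + mu * max(0.0, x - 1.0) ** 2

# Interior (barrier) penalty: iterates stay strictly feasible, mu -> 0;
# the minimizers approach the constraint boundary from inside.
def interior(x, mu):
    if x >= 1.0:
        return float("inf")
    return objective(x) - mu * math.log(1.0 - x)

x_ext = [minimize_1d(lambda x: exterior(x, mu), -1.0, 3.0)
         for mu in (1, 10, 100, 1000)]
x_int = [minimize_1d(lambda x: interior(x, mu), -1.0, 0.999999)
         for mu in (1, 0.1, 0.01, 0.001)]
# Both sequences converge to the constrained minimizer x = 1,
# from opposite sides, as the penalization is enforced.
```

This mirrors, in miniature, the observation in the abstract that the interior and exterior formulations converge to the same limit as the penalization is enforced.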
Abstract:
This paper addresses the time-variant reliability analysis of structures with random resistance or random system parameters. It deals with the problem of a random load process crossing a random barrier level. The implications of approximating the arrival rate of the first overload by an ensemble-crossing rate are studied. The error involved in this so-called "ensemble-crossing rate" approximation is described in terms of load process and barrier distribution parameters, as well as the number of load cycles. Existing results are reviewed, and significant improvements involving load process bandwidth, mean-crossing frequency, and time are presented. The paper shows that the ensemble-crossing rate approximation can be accurate enough for problems where the load process variance is large in comparison to the barrier variance, but especially when the number of load cycles is small. This includes important practical applications such as random vibration due to impact loadings and earthquake loading. Two application examples are presented, one involving earthquake loading and one involving a frame structure subject to wind and snow loadings. (C) 2007 Elsevier Ltd. All rights reserved.
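The nature of the ensemble-crossing approximation can be seen in a small Monte Carlo sketch: with independent load cycles and a random, time-invariant barrier, the approximation replaces the expectation of the per-barrier survival product by the product of the barrier-averaged per-cycle failure probability. All distributions and parameters below are illustrative assumptions, not from the paper's examples.

```python
import random, statistics
from math import erf, sqrt

random.seed(7)

N_CYCLES = 50          # independent load cycles
N_SIM = 20000          # Monte Carlo samples of the random barrier
BARRIER_MEAN, BARRIER_STD = 4.0, 0.8   # random resistance R

def tail(r):
    """P(S > r) for a per-cycle load S ~ N(0, 1)."""
    return 0.5 * (1.0 - erf(r / sqrt(2.0)))

samples = [random.gauss(BARRIER_MEAN, BARRIER_STD) for _ in range(N_SIM)]

# Exact first-overload probability: average the survival product
# over the random (but fixed-in-time) barrier.
pf_exact = statistics.mean(1.0 - (1.0 - tail(r)) ** N_CYCLES
                           for r in samples)

# Ensemble-crossing approximation: first average the per-cycle
# crossing probability over the barrier, then compound over cycles,
# as if the barrier were re-sampled at every cycle.
p_ens = statistics.mean(tail(r) for r in samples)
pf_ensemble = 1.0 - (1.0 - p_ens) ** N_CYCLES
```

By Jensen's inequality (the survival product is convex in the per-cycle probability), this construction always has pf_ensemble >= pf_exact, and the gap shrinks as the barrier variance or the number of cycles decreases, consistent with the regimes of validity stated in the abstract.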