975 results for winner determination problem


Relevance:

30.00%

Publisher:

Abstract:

The thermal diffusion enrichment apparatus in use in Amsterdam before 1967 has been rebuilt in the Groningen Radiocarbon Dating Laboratory. It has been shown to operate reliably and reproducibly, and reasonable agreement exists between the theoretical calculations and the experimental results. The 14C enrichment of a CO sample is deduced from the simultaneous mass-30 enrichment, which is measured with a mass spectrometer; the relation between the two enrichments follows from a series of calibration measurements. The overall accuracy in the enrichment is a few percent, equivalent to a few hundred years in age. The main problem in dating very old samples is their possible contamination with recent carbon. Generally, careful sample selection and rigorous pretreatment reduce sample contamination to an acceptable value. It has also been established that laboratory contamination, due to a memory effect in the combustion system and to impurities in the oxygen and nitrogen gas used for combustion, can be eliminated. A detailed analysis shows that the counter background in our set-up is almost exclusively caused by cosmic-ray muons. The measurement of 28 Early Glacial samples, mostly from north-west Europe, has yielded a consistent set of ages. These indicate the existence of three Early Glacial interstadials; using the Weichselian definitions: Amersfoort starting at 68 200 ± 1100, Brørup at 64 400 ± 800 and Odderade at 60 500 ± 600 years BP. This 14C chronology shows good agreement with the Camp Century chronology and the dated palaeo sea levels. The discrepancy between the age of the early part of the Last Glacial on the 14C time scale and on that adopted for the deep-sea δ18O record must probably be attributed to the use of a generalized δ18O curve and a wrong interpretation of this curve in terms of three Barbados terraces.
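
For orientation, the standard conversion between an activity (enrichment) factor and conventional radiocarbon age uses the Libby mean life of 8033 years; in LaTeX notation (not part of the original abstract):

\[ t = -8033\,\ln\!\left(\frac{A_{\mathrm{sample}}}{A_{\mathrm{standard}}}\right), \qquad \Delta t = 8033\,\ln q, \]

so an uncertainty of a few percent in the enrichment factor q (say 3%) corresponds to 8033 ln(1.03) ≈ 240 years, consistent with the quoted accuracy of a few hundred years.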

Relevance:

30.00%

Publisher:

Abstract:

The concept of national self-determination has been highly contested from the very outset, partly because of its dual parentage in nationalism and liberalism. Prior to 1945 it was only a political concept with no legally binding force. With the incorporation of the principle into the UN Charter it was universalized and legalized. However, there were two competing interpretations at the UN, one based on de-colonization and the other on representative government. How to define the "self" and what exactly is determined remain highly controversial. How to reconcile the international norm of state sovereignty with the self-determination of peoples became an even more complex problem with the tide of secessionist movements based on ethno-nationalism. The concept of internal self-determination came as a compromise, but it too is very vague and harbors a wide range of interpretations.

Relevance:

30.00%

Publisher:

Abstract:

A novel methodology based on instrumented indentation is developed to determine the mechanical properties of amorphous materials that exhibit cohesive-frictional behaviour. The approach is based on the concept of a universal hardness equation, which results from the assumption of a characteristic indentation pressure proportional to the hardness. The actual universal hardness equation is obtained from a detailed finite element analysis of the sharp indentation process for a very wide range of material properties, and the inverse problem (i.e. how to extract the elastic modulus, the compressive yield strength and the friction angle from instrumented indentation data) is solved. The applicability and limitations of the novel approach are highlighted. Finally, the model is validated against experimental data for metallic and ceramic glasses as well as polymers, covering a wide range of amorphous materials in terms of elastic modulus, yield strength and friction angle.
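
A minimal sketch of the inverse step described above, assuming a hypothetical FE-calibrated relation forward_curvature() in place of the paper's actual universal hardness equation, and fitting it to indentation data with a standard nonlinear least-squares solver:

# Sketch only: forward_curvature() is a hypothetical stand-in for the FE-derived
# relation between (E, sigma_y, phi) and the loading curvature of a sharp indenter.
import numpy as np
from scipy.optimize import least_squares

def forward_curvature(E, sigma_y, phi):
    # placeholder relation, NOT the paper's universal hardness equation
    return sigma_y * (1.0 + 0.1 * phi) * (1.0 + np.log(E / sigma_y))

def residuals(params, h, P_meas):
    E, sigma_y, phi = params
    P_model = forward_curvature(E, sigma_y, phi) * h**2   # Kick's law: P = C h^2
    return P_model - P_meas

# synthetic "measured" loading curve, for illustration only (h in mm, P in N)
h = np.linspace(0.001, 0.01, 50)
P_meas = forward_curvature(70e3, 2.0e3, 10.0) * h**2

fit = least_squares(residuals, x0=[50e3, 1.0e3, 5.0], args=(h, P_meas),
                    bounds=([1e3, 1e2, 0.0], [500e3, 1e4, 45.0]),
                    x_scale=[1e4, 1e3, 10.0])
E_fit, sy_fit, phi_fit = fit.x   # recovered modulus, yield strength, friction angle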

Relevance:

30.00%

Publisher:

Abstract:

Many macroscopic properties (hardness, corrosion, catalytic activity, etc.) are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for its determination produce a "signature" from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature computed and then compared with the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, so its solution cannot be achieved for arbitrarily large structures. Nowadays it is tackled with a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined with a local minimizer, and if the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take from days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that combines evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid searching the same structures repeatedly. Its parallelization produces good results even without requiring the gathering of the full population, so it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
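
A minimal sketch of the hybrid scheme outlined in the abstract, combining an evolutionary loop, trust-region local refinement and a cache of already-evaluated candidates; energy() is a hypothetical toy landscape standing in for the expensive signature comparison:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
cache = {}                      # reuse knowledge: avoid re-evaluating structures

def energy(x):
    key = tuple(np.round(x, 3))
    if key not in cache:
        cache[key] = np.sum((x - 1.0)**2 * (1 + np.sin(5 * x)**2))  # toy landscape
    return cache[key]

def local_refine(x):
    # trust-region style local step (scipy's 'trust-constr' method)
    return minimize(energy, x, method="trust-constr").x

pop = rng.uniform(-2, 4, size=(20, 6))          # 20 candidate "structures", 6 coordinates
for gen in range(30):
    fitness = np.array([energy(x) for x in pop])
    parents = pop[np.argsort(fitness)[:10]]     # selection of the best half
    children = parents + rng.normal(0, 0.2, parents.shape)   # mutation
    children[0] = local_refine(children[0])     # hybrid step: refine one child locally
    pop = np.vstack([parents, children])

best = min(pop, key=energy)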

Relevance:

30.00%

Publisher:

Abstract:

The arrangement of atoms at the surface of a solid accounts for many of its properties: hardness, chemical activity, corrosion, etc. are dictated by the precise surface structure. Hence, finding it has a broad range of technical and industrial applications, and the ability to solve this problem opens the possibility of designing, by computer, materials with properties tailored to specific applications. Since the search space grows exponentially with the number of atoms, the solution cannot be achieved for arbitrarily large structures. Presently a trial-and-error procedure is used: an expert proposes a structure as a candidate solution and applies a local optimization procedure to it. The solution relaxes to the local minimum of the attractor basin corresponding to the initial point, which may or may not be the global minimum. This procedure is very time consuming and, for reasonably sized surfaces, can take many iterations and much effort from the expert. Here we report on a visualization environment designed to steer this process in an attempt to solve bigger structures and reduce the time needed. The idea is to use an immersive environment to interact with the computation, with immediate feedback to assess the quality of the proposed structure, letting the expert explore the space of candidate solutions. The visualization environment is also able to communicate with the de facto local solver used for this problem. The user can then send trial structures to the local minimizer and track their progress as they approach the minimum, which allows simultaneous testing of candidate structures. The system has also proved very useful as an educational tool for the field.
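
A minimal sketch, under stated assumptions, of the steering loop described above: a candidate structure is handed to a local minimizer and intermediate progress is streamed back through a callback, the kind of hook a visualization front end could consume; objective() is a toy stand-in for the real evaluation:

import numpy as np
from scipy.optimize import minimize

def objective(x):
    return float(np.sum((x - 0.5)**2))          # toy stand-in for the real cost function

def report(xk):
    # in the real system this update would be pushed to the immersive display
    print(f"current cost = {objective(xk):.6f}")

trial_structure = np.array([1.2, -0.3, 0.8, 0.0])   # candidate proposed by the expert
result = minimize(objective, trial_structure, method="BFGS", callback=report)
print("relaxed structure:", result.x)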

Relevance:

30.00%

Publisher:

Abstract:

Limit equilibrium is a common method for analyzing the stability of a slope, and minimization of the factor of safety (FS), or identification of critical slip surfaces, is a classical geotechnical problem in the context of limit equilibrium methods for slope stability analyses. A mutative scale chaos optimization algorithm is employed in this study to locate the noncircular critical slip surface, with Spencer's method used to compute the factor of safety. Four examples from the literature (one homogeneous slope and three layered slopes) are used to assess the efficiency and accuracy of this approach. Results indicate that the algorithm is flexible and that, although it does not generally find the absolute minimum FS, it provides results close to the minimum, with small relative errors with respect to the minimum FS values reported in the literature, and thus improves on other solutions proposed there.
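
A minimal sketch of a mutative-scale chaos optimization loop of the kind referred to above, with a toy surrogate factor_of_safety() in place of Spencer's method; candidate slip-surface parameters are generated by a logistic map and the search interval is contracted around the current best point:

import numpy as np

def factor_of_safety(x):
    # toy surrogate for Spencer's method on a parameterized noncircular surface
    return 1.3 + np.sum((x - np.array([0.2, 0.6, 0.4]))**2)

lo, hi = np.zeros(3), np.ones(3)          # bounds of the slip-surface parameters
z = np.array([0.21, 0.47, 0.83])          # chaotic variables (logistic map state)
best_x, best_fs = None, np.inf

for stage in range(6):                    # mutative scale: shrink the box each stage
    for _ in range(500):
        z = 4.0 * z * (1.0 - z)           # logistic map, fully chaotic at mu = 4
        x = lo + z * (hi - lo)            # map the chaos onto the current search box
        fs = factor_of_safety(x)
        if fs < best_fs:
            best_fs, best_x = fs, x
    span = 0.5 * (hi - lo)                # contract the box around the best point
    lo = np.maximum(lo, best_x - 0.25 * span)
    hi = np.minimum(hi, best_x + 0.25 * span)

print("minimum FS ~", round(best_fs, 4), "at", best_x)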

Relevance:

30.00%

Publisher:

Abstract:

When a structural problem is posed, the intention is usually to obtain the best solution, understanding by this the solution that, while fulfilling the structural, use and other requirements, has the lowest physical cost. In a first approximation the physical cost can be represented by the self-weight of the structure, so the search for the best solution can be posed as the search for the lightest one. From a practical point of view, obtaining good solutions (solutions whose cost is only slightly higher than that of the best one) is as important a task as obtaining absolute optima, something that in general is hardly feasible. In order to have a measure of efficiency that makes comparison between solutions possible, the following definition of structural efficiency is proposed: the ratio between the useful load to be supported and the total load to be accounted for (the useful load plus the self-weight). The structural form can be considered to be composed of four concepts which, together with the material, define a structure: size, schema, proportion (or slenderness) and thickness. Galileo (1638) proposed the existence of an insurmountable size for every structural problem: the size at which, for a given schema and proportion, the self-weight alone exhausts the structure. That size, or structural scope, is different for each material used; the only information about the material needed to determine it is the ratio between its strength and its specific weight, a magnitude that we call the scope of the material. For structures whose size is very small in relation to their structural scope the above definition of efficiency is useless. In that case (structures of "null size", in which the self-weight is negligible compared with the useful load) we propose as a measure of cost the dimensionless magnitude that we call Michell's number, derived from the "quantity" introduced by A. G. M. Michell in his seminal article of 1904, developed from a lemma of J. C. Maxwell of 1870. At the end of the last century R. Aroca combined the theories of Galileo and of Maxwell and Michell, proposing an easily applied design rule (the GA rule) that allows the scope and the efficiency of a structural form to be estimated. In the present work the efficiency of truss-like structures in bending problems is studied, taking the influence of size into account. On the one hand, for structures of null size, near-optimal schemas are explored by means of several minimization methods, with the aim of obtaining forms whose cost (measured by their Michell number) is very close to that of the absolute optimum while achieving an important reduction of their complexity. On the other hand, a method is presented for determining the structural scope of truss-like structures (taking into account the local effect of bending in their members), and its results are compared with those obtained by applying the GA rule, showing the conditions under which the rule is applicable. Finally, lines of future research are identified: the measurement of complexity, the accounting of the cost of foundations, and the extension of the minimization methods when self-weight is taken into account.
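
In symbols (notation ours, following the definitions in the abstract), the structural efficiency and the scope of the material read:

\[ \eta = \frac{Q}{Q + W}, \qquad \mathcal{L}_{\mathrm{mat}} = \frac{f}{\gamma}, \]

where Q is the useful load, W the self-weight resulting from the structural sizing, f the allowable stress and \gamma the specific weight of the material; \mathcal{L}_{\mathrm{mat}} has dimensions of length.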

Relevance:

30.00%

Publisher:

Abstract:

A Mindlin plate with periodically distributed rib patterns is analyzed using homogenization techniques based on asymptotic expansion methods. The stiffness matrix of the homogenized plate is found to depend on the geometrical characteristics of the periodic cell (its skewness, plan shape, thickness variation, etc.) and on the elastic constants of the plate material. The computation of this plate stiffness matrix is carried out by averaging, over the cell domain, the solutions of several periodic boundary value problems. These boundary value problems are defined in variational form by linear first-order differential operators on the cell domain, and the boundary conditions of the variational equation correspond to a periodic structural problem. The elements of the stiffness matrix of the homogenized plate are obtained as linear combinations of the averaged solution functions of the above-mentioned boundary value problems. Finally, an illustrative example of the application of this homogenization technique to hollowed plates and to plate structures with rib patterns regularly arranged over their area is shown. The possibility of using the present procedure in professional practice for the actual analysis of floors of typical buildings is also emphasized.
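
As a generic illustration of the cell-averaging step (the paper's own plate operators and stiffness matrix are not reproduced here), first-order periodic homogenization of an elastic coefficient field D(y) over a cell Y yields effective coefficients of the form:

\[ D^{\mathrm{hom}}_{ijkl} = \frac{1}{|Y|}\int_{Y} D_{ijpq}(y)\left(\delta_{pk}\delta_{ql} + \frac{\partial \chi^{kl}_{p}}{\partial y_{q}}\right)\,\mathrm{d}y, \]

where the corrector functions \chi^{kl} solve the periodic cell problems; the homogenized plate stiffness matrix discussed above is built from averages of this kind.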

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of the key of a musical passage is a prerequisite for all analyses that require functional labelling. In the past, people from either a musical or an AI background have tended to solve the problem by implementing a computerized version of musical analysis. Previous attempts are discussed, and attention is then focused on a non-analytical solution first reported by J. A. Gabura. A practical way to carry it out is discussed, as well as its limitations in relation to examples. References are made to the MusicXML format as needed. © Springer-Verlag Berlin Heidelberg 2006.
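
A minimal sketch of one well-known non-analytical key-finding scheme (correlation of a pitch-class duration histogram with Krumhansl-Kessler major/minor profiles); it illustrates the family of methods discussed but is not claimed to reproduce Gabura's procedure:

import numpy as np

MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(pitch_class_durations):
    """pitch_class_durations: length-12 vector of total duration per pitch class."""
    hist = np.asarray(pitch_class_durations, dtype=float)
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, f"{NAMES[tonic]} {mode}")
    return best[1]

# a passage dominated by C, E and G durations; expected to print something like "C major"
print(estimate_key([10, 0, 2, 0, 8, 3, 0, 9, 0, 2, 0, 1]))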

Relevance:

30.00%

Publisher:

Abstract:

This research began with an attempt to solve a practical problem, namely the prediction of the rate at which an operator will learn a task. From a review of the literature, communications with researchers in this area and the study of psychomotor learning in factories, it was concluded that a more fundamental approach was required, one that included the development of a task taxonomy. This latter objective had been researched for over twenty years by E. A. Fleishman, and his approach was adopted. Three studies were carried out to develop and extend Fleishman's approach to the industrial area. However, the results of these studies were not in accord with Fleishman's conclusions and suggested that a critical re-assessment was required of the arguments, methods and procedures used by Fleishman and his co-workers. It was concluded that Fleishman's findings were to some extent an artifact of the approximate methods and procedures he used in the original factor analyses, and that with more modern computerised factor-analytic methods a reliable ability taxonomy could be developed to describe the abilities involved in the learning of psychomotor tasks. The implications for a changing-task or changing-subject model were drawn, and it was concluded that a combined changing-task and changing-subject model needs to be developed.
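
A minimal sketch of the modern computerised factor-analysis step referred to above, using randomly generated placeholder scores in place of real psychomotor task data:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 12))           # 200 operators, 12 task measures (placeholder)

fa = FactorAnalysis(n_components=3, rotation="varimax")
abilities = fa.fit_transform(scores)          # factor scores per operator
loadings = fa.components_.T                   # task loadings on each ability factor
print(loadings.shape)                         # (12, 3)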

Relevance:

30.00%

Publisher:

Abstract:

We investigate the problem of determining the stationary temperature field on an inclusion from given Cauchy data on an accessible exterior boundary. On this accessible part the temperature (or the heat flux) is known and, additionally, on a portion of this exterior boundary the heat flux (or temperature) is also given. We propose a direct boundary integral approach in combination with Tikhonov regularization for the stable determination of the temperature and flux on the inclusion. To determine these quantities on the inclusion, boundary integral equations are derived using Green's functions, and properties of these equations are shown in an L2-setting. An effective way of discretizing these boundary integral equations, based on the Nyström method and trigonometric approximations, is outlined. Numerical examples are included, both with exact and with noisy data, showing that accurate approximations can be obtained with small computational effort and that the accuracy increases with the length of the portion of the boundary on which the additional data are given.
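
A minimal sketch of the Tikhonov step, assuming the boundary integral equations have already been discretized (e.g. by the Nyström method) into a linear system K x = b with a noisy right-hand side; K and b below are random placeholders:

import numpy as np

rng = np.random.default_rng(0)
n = 64
K = rng.normal(size=(n, n)) / n               # stand-in for the discretized integral operator
x_true = np.sin(np.linspace(0, np.pi, n))
b = K @ x_true + 1e-3 * rng.normal(size=n)    # noisy Cauchy data

alpha = 1e-4                                  # regularization parameter
# Tikhonov solution via the regularized normal equations (K^T K + alpha I) x = K^T b
x_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ b)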

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates the inverse problem of determining a spacewise-dependent heat source in the parabolic heat equation using the usual conditions of the direct problem and information from a supplementary temperature measurement at a given single instant of time. The spacewise-dependent temperature measurement ensures that the inverse problem has a unique solution, but this solution is unstable, hence the problem is ill-posed. For this inverse problem we propose an iterative algorithm based on a sequence of well-posed direct problems, which are solved at each iteration step using the boundary element method (BEM). The instability is overcome by stopping the iterations at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented for several typical benchmark test examples in which the measured input data are perturbed by increasing amounts of random noise.
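
A minimal sketch of the discrepancy-principle stopping rule; a Landweber-type update on a benign placeholder operator stands in for the paper's BEM-based direct solves, the point being only how the iteration is stopped:

import numpy as np

rng = np.random.default_rng(0)
n = 50
# placeholder operator; in the paper each iteration instead solves a well-posed
# direct problem with the BEM
A = np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
f_true = np.exp(-np.linspace(-2.0, 2.0, n)**2)
delta = 1e-3                                    # pointwise noise level of the data
u_meas = A @ f_true + delta * rng.normal(size=n)

tau = 1.5                                       # safety factor of the discrepancy principle
noise_norm = delta * np.sqrt(n)                 # estimated norm of the data noise
f = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2)**2
for k in range(1, 10001):
    residual = A @ f - u_meas
    if np.linalg.norm(residual) <= tau * noise_norm:
        break                                   # stop at the first iterate satisfying the principle
    f = f - step * A.T @ residual               # Landweber-type update (stand-in scheme)
print("stopped after", k, "iterations")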

Relevance:

30.00%

Publisher:

Abstract:

An automated cognitive approach to the design of information systems is presented. It is intended to be used at the very beginning of the design process, between the stages of requirements determination and analysis, and during the analysis stage itself. Within this approach either UML or ERD notation may be used for model representation. The approach provides the opportunity to use natural language text documents as a source of knowledge for automated problem-domain model generation. It also simplifies the modelling process by assisting the human user throughout the work on the model (using UML or ERD notation).
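
A toy sketch of the kind of processing the approach automates, extracting candidate entities and relationships for an ERD/UML model from requirements text; the hard-coded vocabulary below is purely illustrative and is not the method itself:

import re

text = "A customer places an order. An order contains one or more products."

entities, relations = set(), []
for sentence in re.split(r"[.!?]", text):
    words = re.findall(r"[A-Za-z]+", sentence.lower())
    nouns = [w for w in words if w in {"customer", "order", "product", "products"}]
    verbs = [w for w in words if w in {"places", "contains"}]
    entities.update(n.rstrip("s") for n in nouns)
    if len(nouns) >= 2 and verbs:
        relations.append((nouns[0].rstrip("s"), verbs[0], nouns[-1].rstrip("s")))

print(entities)    # candidate entity types
print(relations)   # candidate relationships, e.g. ('customer', 'places', 'order')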

Relevance:

30.00%

Publisher:

Abstract:

Heterogeneity of labour and its implications for the Marxian theory of value have been among the most controversial issues in the literature of Marxist political economy. The adoption of Marx's conjecture of a uniform rate of surplus value leads to a simultaneous determination of the values of common and labour commodities of different types and of the uniform rate of surplus value. The determination of these variables can be formally represented as a parametric eigenvalue problem. Morishima's and Bródy's earlier results are analysed and given new interpretations in the light of the suggested procedure. The main questions are also addressed in a more general context, and the analysis is extended to the problem of a segmented labour market as well.
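
A minimal sketch of the value system in the simplified single-type-of-labour case (the heterogeneous-labour case discussed above leads instead to a parametric eigenvalue problem); all numbers are illustrative:

import numpy as np

A = np.array([[0.2, 0.1],      # input coefficients: goods used per unit of output
              [0.3, 0.4]])
l = np.array([0.5, 0.8])       # direct labour per unit of output
b = np.array([0.3, 0.2])       # wage basket (real wage) per unit of labour

v = l @ np.linalg.inv(np.eye(2) - A)     # labour values: v = l (I - A)^(-1)
value_of_labour_power = v @ b            # value of the commodities workers consume
e = (1.0 - value_of_labour_power) / value_of_labour_power   # uniform rate of surplus value
print("values:", v, "rate of surplus value:", round(e, 3))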