992 results for "Lot sizing problem"


Relevance: 30.00%

Abstract:

This article focuses on problem solving activities in a first grade classroom in a typical small community and school in Indiana. But the teacher and the activities in this class were not at all typical of what goes on in most comparable classrooms, and the issues that will be addressed are relevant and important for students from kindergarten through college. Can children really solve problems that involve concepts (or skills) that they have not yet been taught? Can children really create important mathematical concepts on their own, without a lot of guidance from teachers? What is the relationship between problem solving abilities and the mastery of skills that are widely regarded as being "prerequisites" to such tasks? Can primary school children (whose toolkits of skills are limited) engage productively in authentic simulations of "real life" problem solving situations? Can three-person teams of primary school children really work together collaboratively, and remain intensely engaged, on problem solving activities that require more than an hour to complete? Are the kinds of learning and problem solving experiences that are recommended (for example) in the USA's Common Core State Curriculum Standards really representative of the kind that even young children encounter beyond school in the 21st century? This article offers an existence proof showing why our answers to these questions are: Yes. Yes. Yes. Yes. Yes. Yes. And: No. Even though the evidence we present is only intended to demonstrate what's possible, not what's likely to occur under any circumstances, there is no reason to expect that what our children accomplished could not be accomplished by average-ability children in other schools and classrooms.

Relevance: 30.00%

Abstract:

A low thermal diffusivity of adsorption beds induces a large thermal gradient across the cylindrical adsorbers used in adsorption cooling cycles. This reduces the concentration difference across which a thermal compressor operates. Slow adsorption kinetics, in conjunction with the void volume effect, further diminishes the throughput of such adsorption thermal compressors. The problem can be partially alleviated by increasing the desorption temperature. The theme of this paper is the determination of the minimum desorption temperature required for a given set of evaporating/condensing temperatures for an activated carbon + HFC 134a adsorption cooler. The calculation scheme is validated against experimental data. Results from a parametric analysis covering a range of evaporating/condensing/desorption temperatures are presented. It is found that the overall uptake efficiency and the Carnot COP characterize these bounds. A design methodology for adsorber sizing is evolved. (c) 2012 Elsevier Ltd. All rights reserved.
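The Carnot COP bound mentioned above can be illustrated with a small sketch. This is an assumption-laden toy, not the paper's model: the ideal COP of a heat-driven cooler is the product of a Carnot engine efficiency between the desorption and condensing temperatures and a Carnot refrigerator COP between the evaporating and condensing temperatures; the temperatures below are invented.

```python
# Hedged illustration (not the paper's calculation scheme): ideal COP of a
# thermally driven cooling cycle, temperatures given in degrees Celsius.

def carnot_cop_cooling(t_evap_c, t_cond_c, t_des_c):
    """Carnot COP of a heat-driven cooler = engine efficiency x fridge COP."""
    t_e = t_evap_c + 273.15
    t_c = t_cond_c + 273.15
    t_g = t_des_c + 273.15
    engine_eff = 1.0 - t_c / t_g      # Carnot engine between T_des and T_cond
    fridge_cop = t_e / (t_c - t_e)    # Carnot refrigerator between T_evap and T_cond
    return engine_eff * fridge_cop
```

Raising the assumed desorption temperature raises this bound, consistent with the paper's observation that higher desorption temperatures partially alleviate the throughput problem.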

Relevance: 30.00%

Abstract:

This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs of investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear program, suitable for solution by a widespread commercial package. Results of the proposed optimization method are compared with a recent methodology reported in the literature on two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to handle large-scale compensation problems efficiently.
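The loss-savings objective can be made concrete with a toy sketch. This is not the paper's MILP: it is an exhaustive search for the single fixed capacitor (location and rating) that maximizes loss savings on a made-up 3-node radial feeder, with all data invented for illustration.

```python
# Toy sketch (not the paper's mixed-integer formulation): pick the capacitor
# placement and rating that maximizes loss savings on an invented feeder.
V = 1.0                           # per-unit voltage, assumed flat
R = [0.05, 0.04, 0.03]            # p.u. resistance of the branch feeding node i
P = [1.0, 0.8, 0.6]               # p.u. active load at node i
Q = [0.6, 0.5, 0.4]               # p.u. reactive load at node i
SIZES = [0.0, 0.3, 0.6, 0.9]      # candidate capacitor ratings (p.u.)

def losses(qc):
    """Series I^2*R losses: branch k carries the loads of nodes k..end,
    net of any capacitor injections located at node k or beyond."""
    total = 0.0
    for k in range(len(R)):
        p_flow = sum(P[k:])
        q_flow = sum(Q[k:]) - sum(qc[k:])
        total += R[k] * (p_flow ** 2 + q_flow ** 2) / V ** 2
    return total

base = losses([0.0, 0.0, 0.0])
best = max(
    ((node, q) for node in range(len(R)) for q in SIZES),
    key=lambda nq: base - losses(
        [nq[1] if i == nq[0] else 0.0 for i in range(len(R))]
    ),
)
# best == (1, 0.9): a 0.9 p.u. capacitor at the middle node saves the most here
```

The paper replaces this brute-force enumeration with a linearized MILP so that much larger networks, multiple capacitors, and switched operation can be handled by a commercial solver.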

Relevance: 30.00%

Abstract:

This thesis contributes to the heuristic optimization of the p-median problem and to the study of Swedish population redistribution.

The p-median model is the most representative model in location analysis. When facilities are located to serve a population geographically distributed over Q demand points, the p-median model systematically considers all the demand points, so that each demand point has an effect on the location decision. However, a series of questions arises. How do we measure the distances? Does the number of facilities to be located have a strong impact on the result? What scale of network is suitable? How good is our solution? We scrutinize many issues of this kind, because the solutions carry considerable uncertainty: we cannot guarantee that a solution is good enough for decision making. The technique of heuristic optimization is formulated in the thesis.

Swedish population redistribution is examined with a spatio-temporal covariance model. A descriptive analysis is not always enough to describe the moving effects from the neighbouring population; a correlation or covariance analysis shows the tendencies more explicitly. Similarly, optimization of the parameter estimation is required, and is carried out within the framework of statistical modeling.
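A minimal sketch of a greedy p-median heuristic may help fix ideas. This is an illustration, not the thesis's own method: facilities are added one at a time, each time picking the candidate site that most reduces total demand-weighted distance.

```python
# Greedy construction heuristic for the p-median problem (illustrative only).

def greedy_p_median(dist, weights, p):
    """dist[i][j]: distance from demand point i to candidate site j;
    weights[i]: demand at point i. Returns (chosen sites, total cost)."""
    chosen = []
    for _ in range(p):
        best_site, best_cost = None, float("inf")
        for j in range(len(dist[0])):
            if j in chosen:
                continue
            # Each demand point is served by its nearest open facility.
            cost = sum(
                w * min(dist[i][k] for k in chosen + [j])
                for i, w in enumerate(weights)
            )
            if cost < best_cost:
                best_site, best_cost = j, cost
        chosen.append(best_site)
    return chosen, best_cost
```

Greedy construction carries no optimality guarantee, which is exactly the solution-quality uncertainty the thesis raises; in practice it is often followed by a vertex-substitution (Teitz-Bart) improvement phase.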

Relevance: 30.00%

Abstract:

Contemporary feminism has, from its inception, been ambivalent in its responses to the issue of women in management. On the one hand, feminists have recognised as a problem the limited numbers of women in management and the barriers that they encounter. They have promoted the development of programs such as affirmative action with, arguably, greater or lesser success. At the same time, there has been a reluctance by some feminists to attach too much importance to the issue, given the manifestly more severe forms of discrimination encountered by other groups of women. According to this view, the problems of a privileged elite are a lesser priority, that is, marginal to more pressing feminist concerns.

This paper is based on research into career success predictors. It draws on work on culture and models of change in higher education to show that while interventions such as legislation granting maternity leave are significant initiatives to be strongly supported, the impact of such policies is mediated by the social rules of the organisation. These rules are a corollary of enduring value structures which are embedded in organisational cultures.

Research findings showed that the value systems, and especially the social rules, which operate within organisations affect men's and women's career success differently. This research provides valuable insights into the mechanisms, operating at several levels (the organisational level as well as the level of individual women), which tend to construct women as marginal in management.

Seeking to understand the marginality experienced by women in management has benefits that extend well beyond improving the lot of individual women managers. This is because better conceptualisations of marginality and, concomitantly, power in organisations can provide leverage for more far reaching changes for women generally.

Relevance: 30.00%

Abstract:

Correspondence estimation is one of the most active research areas in the field of computer vision, and a number of techniques have been proposed, each with its own advantages and shortcomings. Among the techniques reported, multiresolution-analysis-based stereo correspondence estimation has gained a lot of research focus in recent years. Although the most widely employed media for multiresolution analysis are wavelet and multiwavelet bases, relatively little work has been reported in this context. In this work we address some of the issues in this domain and the shortcomings inherited from earlier work. In the light of these shortcomings, we propose a new technique to overcome some of the flaws that can significantly affect algorithm performance and that have not been addressed in earlier propositions. The proposed algorithm uses multiresolution analysis reinforced with the wavelet/multiwavelet transform modulus maxima to establish correspondences between the images of a stereo pair. A variety of wavelet and multiwavelet bases, possessing distinct properties such as orthogonality, approximation order, short support and shape, are employed to analyse their effect on the performance of correspondence estimation. The idea is to provide a knowledge base for understanding and establishing relationships between wavelet and multiwavelet properties and their effect on the quality of stereo correspondence estimation.
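A toy 1-D example can show what transform modulus maxima are and why they make useful matching primitives. This is an assumption-laden sketch, not the authors' algorithm: a Haar wavelet response (difference of adjacent block means) is computed at one scale, and strict local maxima of its modulus mark the sharp transitions that could be matched across a stereo pair.

```python
# Illustrative 1-D wavelet transform modulus maxima (not the paper's method).

def haar_detail(signal, scale):
    """Haar wavelet response at a given dyadic scale: difference of the
    means of two adjacent blocks, evaluated at every valid position."""
    out = []
    for t in range(len(signal) - 2 * scale + 1):
        left = sum(signal[t:t + scale]) / scale
        right = sum(signal[t + scale:t + 2 * scale]) / scale
        out.append(right - left)
    return out

def modulus_maxima(coeffs):
    """Indices where |W| is a local maximum; these mark sharp transitions."""
    return [
        i for i in range(1, len(coeffs) - 1)
        if abs(coeffs[i]) > abs(coeffs[i - 1])
        and abs(coeffs[i]) >= abs(coeffs[i + 1])
    ]
```

On a step signal the single modulus maximum sits at the edge; in the 2-D stereo setting, such maxima computed on each image of the pair serve as the candidate points between which correspondences are established.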

Relevance: 30.00%

Abstract:

As in the standard land assembly problem, a developer wants to buy two adjacent blocks of land belonging to two different owners. The value of the two blocks of land to the developer is greater than the sum of the individual values of the blocks for each owner. Unlike the land assembly literature, however, our focus is on the incentive that each lot owner has to delay the start of negotiations, rather than on the public goods nature of the problem. An incentive for delay exists, for example, when owners perceive that being last to sell will allow them to capture a larger share of the joint surplus from the development. We show that competition at point of sale can cause equilibrium delay, and that cooperation at point of sale will eliminate delay.

Relevance: 30.00%

Abstract:

Health care practice in the country faces great and difficult challenges. With regard to financing, the public sector still lacks better dimensioning and a strategic proposal, while the private sector faces a crisis due to the lack of deeper analysis of its costs and their variations. Regulation of the supplementary health sector is still very young. The great challenges of health care and the search for solutions are a constant in the effort to better dimension and manage the process. Several factors put pressure on the model, increasing its expenditures, and, in an even more perverse situation, without any predictability. There is an urgent need to understand the spending model involved and, above all, the factors that most affect the variation of those expenditures. The present work seeks to better understand the problem of cost variation in these services through the analysis of three highly relevant medical cases: appendectomy, hysterectomy, and cholecystectomy, observing the variation of the prices charged over a five-year historical series and comparing it with economic and inflation indices such as the IPCA, the variation of health plan prices authorized by the ANS, and the variation of hospital medical costs. As a result, we observe that for these classic, isolated events there is no clear price pattern tied to any of these indices, nor a linear variation that would allow a broader and better analysis. At the same time, when the volume of care increases, in the mix of the three events we observe a relationship very close to the IPCA, which in turn is very close to the adjustment index authorized by the ANS in the period. Much remains to be done and studied in order to better understand this model of prices and costs, as well as their variations.

Relevance: 30.00%

Abstract:

This paper presents a methodology for evaluating the placement and sizing of distributed generation (DG) in electric power systems. The candidate locations for DG placement are identified on the basis of locational marginal prices (LMPs) obtained from an optimal power flow solution. The problem is formulated for two different objectives, social welfare maximization and profit maximization, and for each DG unit an optimal placement is identified under each objective.
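The LMP-based screening step can be sketched in a few lines. This is a hedged illustration, not the paper's procedure: once an OPF has produced per-bus LMPs, the highest-priced buses are natural DG placement candidates; the bus numbers and prices below are invented.

```python
# Hedged sketch: rank buses by locational marginal price to shortlist DG sites.
lmp = {1: 31.2, 2: 29.8, 3: 44.5, 4: 38.1, 5: 30.0}   # assumed $/MWh per bus

def dg_candidates(lmp, k):
    """Return the k buses with the highest LMP, best candidate first."""
    return sorted(lmp, key=lmp.get, reverse=True)[:k]
```

The chosen objective (social welfare versus profit) then decides which of the shortlisted buses, and what unit size, is ultimately selected.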

Relevance: 30.00%

Abstract:

The lack of recorded data on the electric power consumption of small photovoltaic home systems, independently of the method used for sizing them, leads to treating the demand as a constant. However, the existing data reveal the variability of the consumption due to the influence of social, cultural and psychosocial aspects of the human groups. This paper presents records of consumption data obtained from several solar home systems (SHSs) in Brazil and Peru, and it discusses the Gamma distribution function, which can express to a great extent the behaviour of the demand. This analysis verified that "a lot of people consume little and few people consume a lot". In that sense, a few recommendations for sizing procedures that can be useful in the implementation of extensive programmes of rural electrification by SHSs are presented. Copyright (c) 2007 John Wiley & Sons, Ltd.
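The skewed-demand finding can be reproduced with a quick simulation. The shape and scale below are assumed values for illustration, not the paper's fitted parameters: a Gamma distribution with shape below 1 yields a median well under the mean, matching "a lot of people consume little and few people consume a lot".

```python
import random

# Illustrative sketch with assumed parameters (not the paper's fit):
# sample household daily consumption from a right-skewed Gamma distribution.
random.seed(0)
SHAPE, SCALE = 0.8, 250.0     # assumed: dimensionless shape, Wh/day scale
draws = sorted(random.gammavariate(SHAPE, SCALE) for _ in range(10_000))

mean = sum(draws) / len(draws)
median = draws[len(draws) // 2]
share_below_mean = sum(d < mean for d in draws) / len(draws)
# median < mean, and well over half of the households fall below the mean
```

Sizing to the mean therefore over-serves most households while under-serving a heavy-consuming tail, which is why the paper argues against treating the demand as a constant.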

Relevance: 30.00%

Abstract:

More than eighteen percent of the world's population lives without reliable access to clean water, forced to walk long distances to collect small amounts of contaminated surface water. Carrying heavy loads of water long distances and ingesting contaminated water can lead to long-term health problems and even death. These problems affect the most vulnerable populations (women, children, and the elderly) more than anyone else, and water access is one of the most pressing issues in development today. Boajibu, a small village in Sierra Leone where the author served in the Peace Corps for two years, lacks access to clean water. Construction of a water distribution system was halted when a civil war broke out in 1992 and has not resumed since. The community currently relies on hand-dug and borehole wells that can become dirty during the dry season, which forces people to drink contaminated water or to travel far to collect clean water. This report is intended to provide a design for the system as it was meant to be built. The design was completed based on the taps present, interviews with local community leaders, local surveying, and points taken with a GPS. It is a gravity-fed branched water system supplied by a natural spring on a hill adjacent to Boajibu, whose flow rate is unknown. There has to be enough flow from the spring over a 24-hour period to meet the demands of the users on a daily basis, i.e. to provide continuous flow. If the spring yields less than this amount, the system must provide intermittent flow, restricted to a few hours a day. A minimum flow rate of 2.1 liters per second was found to be necessary to provide continuous flow to the users of Boajibu; below this, only intermittent flow can be provided.
In order to aid the construction of a distribution system in the absence of someone with formal engineering training, a table was created detailing water storage tank sizing for a range of possible source flow rates. A builder can interpolate in the table using the measured source flow rate to obtain the tank size. However, flow rates below 2.1 liters per second cannot be used with the table; in this case, the builder should size the tank to hold the water supplied overnight, since users will demand more during the day than the spring can supply and the tank will drain completely. In the developing world it is often difficult to collect enough money to fund large infrastructure projects such as a water distribution system, and frequently there is only enough to add one or two loops. It is therefore helpful to know where those loops can be placed most effectively. Candidate loops were designated for the Boajibu water distribution system, and the Adaptive Greedy Heuristic Loop Addition Selection Algorithm (AGHLASA) was used to rank their effectiveness. Loop 1, the furthest upstream, was selected because it benefited the most people for the least cost, while loops further downstream were found to be less effective because they would benefit fewer people. Further studies should be conducted on the water use habits of the people of Boajibu to more accurately predict the demands that will be placed on the system. Further population surveying should also be conducted to predict population change over time, so that appropriate capacity can be built into the system to accommodate future growth. The flow at the spring should be measured using a V-notch weir and the system adjusted accordingly. Future studies could also adjust the loop ranking method so that users who draw on the system for different lengths of time are not counted equally, and so that vulnerable users are weighted more heavily than more robust ones.
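The table-lookup idea above can be sketched as an interpolation routine. The flow/volume pairs below are invented for illustration, not taken from the report's table; only the 2.1 L/s continuous-flow cutoff comes from the text.

```python
# Hedged sketch of the report's tank-sizing table lookup (numbers invented).
TABLE = [(2.1, 30.0), (2.5, 26.0), (3.0, 22.0), (4.0, 18.0)]  # (L/s, m^3)

def tank_volume(flow_lps):
    """Interpolate required storage volume from the measured source flow."""
    if flow_lps < 2.1:
        raise ValueError(
            "below 2.1 L/s only intermittent flow is possible; "
            "size the tank to hold the overnight supply instead"
        )
    for (f0, v0), (f1, v1) in zip(TABLE, TABLE[1:]):
        if f0 <= flow_lps <= f1:
            return v0 + (v1 - v0) * (flow_lps - f0) / (f1 - f0)
    return TABLE[-1][1]   # flows beyond the table: use the largest entry
```

A builder measuring, say, a flow between two table rows would read off the linearly interpolated volume, exactly as the report intends the printed table to be used.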

Relevance: 30.00%

Abstract:

Developers attempting land assembly often face a potential holdout problem that raises the cost of development. To minimize this extra cost, developers will prefer land whose ownership is less dispersed. This creates a bias toward development at the urban fringe where average lot sizes are larger, resulting in urban sprawl. This paper examines the link between the holdout problem and urban sprawl and discusses possible remedies.

Relevance: 30.00%

Abstract:

Measuring student learning through standardized tests is a lot harder than modern education reformers would have you believe.

Relevance: 30.00%

Abstract:

There is general agreement within the scientific community that Biology is the science with the most potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, a rich variety of mathematical tools, physical techniques and computing resources permit biological experiments that were unthinkable only a few years ago. Biology is now taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insight into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas, such as human function and human disease. However, one human organ is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century; it is considered a strategic research field in both the European Union and the USA. There is therefore great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied to mapping brain activity [1]. It has important advantages over metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI) [2]: its time resolution is higher, and it is a patient-friendly clinical technique, since the measurement is performed with a wireless set-up and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, open issues remain in data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process [3]. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.

Relevance: 30.00%

Abstract:

When a structural problem is posed, the intention is usually to obtain the best solution, understanding this as the solution that, while fulfilling the structural, use, and other requirements, has the lowest physical cost. In a first approximation, the physical cost can be represented by the self-weight of the structure, which allows the search for the best solution to be framed as the search for the lightest one. From a practical point of view, obtaining good solutions (solutions whose cost is only slightly higher than that of the optimum) is as important a task as finding absolute optima, something that is in general hardly tractable.

In order to have a measure of efficiency that allows comparison between solutions, the following definition of structural efficiency is proposed: the ratio between the useful load to be supported and the total load to be accounted for (the useful load plus the self-weight). The structural form can be considered to be composed of four concepts which, together with the material, define a structure: size, schema, proportion (or slenderness), and thickness.

Galileo (1638) postulated the existence of an insurmountable size for every structural problem: the size at which the self-weight alone exhausts a structure of a given schema and proportion. That size, or structural scope, differs for each material used; the only information about the material needed to determine it is the ratio between its allowable stress and its specific weight, a characteristic length that we call the material structural scope.

For structures whose size is very small in relation to their structural scope, the above definition of efficiency is useless. In this case (structures of "null size", in which the self-weight is negligible compared with the useful load) we propose as a measure of cost the dimensionless magnitude that we call Michell's number, derived from the "quantity" introduced by A. G. M. Michell in his seminal 1904 article, developed from an 1870 lemma of J. C. Maxwell. At the end of the last century, R. Aroca combined the theories of Galileo with those of Maxwell and Michell, proposing an easily applied design rule (the GA rule) that allows the structural scope and the efficiency of a structural form to be estimated.

In this work the efficiency of truss-like structures in bending problems is studied, taking the influence of size into account. On the one hand, for structures of null size, near-optimal schemas are explored by several minimization methods, in order to obtain forms whose cost (measured by their Michell's number) is very close to that of the absolute optimum while greatly reducing their complexity. On the other hand, a method is presented for determining the structural scope of truss-like structures, taking into account the local effect of bending in their members; its results are compared with those of the GA rule, showing the conditions under which the rule applies. Finally, lines of future research are identified: the measurement of complexity, the accounting of foundation costs, and the extension of the minimization methods when self-weight is taken into account.
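The two quantities defined above admit a short worked sketch. The material values below are assumed for illustration and are not from the thesis: the material structural scope is allowable stress over specific weight, and structural efficiency is useful load over total load.

```python
# Worked sketch of the definitions above, with assumed material values.

def material_scope_m(stress_pa, specific_weight_n_m3):
    """Material structural scope: allowable stress / specific weight,
    a characteristic length in meters."""
    return stress_pa / specific_weight_n_m3

def structural_efficiency(useful_load, self_weight):
    """Useful load divided by total (useful + self-weight) load."""
    return useful_load / (useful_load + self_weight)

# Assumed values for a mild steel: 250 MPa allowable stress, 78.5 kN/m^3.
steel_scope = material_scope_m(250e6, 78.5e3)   # roughly 3.2 km
```

A structure whose size is a tiny fraction of this scope sits in the "null size" regime, where self-weight is negligible and Michell's number, rather than the efficiency ratio, is the appropriate measure of cost.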