947 results for model efficiency


Relevance:

30.00%

Publisher:

Abstract:

The CENTURY soil organic matter model was adapted to the modular format of DSSAT (Decision Support System for Agrotechnology Transfer) in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate material (SOC2), and largely inert and stable material (SOC3) (Jones et al., 2003). At the beginning of the simulation, the CENTURY model needs a value of SOC3 per soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. The model's performance when simulating SOC and nitrogen (N) dynamics depends strongly on this initialization process. The common methods to initialize the SOC pools (e.g. Basso et al., 2011) deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC, and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method to initialize the SOC pools in the DSSAT-CENTURY model from apparent soil N mineralization (Napmin) field measurements, using automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
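The pool-initialization step described above can be sketched as follows; `init_soc_pools` is a hypothetical helper (not part of DSSAT), and the 5%/95% split of the remaining SOC between SOC1 and SOC2 follows the description in the abstract.

```python
def init_soc_pools(total_soc, soc3):
    """Partition total soil organic carbon (e.g. kg C/ha) into the three
    CENTURY pools, given the stable pool SOC3 (estimated or measured)."""
    if soc3 > total_soc:
        raise ValueError("SOC3 cannot exceed total SOC")
    remaining = total_soc - soc3
    soc1 = 0.05 * remaining  # microbial / active material
    soc2 = 0.95 * remaining  # intermediate material
    return soc1, soc2, soc3

soc1, soc2, soc3 = init_soc_pools(total_soc=30000.0, soc3=16000.0)
```

With a stable pool of 16,000 kg C/ha out of 30,000 kg C/ha total, the sketch assigns 700 kg to SOC1 and 13,300 kg to SOC2.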

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT

In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production-line level.
Consequently, real-time energy consumption data from the manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop-floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished during four main phases (i.e., studies). In the first study, relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data to improve energy efficiency. Considering variable energy prices over the course of a day, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs for single-machine production scheduling. The model makes decisions at the machine level to determine the launch times for job processing, idle times, when the machine must be shut down, and the appropriate times to turn it back on and off. This model enables the operations manager to implement the least expensive production schedule during a production shift.
In the third study, the research provides a methodology to help managers implement the IoT at the production-system level; it includes an analysis of the current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy KPIs have been used to evaluate energy efficiency. The goal of the fourth study is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete factories to specify the energy consumption, CO2 emissions, and cost of the energy consumed at the operation, product, and production-order levels, while considering the different energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of the thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the maturity level of the factory, its targets, its production type, etc. The thesis also shows that significant reductions in energy costs can be achieved simply by avoiding the high-price periods of the day. Furthermore, the thesis identifies the level at which energy consumption is monitored (i.e., the machine level), the time interval, and the level of energy data analysis as important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when production processes and their data are highly standardized) is essential to enable factories to specify the amount and cost of energy consumed, as well as the CO2 emitted, while producing a product, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
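The kind of decision supported by the second study's scheduling model can be illustrated with a toy brute-force sketch (this is not the thesis's mathematical model; the prices, job length, and machine power below are made-up values): pick the start hour of a job so that its energy cost under time-varying prices is minimized.

```python
def cheapest_start(prices, duration, power_kw):
    """Return (start_hour, cost) for running a job of `duration`
    consecutive hours at the cheapest point of the price curve."""
    best = None
    for start in range(len(prices) - duration + 1):
        cost = power_kw * sum(prices[start:start + duration])
        if best is None or cost < best[1]:
            best = (start, cost)
    return best

# Hourly prices in EUR/kWh (made-up values for one shift)
prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25, 0.32, 0.29]
start, cost = cheapest_start(prices, duration=3, power_kw=10.0)  # start=2
```

The same exhaustive idea extends (at higher cost) to multiple jobs, idle periods, and shutdown/restart decisions, which is where a proper mathematical programming model such as the one in the thesis becomes necessary.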

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT

Due to the increase of huge data volumes, a new parallel computing paradigm to process big data in an efficient way has arisen. Many of these systems, called data-intensive computing systems, follow the Google MapReduce programming model. The main advantage of these systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, such scenarios are not realistic. Consequently, these frameworks include fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same tradeoff between fault-tolerance capabilities (or, more generally, reliability capabilities) and cost. In this thesis, we have addressed the coexistence of reliability and resource efficiency in MapReduce-based systems, looking for methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and finally (iii) a novel feedback-based resource allocation system at the container level.
Finally, our generic contributions have been instantiated for the Hadoop YARN architecture, which is the state-of-the-art framework in the data-intensive computing systems community nowadays. The thesis demonstrates how all our approaches outperform Hadoop YARN in terms of reliability and resource efficiency.
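As a generic illustration of the failure-detector abstraction mentioned in contribution (i) above, the sketch below shows a textbook timeout-based heartbeat detector; it is an illustration of the general concept, not the formalization proposed in the thesis.

```python
import time

class HeartbeatDetector:
    """Suspects a node when no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        # Record the latest heartbeat time for this node.
        self.last_seen[node] = time.monotonic() if now is None else now

    def suspected(self, node, now=None):
        # A node is suspected if it never reported or went silent too long.
        now = time.monotonic() if now is None else now
        last = self.last_seen.get(node)
        return last is None or now - last > self.timeout

d = HeartbeatDetector(timeout=2.0)
d.heartbeat("worker-1", now=100.0)
```

A detector of this kind is "unreliable" in the classical sense: a slow network can make it suspect a live node, which is why formalizing its accuracy and completeness properties matters.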

Relevance:

30.00%

Publisher:

Abstract:

This paper focuses on the parallelization of an ocean model on current multicore-processor-based cluster architectures, applied to an irregular computational mesh, with the aim of maximizing the efficiency of the computational resources used. To make the best use of these architectures, the parallelization addresses all the hardware levels of modern supercomputers: first, exploiting the internal parallelism of the CPU through vectorization; second, taking advantage of the multiple cores of each node using OpenMP; and finally, distributing the computational mesh across the cluster nodes, using MPI for communication among them. The speedup obtained with each parallelization technique, as well as the combined overall speedup, has been measured for the western Mediterranean Sea for different cluster configurations, achieving a speedup factor of 73.3 using 256 processors. The results also show the efficiency achieved in the different cluster nodes and the advantages obtained by combining OpenMP and MPI versus using only OpenMP or only MPI. Finally, the scalability of the model has been analysed by examining computation and communication times, as well as the communication and synchronization overhead due to parallelization.
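For context, the reported speedup of 73.3 on 256 processors corresponds to a parallel efficiency of roughly 29%. The second helper below additionally assumes Amdahl's law to back out an implied serial fraction, which the paper itself does not invoke.

```python
def parallel_efficiency(speedup, processors):
    """Achieved speedup as a fraction of ideal linear speedup."""
    return speedup / processors

def amdahl_serial_fraction(speedup, processors):
    """Serial fraction f implied by Amdahl's law, S = 1 / (f + (1 - f) / P)."""
    return (processors / speedup - 1) / (processors - 1)

eff = parallel_efficiency(73.3, 256)   # ~0.29
f = amdahl_serial_fraction(73.3, 256)  # ~0.01
```

Even a serial (or communication-bound) fraction around 1% is enough to cap the speedup well below linear at 256 processors, which is why the paper's overhead analysis matters.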

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a new model for characterizing the energetic behavior of grid-connected PV inverters. The model has been obtained from a detailed study of the main loss processes in small-size PV inverters on the market. The main advantage of the method used is that it yields a model combining two usually antagonistic features: it is simple, easy to compute and apply, and at the same time accurate. One of the main features of this model is how it handles maximum power point tracking (MPPT) and efficiency: in both parts the model uses the same approach, based on two resistive elements that simulate the losses inherent to each parameter. This makes the model easy to implement, compact, and refined. The model presented here also includes other parameters, such as the start threshold, standby consumption, and islanding behavior. In order to validate the model, the values of all the parameters listed above have been obtained and adjusted using field measurements for several commercial inverters, and the behavior of the model applied to a particular inverter has been compared with real data under different working conditions, taken from a facility located in Madrid. The results show a good fit between the model values and the real data. As an example, the model has been implemented in the PSPICE electronic simulator, and this approach has been used to teach grid-connected PV systems. The use of this model for the maintenance of working PV facilities is also shown.
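The paper's loss model is built from resistive elements; a commonly used approximation with the same flavor (illustrative coefficients, not the paper's fitted values) expresses inverter losses as a quadratic function of normalized output power:

```python
def inverter_efficiency(p_out, k0=0.01, k1=0.02, k2=0.05):
    """Efficiency at normalized output power p_out (0..1) under a
    quadratic loss model: k0 = standby/self-consumption, k1 = linear
    (voltage-drop-like) losses, k2 = resistive (I^2*R-like) losses.
    Coefficients are illustrative, not fitted values from the paper."""
    losses = k0 + k1 * p_out + k2 * p_out ** 2
    return p_out / (p_out + losses)

eta_half = inverter_efficiency(0.5)  # efficiency near half load
```

The characteristic shape follows directly: efficiency collapses at very low power (the constant term dominates) and droops slightly at full power (the resistive term dominates), peaking at intermediate load.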

Relevance:

30.00%

Publisher:

Abstract:

The energy spectrum of the confined states of a quantum dot intermediate band (IB) solar cell is calculated with a simplified model. Two peaks are usually visible at the lowest energy side of the subbandgap quantum-efficiency spectrum in these solar cells. They can be attributed to photon absorption between well-defined states. As a consequence, the horizontal size of the quantum dots can be determined, and the conduction (valence) band offset is also determined if the valence (conduction) offset is known.

Relevance:

30.00%

Publisher:

Abstract:

Acknowledgments. Financial support: HERU and HSRU receive a core grant from the Chief Scientist's Office of the Scottish Government Health and Social Care Directorates, and the Centre for Clinical Epidemiology & Evaluation is funded by the Vancouver Coastal Health Authority. The model used for the illustrative case study in this paper was developed as part of an NHS Technology Assessment Review, funded by the National Institute for Health Research (NIHR) Health Technology Assessment Program (project number 09/146/01). The views and opinions expressed in this paper are those of the authors and do not necessarily reflect those of the Scottish Government, the NHS, Vancouver Coastal Health, the NIHR HTA Program, or the Department of Health. The authors wish to thank Kathleen Boyd and members of the audience at the UK Health Economists' Study Group for comments received on an earlier version of this paper. We also wish to thank Cynthia Fraser (University of Aberdeen) for literature searches undertaken to inform the manuscript, and Mohsen Sadatsafavi (University of British Columbia) for comments on an earlier draft.

Relevance:

30.00%

Publisher:

Abstract:

A detailed quantitative kinetic model for the polymerase chain reaction (PCR) is developed, which allows us to predict the probability of replication of a DNA molecule in terms of the physical parameters involved in the system. The important issue of the determination of the number of PCR cycles during which this probability can be considered to be a constant is solved within the framework of the model. New phenomena of multimodality and scaling behavior in the distribution of the number of molecules after a given number of PCR cycles are presented. The relevance of the model for quantitative PCR is discussed, and a novel quantitative PCR technique is proposed.
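The core idea above, replication as a per-cycle probabilistic event, can be illustrated with a minimal branching-process simulation. This is an illustration of the general principle only, not the paper's detailed kinetic model: here the replication probability p is an arbitrary constant, whereas the paper analyses for how many cycles it may be treated as constant.

```python
import random

def pcr_simulate(n0, cycles, p, rng):
    """Branching-process view of PCR: each molecule is copied with
    probability p in each cycle, so E[N] = n0 * (1 + p) ** cycles."""
    n = n0
    for _ in range(cycles):
        n += sum(1 for _ in range(n) if rng.random() < p)
    return n

rng = random.Random(42)
n = pcr_simulate(n0=10, cycles=15, p=0.8, rng=rng)
expected = 10 * 1.8 ** 15  # ~6.7e4 molecules on average
```

Running many such simulations from a small n0 also shows the large run-to-run spread in final counts that motivates the paper's analysis of the distribution of molecule numbers for quantitative PCR.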

Relevance:

30.00%

Publisher:

Abstract:

The base following stop codons in mammalian genes is strongly biased, suggesting that it might be important for the termination event. This proposal has been tested experimentally, both in vivo by using the human type I iodothyronine deiodinase mRNA and the recoding event at the internal UGA codon, and in vitro by measuring the ability of each of the 12 possible 4-base stop signals to direct the eukaryotic polypeptide release factor to release a model peptide, formylmethionine, from the ribosome. The internal UGA in the deiodinase mRNA is used as a codon for incorporation of selenocysteine into the protein. Changing the base following this UGA codon affected the ratio of termination to selenocysteine incorporation in vivo at this codon: 1:3 (C or U) and 3:1 (A or G). These UGAN sequences have the same order of efficiency of termination as was found with the in vitro termination assay (4th base: A ≈ G ≫ C ≈ U). The efficiency of in vitro termination varied in the same manner over a 70-fold range for the UAAN series and over an 8-fold range for the UGAN and UAGN series. There is a correlation between the strength of the signals and how frequently they occur at natural termination sites. Together these data suggest that the base following the stop codon influences translational termination efficiency as part of a larger termination signal in the expression of mammalian genes.

Relevance:

30.00%

Publisher:

Abstract:

Scrapie is a transmissible neurodegenerative disease that appears to result from an accumulation in the brain of an abnormal protease-resistant isoform of prion protein (PrP) called PrPsc. Conversion of the normal, protease-sensitive form of PrP (PrPc) to protease-resistant forms like PrPsc has been demonstrated in a cell-free reaction composed largely of hamster PrPc and PrPsc. We now report studies of the species specificity of this cell-free reaction using mouse, hamster, and chimeric PrP molecules. Combinations of hamster PrPc with hamster PrPsc and mouse PrPc with mouse PrPsc resulted in the conversion of PrPc to protease-resistant forms. Protease-resistant PrP species were also generated in the nonhomologous reaction of hamster PrPc with mouse PrPsc, but little conversion was observed in the reciprocal reaction. Glycosylation of the PrPc precursors was not required for species specificity in the conversion reaction. The relative conversion efficiencies correlated with the relative transmissibilities of these strains of scrapie between mice and hamsters. Conversion experiments performed with chimeric mouse/hamster PrPc precursors indicated that differences between PrPc and PrPsc at residues 139, 155, and 170 affected the conversion efficiency and the size of the resultant protease-resistant PrP species. We conclude that there is species specificity in the cell-free interactions that lead to the conversion of PrPc to protease-resistant forms. This specificity may be the molecular basis for the barriers to interspecies transmission of scrapie and other transmissible spongiform encephalopathies in vivo.

Relevance:

30.00%

Publisher:

Abstract:

The EPA promulgated the Exceptional Events Rule codifying guidance regarding exclusion of monitoring data from compliance decisions due to uncontrollable natural or exceptional events. This capstone examines documentation systems utilized by agencies requesting data be excluded from compliance decisions due to exceptional events. A screening tool is developed to determine whether an event would meet exceptional event criteria. New data sources are available to enhance analysis but evaluation shows many are unusable in their current form. The EPA and States must collaborate to develop consistent evaluation methodologies documenting exceptional events to improve the efficiency and effectiveness of the new rule. To utilize newer sophisticated data, consistent, user-friendly translation systems must be developed.

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses how the productivity of the SUMA tax offices located in Spain evolved between 2004 and 2006, using Malmquist indices based on Data Envelopment Analysis (DEA) models. It goes a step further by applying a smoothed bootstrap procedure, which improves the quality of the results by generalising the samples, so that the conclusions drawn from them can be applied to increase productivity levels. Additionally, the productivity effect is divided into two components, efficiency change and technological change, with the objective of clarifying the roles played by the managers and by the level of technology in the final performance figures.
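The decomposition referred to in the last sentence can be written as M = EC × TC. The sketch below computes it from output-distance-function values; the numeric inputs are hypothetical, since in the paper these values come from the DEA models.

```python
import math

def malmquist(d0_t0, d0_t1, d1_t0, d1_t1):
    """Output-oriented Malmquist index between periods 0 and 1.
    d{f}_t{p}: distance function relative to the period-f frontier,
    evaluated at period-p data.  Returns (M, EC, TC) with M = EC * TC."""
    ec = d1_t1 / d0_t0                                 # efficiency (catch-up) change
    tc = math.sqrt((d0_t1 / d1_t1) * (d0_t0 / d1_t0))  # technological (frontier) change
    return ec * tc, ec, tc

# Hypothetical distance-function values for one tax office
m, ec, tc = malmquist(d0_t0=0.80, d0_t1=1.05, d1_t0=0.75, d1_t1=0.95)
```

A value of M above 1 indicates productivity growth; the split into EC and TC is exactly what lets the paper attribute it to managerial catch-up versus frontier shift.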

Relevance:

30.00%

Publisher:

Abstract:

The continuous improvement of management and assessment processes for curricular external internships has led a group of university teachers specialised in this area to develop a mixed measurement model that combines the verification of skill acquisition by students choosing external internships with the satisfaction of the parties involved in the process: academics, educational tutors at companies and organisations, and administration and services personnel. The experience, developed at the University of Alicante, has been carried out in the degrees of Business Administration and Management, Business Studies, Economics, Advertising and Public Relations, Sociology and Social Work, all part of the Faculty of Economics and Business. By designing and managing closed standardised interviews and other research tools, validated outside the centre, a system of continuous improvement and quality assurance has been created, clearly contributing to the gradual increase in the number of students with internships in this Faculty, as well as to the improvement in satisfaction, efficiency and efficacy indicators at a global level. As this experience of educational innovation has shown, the acquisition of curricular knowledge, skills, abilities and competences by the students is directly correlated with the satisfaction of the parties involved in a process that takes the student beyond the physical borders of a university campus. Ensuring the latter is made easier by the implementation of a mixed assessment method, combining continuous and final assessment, characterised by its rigour and simple management. This report presents that model, itself subject to persistent and continuous control, in which all the parties involved in the external internships take part.
Its short-term results imply an increase, estimated at 15% for the last academic year, in the number of students choosing curricular internships and, in the medium and long term, a closer interweaving between the academic world and its social and productive environment, in both the business and institutional areas. The potential of this assessment model lies not only in the quality of its measurement tools, but also in the effects of its use on the various groups and in the actions carried out as a result of its implementation, which, without any doubt and as shown below, are the real guarantee of continuous improvement.

Relevance:

30.00%

Publisher:

Abstract:

A hydrological–economic model is introduced to describe the dynamics of groundwater-dependent economic activities (agriculture and tourism) for sustainable use in sparse-data drylands. The Amtoudi Oasis, a remote area of southern Morocco on the northern edge of the Sahara, attractive for tourism and showing evidence of groundwater degradation, was chosen to demonstrate the model's operation. The governing system variables were identified and linked through System Dynamics (SD) causal diagrams, and basic formulations were programmed into a model with two modules coupled by the nexus 'pumping': (1) the hydrological module represents the net groundwater balance (G) dynamics; and (2) the economic module reproduces the variation in the consumers of water, both the population and tourists. The model was operated under a similar influx of tourists and different scenarios of water availability, such as the wet 2009–2010 and the average 2010–2011 hydrological years. The rise in international tourism is identified as the main driving force reducing emigration and introducing new social habits in the population, in particular concerning water consumption. The urban water allotment (PU) was doubled for a net increase of fewer than 100 inhabitants in recent decades, while the water allocation for agriculture (PI), the largest consumer of water, has remained constant for decades. Although the 2-year monitoring period is not long enough to draw long-term conclusions, the groundwater imbalance was reflected by a net aquifer recharge (R) less than PI + PU (G < 0) in the average year 2010–2011, with net lateral inflow from adjacent Cambrian formations being the largest recharge component. R is expected to be much less than PI + PU in recurrent dry spells. Some low-technology actions are tentatively proposed to mitigate groundwater degradation, such as wastewater capture, treatment, and reuse for irrigation; storm-water harvesting for irrigation; and active maintenance of the irrigation system to improve its efficiency.
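The balance driving the hydrological module reduces to G = R − (PI + PU). A trivial sketch with illustrative volumes (not the study's measured values):

```python
def net_groundwater_balance(recharge, p_irrigation, p_urban):
    """Net annual groundwater balance G = R - (PI + PU); G < 0 means
    the aquifer is being depleted, as observed in 2010-2011."""
    return recharge - (p_irrigation + p_urban)

# Illustrative annual volumes in hm3/yr (not the study's measured values)
g = net_groundwater_balance(recharge=1.2, p_irrigation=1.1, p_urban=0.2)  # G < 0
```

In the SD model this balance is evaluated each time step, with pumping (PI + PU) driven by the economic module's population and tourist numbers.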

Relevance:

30.00%

Publisher:

Abstract:

A driving argument behind recent EU treaty reforms was that more qualified majority voting (QMV) was required to reduce the potential dangers of legislative paralysis caused by enlargement. Whilst existing literature on enlargement mostly focuses on the question of what changed in the legislative process after the 2004 enlargement, the question of why these changes occurred has been given far less attention. Through the use of a single veto player theoretical model, this paper seeks to test and explain whether enlargement reduces the efficiency of the legislative process and alters the type of legislation produced, and whether QMV can compensate for these effects. In doing this, it offers a theoretical explanation as to why institutional changes that alter the level of cohesion between actors in the Council have an influence over both the legislative process and its outcomes.