997 results for Multi choices


Relevance: 70.00%

Abstract:

Goal Programming (GP) is an important analytical approach devised to solve many real-world problems. The first GP model is known as Weighted Goal Programming (WGP). However, Multi-Choice Aspiration Level (MCAL) problems cannot be solved by current GP techniques. In this paper, we propose a Multi-Choice Mixed Integer Goal Programming (MCMI-GP) model for the aggregate production planning of a Brazilian sugar and ethanol milling company. The MCMI-GP model was based on traditional selection and process methods for the design of lots, representing the production system of sugar, alcohol, molasses and derivatives. The research covers decisions on the agricultural and cutting stages, sugarcane loading and transportation by suppliers and, especially, energy cogeneration decisions; that is, the choice of production process, including storage and distribution stages. The MCMI-GP model allows decision-makers to set multiple aspiration levels for their problems, addressing both "the more/higher, the better" and "the less/lower, the better" aspiration levels. The proposed model was applied to real problems in a Brazilian sugar and ethanol mill, producing interesting results that are reported and commented upon herein. A comparison between the MCMI-GP and WGP models was also made using these real cases. © 2013 Elsevier Inc.
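The multi-choice idea behind the abstract above can be sketched in a few lines: each goal carries several candidate aspiration levels, a binary choice selects one level per goal, and the objective minimises the weighted deviations from the chosen levels. The numbers, goals and brute-force search below are illustrative, not the paper's MCMI-GP formulation.

```python
from itertools import product

# Toy multi-choice goal programming sketch (NOT the paper's MCMI-GP model):
# each goal has several candidate aspiration levels; we pick exactly one
# level per goal and minimise the weighted sum of deviations of an
# achievable value x from the chosen aspirations.

def solve_mcgp(feasible_x, aspirations, weights):
    """Brute-force search over x and one aspiration level per goal."""
    best = None
    for x in feasible_x:
        for choice in product(*aspirations):  # one level per goal
            # weighted deviation of x from each chosen aspiration level
            cost = sum(w * abs(x - a) for w, a in zip(weights, choice))
            if best is None or cost < best[0]:
                best = (cost, x, choice)
    return best

# hypothetical goals: goal 1 can aspire to 90 or 100 units, goal 2 to 80 or 95
cost, x, levels = solve_mcgp(feasible_x=range(80, 101),
                             aspirations=[[90, 100], [80, 95]],
                             weights=[1.0, 2.0])
print(cost, x, levels)  # → 5.0 95 (90, 95)
```

A real MCMI-GP would encode the level selection with binary variables in a mixed-integer programme rather than enumerate, but the objective structure is the same.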

Relevance: 40.00%

Abstract:

The complexity of current and emerging architectures gives users options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow-water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
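The two-part modelling methodology described above can be sketched as follows: benchmark each work type (array updates, halo exchanges) at a few sizes, then interpolate to predict an arbitrary deployment. The function names and timing numbers are invented for illustration, not measurements from the Cray XE6 study.

```python
# Sketch of the benchmark-driven modelling idea: measure each work type at a
# few problem sizes, then interpolate linearly to predict the cost of an
# arbitrary scenario. All numbers here are hypothetical.

def interp(points, n):
    """Piecewise-linear interpolation of runtime vs. problem size."""
    pts = sorted(points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= n <= x1:
            return y0 + (y1 - y0) * (n - x0) / (x1 - x0)
    raise ValueError("size outside benchmarked range")

# hypothetical per-step benchmark results: (local grid points, seconds)
compute_bench = [(1_000, 0.002), (10_000, 0.021), (100_000, 0.230)]
halo_bench = [(100, 0.0005), (1_000, 0.0011), (10_000, 0.0060)]

def predict_step_time(local_points, halo_points):
    # total time per step = loop-based array updates + halo exchange
    return interp(compute_bench, local_points) + interp(halo_bench, halo_points)

print(predict_step_time(50_000, 5_000))
```

Changing the domain decomposition changes both `local_points` and `halo_points`, so the composed model can rank deployment scenarios without running each one.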

Relevance: 30.00%

Abstract:

The valuation of human costs is necessary, but this task poses many methodological problems. A team composed of a philosopher, a psychologist and a physician has been working with economists in order to examine the meaning that can be attached to the preferences stated during surveys on human costs using QALY methods. These methods are often used to value the impact of a health impairment on people's quality of life. Within the framework assumed by the economic theory of well-being, the methods hypothesize that people's choices depend mainly on cognitive work. Qualitative interviews show that the psychological process by which the stated preferences are constructed extends well beyond this framework. In this paper the authors briefly examine the factors which affect these preferences. They conclude that QALY methods do not seem able to assess quality of life, nor to value the damage to quality of life that an impairment may entail.

Relevance: 30.00%

Abstract:

The demand for new telecommunication services requiring higher capacities, data rates and different operating modes has motivated the development of a new generation of multi-standard wireless transceivers. In multi-standard design, the sigma-delta-based ADC is one of the most popular choices. To this end, in this paper we present a cascaded 2-2-2 reconfigurable sigma-delta modulator that can handle the GSM, WCDMA and WLAN standards. The modulator makes use of a low-distortion swing-suppression topology which is highly suitable for wideband applications. In GSM mode, only the first stage (a 2nd-order Σ-Δ ADC) is used, achieving a peak SNDR of 88 dB with an oversampling ratio of 160 for a bandwidth of 200 kHz. In WCDMA mode a 2-2 cascaded structure (4th order) is turned on, with 1 bit in the first stage and 2 bits in the second stage, achieving a 74 dB peak SNDR with an oversampling ratio of 16 for a bandwidth of 2 MHz. Finally, a 2-2-2 cascaded MASH architecture with 4 bits in the last stage is proposed to achieve a peak SNDR of 58 dB for WLAN over a bandwidth of 20 MHz. The novelty lies in the fact that the unused blocks of the second and third stages can be made inactive to achieve low power consumption. The modulator is designed in TSMC 0.18 µm CMOS technology and operates from a 1.8 V supply.
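The order/bits/OSR trade-off the abstract relies on follows the standard textbook bound for an ideal sigma-delta modulator; the snippet below evaluates that ideal formula, not the circuit-level figures reported for the 2-2-2 design (real designs, like the 88 dB quoted for GSM mode, fall below the ideal bound).

```python
import math

# Textbook peak SQNR of an ideal L-th order sigma-delta modulator with a
# B-bit quantizer and oversampling ratio OSR:
#   SQNR(dB) = 6.02*B + 1.76 - 10*log10(pi^(2L) / (2L+1)) + (20L+10)*log10(OSR)

def ideal_peak_sqnr_db(order, bits, osr):
    return (6.02 * bits + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (20 * order + 10) * math.log10(osr))

# e.g. the GSM operating point: 2nd-order stage, 1-bit quantizer, OSR = 160
print(round(ideal_peak_sqnr_db(2, 1, 160), 1))  # ideal bound, above the 88 dB achieved
```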

Relevance: 30.00%

Abstract:

Purpose – Multinationals have always needed an operating model that works – an effective plan for executing their most important activities at the right levels of their organization, whether globally, regionally or locally. The choices involved in these decisions have never been obvious, since international firms have consistently faced trade-offs between tailoring approaches for diverse local markets and leveraging their global scale. This paper seeks a more in-depth understanding of how successful firms manage the global-local trade-off in a multipolar world. Design/methodology/approach – This paper utilizes a case study approach based on in-depth senior executive interviews at several telecommunications companies including Tata Communications. The interviews probed the operating models of the companies we studied, focusing on their approaches to organization structure, management processes, management technologies (including information technology (IT)) and people/talent. Findings – Successful companies balance global-local trade-offs by taking a flexible and tailored approach toward their operating-model decisions. The paper finds that successful companies, including Tata Communications, which is profiled in-depth, are breaking up the global-local conundrum into a set of more manageable strategic problems – what the authors call "pressure points" – which they identify by assessing their most important activities and capabilities and determining the global and local challenges associated with them. They then design a different operating model solution for each pressure point, and repeat this process as new strategic developments emerge. By doing so they not only enhance their agility, but they also continually calibrate that crucial balance between global efficiency and local responsiveness.
Originality/value – This paper takes a unique approach to operating model design, finding that an operating model is better viewed as several distinct solutions to specific “pressure points” rather than a single and inflexible model that addresses all challenges equally. Now more than ever, developing the right operating model is at the top of multinational executives' priorities, and an area of increasing concern; the international business arena has changed drastically, requiring thoughtfulness and flexibility instead of standard formulas for operating internationally. Old adages like “think global and act local” no longer provide the universal guidance they once seemed to.

Relevance: 30.00%

Abstract:

Includes bibliography.

Relevance: 30.00%

Abstract:

Due to its practical importance and inherent complexity, the optimisation of drinking-water distribution networks has been the subject of extensive study for the past 30 years. The optimisation typically involves sizing the pipes in the water distribution network (WDN), optimising specific components of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera networks), solving a multi-objective optimisation problem (MOOP) for each. In both cases the two main objectives were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose a decision-support system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL calls the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto front of each configuration. The first experiment concerned the Anytown network: a large network with a pumping station of four fixed-speed parallel pumps boosting the flow. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. This achieved substantial energy and cost savings, together with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump.
The optimisation problem was the same: minimisation of energy consumption and, in parallel, minimisation of TNps, again using GANetXL. The main scope was to carry out a wide variety of experiments over different configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to roam among these choices until a satisfactory result shows that a good optimisation point has been reached.
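At the core of the NSGA-II runs described above is Pareto dominance over the two objectives (energy, pump switches). A minimal non-dominated filter can be sketched as follows; the configuration values are invented for illustration, not results from the thesis.

```python
# Minimal sketch of the Pareto filtering at the heart of NSGA-II style
# multi-objective optimisation: keep the configurations for which no other
# configuration is at least as good in both objectives (energy, switches).
# Values are illustrative only.

def pareto_front(solutions):
    """solutions: list of (energy_kwh, pump_switches); both minimised."""
    front = []
    for s in solutions:
        dominated = any(o != s and o[0] <= s[0] and o[1] <= s[1]
                        for o in solutions)
        if not dominated:
            front.append(s)
    return sorted(front)

configs = [(120.0, 6), (100.0, 10), (100.0, 8), (140.0, 4), (150.0, 5)]
print(pareto_front(configs))  # → [(100.0, 8), (120.0, 6), (140.0, 4)]
```

NSGA-II adds non-dominated sorting into ranked fronts plus crowding-distance selection, but the trade-off curve a decision-maker reads (chapters 7 and 8) is exactly this kind of front.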

Relevance: 30.00%

Abstract:

The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase of 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10^−15 yr W m^−2 per kg CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10^−15 yr W m^−2 per kg CO2. Estimates of the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment Reports and our multi-model best estimate all agree to within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions than for present day, and lower for smaller pulses than for larger pulses. In contrast, the responses in temperature, sea level and ocean heat content are less sensitive to these choices.
Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2, and hence the GWP, is the time horizon.
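The airborne fraction quoted above can be reproduced approximately from a sum-of-exponentials fit to the multi-model CO2 impulse response. The coefficients below follow the fit commonly used with this intercomparison, but they are quoted from memory and should be treated as an assumption for illustration.

```python
import math

# Approximate multi-model mean impulse response function (IRF) for a CO2
# pulse, as a sum of exponentials. Coefficients are ASSUMED (quoted from the
# fit usually associated with this intercomparison); treat as illustrative.
A = [0.2173, 0.2240, 0.2824, 0.2763]       # amplitudes (sum to 1)
TAU = [float("inf"), 394.4, 36.54, 4.304]  # time scales in years

def airborne_fraction(t_years):
    """Fraction of a CO2 pulse remaining in the atmosphere after t years."""
    return sum(a * math.exp(-t_years / tau) for a, tau in zip(A, TAU))

# Compare with the abstract: 25 ± 9 % remaining after 1000 yr
print(round(100 * airborne_fraction(1000), 1))
```

The AGWP then follows by multiplying the time integral of this response over the chosen horizon by the radiative efficiency of CO2, which is why the horizon choice dominates the result.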

Relevance: 30.00%

Abstract:

Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists’ classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels.
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
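The "probabilistic consensus among the most similar interneurons" idea can be sketched as a nearest-neighbour average of label distributions. This simplification ignores the Bayesian-network encoding of the joint label distribution (the actual LBN machinery); all names and numbers are invented for illustration.

```python
# Simplified sketch of the prediction idea (not the authors' LBN machinery):
# represent each training interneuron's label as a probability distribution
# over types, find the k nearest neighbours in morphometric space, and
# average their distributions to form a probabilistic consensus.

def consensus_label(query, training, k=3):
    """training: list of (morphometrics, {type: probability}) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda item: dist(query, item[0]))[:k]
    types = {t for _, probs in nearest for t in probs}
    return {t: sum(probs.get(t, 0.0) for _, probs in nearest) / k
            for t in types}

# hypothetical morphometric vectors and expert label distributions
training = [
    ((1.0, 0.2), {"basket": 0.9, "chandelier": 0.1}),
    ((1.1, 0.3), {"basket": 0.6, "chandelier": 0.4}),
    ((5.0, 4.0), {"Martinotti": 1.0}),
]
print(consensus_label((1.05, 0.25), training, k=2))
```

A crisp prediction, as evaluated in the paper's comparison with related work, would then just take the highest-probability type from the consensus distribution.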


Relevance: 30.00%

Abstract:

Alterations to the climate system caused by increased concentrations of greenhouse gases (GHG) in the atmosphere will have important implications for agriculture, the environment and society. Agriculture is a major source of greenhouse gas emissions (globally it contributes 12% of total GHG), and at the same time it can be part of the solution for mitigating emissions and adapting to climate change. Actions facing the challenge of climate change should prioritize agricultural adaptation and mitigation strategies within the policy-development agenda. Agriculture is therefore crucial for the conservation and sustainable use of natural resources, which are already subject to climate change impacts, while it must also supply food for a growing population. Coordination between current climate and agricultural policy strategies is therefore necessary. The concept of climate-smart agriculture has emerged to integrate all of these services of agricultural production. When evaluating options to reduce the threats of climate change to agriculture and the environment, two research questions arise: • What information is needed to define smart farming practices? • What factors influence the implementation of smart farming practices? This Thesis seeks to provide relevant information on these general questions in order to support the development of climate policy. It focuses on Mediterranean agricultural systems. This Thesis integrates different methods and tools to evaluate farm-management and policy alternatives with the potential to respond to the needs of climate change mitigation and adaptation. The research includes quantitative and qualitative approaches and integrates agronomic, climate and socioeconomic variables at local and regional scale.
The research provides a compilation of data on existing experimental evidence and an integrated study of farmer behaviour and possible alternatives for change (for example, technology, farm management and climate policy). The case studies of this Thesis – the Doñana wetland (S Spain) and the Aragón region (NE Spain) – illustrate two representative Mediterranean systems where the intensive use of agriculture and the semi-arid conditions are already a concern. For this reason, the adoption of mitigation and adaptation strategies can play a very important role in striking a balance among equity, economic security and the environment under climate change scenarios. The multidisciplinary methodology of this Thesis includes a wide range of approaches and methods for data collection and analysis. Data collection draws on a literature review of experimental evidence, national and international public databases, and primary data gathered through semi-structured interviews with stakeholders (public administrations, policy makers, agricultural advisors, scientists and farmers) and farmer surveys. The analytical methods include meta-analysis, water-resource management modelling (the WAAPA model), multi-criteria decision analysis, statistical methods (logistic and Poisson regression models) and science-based policy tools. The meta-analysis identifies the critical temperature thresholds that affect the growth and development of the three main crops for food security (rice, maize and wheat). The WAAPA model evaluates the effect of climate change on water management for agriculture under different policy alternatives and climate scenarios.
The multi-criteria analysis evaluates the feasibility of mitigation farming practices under two climate scenarios according to the perceptions of different experts. The statistical methods analyse the drivers of, and barriers to, the adoption of mitigation farming practices. The science-based policy tools show the potential and the cost of reducing GHG through farming practices. Overall, the results of this Thesis provide information on climate change adaptation and mitigation at farm level in order to develop a more integrated climate policy and to assist farmers in decision making. The results show the threshold temperatures and the response of rice, maize and wheat to extreme temperatures, values that are very useful for future impact and adaptation studies. The results obtained also provide a set of flexible strategies for adaptation and mitigation at local scale, along with a better understanding of the barriers to and incentives for their adoption. The capacity to improve water availability and the GHG abatement potential and cost have been estimated for these strategies in the case studies. These results could support the development of local adaptation plans and regional mitigation policies, especially in Mediterranean regions. ABSTRACT Alterations in the climatic system due to increased atmospheric concentrations of greenhouse gases (GHG) are expected to have important implications for agriculture, the environment and society. Agriculture is an important source of GHG emissions (12% of global anthropogenic GHG), but it is also part of the solution to mitigate emissions and to adapt to climate change. Responses to the challenge of climate change should place agricultural adaptation and mitigation strategies at the heart of the climate change agenda.
Agriculture is crucial for the conservation and sustainable use of natural resources, which already stand under pressure due to climate change impacts, increased population, pollution and fragmented and uncoordinated climate policy strategies. The concept of climate-smart agriculture has emerged to encompass all these issues as a whole. When assessing choices aimed at reducing threats to agriculture and the environment under climate change, two research questions arise: • What information defines smart farming choices? • What drives the implementation of smart farming choices? This Thesis aims to provide information on these broad questions in order to support climate policy development, focusing on some Mediterranean agricultural systems. This Thesis integrates methods and tools to evaluate potential farming and policy choices that respond to climate change mitigation and adaptation. The assessment involves both quantitative and qualitative approaches and integrates agronomic, climate and socioeconomic variables at local and regional scale. The assessment includes the collection of data on previous experimental evidence and the integration of farmer behaviour and policy choices (e.g., technology, agricultural management and climate policy). The case study areas – the Doñana coastal wetland (S Spain) and the Aragón region (NE Spain) – illustrate two representative Mediterranean regions where the intensive use of agriculture and the semi-arid conditions are already a concern. Thus the adoption of mitigation and adaptation measures can play a significant role in reaching a balance among equity, economic security and the environment under climate change scenarios. The multidisciplinary methodology of this Thesis includes a wide range of approaches for collecting and analysing data.
The data collection process includes a review of existing experimental evidence, public databases and primary data gathered through semi-structured interviews with relevant stakeholders (i.e., public administrations, policy makers, agricultural advisors, scientists and farmers, among others) and surveys given to farmers. The analytical methods include meta-analysis, water availability modelling (the WAAPA model), decision-making analysis (MCA, multi-criteria analysis), statistical approaches (logistic and Poisson regression models) and science-based policy tools (MACC, marginal abatement cost curves, and SOC abatement wedges). The meta-analysis identifies the critical temperature thresholds which impact the growth and development of three major crops (i.e., rice, maize and wheat). The WAAPA model assesses the effect of climate change on agricultural water management under different policy choices and climate scenarios. The multi-criteria analysis evaluates the feasibility of mitigation farming practices under two climate scenarios according to expert views. The statistical approaches analyse the drivers of and barriers to the adoption of mitigation farming practices. The science-based policy tools illustrate the mitigation potential and cost-effectiveness of the farming practices. Overall, the results of this Thesis provide information on adapting to, and mitigating, climate change at farm level to support the development of a comprehensive climate policy and to assist farmers. The findings show the key temperature thresholds and responses to extreme temperature for rice, maize and wheat, so that such responses can be included in crop impact and adaptation models. A portfolio of flexible adaptation and mitigation choices at local scale is identified. The results also provide a better understanding of stakeholders' opposition to, or support for, adopting these choices, which could be incorporated into local adaptation plans and regional mitigation policy.
The findings include estimates of the capacity of these farming and policy choices to improve water-supply reliability, as well as their abatement potential and cost-effectiveness, in Mediterranean regions.
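Among the science-based policy tools listed above is the marginal abatement cost curve (MACC). Its construction can be sketched in a few lines: sort practices by unit abatement cost and accumulate their potentials. The practices and figures below are invented for illustration, not results from the case studies.

```python
# Illustrative marginal abatement cost curve (MACC) construction; practice
# names, potentials and costs are hypothetical.

def macc(practices):
    """practices: (name, abatement_tCO2e, cost_eur_per_tCO2e). Returns the
    practices sorted by unit cost with cumulative abatement potential."""
    curve, cumulative = [], 0.0
    for name, potential, unit_cost in sorted(practices, key=lambda p: p[2]):
        cumulative += potential
        curve.append((name, unit_cost, cumulative))
    return curve

practices = [
    ("no-till", 500.0, -12.0),  # negative cost = net saving for the farmer
    ("cover crops", 300.0, 8.0),
    ("optimised fertilisation", 400.0, 3.0),
]
for name, cost, cum in macc(practices):
    print(f"{name}: {cost} EUR/tCO2e, cumulative {cum} tCO2e")
```

Reading the curve left to right tells a policy maker how much abatement is available below any given carbon price, which is how such tools inform regional mitigation policy.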

Relevance: 30.00%

Abstract:

El uso de aritmética de punto fijo es una opción de diseño muy extendida en sistemas con fuertes restricciones de área, consumo o rendimiento. Para producir implementaciones donde los costes se minimicen sin impactar negativamente en la precisión de los resultados debemos llevar a cabo una asignación cuidadosa de anchuras de palabra. Encontrar la combinación óptima de anchuras de palabra en coma fija para un sistema dado es un problema combinatorio NP-hard al que los diseñadores dedican entre el 25 y el 50 % del ciclo de diseño. Las plataformas hardware reconfigurables, como son las FPGAs, también se benefician de las ventajas que ofrece la aritmética de coma fija, ya que éstas compensan las frecuencias de reloj más bajas y el uso más ineficiente del hardware que hacen estas plataformas respecto a los ASICs. A medida que las FPGAs se popularizan para su uso en computación científica los diseños aumentan de tamaño y complejidad hasta llegar al punto en que no pueden ser manejados eficientemente por las técnicas actuales de modelado de señal y ruido de cuantificación y de optimización de anchura de palabra. En esta Tesis Doctoral exploramos distintos aspectos del problema de la cuantificación y presentamos nuevas metodologías para cada uno de ellos: Las técnicas basadas en extensiones de intervalos han permitido obtener modelos de propagación de señal y ruido de cuantificación muy precisos en sistemas con operaciones no lineales. Nosotros llevamos esta aproximación un paso más allá introduciendo elementos de Multi-Element Generalized Polynomial Chaos (ME-gPC) y combinándolos con una técnica moderna basada en Modified Affine Arithmetic (MAA) estadístico para así modelar sistemas que contienen estructuras de control de flujo. Nuestra metodología genera los distintos caminos de ejecución automáticamente, determina las regiones del dominio de entrada que ejercitarán cada uno de ellos y extrae los momentos estadísticos del sistema a partir de dichas soluciones parciales. 
Utilizamos esta técnica para estimar tanto el rango dinámico como el ruido de redondeo en sistemas con las ya mencionadas estructuras de control de flujo y mostramos la precisión de nuestra aproximación, que en determinados casos de uso con operadores no lineales llega a tener tan solo una desviación del 0.04% con respecto a los valores de referencia obtenidos mediante simulación. Un inconveniente conocido de las técnicas basadas en extensiones de intervalos es la explosión combinacional de términos a medida que el tamaño de los sistemas a estudiar crece, lo cual conlleva problemas de escalabilidad. Para afrontar este problema presen tamos una técnica de inyección de ruidos agrupados que hace grupos con las señales del sistema, introduce las fuentes de ruido para cada uno de los grupos por separado y finalmente combina los resultados de cada uno de ellos. De esta forma, el número de fuentes de ruido queda controlado en cada momento y, debido a ello, la explosión combinatoria se minimiza. También presentamos un algoritmo de particionado multi-vía destinado a minimizar la desviación de los resultados a causa de la pérdida de correlación entre términos de ruido con el objetivo de mantener los resultados tan precisos como sea posible. La presente Tesis Doctoral también aborda el desarrollo de metodologías de optimización de anchura de palabra basadas en simulaciones de Monte-Cario que se ejecuten en tiempos razonables. Para ello presentamos dos nuevas técnicas que exploran la reducción del tiempo de ejecución desde distintos ángulos: En primer lugar, el método interpolativo aplica un interpolador sencillo pero preciso para estimar la sensibilidad de cada señal, y que es usado después durante la etapa de optimización. 
En segundo lugar, el método incremental gira en torno al hecho de que, aunque es estrictamente necesario mantener un intervalo de confianza dado para los resultados finales de nuestra búsqueda, podemos emplear niveles de confianza más relajados, lo cual deriva en un menor número de pruebas por simulación, en las etapas iniciales de la búsqueda, cuando todavía estamos lejos de las soluciones optimizadas. Mediante estas dos aproximaciones demostramos que podemos acelerar el tiempo de ejecución de los algoritmos clásicos de búsqueda voraz en factores de hasta x240 para problemas de tamaño pequeño/mediano. Finalmente, este libro presenta HOPLITE, una infraestructura de cuantificación automatizada, flexible y modular que incluye la implementación de las técnicas anteriores y se proporciona de forma pública. Su objetivo es ofrecer a desabolladores e investigadores un entorno común para prototipar y verificar nuevas metodologías de cuantificación de forma sencilla. Describimos el flujo de trabajo, justificamos las decisiones de diseño tomadas, explicamos su API pública y hacemos una demostración paso a paso de su funcionamiento. Además mostramos, a través de un ejemplo sencillo, la forma en que conectar nuevas extensiones a la herramienta con las interfaces ya existentes para poder así expandir y mejorar las capacidades de HOPLITE. ABSTRACT Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. 
Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have made it possible to obtain accurate models of the signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems.
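The interval-extension style of range propagation discussed above can be illustrated with a minimal sketch using plain interval arithmetic (the Statistical Modified Affine Arithmetic of the thesis is considerably more refined; the toy data path below is hypothetical):

```python
# Minimal sketch of dynamic-range propagation with plain interval
# arithmetic. This is NOT the thesis's MAA-based method, only the
# basic idea of extending arithmetic operators to ranges.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # The sum of two ranges is the range of endpoint sums.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product range is bounded by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Propagate input ranges through a toy data path y = a*b + a:
a = Interval(-1.0, 1.0)
b = Interval(0.0, 2.0)
y = a * b + a
print(y)  # bounds on the dynamic range of y
```

Affine forms additionally track correlations between shared noise terms (here, the two occurrences of `a`), which is what keeps the MAA-based models tight on larger systems where plain intervals overestimate.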
To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from two different angles: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers to easily prototype and verify new techniques for system modelling and word-length optimization.
We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through an example, how new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
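The incremental simulation strategy described above (cheap, low-confidence screening first, tight confidence only for the final decision) can be sketched for a single-signal round-off noise budget; the quantizer, sample budgets and thresholds below are illustrative assumptions, not those of the thesis:

```python
# Sketch of incremental Monte-Carlo word-length search: screen candidate
# word-lengths with few samples (relaxed confidence), then re-evaluate
# the survivors with a much larger sample budget. All numbers here are
# illustrative, not taken from the thesis.
import random

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def mean_sq_error(frac_bits, n_samples, rng):
    """Monte-Carlo estimate of the round-off noise power for one signal."""
    err = 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        err += (x - quantize(x, frac_bits)) ** 2
    return err / n_samples

rng = random.Random(0)
candidates = range(4, 17)   # candidate fractional word-lengths
noise_budget = 1e-6         # target noise power for this signal

# Stage 1: cheap, low-confidence screening with few samples.
survivors = [b for b in candidates
             if mean_sq_error(b, 200, rng) < 10 * noise_budget]

# Stage 2: re-check only the survivors with a large sample budget,
# keeping the cheapest word-length that meets the noise budget.
best = min(b for b in survivors
           if mean_sq_error(b, 20000, rng) < noise_budget)
print(best)
```

For uniform rounding the noise power behaves like step²/12, so tightening the budget by 6 dB costs roughly one extra bit; the point of the incremental scheme is that the expensive high-confidence estimates are only spent near the final answer.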

Relevância:

30.00% 30.00%

Publicador:

Resumo:

There is growing popularity in the use of composite indices and rankings for cross-organizational benchmarking. However, little attention has been paid to alternative methods and procedures for the computation of these indices and how the use of such methods may impact the resulting indices and rankings. This dissertation developed an approach for assessing composite indices and rankings based on the integration of a number of methods for aggregation, data transformation and attribute weighting involved in their computation. The integrated model developed is based on the simulation of composite indices using methods and procedures proposed in the areas of multi-criteria decision making (MCDM) and knowledge discovery in databases (KDD). The approach developed in this dissertation was automated through an IT artifact that was designed, developed and evaluated based on the framework and guidelines of the design science paradigm of information systems research. This artifact dynamically generates multiple versions of indices and rankings by considering different methodological scenarios according to user-specified parameters. The computerized implementation was done in Visual Basic for Excel 2007. Using different performance measures, the artifact produces a number of Excel outputs for the comparison and assessment of the indices and rankings. In order to evaluate the efficacy of the artifact and its underlying approach, a full empirical analysis was conducted using the World Bank's Doing Business database for the year 2010, which includes ten sub-indices (each corresponding to a different area of the business environment and regulation) for 183 countries. The output results, which were obtained using 115 methodological scenarios for the assessment of this index and its ten sub-indices, indicated that the variability of the component indicators considered in each case influenced the sensitivity of the rankings to the methodological choices.
Overall, the results of our multi-method assessment were consistent with the World Bank rankings, except in cases where the indices involved cost indicators measured in per capita income, which yielded more sensitive results. Low-income countries exhibited more sensitivity in their rankings, and less agreement between the benchmark rankings and our multi-method rankings, than higher-income country groups.
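The core of such a multi-method assessment (recomputing an index under alternative data-transformation and aggregation choices, then comparing the resulting rankings) can be sketched as follows; the three-country indicator matrix and the two scenarios are illustrative, not the Doing Business data:

```python
# Sketch of a composite index computed under two methodological
# scenarios (min-max vs. z-score normalization, equal weights).
# The indicator values and country labels are made up.
import statistics

data = {
    "A": [0.90, 0.40, 0.70],
    "B": [0.60, 0.80, 0.65],
    "C": [0.30, 0.95, 0.80],
}

def min_max(col):
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def z_score(col):
    mu, sd = statistics.mean(col), statistics.pstdev(col)
    return [(v - mu) / sd for v in col]

def composite(data, normalize, weights):
    """Normalize each indicator column, aggregate with weights, rank."""
    cols = list(zip(*data.values()))
    norm_cols = [normalize(list(c)) for c in cols]
    rows = list(zip(*norm_cols))
    scores = {k: sum(w * v for w, v in zip(weights, row))
              for k, row in zip(data, rows)}
    order = sorted(scores, key=scores.get, reverse=True)
    return {k: order.index(k) + 1 for k in data}   # rank 1 = best

w = [1 / 3] * 3
rank_mm = composite(data, min_max, w)
rank_z = composite(data, z_score, w)
print(rank_mm, rank_z)
```

Iterating this over many scenario combinations (normalization × weighting × aggregation) and comparing the rank orders, e.g. with a rank correlation measure, reproduces the kind of sensitivity analysis the dissertation automates.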

Relevância:

30.00% 30.00%

Publicador:

Resumo:

BRCA1 and BRCA2 are the most frequently mutated genes in ovarian cancer (OC), and are crucial both for the identification of cancer predisposition and for therapeutic choices. However, germline variants in other genes could be involved in OC susceptibility. We characterized OC patients to detect mutations in genes other than BRCA1/2 that could be associated with a high risk of developing OC, and that could allow patients to enter the most appropriate treatment and surveillance programs. Next-Generation Sequencing analysis with a 94-gene panel was performed on germline DNA from 219 OC patients. We identified 34 pathogenic/likely-pathogenic variants in BRCA1/2 and 38 in 21 other genes. Patients with pathogenic/likely-pathogenic variants in non-BRCA1/2 genes mainly developed OC alone, compared to the other groups, which also developed breast cancer or other tumors (p=0.001). Clinical correlation analysis showed that low-risk patients were significantly associated with platinum sensitivity (p<0.001). Regarding the response to PARP inhibitors (PARPi), patients with pathogenic mutations in non-BRCA1/2 genes had significantly worse PFS and OS. Moreover, a statistically significant worsening of PFS was found for every increase of one thousand platelets before PARPi treatment. To conclude, knowledge about molecular alterations in genes beyond BRCA1/2 in OC could allow for more personalized diagnostic, predictive, prognostic, and therapeutic strategies for OC patients.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

In Brazil, the consumption of extra-virgin olive oil (EVOO) is increasing annually, but there are no experimental studies concerning the phenolic compound contents of commercial EVOO. The aim of this work was to optimise the separation of 17 phenolic compounds already detected in EVOO. A Doehlert matrix experimental design was used, evaluating the effects of pH and electrolyte concentration. Resolution, runtime and migration-time relative standard deviation values were evaluated. Derringer's desirability function was used to simultaneously optimise all 37 responses. The 17 peaks were separated in 19 min using a fused-silica capillary (50 μm internal diameter, 72 cm effective length) with an extended light path and 101.3 mmol L(-1) boric acid electrolyte (pH 9.15, 30 kV). The method was validated and applied to 15 EVOO samples found in Brazilian supermarkets.
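Derringer's approach converts each response into a desirability d_i in [0, 1] and combines them through a geometric mean, so any unacceptable response zeroes the overall score. A minimal sketch (the response values, ranges and response types below are illustrative assumptions, not the values fitted in this study):

```python
# Sketch of Derringer's desirability function for multi-response
# optimisation. Ranges and example responses are hypothetical.
import math

def d_larger_is_better(y, low, high, s=1.0):
    """Desirability for a response to maximize (e.g. peak resolution)."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** s

def d_smaller_is_better(y, low, high, s=1.0):
    """Desirability for a response to minimize (e.g. runtime, RSD)."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - low)) ** s

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities."""
    return math.prod(ds) ** (1.0 / len(ds))

# Hypothetical responses for one candidate condition (a pH/electrolyte
# combination from the experimental design):
ds = [
    d_larger_is_better(1.8, low=0.0, high=2.0),     # resolution of a pair
    d_smaller_is_better(19.0, low=10.0, high=30.0), # runtime (min)
    d_smaller_is_better(0.5, low=0.0, high=2.0),    # migration-time RSD (%)
]
D = overall_desirability(ds)
```

Evaluating D over the fitted response-surface models for every candidate condition, and taking the maximizer, is how the single optimum is selected from the many simultaneous responses.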