897 results for Simulation-based methods


Relevance:

90.00%

Publisher:

Abstract:

E-learning systems output a huge quantity of data on a learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra's algorithm learning using GRAPHs, a visual simulation-based graph algorithm learning environment.
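As a rough illustration of the kind of linguistic-rule inference the GLMP relies on (a toy sketch, not the GRAPHs implementation; the fuzzy sets and report wording are invented for the example), a proficiency label can be derived from the proportion of correct responses:

```python
# Illustrative sketch (not the GLMP/GRAPHs implementation): map objective
# response data to a linguistic proficiency label via simple fuzzy sets.

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(correct, total):
    score = correct / total
    # Hypothetical fuzzy sets over the proportion of correct responses.
    labels = {
        "low":    triangular(score, -0.01, 0.0, 0.5),
        "medium": triangular(score, 0.25, 0.5, 0.75),
        "high":   triangular(score, 0.5, 1.0, 1.01),
    }
    label = max(labels, key=labels.get)
    return f"The student shows a {label} proficiency level ({correct}/{total} correct)."

print(assess(17, 20))
```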

Relevance:

90.00%

Publisher:

Abstract:

The use of seismic hysteretic dampers for passive control has increased rapidly in recent years for both new and existing buildings. In order to utilize hysteretic dampers within a structural system, it is of paramount importance to have simplified design procedures based upon knowledge gained from theoretical studies and validated with experimental results. Non-linear Static Procedures (NSPs) are presented as an alternative to the force-based methods more common nowadays. The application of NSPs to conventional structures is well established; yet there is a lack of experimental information on how NSPs apply to systems with hysteretic dampers. In this research, several shaking table tests were conducted on two single-bay, single-story 1:2-scale structures with and without hysteretic dampers. The maximum response of the structure with dampers in terms of lateral displacement and base shear obtained from the tests was compared with the predictions provided by three well-known NSPs: (1) the improved version of the Capacity Spectrum Method (CSM) from FEMA 440; (2) the improved version of the Displacement Coefficient Method (DCM) from FEMA 440; and (3) the N2 Method implemented in Eurocode 8. In general, the improved versions of the DCM and the N2 method are found to provide acceptable accuracy in prediction, whereas the CSM tends to underestimate the response.
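For orientation, the displacement-coefficient idea behind the DCM can be sketched as follows; the coefficient values are placeholders and the code does not reproduce the FEMA 440 expressions for the individual coefficients:

```python
import math

# Illustrative sketch of the displacement-coefficient idea behind the DCM:
# the elastic spectral displacement of an equivalent SDOF system is scaled
# by modification coefficients to estimate the inelastic roof displacement.
# Coefficient values below are placeholders, not the FEMA 440 expressions.

def target_displacement(Te, Sa, C0=1.3, C1=1.1, C2=1.0, g=9.81):
    """Te: effective period [s]; Sa: spectral acceleration [g] at Te."""
    sd_elastic = Sa * g * (Te / (2.0 * math.pi)) ** 2   # elastic spectral displacement [m]
    return C0 * C1 * C2 * sd_elastic

print(f"target displacement ~ {target_displacement(0.4, 0.9):.3f} m")
```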

Relevance:

90.00%

Publisher:

Abstract:

Ripple-based controls can strongly reduce the required output capacitance in PowerSoC converters thanks to a very fast dynamic response. Unfortunately, these controls are prone to sub-harmonic oscillations, and several parameters affect the stability of these systems. This paper derives and validates a simulation-based modeling and stability analysis of a closed-loop V²Ic control applied to a 5 MHz buck converter, using discrete modeling and Floquet theory to predict stability. This allows the derivation of sensitivity analyses to design robust systems. The work is extended to different V² architectures using the same methodology.
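A minimal sketch of the Floquet-based stability test, assuming a generic sampled-data (monodromy) matrix rather than the actual V²Ic buck-converter model, might look like this:

```python
import numpy as np

# Illustrative sketch of the Floquet/discrete-modelling stability test used for
# ripple-based controls: the closed-loop sampled-data (monodromy) matrix Phi maps
# state perturbations across one switching period; the design is locally stable
# when every Floquet multiplier (eigenvalue of Phi) lies inside the unit circle.
# The matrix below is a placeholder, not the V2Ic buck-converter model.

def is_stable(monodromy: np.ndarray) -> bool:
    multipliers = np.linalg.eigvals(monodromy)
    return bool(np.all(np.abs(multipliers) < 1.0))

Phi = np.array([[0.62, 0.10],
                [-0.25, 0.48]])          # example perturbation map over one period
print(np.abs(np.linalg.eigvals(Phi)), is_stable(Phi))
```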

Relevance:

90.00%

Publisher:

Abstract:

It is essential to remotely and continuously monitor the movements of individuals in many social areas, for example, taking care of aging people, physical therapy, athletic training etc. Many methods have been used, such as video record, motion analysis or sensor-based methods. Due to the limitations in remote communication, power consumption, portability and so on, most of them are not able to fulfill the requirements. The development of wearable technology and cloud computing provides a new efficient way to achieve this goal. This paper presents an intelligent human movement monitoring system based on a smartwatch, an Android smartphone and a distributed data management engine. This system includes advantages of wide adaptability, remote and long-term monitoring capacity, high portability and flexibility. The structure of the system and its principle are introduced. Four experiments are designed to prove the feasibility of the system. The results of the experiments demonstrate the system is able to detect different actions of individuals with adequate accuracy.
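As a purely illustrative sketch (the paper's smartwatch/cloud pipeline is not reproduced here, and the thresholds are invented), a naive activity detector over a window of accelerometer magnitudes could be:

```python
import statistics

# Illustrative only: classify a window of accelerometer magnitudes [m/s^2]
# using variance thresholds; the thresholds are placeholder values chosen
# for the example, not parameters from the paper's system.

def classify(window):
    var = statistics.pvariance(window)
    if var < 0.5:
        return "stationary"
    elif var < 5.0:
        return "walking"
    return "running"

print(classify([9.8, 9.9, 9.7, 9.8]))        # -> stationary
print(classify([8.0, 12.5, 7.5, 13.0]))      # -> running
```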

Relevance:

90.00%

Publisher:

Abstract:

Operating theatres are the engine of a hospital; proper management of the operating rooms and their staff represents a great challenge for managers, and its results directly impact the hospital's budget. This work presents a MILP model for the efficient scheduling of multiple surgeries in Operating Rooms (ORs) during a working day. The model considers multiple surgeons and ORs and different types of surgeries. Stochastic strategies are also implemented to take into account the uncertainty in surgery durations (pre-incision, incision and post-incision times). In addition, heuristic-based methods and a MILP decomposition approach are proposed for solving large-scale OR scheduling problems in a computationally efficient way. All these computer-aided strategies have been implemented in AIMMS, an advanced modeling and optimization software package, resulting in a user-friendly solution tool for operating room management under uncertainty.
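A minimal sketch of such an assignment MILP, written with the open-source PuLP package rather than the AIMMS tool used in the work, and with invented surgery durations, could look like this:

```python
import pulp

# Minimal illustrative MILP (not the thesis's AIMMS model): assign surgeries to
# operating rooms so that each surgery gets exactly one room and the latest
# room finishing time (makespan) is minimized. Durations are toy data in minutes.
durations = {"s1": 120, "s2": 90, "s3": 60, "s4": 45, "s5": 150}
rooms = ["OR1", "OR2"]

prob = pulp.LpProblem("or_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (durations, rooms), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                        # objective: minimize makespan
for s in durations:                                     # every surgery placed once
    prob += pulp.lpSum(x[s][r] for r in rooms) == 1
for r in rooms:                                         # room workload bounds makespan
    prob += pulp.lpSum(durations[s] * x[s][r] for s in durations) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for r in rooms:
    print(r, [s for s in durations if x[s][r].value() == 1])
print("makespan (min):", makespan.value())
```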

Relevance:

90.00%

Publisher:

Abstract:

Transactional systems such as Enterprise Resource Planning (ERP) software have been widely implemented, whereas analytical systems for supply chain management (SCM software) have not had the success expected by the information technology (IT) industry. Although important benefits from SCM software implementations are documented, industrial companies are reluctant to invest in this type of system. On the one hand this is due to the lack of methods capable of detecting the benefits of using such systems, and on the other hand because the associated cost is not identified, detailed and quantified sufficiently. Coordination schemes based solely on ERP systems are valid alternatives in industrial practice provided that the cost-benefit ratio is favourable. Therefore, the evaluation of organizational forms that explicitly takes into account the cost of administrative processes, in particular of iterative cycles, is of great interest for decision making on IT investments. In order to close this gap, the purpose of this research is to provide evaluation methods that allow the comparison of different forms of organization and levels of support by computer systems. The thesis opens with a broad introduction analysing the challenges the industry faces, and concludes with the needs of the SCM software industry: tools that facilitate the comprehensive evaluation of different organizational proposals. Next, the key terminology is detailed, focusing on organization theory, the peculiarities of IT investment and the typology of supply chain management software. The literature review classifies recent contributions on supply chain management, addressing both organizational design and its IT support. The classification includes criteria related to research methodology and content. Empirical studies in the field of business administration focus on typologies of industrial networks. New planning algorithms and innovative coordination schemes are developed mainly in the field of operations research in order to propose new software functions. Articles from the production management area focus on the cost-benefit analysis of system implementations. The literature review reveals that the success of IT for the coordination of industrial networks depends largely on characteristics along three dimensions: the configuration of the industrial network, the coordination schemes and the software functionalities. The available literature focuses above all on the benefits of SCM software implementations. However, supply chain coordination based on the ERP system remains the prevailing industrial practice, yet the associated coordination cost has not been addressed by researchers. The fundamentals of efficient organizational design are explained in detail as far as necessary for understanding the synthesis of the different organizational forms. Several coordination schemes have been generated by varying the following design parameters: organizational structure, coordination mechanisms and IT support.
The different organizational proposals developed are evaluated with a heuristic method and with a method based on discrete-event simulation. For both methods, the principles of organization theory are taken into account. A lack of business performance is caused by dependencies between activities that are not managed properly. Within the heuristic method, the dependencies are classified and their intensity is measured on the basis of contextual factors. Next, the suitability of each organizational design element for each specific dependency is assessed. Finally, each organizational form is evaluated based on the contribution of the design elements to both benefit and cost. The coordination benefit refers to the improvement in logistic performance; this concept is the central object of most evaluation models for supply chain management. In contrast, the coordination cost that must be incurred to achieve the benefits is usually not considered in detail. Iterative processes are costly when executed manually. This is the case when SCM software is not implemented and the ERP system is the only coordination instrument available. The heuristic model provides a simplified procedure for the systematic classification of dependencies, the quantification of the influence factors and the identification of configurations that call for more or less complex organizational forms and IT support. Discrete-event simulation is applied in the second evaluation model using the software package 'Plant Simulation'. With respect to logistic performance, on the one hand the manufacturing, inventory and transportation costs and the penalties for lost sales are measured; on the other hand, the coordination cost is explicitly quantified, taking into account the iterative coordination cycles. The method is applied to an exemplary supply chain configuration considering various parameters. The simulation results confirm that, in most cases, the benefit increases when coordination is intensified. However, in certain situations in which manual, iterative planning cycles are applied, the additional coordination cost does not always lead to better logistic performance. These unexpected results cannot be attributed to any particular parameter. The research confirms the great importance of dimensions that have so far been ignored in the evaluation of organizational proposals and IT tools. The heuristic method allows a quick, but only approximate, comparison of the efficiency of different organizational forms. In contrast, the simulation method is more complex but yields more detailed results, taking into account specific parameters of the context of the particular case and of the organizational design. ABSTRACT Transactional systems such as Enterprise Resource Planning (ERP) systems have been implemented widely while analytical software like Supply Chain Management (SCM) add-ons are adopted less by manufacturing companies. Although significant benefits are reported stemming from SCM software implementations, companies are reluctant to invest in such systems. On the one hand this is due to the lack of methods that are able to detect benefits from the use of SCM software and on the other hand associated costs are not identified, detailed and quantified sufficiently.
Coordination schemes based only on ERP systems are valid alternatives in industrial practice because significant investment in IT can be avoided. Therefore, the evaluation of these coordination procedures, in particular the cost due to iterations, is of high managerial interest and corresponding methods are comprehensive tools for strategic IT decision making. The purpose of this research is to provide evaluation methods that allow the comparison of different organizational forms and software support levels. The research begins with a comprehensive introduction dealing with the business environment that industrial networks are facing and concludes highlighting the challenges for the supply chain software industry. Afterwards, the central terminology is addressed, focusing on organization theory, IT investment peculiarities and supply chain management software typology. The literature review classifies recent supply chain management research referring to organizational design and its software support. The classification encompasses criteria related to research methodology and content. Empirical studies from management science focus on network types and organizational fit. Novel planning algorithms and innovative coordination schemes are developed mostly in the field of operations research in order to propose new software features. Operations and production management researchers realize cost-benefit analysis of IT software implementations. The literature review reveals that the success of software solutions for network coordination depends strongly on the fit of three dimensions: network configuration, coordination scheme and software functionality. Reviewed literature is mostly centered on the benefits of SCM software implementations. However, ERP system based supply chain coordination is still widespread industrial practice but the associated coordination cost has not been addressed by researchers. Fundamentals of efficient organizational design are explained in detail as far as required for the understanding of the synthesis of different organizational forms. Several coordination schemes have been shaped through the variation of the following design parameters: organizational structuring, coordination mechanisms and software support. The different organizational proposals are evaluated using a heuristic approach and a simulation-based method. For both cases, the principles of organization theory are respected. A lack of performance is due to dependencies between activities which are not managed properly. Therefore, within the heuristic method, dependencies are classified and their intensity is measured based on contextual factors. Afterwards the suitability of each organizational design element for the management of a specific dependency is determined. Finally, each organizational form is evaluated based on the contribution of the sum of design elements to coordination benefit and to coordination cost. Coordination benefit refers to improvement in logistic performance – this is the core concept of most supply chain evaluation models. Unfortunately, coordination cost which must be incurred to achieve benefits is usually not considered in detail. Iterative processes are costly when manually executed. This is the case when SCM software is not implemented and the ERP system is the only available coordination instrument. 
The heuristic model provides a simplified procedure for the classification of dependencies, quantification of influence factors and systematic search for adequate organizational forms and IT support. Discrete event simulation is applied in the second evaluation model using the software package ‘Plant Simulation’. On the one hand logistic performance is measured by manufacturing, inventory and transportation cost and penalties for lost sales. On the other hand coordination cost is explicitly considered taking into account iterative coordination cycles. The method is applied to an exemplary supply chain configuration considering various parameter settings. The simulation results confirm that, in most cases, benefit increases when coordination is intensified. However, in some situations when manual, iterative planning cycles are applied, additional coordination cost does not always lead to improved logistic performance. These unexpected results cannot be attributed to any particular parameter. The research confirms the great importance of up to now disregarded dimensions when evaluating SCM concepts and IT tools. The heuristic method provides a quick, but only approximate comparison of coordination efficiency for different organizational forms. In contrast, the more complex simulation method delivers detailed results taking into consideration specific parameter settings of network context and organizational design.
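As a toy illustration of the discrete-event idea (the thesis uses the commercial package 'Plant Simulation'; the sketch below uses SimPy, and the cost per manual planning iteration is an assumption), coordination cost can be accumulated alongside the simulated planning cycles:

```python
import simpy

# Toy discrete-event sketch (illustrative only; not the thesis's Plant Simulation
# model): each weekly planning cycle needs several manual coordination iterations,
# and every iteration adds coordination cost on top of the logistics cost.
ITERATION_COST = 200.0     # assumed cost per manual coordination loop
coordination_cost = 0.0

def planning_cycle(env, iterations_needed):
    global coordination_cost
    while True:
        for _ in range(iterations_needed):          # manual, iterative alignment
            coordination_cost += ITERATION_COST
            yield env.timeout(1)                    # one day per iteration
        yield env.timeout(7 - iterations_needed)    # rest of the planning week

env = simpy.Environment()
env.process(planning_cycle(env, iterations_needed=3))
env.run(until=28)                                    # four weeks
print("coordination cost after 4 weeks:", coordination_cost)
```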

Relevance:

90.00%

Publisher:

Abstract:

This thesis deals with lightweight modular construction within the context of energy efficiency and of the nZEB (near Zero Energy Building) and NZEB (Net Zero Energy Building) concepts used in Europe, specifically within the regulatory framework of Directive 2010/31 EU. In the European Union, the building sector accounts for 40% of the continent's total energy consumption. Given the need to reduce this consumption, the European governing bodies have set targets (the 20-20-20 targets) to make the building stock more efficient. These targets, which are binding in legislative terms, commit all member states to meeting the goal of reducing consumption and GHG (greenhouse gas) emissions before the year 2020. The concepts of lightweight modular construction (CML) and energy efficiency are not usually associated, because this type of construction is not normally intended for intensive use and does not have envelopes with insulation levels that comply with the local regulations or building codes of each country. The objective of nZEB or NZEB, and even Energy Plus as the case may be, will necessarily depend (and this is established in the regulations) not only on improving the insulation levels of buildings, but also on the implementation of renewable generation systems, regardless of the construction system used and even of the building typology. While it is true that the levels of industrialization in today's technological society have reached several phases of the construction process, above all as regards the compositional elements of buildings, it is also true that the degree of development achieved in construction does not reach the level of evolution seen in other fields of engineering such as aeronautics or the automotive industry. Although models and testimonial projects of lightweight industrialized construction (CIL) have existed since the end of the last century, and examples of lightweight modular construction (CML), such as the Voisin House, date back to the beginning of the twentieth century, the industrialization of building construction has not been a steady progression with a level of commercialization comparable to that of massive, heavy construction. The terms industrialized construction, prefabricated construction, modular construction and lightweight construction do not always refer to the same thing and are not always synonymous with one another. A building can be prefabricated without being modular or lightweight; this is the case, to give one example, of construction with precast concrete panels. What is constant is that, in the case of lightweight modular construction, prefabrication and industrialization are almost always implicit, as many historical and current examples show.
As for the concept of energy efficiency (nZEB or even NZEB), it is not usually linked to lightweight modular and/or industrialized lightweight construction; rather, it is associated with the idea of massive envelopes with high thermal inertia typical of design standards such as Passivhaus. And although lightweight construction is commonly associated with other concepts that detract from its value (short service life; limited function and forms, outside any aesthetic order; limited comfort levels, etc.), the advances being made in energy-harnessing technologies and renewable generation systems can reverse these ideas and unify the criterion of efficiency + lightweight modular construction. Prototypes and academic projects, such as the Solar Decathlon competition held since 2002 and promoted by the DOE (the United States Department of Energy), with European editions in 2010 and 2012, rethink the idea of industrialized, modular and lightweight construction within the context of energy efficiency, with housing prototypes of around 60 m2 proposed by the competing universities, whose objective is to reach and/or develop the concept of the NZEB (Net Zero Energy Building) or zero-energy building. This construction option not only stands for durability, safety and aesthetics, but also for speed of manufacture and assembly, as well as high energy performance, as has been demonstrated in successive editions of the Solar Decathlon. Initiatives of this kind for the development of construction technologies aim not only at energy efficiency but also at the global concept of net energy, Energy Plus or zero CO2 emissions. The level of emissions from the manufacture and installation of construction materials depends, in many cases, not only on the nature of the material itself but also on the quantity of resources used to produce a given unit of measurement (kg, m3, m2, ml, etc.). In this sense, it could often validly be argued that less weight and smaller size mean lower overall greenhouse gas emissions and less pollution. For the research work of this thesis, prototypes of both CML (3D modular) and CIL (panelized and 2D elements) have been taken as valid study references, since for the purposes of analysing the energy performance of the envelope materials both systems are comparable. In order to reach the fundamental conclusion of this doctoral thesis, which consists of demonstrating the technological/industrial viability of combining energy efficiency and lightweight modular construction, the starting point is the study of the state of the art (from the selection of materials and the possible industrialization processes in the factory, through to installation on site, operation and use, under the concepts of zero consumption, zero carbon emissions and energy plus). In addition, with a state of the art that identifies the current situation, tests and trials are carried out on a full-scale prototype and on test cells in order to check the behaviour of their compositional elements under given climatic conditions.
These results are contrasted with those obtained from computer simulations based on the same parameters and carried out mostly using simplified calculation methods, validated by the bodies responsible for energy efficiency in building in Spain and in accordance with the regulations in force. ABSTRACT This thesis discusses lightweight modular construction within the context of energy efficiency in nZEB (near Zero Energy Building) and NZEB (Net Zero Energy Building) both used in Europe and, specifically, within the limits of the regulatory framework of the EU Directive 2010/31. In the European Union the building sector represents 40% of the total energy consumption of the continent. Due to the need to reduce this consumption, European decision-making institutions have proposed aims (20-20-20 aims) to render the building stock more efficient. These aims are bound by law and oblige all member States to endeavour to reduce consumption and GHG emissions before the year 2020. Lightweight modular construction concepts and energy efficiency are not generally associated because this type of building is not normally meant for intensive use and does not have closures with insulation levels which fit the local regulations or building codes of each country. The objective of nZEB or NZEB and even Energy Plus, depending on each case, will necessarily be associated (as established in the guidelines) not only with the improvement of insulation levels in buildings, but also with the implementation of renewable systems of generation, independent of the type of building system used and of the building typology. Although it is true that the levels of industrialisation in the technological society today have reached several of the building process phases - particularly in the composite elements of buildings - it is also true that the quotas of development achieved in the area of construction have not reached the evolutionary level found in other fields of engineering, such as aeronautics or the automobile industry. Although there have been models and testimonial projects of lightweight industrialised building since the end of last century, even going back as far as the beginning of the XX century with examples of lightweight modular construction such as the Voisin House, industrialisation in the building industry has not been constant nor is its commercialisation comparable to massive and heavy construction. The terms industrialised building, prefabricated building, modular building and lightweight building, do not always refer to the same thing and they are not always synonymous. A building can be prefabricated yet not be modular or lightweight. To give an example, this is the case of building with prefabricated concrete panels. What is constant is that, in the case of lightweight modular construction, prefabrication and industrialisation are almost always implicit in many historical and contemporary examples.
Energy efficiency (nZEB or even NZEB) is not normally linked to lightweight modular construction and/or industrialised lightweight; rather, it is united to the idea of massive closures with high thermal inertia typical of design standards such as the Passive House; and although other concepts that subtract value from it are generally associated with lightweight building (short useful life, limited forms and function, inappropriate to any aesthetic pattern; limitation in comfort levels, etc.), the advances being achieved in technology for benefitting from energy and renewable systems of generation may well reverse these ideas and unify the criteria of efficiency + lightweight modular construction. Academic prototypes and projects - such as the Solar Decathlon competition organised by the US Department of Energy and held since 2002, with its corresponding European events such as those held in 2010 and 2012 - place a different slant on the idea of industrialised, modular and lightweight building within the context of energy efficiency, with prototypes of homes measuring approximately 60m2, proposed by university competitors, whose aim is to reach and/or develop the NZEB concept, or the zero energy building. This building option does not only signify durability, security and aesthetics, but also fast manufacture and assembly. It also has high energy benefits, as has been demonstrated in successive events of the Solar Decathlon. This type of initiative for the development of building technologies does not only aim at energy efficiency, but also at the global concept of net energy, Energy Plus and zero CO2 emissions. The level of emissions in the manufacture and introduction of building materials in many cases depends not only on the inherent nature of the material, but also on the quantity of resources used to produce a specific unit of measurement (kg, m3, m2, ml, etc.). Thus in many cases it could be validly argued that with less weight and smaller size, there will be fewer global emissions of greenhouse effect gases and less contamination. For the research carried out in this thesis prototypes such as the CML (3D Module) and CIL (panelled and 2D elements) have been used as valid study references, because both systems are comparable for the purpose of analysing the energy benefits of closure materials. So as to reach a basic conclusion in this doctoral thesis - that sets out to demonstrate the technological/industrial viability of the combination of energy efficiency and lightweight modular construction - the departure point is the study of the state of the technique (from the selection of materials and the possible processes of industrialisation in manufacture, to their use on site, functioning and use, respecting the concepts of zero consumption, zero emissions of carbon and Energy Plus). Moreover, with the state of the technique identifying the current situation, tests and practices have been carried out with a natural scale prototype and test cells so as to verify the behaviour of the composite elements of these in certain climatic conditions. These types of result are contrasted with those obtained through computer simulation based on the same parameters and done, principally, using simplified methods of calculation, validated by institutions competent in energy efficiency in Spanish building and in line with the rules in force.
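As a back-of-the-envelope illustration of the NZEB balance discussed above (the figures are invented, not taken from the thesis), the criterion reduces to comparing annual delivered energy with on-site renewable generation:

```python
# Illustrative only: the NZEB criterion reduces to an annual balance check between
# delivered energy and on-site renewable generation. Values are placeholders.

def net_balance(consumption_kwh_m2, generation_kwh_m2):
    """Positive result = net importer; a result <= 0 approaches the NZEB target."""
    return consumption_kwh_m2 - generation_kwh_m2

balance = net_balance(consumption_kwh_m2=45.0, generation_kwh_m2=52.0)
print("net annual balance [kWh/m2 per year]:", balance,
      "-> NZEB" if balance <= 0 else "-> nZEB at best")
```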

Relevance:

90.00%

Publisher:

Abstract:

The use of fixed-point arithmetic is a very widespread design option in systems with strong area, power or performance constraints. To produce implementations where costs are minimized without negatively impacting the accuracy of the results, we must carry out a careful assignment of word lengths. Finding the optimal combination of fixed-point word lengths for a given system is an NP-hard combinatorial problem to which designers devote between 25 and 50% of the design cycle. Reconfigurable hardware platforms, such as FPGAs, also benefit from the advantages offered by fixed-point arithmetic, since it compensates for the lower clock frequencies and the less efficient use of the hardware that these platforms make compared with ASICs. As FPGAs become popular for scientific computing, designs grow in size and complexity to the point where they can no longer be handled efficiently by current techniques for signal and quantization noise modelling and word-length optimization. In this PhD thesis we explore different aspects of the quantization problem and present new methodologies for each of them: Techniques based on interval extensions have made it possible to obtain very accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a modern technique based on statistical Modified Affine Arithmetic (MAA) in order to model systems containing control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that will exercise each of them, and extracts the statistical moments of the system from these partial solutions. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in certain case studies with non-linear operators deviates by only 0.04% from the reference values obtained by simulation. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the systems under study grows, which leads to scalability problems. To address this problem we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each of the groups separately, and finally combines the results of each of them. In this way, the number of noise sources is controlled at all times and, as a result, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, with the goal of keeping the results as accurate as possible. This PhD thesis also addresses the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times.
To this end we present two new techniques that explore the reduction of execution time from different angles: First, the interpolative method applies a simple but accurate interpolator to estimate the sensitivity of each signal, which is then used during the optimization stage. Second, the incremental method revolves around the fact that, although it is strictly necessary to maintain a given confidence interval for the final results of our search, we can use more relaxed confidence levels, which results in a smaller number of samples per simulation, in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems. Finally, this book presents HOPLITE, an automated, flexible and modular quantization infrastructure that includes the implementation of the above techniques and is publicly available. Its aim is to offer developers and researchers a common environment for easily prototyping and verifying new quantization methodologies. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how to connect new extensions to the tool using the existing interfaces in order to expand and improve the capabilities of HOPLITE. ABSTRACT Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them: The techniques based on extensions of intervals have made it possible to obtain accurate models of the signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results.
We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures. We show the good accuracy of our approach, which in some case studies with non-linear operators shows a 0.04% deviation with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently and then combines the results at the end. In this way, the number of noise sources in the system at a given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution times by approaching the problem in two different ways: First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a certain confidence level in the simulations for the final results of the optimization process, we can do it with more relaxed levels, which in turn implies using a considerably smaller number of samples, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy techniques can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is provided for public access. The aim is to offer a common ground for developers and researchers for prototyping and verifying new techniques for system modelling and word-length optimization easily. We describe its workflow, justifying the design decisions taken, explain its public API and give a step-by-step demonstration of its execution. We also show, through an example, the way new extensions to the flow should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
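Two of the ingredients mentioned above, Monte-Carlo estimation of round-off noise and a greedy word-length search, can be sketched as follows; this is an illustrative toy, not HOPLITE, and the noise budget and input stimulus are assumptions:

```python
import numpy as np

# Minimal sketch (not HOPLITE itself): a Monte-Carlo estimate of the round-off
# noise power introduced by quantizing a signal to 'bits' fractional bits, and a
# greedy loop that trims the word length while the noise stays within a budget.
rng = np.random.default_rng(0)

def quantize(x, bits):
    step = 2.0 ** -bits
    return np.round(x / step) * step

def noise_power(bits, n=100_000):
    x = rng.uniform(-1.0, 1.0, n)            # stand-in for the real system stimulus
    e = quantize(x, bits) - x
    return float(np.mean(e ** 2))

budget = 1e-7                                 # assumed noise-power budget
bits = 20
while bits > 1 and noise_power(bits - 1) <= budget:   # greedy word-length reduction
    bits -= 1
print("smallest fractional word length meeting the budget:", bits)
```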

Relevance:

90.00%

Publisher:

Abstract:

Elucidating the genetic basis of human phenotypes is a major goal of contemporary geneticists. Logically, two fundamental and contrasting approaches are available, one that begins with a phenotype and concludes with the identification of a responsible gene or genes; the other that begins with a gene and works toward identifying one or more phenotypes resulting from allelic variation of it. This paper provides a conceptual overview of phenotype-based vs. gene-based procedures with emphasis on gene-based methods. A key feature of a gene-based approach is that laboratory effort first is devoted to developing an assay for mutations in the gene under regard; the assay then is applied to the evaluation of large numbers of unrelated individuals with a variety of phenotypes that are deemed potentially resulting from alleles at the gene. No effort is directed toward chromosomally mapping the loci responsible for the phenotypes scanned. Example is made of my laboratory’s successful use of a gene-based approach to identify genes causing hereditary diseases of the retina such as retinitis pigmentosa. Reductions in the cost and improvements in the speed of scanning individuals for DNA sequence anomalies may make a gene-based approach an efficient alternative to phenotype-based approaches to correlating genes with phenotypes.

Relevance:

90.00%

Publisher:

Abstract:

The work presented in this doctoral thesis focuses on the development of innovative analytical strategies based on sensing and on mass spectrometry techniques in the biological field and in food safety. The first chapter deals with methodological and applicative aspects of sensor-based procedures for the identification and determination of biomarkers associated with celiac disease. In this context, two immunosensors were developed, one with piezoelectric transduction and one with amperometric transduction, for the detection of the anti-tissue transglutaminase antibodies associated with this disease. The innovation of these devices lies in the immobilization of the tTG enzyme in its open conformation (Open-tTG), which has been shown to be the one mainly involved in the pathogenesis. On the basis of the results obtained, both systems proved to be a valid alternative to the screening tests currently in use for the diagnosis of celiac disease. Remaining within the context of celiac disease, further research covered in this doctoral thesis concerned the development of reliable methods for the control of gluten-free products. The second chapter deals with the development of a mass spectrometry method and of a competitive immunosensor for the detection of prolamins in gluten-free foods. An LC-ESI-MS/MS method was developed, based on targeted analysis with selected reaction monitoring signal acquisition, for the identification of gluten in various cereals potentially toxic for celiac patients. In addition, the work focused on a competitive immunosensor for the detection of gliadin as a rapid screening method for flours. Both systems were optimized using mixtures of rice flour spiked with gliadin, avenins, hordeins and secalins in the case of the LC-MS/MS system, and with gliadin alone in the case of the sensor. Finally, the analytical systems were validated by analysing both raw materials (flours) and foods (biscuits, pasta, bread, etc.). The approach developed in mass spectrometry opens the way to a multiplex screening test for assessing the safety of products declared gluten-free, while further studies will be needed to find extraction conditions compatible with the competitive immunoassay, which for now is applicable only to the analysis of flours extracted with ethanol. The third chapter of this thesis concerns the development of new methods for the detection of HPV, Chlamydia and Gonorrhoeae in biological fluids. A substrate consisting of paper strips was chosen because it constitutes a valid detection platform, offering advantages thanks to its low cost, the possibility of building portable devices and of reading the result visually without the need for instrumentation. The methodology developed is very simple, does not require complex instrumentation and is based on isothermal rolling-circle amplification of the target. Moreover, of fundamental importance is the use of coloured nanoparticles which, having been functionalized with a DNA sequence complementary to the amplified target produced by the RCA, allow its detection with the naked eye by means of paper filters.
These strips were tested on real samples, allowing discrimination between positive and negative samples in a short time (10-15 minutes), opening a new route towards tests that are highly competitive with those currently on the market.

Relevance:

90.00%

Publisher:

Abstract:

Plane model extraction from three-dimensional point clouds is a necessary step in many different applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud using the knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve some constraints defined by knowledge related to the object (e.g., the angles between faces); and second, the models of the planes are estimated based on these data using a novel multi-constraint RANSAC. We performed experiments in the clustering and RANSAC stages, which showed that the proposed method performed better than state-of-the-art methods.
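For context, a basic single-plane RANSAC step (without the paper's clustering and multi-constraint extensions) can be sketched as:

```python
import numpy as np

# Basic single-plane RANSAC sketch for context (the paper's multi-constraint,
# multi-plane variant additionally clusters faces and enforces inter-face angles).
rng = np.random.default_rng(1)

def fit_plane_ransac(points, iters=200, threshold=0.01):
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= np.linalg.norm(normal)
        d = -normal @ p0
        dist = np.abs(points @ normal + d)          # point-to-plane distances
        inliers = int(np.sum(dist < threshold))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

pts = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                       rng.normal(0, 0.005, 500)])   # noisy z ~ 0 plane
model, n_in = fit_plane_ransac(pts)
print("normal:", np.round(model[0], 2), "inliers:", n_in)
```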

Relevance:

90.00%

Publisher:

Abstract:

In this paper we investigate a Bayesian procedure for the estimation of a flexible generalised distribution, notably the MacGillivray adaptation of the g-and-κ distribution. This distribution, described through its inverse cdf or quantile function, generalises the standard normal through extra parameters which together describe skewness and kurtosis. The standard quantile-based methods for estimating the parameters of generalised distributions are often arbitrary and do not rely on computation of the likelihood. MCMC, however, provides a simulation-based alternative for obtaining the maximum likelihood estimates of the parameters of these distributions or for deriving posterior estimates of the parameters through a Bayesian framework. In this paper we adopt the latter approach. The proposed methodology is illustrated through an application in which the parameter of interest is slightly skewed.
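For reference, the g-and-k quantile function in its usual parameterisation (location A, scale B, skewness g, kurtosis k, with c conventionally fixed at 0.8) and the inverse-CDF sampling that simulation-based inference builds on can be sketched as follows; the MCMC machinery of the paper is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

# Sketch of the g-and-k quantile function in a common parameterisation; the
# Bayesian/MCMC estimation procedure of the paper is not reproduced here.

def gk_quantile(u, A, B, g, k, c=0.8):
    z = norm.ppf(u)
    return A + B * (1 + c * np.tanh(g * z / 2.0)) * (1 + z ** 2) ** k * z

# Because only the quantile function is available in closed form, samples are
# drawn by pushing uniforms through it, which is the basis of simulation-based
# (MCMC or likelihood-free) inference for this family.
rng = np.random.default_rng(0)
sample = gk_quantile(rng.uniform(size=5), A=0.0, B=1.0, g=0.5, k=0.1)
print(np.round(sample, 3))
```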

Relevance:

90.00%

Publisher:

Abstract:

In a deregulated electricity market, optimizing dispatch capacity and transmission capacity are among the core concerns of market operators. Many market operators have capitalized on linear programming (LP) based methods to perform market dispatch operations in order to exploit the computational efficiency of LP. In this paper, the search capability of genetic algorithms (GAs) is utilized to solve the market dispatch problem. The GA model is able to solve pool-based capacity dispatch while optimizing the interconnector transmission capacity. Case studies and corresponding analyses are performed to demonstrate the efficiency of the GA model.
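A toy sketch of GA-based dispatch (not the paper's model; the unit limits, cost coefficients and demand are invented, and the demand balance is enforced with a simple penalty) could be:

```python
import numpy as np

# Toy GA sketch of pool-based dispatch (illustrative only): choose generator
# outputs minimising quadratic fuel cost, with a penalty enforcing demand balance.
rng = np.random.default_rng(2)
pmin, pmax = np.array([50.0, 40.0, 30.0]), np.array([200.0, 150.0, 100.0])
a, b = np.array([0.010, 0.012, 0.015]), np.array([20.0, 18.0, 22.0])   # invented cost coefficients
demand = 300.0

def fitness(p):
    cost = np.sum(a * p ** 2 + b * p)
    return cost + 1e4 * abs(np.sum(p) - demand)        # penalise demand mismatch

pop = rng.uniform(pmin, pmax, size=(60, 3))
for _ in range(200):
    parents = pop[np.argsort([fitness(ind) for ind in pop])[:20]]   # truncation selection
    children = parents[rng.integers(0, 20, 60)] + rng.normal(0, 2.0, (60, 3))
    pop = np.clip(children, pmin, pmax)                              # respect unit limits
best = min(pop, key=fitness)
print("dispatch [MW]:", np.round(best, 1), "penalised cost:", round(fitness(best), 1))
```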

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a new differential evolution (DE) based approach to power system optimal available transfer capability (ATC) assessment is presented. Power system total transfer capability (TTC) is traditionally solved by the repeated power flow (RPF) method and the continuation power flow (CPF) method. These methods are based on the assumption that the outputs of the source-area generators are increased in identical proportion to balance the load increment in the sink area. A new approach based on a DE algorithm to generate an optimal dispatch of both source-area generators and sink-area loads is proposed in this paper. This new method can compute the ATC between two areas with a significant improvement in accuracy compared with the traditional RPF- and CPF-based methods. A case study using a 30-bus system is given to verify the efficiency and effectiveness of this new DE-based ATC optimization approach.
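A minimal sketch of the DE idea applied to a transfer-capability-style objective (using SciPy's differential evolution on an invented linearised two-line network, not the paper's power-flow model) might be:

```python
from scipy.optimize import differential_evolution

# Toy DE sketch of the idea behind the paper (not its power-flow model): find the
# source-side generation shifts g1, g2 that maximise the extra area-to-area
# transfer while keeping two monitored line flows (approximated with invented
# linear sensitivity factors) within their limits.
line_limits = [120.0, 90.0]
sensitivity = [[0.6, 0.3],     # flow change on line 1 per MW of g1, g2
               [0.2, 0.5]]     # flow change on line 2 per MW of g1, g2
base_flow = [80.0, 40.0]

def neg_transfer(g):
    g1, g2 = g
    penalty = 0.0
    for lim, s, f0 in zip(line_limits, sensitivity, base_flow):
        overload = max(0.0, f0 + s[0] * g1 + s[1] * g2 - lim)
        penalty += 1e3 * overload                 # penalise line overloads
    return -(g1 + g2) + penalty                   # maximise total shifted generation

result = differential_evolution(neg_transfer, bounds=[(0, 200), (0, 200)], seed=0)
print("extra transfer [MW]:", round(result.x.sum(), 1), "shifts:", result.x.round(1))
```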

Relevance:

90.00%

Publisher:

Abstract:

This paper describes experiments conducted in order to simultaneously tune 15 joints of a humanoid robot. Two Genetic Algorithm (GA) based tuning methods were developed and compared against a hand-tuned solution. The system was tuned in order to minimise tracking error while at the same time achieving smooth joint motion. Joint smoothness is crucial for the accurate calculation of online ZMP estimation, a prerequisite for a closed-loop, dynamically stable humanoid walking gait. Results in both simulation and on a real robot are presented, demonstrating the superior smoothness performance of the GA-based methods.
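A sketch of the kind of fitness function such a GA might minimise, combining tracking error with an assumed smoothness penalty on discrete joint accelerations, is shown below; the weighting and the penalty form are assumptions for illustration, not the paper's objective:

```python
import numpy as np

# Illustrative fitness combining joint tracking error with a smoothness penalty
# (mean squared discrete acceleration); the weight w_smooth is an assumption.

def fitness(reference, executed, w_smooth=0.1):
    tracking = np.mean((reference - executed) ** 2)
    accel = np.diff(executed, n=2, axis=0)        # second difference ~ acceleration
    smoothness = np.mean(accel ** 2)
    return tracking + w_smooth * smoothness       # lower is better for the GA

t = np.linspace(0, 1, 101)
ref = np.sin(2 * np.pi * t)[:, None].repeat(15, axis=1)   # 15 joints, as in the paper
exe = ref + np.random.default_rng(3).normal(0, 0.01, ref.shape)
print("fitness:", fitness(ref, exe))
```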