894 results for Non Standard Analysis


Relevance:

90.00%

Publisher:

Abstract:

This research begins by setting the objective of identifying the geometric parameters that are exclusive to the form-generation process and relating them to the invariants associated with digital fabrication applied to Architecture. The intention is to recover geometry as the main tool of the design process, broadening its scope by establishing a relationship with digital fabrication processes. The first chapter describes the background and historical context, focusing in particular on the influence of the ability to define complex geometries digitally through the application of algorithms. In the earliest examples, the Architect's approach to projects with complex non-Euclidean geometries still lacked precision in communicating the conceived geometry for its execution on site. Construction techniques forced the acceptance of a deviation tolerance between design and built work, and predictions of how that geometry would behave could not guarantee its final performance. It was not until the introduction of CAD tools into the architectural ideation process that the Architect became able to generate geometries that cannot be represented by analogue means. However, the impossibility of transferring the designed geometry to construction practice prevented the realisation of a complete process, except in the few cases collected in this text. “The chronological analysis of the references establishes, as an essential aspect for the construction of complex geometries, first the capacity to define and communicate the geometry precisely and unequivocally, and then the capacity to analyse the performance of that geometric proposal”.
With this first conclusion established, the historical context chapter goes on to focus on the application of digital techniques first in the design process and then in construction. The case studies clearly identify the Guggenheim Museum Bilbao in 1992 as a turning point for the generation of complex forms using CAD software. The essential reason for choosing this project as the first digital project is the use of digital geometry definition tools for its unequivocal reproduction on site. “The digital revolution has given the Architect the possibility of abandoning architectural typologies based on geometric-constructive restrictions. The application of digital fabrication techniques has made it possible to design independently of the construction system and with formal freedom. In this new context, performance constitutes the new conceptual limits, since access to information on the behaviour of the alternatives that each geometry entails demands that the Architect prioritise the objectives and formulate them as a coherent set of parameters”. Projects that use digital tools to resolve the different stages of the design process have increased exponentially from 1992 to the present day. Despite the significant rise of computer-aided design techniques, the main challenge remains linking the proposed geometries and materials with the capabilities of manufacturing and on-site assembly techniques. Design for manufacturing in a digital environment is a mature technology in other industries such as aerospace and automotive, and even consumer products and decoration; in the construction sector, however, it is an immature and disjointed system. The particularities of the construction industry have not yet been addressed in full, and research proposals in this field up to 2015 have focused on parts of the process rather than on the process as a whole. “The main obstacle to the standardisation and global implementation of a digital process from the origin of the form through to construction is the absence of an integrated protocol that brings together manufacturing, economic and on-site constraints along with performance evaluation during the early stages of the project”. Chapter 3 studies the different form-generation processes. A definition of “form” specific to the scope of this research is proposed, understood to include the outer envelope and the organisational set of connected interior spaces; it is therefore not exclusive of the interior. The aim of this study is to analyse and classify the processes for the digital generation of forms in the various projects selected as emblematic of each typology. It is concluded that the approach to this process is highly varied and complex, with segregated and uncoordinated application among the different agents that must intervene. In an analogue form-generation process the parameters involved are partly conscious and partly unconscious or learned. The Architect only has control over the conscious part of the parameters to be integrated into the design and, depending on his or her knowledge and abilities, will be able to handle a limited number of parameters.
The learned part remains in the unconscious and directs the analogue process, contributing aesthetic preconceptions acquired during training and inherent to the cultural environment. “The use of digital tools based on performance evaluation during the form-selection process allows the Architect to know 'in real time' the performance, across the set of evaluation criteria, of the set of geometric alternatives to the proposal previously defined by architectural intuition. The process described does not seek to identify an optimal solution but to assist the Architect in the form-generation process through the continuous evaluation of the most suitable directional vectors proposed by the generative procedure”. Defining complexity in the generation and production of forms, in relation to a global or integrated parametric digital design process, is essential in order to establish a protocol that optimises its management. “Complexity is proposed to be defined as the factor resulting from multiplying the number of agents involved by the number of shared parameters and interactions involved in the form-generation process, divided by the complexity of exchanging digital information from the origin through to the fabrication and construction phase”. Once the processes of digital generation of Architecture have been analysed, the aim is to identify the geometric parameters that define the digital design process, where Design is understood as the process that spans from the proposal of an initial form based on the Architect's intuition, through the generation and evaluation of variants, to the subsequent digital definition for production, whether of an object, a system or the entire Project. At present the design process is discontinuous and linear, with parameters organised by the disciplines into which professional responsibilities in the construction industry are structured. To simplify their identification and listing, the parameters have been grouped according to these fields of knowledge. Invariant parameters are understood as those that are independent of architectural typologies or that depend on the form-generation process itself. “The list of parameters involved in a form-generation process is an abstraction of a complex reality. Parameterising the decisions involved in the selection of a given form as 'well defined problems' is impossible. The process this thesis describes treats this condition as an element that enhances the generative procedure itself, through the richness that the subjectivity contributed by the design team provides”. The second essential part of this research seeks to extract the constraints inherent in the state of the art of digital fabrication and subsequently to incorporate them into the digital processes of defining architectural form. “The integration of the constraints derived from digital fabrication and construction techniques into the form-generation process, from the standpoint of Architecture, must refer to the geometric conditions associated with each construction system, material and fabrication technique. Geometry is, moreover, the link that makes it possible to associate the set of performance parameters selected for a Project with digital fabrication systems”. These geometric conditions, obtained from the analysis of each digital fabrication system, have been termed “geometric invariants”.
This term encompasses fabrication dimensional limits as well as compatible materials, manufacturing and installation tolerances, and the associated performance qualities. The aim of this proposal is to use geometry, the Architect's fundamental and characteristic tool, as the nexus between the complex and heterogeneous set of parameters previously listed and analysed. To this end, the geometric conditions derived from the compatible digital fabrication systems have been condensed into tables specific to each performance parameter (see Appendix 1). The study and evaluation of the capabilities and objectives of the different software platforms available, and of the professional experiences assessed in the projects presented, lead to the conclusion that the proposed digital platform for integral, multi-parametric design of architectural forms requires a specific interoperability protocol that is not yet universally established. At present, the strategy for normalising and universalising the regulatory context governing interoperability centres on the figure of the manager known as the “BIM manager”. The attributions and roles of this figure focus on managing the container rather than the content (definition of exchange formats, levels of development (LOD) of components or construction assemblies, clash detection and documentation of the model itself). Since this area is a necessary development for the proposed universalisation of an integrated design-for-digital-fabrication system, the present research contributes an organisational chart and an associated protocol. The protocol: 1. Establishes the responsibility for identifying and defining the information that must determine the process of generating and developing the architectural form. 2. Defines the appropriate digital means of generating the Project geometry, including the precision required for each component and the level of detail needed for its unequivocal export to the fabrication process. 3. Defines the timing of each design stage, identifying a corresponding level of detail. 4. Fits this organisational chart within the new structures proposed in a BIM environment, ensuring that no overlaps or gaps arise with the attributions identified for the BIM Manager. “The Architect must direct the generation protocol in coordination with digital production systems so that full integration is achieved. The protocol must assist the form-generation process through the real-time evaluation of the performance of each variant. Communication between digital tools is essential to allow an agile transmission of information. A protocol adapted to the objectives and operational needs of each project must be established, since the standardisation of a single protocol is not possible”. A strategic decision when planning a shared digital design platform is whether to opt for a single digital model or for several federated digital models. Each way of working has strengths and weaknesses; nevertheless, within the scope of this research it has been concluded that an integrated design process incorporating the performance and conceptual evaluation defined in Chapter 3 necessarily requires several distinct software models that relate to one another through an automated communication protocol.
A platform based on a federated model consists of establishing a communication protocol between the software programs used by each discipline. In this mode of operation each design team must establish the basis of communication according to the number and type of programs and digital processes to be used. This research proposes a protocol based on the information-exchange standards that structure any parametric form-generation process. “The research establishes the use of evolutionary algorithms as the current optimal system for developing a form-generation process based on the integration and coordination of geometric invariants derived from a set of performance and construction objectives. However, in the practical case study carried out it was verified that performance evaluation cannot yet be performed in a single tool, and therefore the selection of optimal genetic variants must be executed manually and cumulatively. The process must be carried out in a federated manner for the evolutionary selection of the dimensional geometric invariants”. The evaluation of the integration protocol and of the geometric conditions obtained, as geometric parameters controlling the possible compatible forms, is carried out by applying them in a practical case study. The exercise simulates multidisciplinary collaboration with federated models on different platforms. The size and constructional complexity of the project were chosen so that each of the selected performance parameters could be fully developed. Pursuing the same objective set for the performance parameters, the constructive-structural typology selected for the exercise allows the application of all the associated geometric invariants. The aim of this case study is to evaluate the capacity to alter the initially proposed form through the performance evaluation of a set of geometric variants generated from a given dimensional parameter. For this process to be meaningful, each of the variants must first be validated against the geometric limitations of the intended fabrication and assembly systems. The interest of the conclusions obtained lies in the identification of a geometric variant distant from the initially symmetric solution as the optimal solution for the set of selected parameters. At the same time, it was found that the participation of a set of multidisciplinary parameters representing the complex reality of architectural objectives favours the emergence of genetic variations with better performance than the initial intuition. “Typological inheritances are a limit on the imagination of formal variants in the architectural ideation process. The exercise carried out shows that, even in cases where the optimal solution appears obvious, a random variant can improve on its overall performance. Knowing the geometric conditions of the digital fabrication techniques compatible with the set of parameters selected by the Architect to direct the process ensures that the results of the evolutionary algorithm employed are constructively viable. Augmenting human imagination with genuinely buildable geometries is the ultimate objective of this thesis”.
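The two statements of the complexity definition quoted in this thesis (above, and again in the English abstract below) can be condensed into a single expression. The symbols are ours, introduced only for illustration, and follow the English wording (agents, interactions, and the share of standardized, loss-free information exchange):

```latex
\text{Complexity} \;=\; \frac{N_{\text{agents}} \times N_{\text{interactions}}}{S_{\text{exchange}}}
```

where N_agents is the number of agents involved in the form-generation process, N_interactions the number of shared parameters and interactions between them, and S_exchange the fraction of the design-to-fabrication information exchange that is standardized and protected against information loss.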
ABSTRACT The architectural form-generation process is shifting from analogue to digital. Digital technology has changed the way we design, empowering Architects and Engineers to define precisely any complex geometry envisioned. At the same time, the construction industry, following the aeronautical and automotive industries, is implementing digital manufacturing techniques to improve efficiency and quality. Consequently, construction complexity is no longer tied to geometric complexity; it is instead associated with coordination with digital manufacturing capacities. Unfortunately, it is widely held that non-standard geometries, even when proposed with performance-optimization criteria, are only suitable for projects with unrestricted budgets. Furthermore, the lack of a coordinated export protocol and of geometry management between design and construction is preventing emergent form-generation processes from becoming widespread in built projects. The first objective of the present research is to identify the exclusive form-generation parameters related to the geometrical restraints of digital manufacturing. The intention is to use geometry as the form-generation tool and to integrate digital manufacturing capacities into the first stages of the project. The first chapter of this text describes the historical context of the investigation, focusing on the relationship between the accurate definition of non-standard geometry and its construction. In the first built examples of non-Euclidean geometries, communication between design and construction was based on partial and imprecise analogue documentation. Deficient communication led to geometry being adapted on site, leaving the final form outside the Architect's control. Computer Aided Design enabled Architects to define unequivocally complex geometries that were previously impossible to communicate. “The unequivocal definition of the Form, and its communication between design and construction, is essential for complex-geometry Projects”. The second chapter focuses on the application of digital technologies in the form-finding process and in site construction. The selected case studies identify a clear inflection point in 1992 with the Guggenheim Museum in Bilbao. The singularity of this project was the use of aeronautics software to define the complex geometry of the external envelope digitally so that the contractor could build it. “The digital revolution has given the Architect the capacity to design buildings beyond the architectural archetypes driven by geometric-constructive limitations. The application of digital manufacturing techniques has enabled free-form construction without geometrical limitations. In this new context performance sets the new conceptual boundaries, since the behavior of each possible geometry can be compared and analyzed beforehand. The role of the Architect is to prioritize the performance and architectural objectives of each project in a complete and coherent set of parameters”. Projects using digital tools to solve the various stages of the design process have increased exponentially from 1992 to the present day. Despite the significant rise of computer-aided design techniques, the main challenge remains linking the geometries and materials proposed in each design with the capabilities of digital manufacturing techniques. Design for manufacturing in a digital environment is a mature technology in other industries such as aerospace and automotive, and even consumer products and decoration, but in the construction sector it is an immature and disjointed system.
The peculiarities of the construction industry have not yet been addressed in their entirety, and research proposals made in this area up to 2015 have focused on separate parts of the process rather than on the process as a whole. “The main obstacle to global standardization and implementation of a complete digital process from form-finding to the construction site is the lack of an integrated protocol that brings together manufacturing, economic and commissioning limitations, together with the performance evaluation of each possible form”. The different form-generation processes are studied in Chapter 3. The introduction of this chapter gives a definition of "form" specific to this research: form is identified with the outer envelope geometry, including the organizational set of indoor spaces connected to it; it is therefore not exclusive of the interior. The aim of this study is to analyze and classify the main digital form-generation processes, using selected projects as emblematic of each type. The approach to this process is varied and complex, applied in a segregated and uncoordinated way by the different actors who have to intervene. In an analogue form-generation process the parameters involved are partly conscious and partly unconscious or learned. The Architect has control only over the conscious part of the parameters to be integrated into the design and, according to their knowledge and abilities, can handle only a limited number of them. There is also a learned aesthetic prejudice that steers the form-generation process towards a specific geometry, leaving performance and optimization criteria out of the decision-making process. “Using performance-evaluation digital tools during the form-finding process provides real-time comparative information to the Architect, enabling geometry selection based on its performance. The generative form-generation process described in this document does not aim to identify the optimum geometry for each set of parameters. The objective is to provide quick information, at each generation, on which direction is most favorable for the selected performance parameters”. Defining manufacturing complexity in relation to a global, integrated process of digital design for manufacture is essential for establishing an efficient management protocol. “The complexity associated with design for production in Architecture is proposed as the number of different agents involved in the process multiplied by the number of interactions required between them, divided by the percentage of the information interchange that is standardized and proof against information loss”. Design in architecture is a multi-objective process by definition. Therefore, addressing a generation process linked to a set of non-coherent parameters requires the selection of an adequate generative algorithm and the interaction of the architect. During the second half of the twentieth century and the early twenty-first century, various mathematical algorithms for multi-parametric digital design were developed. Heuristic algorithms are the most suitable for architectural projects because of the nature of such projects. Their advantage is the ability to handle efficiently large-scale optimization cases where a large number of design objectives and variables are involved. These generative processes do not pursue the optimum solution; in fact, optimality cannot be proved with such algorithms.
This is not a problem in architectural design, where the final goal is to guide the form-finding process towards better performance within the initial direction provided by the architect. This research has focused on genetic algorithms because of their capacity to generate geometric alternatives in multiple directions and to evaluate their fitness against a set of specified parameters in a single process. "Any protocol seeks to achieve standardization. The design-to-manufacturing protocol aims to provide a coordinated and coherent form-generation process between a set of design parameters and the geometrical requirements of the manufacturing technique. The protocol also provides an information-exchange environment where a communication path exists and the level of information is ensured. The research is focused on the process because each project will have its own singularities and parameters, but the process will stay the same. Again, the development of a specific tool is not a goal of the research; the intention is to provide an open-source protocol that is valid for any set of tools”. Once the digital generation processes have been analyzed and classified, the next step is to identify the geometric parameters that define the digital design process. The design process is defined as spanning from the initial shape proposal, based on the intuition of the architect, to the generation, evaluation, selection and production of alternatives, whether of an object, a system or the entire project. The current design process in Architecture is discontinuous and linear, divided into the disciplines in which the construction industry is structured. The proposal is to unify all relevant parameters in one process. The parameters are listed by field of knowledge for internal classification, but the matrices used to determine parameter relationships are combined. “A multi-parameter determination of the form-finding process is the integration of all the measurable decisions lying behind the Architect's intuition. It is not possible to formulate and solve architectural design with an algorithm, and it is not the intention of this research to do so. The process aims to integrate a selection of parameters in one open protocol, using geometry as a common language. There is no optimum solution at any step of the process; the outcome is an evaluation of the performance of all the form variations, to assist the Architect in selecting the preferable solution for the project”. The research continues with the geometrical restrictions of today's digital manufacturing techniques; once determined, these are integrated into the form-finding process. “Digital manufacturing techniques are integrated in the form-finding process using geometry as a common language. Geometric restraints define the boundary for the performance-driven parametric form-finding process. Geometrical limitations are classified by material and constructive system”. Choosing between one digital model and several federated models is a strategic decision when planning a digital design-for-manufacturing protocol. Each of the working models has strengths and weaknesses; nevertheless, for the purposes of this research federated models are required to manage the different performance-evaluation software platforms. A protocol based on federated models shall establish a communication process between software platforms and consultants. The manager shall integrate each discipline's requirements, defining the communication basis.
The proposed protocol is based on information-exchange standards, adapted to the singularities of the digital manufacturing industry. “The research concludes that evolutionary algorithms are currently the best system for developing a generative form-finding process based on the integration and coordination of a set of performance and constructive objectives. However, for application in professional practice and for standardization, performance evaluation cannot yet be done in a single tool, and therefore the selection of optimal genetic variants must be run in several iterations with a cumulative result. Consequently, the evaluation process within the geometrical restraints shall be carried out with federated models coordinated through the information-exchange protocol”. The integration protocol and the geometric constraints are evaluated by applying them in a practical case study. The exercise simulates multidisciplinary collaboration across software platforms with federated models. The size and construction complexity of the project have been chosen so that each of the selected parameters can be fully developed. Pursuing the same objective set for the performance parameters, the constructive and structural type selected for the exercise allows the application of all geometric invariants associated with the set of parameters selected. The main goal of the case study is to prove the capacity of the manufacturing-integrated form-finding process to generate geometric alternatives to the initial form with improved performance, while following the restrictions determined by the compatible digital manufacturing technologies. The process is divided into consecutive analyses, each limited by the geometrical conditions and integrated into an overall evaluation. The interest of this process lies in the result: a non-intuitive form that performs better than a doubly symmetric form. The second conclusion is that the evaluation of one parameter alone does not justify the exploration of complex geometry variations, but when there is a set of parameters with a multidisciplinary approach the less obvious solution emerges as the better-performing form. “Architectural typologies impose limitations on the Architect's capacity to imagine formal variations. The case study and the research conclusions prove that even in situations where the intuitive solution appears to be the optimum, random variations can perform better when the evaluation integrates all parameters. The capacity to foresee the geometrical properties linking each design parameter with compatible manufacturing technologies ensures that the result of the form-finding process is constructively viable. Finally, the proposal of a complete process in which geometry alternatives are generated beyond the Architect's intuition and performance-evaluated against a set of parameters previously selected and coordinated with the manufacturing requirements is the final objective of the Thesis”.
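As a concrete illustration of the evolutionary form-finding loop described in both abstracts, the sketch below runs a generic genetic algorithm over a small vector of dimensional parameters, discarding variants that violate fabrication limits (the "geometric invariants") before scoring a weighted set of performance objectives. All parameter names, bounds and scoring functions are hypothetical placeholders introduced for illustration; they are not the thesis's actual protocol, parameters or toolchain.

```python
import random

# Hypothetical fabrication limits ("geometric invariants"): each dimensional
# parameter must stay within the range a compatible fabrication system allows.
FABRICATION_LIMITS = [(2.0, 12.0), (0.1, 0.6), (1.0, 8.0)]  # e.g. span, thickness, rise (m)

def feasible(candidate):
    """Reject variants outside the geometric limits of the fabrication systems."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(candidate, FABRICATION_LIMITS))

def performance(candidate):
    """Placeholder multi-parameter evaluation (higher is better).
    In a federated setup each term would come from a different analysis tool."""
    span, thickness, rise = candidate
    structural = -abs(span / rise - 4.0)        # prefer a span/rise ratio near 4
    material   = -thickness * span              # penalise material use
    daylight   = rise                           # taller section admits more light
    weights = (0.5, 0.3, 0.2)
    return sum(w * s for w, s in zip(weights, (structural, material, daylight)))

def random_candidate():
    return [random.uniform(lo, hi) for lo, hi in FABRICATION_LIMITS]

def mutate(candidate, rate=0.2):
    child = candidate[:]
    for i, (lo, hi) in enumerate(FABRICATION_LIMITS):
        if random.random() < rate:
            child[i] += random.gauss(0.0, 0.05 * (hi - lo))
    return child

def crossover(a, b):
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(generations=50, population_size=30, seed=None):
    if seed is not None:
        random.seed(seed)
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=performance, reverse=True)
        parents = scored[: population_size // 2]          # truncation selection
        children = []
        while len(children) < population_size - len(parents):
            child = mutate(crossover(random.choice(parents), random.choice(parents)))
            if feasible(child):                           # enforce geometric invariants
                children.append(child)
        population = parents + children
    return max(population, key=performance)

if __name__ == "__main__":
    best = evolve(seed=1)
    print("best variant:", [round(x, 3) for x in best], "score:", round(performance(best), 3))
```

In the federated workflow described above, each term of the placeholder score would instead be returned by a separate evaluation tool through the information-exchange protocol.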

Relevance:

90.00%

Publisher:

Abstract:

This paper explores the effects of non-standard monetary policies on international yield relationships. Based on a descriptive analysis of international long-term yields, we find evidence that long-term rates followed a global downward trend prior to as well as during the financial crisis. Comparing interest rate developments in the US and the eurozone, it is difficult to detect a distinct impact of the first round of the Fed’s quantitative easing programme (QE1) on US interest rates for which the global environment – the global downward trend in interest rates – does not account. Motivated by these findings, we analyse the impact of the Fed’s QE1 programme on the stability of the US-euro long-term interest rate relationship by using a CVAR (cointegrated vector autoregressive) model and, in particular, recursive estimation methods. Using data gathered between 2002 and 2014, we find limited evidence that QE1 caused the break-up or destabilised the transatlantic interest rate relationship. Taking global interest rate developments into account, we thus find no significant evidence that QE had any independent, distinct impact on US interest rates.
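The cointegrated VAR analysis described here can be sketched with standard time-series tooling. The snippet below uses the Johansen test and VECM estimation from statsmodels on simulated stand-in series (the paper itself uses observed US and euro-area long-term yields, 2002-2014); the expanding-sample loop only illustrates the idea of recursive estimation, not the paper's exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Simulated stand-ins for monthly US and euro-area 10-year yields: two
# near-integrated series sharing a common stochastic trend.
rng = np.random.default_rng(0)
n = 156
common = np.cumsum(rng.normal(0, 0.1, n))
data = pd.DataFrame({
    "us_10y": 4.0 + common + rng.normal(0, 0.05, n),
    "ea_10y": 4.2 + common + rng.normal(0, 0.05, n),
})

# Johansen trace test for the cointegration rank of the two yield series.
jres = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace statistics:", jres.lr1)          # statistics for r<=0, r<=1
print("5% critical values:", jres.cvt[:, 1])

# Recursive estimation: re-run the test on expanding samples to see whether
# the long-run relationship destabilises around a policy event such as QE1.
for end in range(60, n + 1, 12):
    sub = coint_johansen(data.iloc[:end], det_order=0, k_ar_diff=2)
    print(f"obs 1..{end}: trace statistic for r=0 -> {sub.lr1[0]:.2f}")

# A VECM with one cointegrating relation gives the adjustment coefficients.
vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
print(vecm.alpha)   # speed of adjustment toward the transatlantic relationship
```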

Relevance:

90.00%

Publisher:

Abstract:

The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community-driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into the ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular the ways in which advanced research methods can lead into language technologies and vice versa, and the ways in which the skills around teaching can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse), and the ability to reach non-standard audiences, those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics, but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material are engraved or otherwise inscribed, addressing the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains, both within and outside academia, that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting-edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions that all scholars need to be aware of and self-critical about.

Relevance:

90.00%

Publisher:

Abstract:

Background: Indigenous Australians are at high risk for cardiovascular disease and type 2 diabetes. Carotid artery intimal medial thickness (CIMT) and brachial artery flow-mediated vasodilation (FMD) are ultrasound-imaging-based surrogate markers of cardiovascular risk. This study examines the relative contributions of traditional cardiovascular risk factors to CIMT and FMD in adult Indigenous Australians with and without type 2 diabetes mellitus. Method: One hundred and nineteen Indigenous Australians were recruited. Physical and biochemical markers of cardiovascular risk, together with CIMT and FMD, were measured for all subjects. Results: Fifty-three Indigenous Australian subjects (45%) had type 2 diabetes mellitus. There was a significantly greater mean CIMT in diabetic versus non-diabetic subjects (p = 0.049). In the non-diabetic group, non-parametric analyses showed significant correlations between CIMT and age (r = 0.64, p < 0.001), systolic blood pressure (r = 0.47, p < 0.001) and non-smoking status (r = -0.30, p = 0.018). In the diabetic group, non-parametric analysis showed correlations of CIMT with age (r = 0.36, p = 0.009) and duration of diabetes (r = 0.30, p = 0.035) only. Adjusting for age, sex, smoking and history of cardiovascular disease, HbA1c became the sole significant correlate of CIMT (r = 0.35, p = 0.01) in the diabetic group. In non-parametric analysis, age was the sole significant correlate of FMD (r = -0.31, p = 0.013), and only in non-diabetic subjects. Linear regression analysis showed significant associations between CIMT and age (t = 4.6, p < 0.001), systolic blood pressure (t = 2.6, p = 0.010), HbA1c (t = 2.6, p = 0.012), smoking (t = 2.1, p = 0.04) and fasting LDL-cholesterol (t = 2.1, p = 0.04). There were no significant associations between FMD and the examined cardiovascular risk factors with linear regression analysis. Conclusions: CIMT appears to be a useful surrogate marker of cardiovascular risk in this sample of Indigenous Australian subjects, correlating better than FMD with established cardiovascular risk factors. A lifestyle intervention programme may alleviate the burden of cardiovascular disease in Indigenous Australians by reducing central obesity, lowering blood pressure, correcting dyslipidaemia and improving glycaemic control. CIMT may prove to be a useful tool to assess efficacy of such an intervention programme. © 2004 Elsevier Ireland Ltd. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The mammalian transcriptome harbours shadowy entities that resist classification and analysis. In analogy with pseudogenes, we define pseudo-messenger RNA to be RNA molecules that resemble protein-coding mRNA, but cannot encode full-length proteins owing to disruptions of the reading frame. Using a rigorous computational pipeline, which rules out sequencing errors, we identify 10,679 pseudo-messenger RNAs (approximately half of which are transposon-associated) among the 102,801 FANTOM3 mouse cDNAs: just over 10% of the FANTOM3 transcriptome. These comprise not only transcribed pseudogenes, but also disrupted splice variants of otherwise protein-coding genes. Some may encode truncated proteins, only a minority of which appear subject to nonsense-mediated decay. The presence of an excess of transcripts whose only disruptions are opal stop codons suggests that there are more selenoproteins than currently estimated. We also describe compensatory frameshifts, where a segment of the gene has changed frame but remains translatable. In summary, we survey a large class of non-standard but potentially functional transcripts that are likely to encode genetic information and effect biological processes in novel ways. Many of these transcripts do not correspond cleanly to any identifiable object in the genome, implying fundamental limits to the goal of annotating all functional elements at the genome sequence level.
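A toy version of the reading-frame check underlying such a pipeline is sketched below: given a candidate coding sequence, it reports in-frame premature stop codons (including the opal codon, TGA, mentioned in the abstract). It is a simplified illustration written for this summary, not the FANTOM3 pipeline itself.

```python
STOP_CODONS = {"TAA": "ochre", "TAG": "amber", "TGA": "opal"}

def premature_stops(cds: str):
    """Return (position, codon, name) for in-frame stop codons before the final one.

    `cds` is assumed to start at the annotated ATG; a transcript with any hit
    here cannot encode the full-length protein, one signature of a
    pseudo-messenger RNA.
    """
    cds = cds.upper().replace("U", "T")
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    hits = []
    for idx, codon in enumerate(codons[:-1]):       # ignore the terminal stop
        if codon in STOP_CODONS:
            hits.append((idx * 3, codon, STOP_CODONS[codon]))
    return hits

# Example: a disrupted open reading frame with an internal opal (TGA) codon.
example = "ATGGCCTGAAAAGGCTAA"
print(premature_stops(example))   # [(6, 'TGA', 'opal')]
```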

Relevance:

90.00%

Publisher:

Abstract:

This study examined the use of non-standard parameters to investigate the visual field, with particular reference to the detection of glaucomatous visual field loss. Evaluation of the new perimetric strategy for threshold estimation, FASTPAC, demonstrated a reduction in the examination time of normals compared to the standard strategy. Despite an increased within-test variability, the FASTPAC strategy produced a similar mean sensitivity to the standard strategy, reducing the effects of patient fatigue. The new technique of Blue-Yellow perimetry was compared to White-White perimetry for the detection of glaucomatous field loss in OHT and POAG. Using a database of normal subjects, confidence limits for normality were constructed to account for the increased between-subject variability with increasing age and eccentricity, and for the greater variability of the Blue-Yellow field compared to the White-White field. Individual differences in ocular media absorption had little effect on Blue-Yellow field variability. Total and pattern probability analysis revealed five of 27 OHTs to exhibit Blue-Yellow focal abnormalities; two of these patients subsequently developed White-White loss. Twelve of the 24 POAGs revealed wider and/or deeper Blue-Yellow loss compared with the White-White field. Blue-Yellow perimetry showed good sensitivity and specificity characteristics; however, lack of perimetric experience and the presence of cataract influenced the Blue-Yellow visual field and may confound the interpretation of Blue-Yellow visual field loss. Visual field indices demonstrated a moderate relationship to the structural parameters of the optic nerve head measured using scanning laser tomography. No abnormalities in Blue-Yellow or Red-Green colour CS were apparent for the OHT patients. A greater vulnerability of the SWS pathway in glaucoma was demonstrated using Blue-Yellow perimetry; however, predicting which patients may benefit from Blue-Yellow perimetric examination is difficult. Furthermore, cataract and the extent of the field loss may limit the extent to which the integrity of the SWS channels can be selectively examined.

Relevance:

90.00%

Publisher:

Abstract:

Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should describe clearly the statistical procedures that they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
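Two of the procedures mentioned, bootstrap confidence intervals and the Bonferroni correction, are illustrated in the short sketch below; the data are simulated placeholders rather than any of the clinical examples discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated sample, e.g. differences in a visual measure between two conditions.
diffs = rng.normal(loc=0.08, scale=0.25, size=40)

# Percentile bootstrap 95% confidence interval for the mean difference.
boot_means = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                       for _ in range(10_000)])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {diffs.mean():.3f}, bootstrap 95% CI = ({ci_low:.3f}, {ci_high:.3f})")

# Bonferroni correction: with m hypothesis tests, compare each p-value against
# alpha / m (equivalently, multiply each p-value by m and cap at 1).
p_values = np.array([0.003, 0.021, 0.048, 0.300])
m = len(p_values)
adjusted = np.minimum(p_values * m, 1.0)
print("Bonferroni-adjusted p-values:", adjusted, "significant:", adjusted < 0.05)
```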

Relevance:

90.00%

Publisher:

Abstract:

Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis, were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each of the phases describes the necessary steps a researcher should work through for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis. The more experienced analyst benefits from a checklist ensuring that important issues are not forgotten. In addition, by the use of a standardized framework, non-parametric assessments will be more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
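The non-parametric models the framework organizes are typified by Data Envelopment Analysis; the sketch below solves the input-oriented CCR envelopment LP for each unit with scipy, on a small made-up dataset. It illustrates only the kind of computation behind the 'Operational models' phase and is not part of the COOPER-framework itself.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: 5 units, 2 inputs (staff, budget), 1 output (cases handled).
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0], [25.0, 250.0], [50.0, 600.0]])
Y = np.array([[100.0], [120.0], [160.0], [110.0], [150.0]])
n, m = X.shape          # number of units, number of inputs
s = Y.shape[1]          # number of outputs

def ccr_input_efficiency(o: int) -> float:
    """Input-oriented CCR efficiency of unit o: minimise theta subject to
    sum_j lam_j * x_j <= theta * x_o  and  sum_j lam_j * y_j >= y_o, lam >= 0."""
    # Decision variables: [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]          # -theta * x_o + sum_j lam_j x_j <= 0
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T          # -sum_j lam_j y_j <= -y_o
    b_ub[m:] = -Y[o]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(n):
    print(f"unit {o}: efficiency = {ccr_input_efficiency(o):.3f}")
```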

Relevance:

90.00%

Publisher:

Abstract:

We prove that if f is a real valued lower semicontinuous function on a Banach space X and if there exists a C^1, real valued Lipschitz continuous function on X with bounded support and which is not identically equal to zero, then f is Lipschitz continuous of constant K provided all lower subgradients of f are bounded by K. As an application, we give a regularity result of viscosity supersolutions (or subsolutions) of Hamilton-Jacobi equations in infinite dimensions which satisfy a coercive condition. This last result slightly improves some earlier work by G. Barles and H. Ishii.

Relevance:

90.00%

Publisher:

Abstract:

Loss of coherence with increasing excitation amplitudes and spatial size modulation is a fundamental problem in designing Raman fiber lasers. While it is known that ramping up laser pump power increases the amplitude of stochastic excitations, such higher energy inputs can also lead to a transition from a linearly stable coherent laminar regime to a non-desirable disordered turbulent state. This report presents a new statistical methodology, based on first-passage statistics, that classifies lasing regimes in Raman fiber lasers, thereby providing fast and highly accurate identification, through a self-consistently defined order parameter, of a strong instability leading to a laminar-turbulent phase transition. The results have been consistent across a wide range of pump power values, heralding a breakthrough in the non-invasive analysis of fiber laser dynamics.
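A simplified version of the first-passage idea is sketched below: from a simulated intensity trace, it collects waiting times between upward crossings of a threshold and summarises their statistics, the kind of quantity from which a laminar/turbulent order parameter could be constructed. The signal, threshold and summary statistic are placeholders, not measured laser data or the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder intensity trace: a noisy baseline with rare intermittent bursts.
t = np.arange(50_000)
intensity = 1.0 + 0.05 * rng.standard_normal(t.size)
bursts = rng.random(t.size) < 0.001
intensity[bursts] += rng.exponential(1.0, bursts.sum())

threshold = 1.2
above = intensity > threshold
# Indices where the signal first crosses the threshold from below.
crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
waiting_times = np.diff(crossings)

print(f"{crossings.size} exceedance events")
print(f"mean waiting time: {waiting_times.mean():.1f} samples")
print(f"coefficient of variation: {waiting_times.std() / waiting_times.mean():.2f}")
```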

Relevance:

90.00%

Publisher:

Abstract:

We investigate by means of Monte Carlo simulation and finite-size scaling analysis the critical properties of the three-dimensional O(5) non-linear σ model and of the antiferromagnetic RP^2 model, both of them regularized on a lattice. High-accuracy estimates are obtained for the critical exponents, universal dimensionless quantities and critical couplings. It is concluded that both models belong to the same universality class, provided that rather non-standard identifications are made for the momentum-space propagator of the RP^2 model. We have also investigated the phase diagram of the RP^2 model extended by a second-neighbor interaction. A rich phase diagram is found, where most of the phase transitions are of the first order.
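As a minimal illustration of the lattice Monte Carlo machinery behind such a study, the sketch below runs Metropolis updates for a 3D O(N) non-linear σ model (here N = 5) on a small periodic lattice and measures the energy per link. It is a didactic toy with illustrative parameter values; the paper's results rely on far larger lattices and a careful finite-size scaling analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

L, N = 6, 5            # linear lattice size, number of spin components (O(5))
beta = 1.1             # inverse coupling (illustrative value, not the critical one)

def random_spins(shape):
    """Unit vectors in R^N, uniformly distributed on the sphere."""
    v = rng.standard_normal(shape + (N,))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

spins = random_spins((L, L, L))

def neighbour_sum(s, x, y, z):
    """Sum of the six nearest-neighbour spins with periodic boundaries."""
    return (s[(x + 1) % L, y, z] + s[(x - 1) % L, y, z] +
            s[x, (y + 1) % L, z] + s[x, (y - 1) % L, z] +
            s[x, y, (z + 1) % L] + s[x, y, (z - 1) % L])

def metropolis_sweep(s):
    for x in range(L):
        for y in range(L):
            for z in range(L):
                h = neighbour_sum(s, x, y, z)
                new = random_spins(())               # fresh random direction proposal
                dE = -np.dot(new - s[x, y, z], h)    # E = -sum_<ij> S_i . S_j
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[x, y, z] = new

def energy_per_link(s):
    e = 0.0
    for axis in range(3):
        e += np.sum(s * np.roll(s, -1, axis=axis))
    return -e / (3 * L**3)

for sweep in range(200):
    metropolis_sweep(spins)
print("energy per link:", round(energy_per_link(spins), 4))
```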

Relevance:

90.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size, despite huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
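For readers less familiar with the latent structure representation referred to here, the standard latent class (PARAFAC) factorization of the joint probability mass function of p categorical variables can be written as below; the notation is generic rather than the thesis's own.

```latex
\Pr(y_1 = c_1,\dots,y_p = c_p)
  \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},
\qquad \nu_h \ge 0,\quad \sum_{h=1}^{k} \nu_h = 1,
```

where the ν_h are latent class weights and λ^{(j)}_{h·} is the probability vector of variable j within class h. The smallest k for which such a representation holds is the nonnegative rank of the probability tensor, the quantity that Chapter 2 relates to the support of a log-linear model.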

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
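The slow mixing described here is usually quantified with autocorrelations and effective sample size; the self-contained sketch below computes a simple autocorrelation-based effective sample size for a slowly mixing AR(1) chain standing in for a data augmentation sampler's output. It is a generic diagnostic written for illustration, not code from the thesis.

```python
import numpy as np

def autocorrelation(x, max_lag):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    return np.array([np.dot(x[:x.size - k], x[k:]) / (x.size * var)
                     for k in range(max_lag + 1)])

def effective_sample_size(x, max_lag=2000):
    """n / (1 + 2 * sum of autocorrelations up to the first non-positive lag)."""
    rho = autocorrelation(x, max_lag)[1:]
    cut = np.flatnonzero(rho <= 0)
    if cut.size:
        rho = rho[:cut[0]]
    return len(x) / (1.0 + 2.0 * rho.sum())

# A slowly mixing chain: AR(1) with coefficient close to one, standing in for
# a data augmentation sampler applied to rare-event (highly imbalanced) data.
rng = np.random.default_rng(3)
n, phi = 50_000, 0.995
draws = np.empty(n)
draws[0] = 0.0
for t in range(1, n):
    draws[t] = phi * draws[t - 1] + rng.standard_normal()

print(f"nominal draws: {n}")
print(f"effective sample size: {effective_sample_size(draws):.0f}")
print(f"lag-1 autocorrelation: {autocorrelation(draws, 1)[1]:.4f}")
```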

Relevance:

90.00%

Publisher:

Abstract:

Research on the relationship between reproductive work and women's life trajectories, including the experience of labour migration, has mainly focused on the case of relatively young mothers who leave behind, or later re-join, their children. While it is true that most women migrate at a younger age, there are a significant number of cases of men and women who move abroad for labour purposes at a more advanced stage, undertaking a late-career migration. This is still an under-estimated and under-researched sub-field that uncovers a varied range of issues, including the global organization of reproductive work and the employment of migrant women as domestic workers late in their lives. By pooling the findings of two qualitative studies, this article focuses on Peruvian and Ukrainian women who seek employment in Spain and Italy when they are well into their forties, or older. A commonality the two groups of women share is that, independently of their level of education and professional experience, more often than not they end up as domestic and care workers. The article initially discusses the reasons for late-career female migration, taking into consideration the structural and personal determinants that have affected Peruvian and Ukrainian women's careers in their countries of origin and settlement. After this, the focus is set on the characteristics of domestic employment in later life, on the impact on their current lives, including transnational family organization, and on future labour and retirement prospects. Apart from an evaluation of objective working and living conditions, we discuss women's personal impressions of being domestic workers in the context of their occupational experiences and family commitments. In this regard, women report varying levels of personal and professional satisfaction, as well as different patterns of continuity-discontinuity in their work and family lives, and of optimism towards the future. Divergences could, to some extent, be explained by the effect of migrants' transnational social practices and by state policies.

Relevance:

90.00%

Publisher:

Abstract:

The non-standard decoding of the CUG codon in Candida cylindracea raises a number of questions about the evolutionary process of this organism and of other species of the Candida clade for which the codon is ambiguous. In order to find some answers we studied the transcriptome of C. cylindracea, comparing its behavior with that of Saccharomyces cerevisiae (a standard decoder) and Candida albicans (an ambiguous decoder). The transcriptome characterization was performed using RNA-seq. This approach has several advantages over microarrays and its application is booming. TopHat and Cufflinks were the software used to build the protocol that allowed for gene quantification. About 95% of the reads were mapped on the genome. 3693 genes were analyzed, of which 1338 had a non-standard start codon (TTG/CTG), and the percentage of expressed genes was 99.4%. Most genes have intermediate levels of expression, some have little or no expression and a minority is highly expressed. The distribution profile of CUG codons differs between the three species, but it can be significantly associated with gene expression levels: genes with fewer CUGs are the most highly expressed. However, CUG content is not related to the conservation level: more and less conserved genes have, on average, an equal number of CUGs. The most conserved genes are the most expressed. The lipase genes corroborate the results obtained for most genes of C. cylindracea, since they are very rich in CUGs and poorly conserved. The reduced number of CUG codons observed in highly expressed genes may be due, possibly, to an insufficient number of tRNA genes to cope with more CUGs without compromising translational efficiency. The enrichment analysis confirmed that the most conserved genes are associated with basic functions such as translation, pathogenesis and metabolism. Within this set, genes with more or fewer CUGs seem to have different functions. The key issues of the evolutionary phenomenon remain unclear. However, the results are consistent with previous observations and yield a variety of conclusions that should be taken into consideration in future analyses, since this is the first time such a study has been conducted.
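The CUG (CTG in genomic DNA) counting that underlies the expression comparison can be reproduced in a few lines of Python; the sketch below counts in-frame CTG codons per coding sequence and correlates the counts with expression values. The example sequences and FPKM numbers are placeholders, not the C. cylindracea data.

```python
from scipy.stats import spearmanr

def ctg_count(cds: str) -> int:
    """Number of in-frame CTG codons (decoded as CUG in the mRNA)."""
    cds = cds.upper()
    return sum(1 for i in range(0, len(cds) - 2, 3) if cds[i:i + 3] == "CTG")

# Placeholder coding sequences and expression values (e.g. Cufflinks FPKM).
genes = {
    "geneA": ("ATGCTGCTGAAACTGGGTTAA", 3.2),
    "geneB": ("ATGAAAGGTGCCGTTTGA",     87.5),
    "geneC": ("ATGCTGGCCAAAGTTCTGTAA",  15.0),
}

counts = [ctg_count(seq) for seq, _ in genes.values()]
fpkm   = [expr for _, expr in genes.values()]
rho, p = spearmanr(counts, fpkm)
print("CTG counts:", dict(zip(genes, counts)))
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```

With this toy input the correlation is negative, mirroring the abstract's observation that genes with fewer CUGs tend to be the most highly expressed.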

Relevance:

90.00%

Publisher:

Abstract:

A fundamental step in understanding the effects of irradiation on metallic uranium and uranium dioxide ceramic fuels, or any material, must start with the nature of radiation damage on the atomic level. The atomic damage displacement results in a multitude of defects that influence the fuel performance. Nuclear reactions are coupled, in that changing one variable will alter others through feedback. In the field of fuel performance modeling, these difficulties are addressed through the use of empirical models rather than models based on first principles. Empirical models can be used as a predictive code through the careful manipulation of input variables for the limited circumstances that are closely tied to the data used to create the model. While empirical models are efficient and give acceptable results, these results are only applicable within the range of the existing data. This narrow window prevents modeling changes in operating conditions that would invalidate the model as the new operating conditions would not be within the calibration data set. This work is part of a larger effort to correct for this modeling deficiency. Uranium dioxide and metallic uranium fuels are analyzed through a kinetic Monte Carlo code (kMC) as part of an overall effort to generate a stochastic and predictive fuel code. The kMC investigations include sensitivity analysis of point defect concentrations, thermal gradients implemented through a temperature variation mesh-grid, and migration energy values. In this work, fission damage is primarily represented through defects on the oxygen anion sublattice. Results were also compared between the various models. Past studies of kMC point defect migration have not adequately addressed non-standard migration events such as clustering and dissociation of vacancies. As such, the General Utility Lattice Program (GULP) code was utilized to generate new migration energies so that additional non-migration events could be included into kMC code in the future for more comprehensive studies. Defect energies were calculated to generate barrier heights for single vacancy migration, clustering and dissociation of two vacancies, and vacancy migration while under the influence of both an additional oxygen and uranium vacancy.
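The core of a kinetic Monte Carlo step with thermally activated migration events can be written compactly; the sketch below implements the standard residence-time (BKL) algorithm for a set of candidate defect jumps with Arrhenius rates. The attempt frequency, temperature and barrier values are illustrative placeholders, not the GULP-derived migration energies used in this work.

```python
import math
import random

K_B = 8.617e-5          # Boltzmann constant, eV/K
NU = 1.0e13             # attempt frequency, 1/s (illustrative)
T = 1000.0              # temperature, K

# Candidate events with hypothetical migration barriers (eV): single-vacancy
# hops, vacancy clustering and cluster dissociation.
events = {
    "O_vacancy_hop":        0.5,
    "U_vacancy_hop":        2.4,
    "divacancy_clustering": 0.8,
    "divacancy_dissociate": 1.6,
}

def rate(barrier_ev: float) -> float:
    """Arrhenius rate for a thermally activated jump."""
    return NU * math.exp(-barrier_ev / (K_B * T))

def kmc_step(rng: random.Random):
    """One residence-time (BKL) step: pick an event with probability proportional
    to its rate and advance the clock by an exponentially distributed waiting time."""
    rates = {name: rate(e) for name, e in events.items()}
    total = sum(rates.values())
    pick, acc = rng.random() * total, 0.0
    for name, r in rates.items():
        acc += r
        if pick <= acc:
            chosen = name
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt

rng = random.Random(1)
clock, tally = 0.0, {name: 0 for name in events}
for _ in range(100_000):
    event, dt = kmc_step(rng)
    tally[event] += 1
    clock += dt
print("simulated time (s):", f"{clock:.3e}")
print("event counts:", tally)
```

Extending such a loop with the clustering and dissociation barriers computed in GULP is exactly the kind of non-standard migration event the text says will be added to the kMC code in future studies.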