964 results for Current generation
Abstract:
This thesis analyses the criteria with which concrete structures were designed and built up to 1973, the year of the Spanish EH-73 Instruction, whose content, format and approach established the criteria still in use today; it is also an heir to the 1970 CEB recommendations. Those years mark the shift in approach from the Classical Theory towards Limit States. The objectives pursued are, in brief: 1) To fill a clear gap in the study of the evolution of knowledge. There are treatises on the history of concrete that cover very thoroughly the story of its protagonists and achievements, but not, at least not sufficiently, the evolution of knowledge itself. 2) To help today's engineers understand the structural configurations, geometries, reinforcement arrangements, safety formats, etc., used in the past, allowing better-founded preliminary assessments of existing structures. 3) To serve as a reference for studies assessing the load-bearing capacity of existing constructions, forming the basis of a pre-normative document oriented in that direction. Indeed, this thesis aims to assist today's engineers who face the need to preserve and repair reinforced concrete structures that form part of the inherited heritage. The great majority of these structures were built more than 40 years ago, so it is necessary to know the criteria that governed their design, analysis and construction. The thesis seeks to determine the ultimate limits, and hence the safety, of structures dimensioned with the criteria of the past when analysed with today's calculation methodology. In this way, the "real" safety margin of structures dimensioned and calculated with criteria "different" from the current ones can be determined. Knowing how structures built according to the Classical Theory behave when assessed with current criteria will allow today's engineer to deal in the most appropriate way with the range of needs that may arise in an existing structure. This work focuses on the evolution of knowledge, so construction processes are not included. As regards design criteria, until the middle of the 20th century these were strongly influenced by tests and the resulting publications of individual authors, on which the regulations of some countries were based. This was the case of the Prussian regulation of 1904, the French Circular Order of 1906 and the Liège Congress of 1930. From the second half of the 20th century onwards, the contributions of Spanish engineers such as Alfredo Páez Balaca, Eduardo Torroja and Pedro Jiménez Montoya, among others, stand out; they advanced the calculation and safety criteria for concrete structures up to those known today. The guiding criterion for the design of concrete structures was founded, as is well known, on the postulates of the Classical Theory, in particular on the "critical moment", the moment at which concrete and steel simultaneously reach their admissible stresses, thus ensuring the fullest use of the materials and, without consciously intending it, maximum ductility. If the applied moment exceeds the critical one, compression reinforcement is provided.
After studying many existing structures of the period, including the Official Bridge Collections of Juan Manuel de Zafra, Eugenio Ribera and Carlos Fernández Casado, the author of this thesis concludes that their geometric definition does not correspond exactly to that resulting from the critical moment, since then, as now, it was necessary to reconcile the reinforcement criteria at section level with the organisation of the reinforcement along the different structural members. The calculation parameters, material strengths and safety formats evolved over the years. The performance of the materials became better known, experience of the construction processes themselves, and to a lesser extent of the applied actions, accumulated, and the associated uncertainties were consequently narrowed, which allowed the safety coefficients used in calculation to be adjusted. For example, for concrete a safety coefficient of 4 was used at the end of the 19th century, which evolved to 3.57 after the publication of the French Circular Order of 1906 and to 3 after the Spanish Instruction of 1939. In the case of steel, a far better known material because it had been widely used before, the safety coefficient remained almost constant over the years, with a value of 2. Another cause of the evolution of the calculation parameters was the better understanding of structural behaviour gained through the vast programme of planning and carrying out tests, together with the ensuing theoretical studies, by numerous authors, mainly Austrian and German but also American and French. As for the calculation criteria, today's engineer may be surprised by how well the behaviour of concrete was understood from the first years of its use. Engineers knew about the non-linear behaviour of concrete, but they restricted its working state to a linear stress-strain range because this ensured a prediction of structural behaviour consistent with the hypotheses of Linear Elasticity and Strength of Materials, very well established at the beginning of the 20th century (unlike the theory of Plasticity, not yet formulated, although it was implicit in the approaches of some engineers specialised in masonry (stone or brick) and steel structures). Moreover, this made the design somewhat independent of the actual strengths of the materials, freeing designers from the need to carry out tests which, in practice, could hardly be performed owing to the scarcity of laboratories. Nor did they have computer programs or any of the facilities available today that would have allowed them to make concrete work in a non-linear range. Thus, wisely and prudently, they limited the stresses and strains of the material to a known range. The modus operandi followed in preparing this thesis has been as follows: -Documentary study: author documents, recommendations and regulations produced in this field, both in Spain and internationally, have been studied systematically in accordance with the index of the document. In this process, gaps in knowledge have been detected (and, where applicable, their effect on structural safety) and the differences from today's procedures have been identified.
It has also been necessary to adapt the notation and terminology of the period to current criteria, which has been an added difficulty. -Development of the document: from the preliminary study, the following documents, which make up the content of the thesis, were developed: o People and institutions relevant for their contributions to the knowledge of concrete structures (research, regulations, teaching). o Characterisation of the mechanical properties of the materials (concrete and reinforcement), with regard to their strengths, stress-strain diagrams, moduli of deformation, moment-curvature diagrams, etc. The classical characterisation of concretes and the geometry and nature of the reinforcement are included here. o Safety formats: a complex chapter from which the aim is to extract enough information to allow today's engineers to understand the criteria used at the time and to compare them with the current ones. o Study of sections and members subjected to normal and tangential stresses: it presents the evolution in the treatment of simple and combined bending, shear, longitudinal shear, torsion, etc. This part of the study also deals with aspects which, although not a direct concern of engineers in the past (cracking and deflections), are more important today in view of changes of use and durability conditions. o Reinforcement detailing: it includes the treatment of bond, anchorage, lap splices, bar curtailment, reinforcement arrangements as a function of member geometry and internal forces, etc. It is a chapter of obvious importance for today's engineers. An annex is included with the most significant references to the experimental studies on which the milestone proposals in the evolution of knowledge were based. Finally, together with the most important conclusions, proposals for future studies are set out. This thesis analyzes the criteria with which reinforced concrete structures were designed and constructed prior to 1973. Initially, the year 1970 was chosen as the starting point, coinciding with the CEB recommendations, but as the thesis developed it was decided that 1973 was the better option, coinciding with the Spanish regulations of 1973, whose content, format and approach introduced the current criteria. The period studied covers the Classic Theory. The intended goals of this thesis are: 1) To cover a clear gap in the study of the evolution of knowledge about reinforced concrete. The story of reinforced concrete's protagonists and achievements has been treated very thoroughly by the main researchers in this area, but not the evolution of knowledge in this subject area. 2) To help engineers understand the structural configurations, geometries, reinforcement arrangements, safety formats, etc., used in the past, which will support better-founded preliminary judgments by experts on existing structures. 3) To be a reference for studies assessing the load-bearing capacity of existing constructions, constituting the basis of a pre-normative document. This thesis intends to be a help for the current generation of engineers who need to preserve and repair reinforced concrete structures that have existed for a significant number of years. Most of these structures were constructed more than 40 years ago, and it is necessary to know the criteria that influenced their design, calculation and construction.
This thesis intends to determine the safety limits of these old structures by analyzing them in the context of current regulations and methodology. Thus, it will be possible to determine the actual safety margin of structures that were dimensioned and calculated with criteria different from the current ones. This will allow engineers to choose the most appropriate treatment for such structures. This work considers the evolution of knowledge, so construction methods are not included. With regard to design criteria, until the middle of the 20th century there existed a large number of diverse European tests and regulations, such as the Prussian norm of 1904, the French Circular Order of 1906 and the Congress of Liège of 1930, as well as individual engineers' own notes and criteria which incorporated the results of their own tests. From the second half of the 20th century, the contributions of Spanish engineers such as Alfredo Páez Balaca, Eduardo Torroja and Pedro Jiménez Montoya, among others, were significant and allowed the advancement of the calculation and safety criteria for concrete structures up to those known today. The design and calculation of reinforced concrete structures under the Classic Theory was based on the 'critical bending moment', the moment at which concrete and steel simultaneously reach their admissible stresses, which allows the best use of the materials and the best ductility. If the applied bending moment is greater than the critical bending moment, compression steel must be provided. After studying the designs of many existing structures of that time, including the Official Bridge Collections of Juan Manuel de Zafra, Eugenio Ribera and Carlos Fernández Casado, the author concludes that the geometric definition of the structures does not correspond exactly to that implied by the critical bending moment. The calculation parameters changed throughout the years. The principal reason is that the materials were gradually improving and the associated uncertainties were decreasing, thus allowing the reduction of the safety coefficients used in the calculations. For example, for concrete a safety coefficient of 4 was used towards the end of the 19th century, which evolved to 3.57 after the publication of the French Circular Order of 1906, and then to 3 after the Spanish Instruction of 1939. In the case of steel, a much better known material, the safety coefficient remained almost constant throughout the years, with a value of 2. Another reason for the evolution of the calculation parameters was that the tests and research undertaken by an ever-increasing number of engineers allowed a more complete knowledge of the behavior of reinforced concrete. What is surprising is the extent of knowledge that existed about the behavior of concrete from the outset. Engineers of the early years knew that the behavior of concrete was non-linear, but they limited its working state to a linear stress-strain range. This was due to the difficulty of working in a non-linear range: they had neither laboratories to test concrete nor facilities such as computers with appropriate software, something unthinkable today. These were the main reasons why engineers of previous generations limited the stresses and strains of the material to a known range.
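To make the 'critical bending moment' of the Classic Theory concrete, the sketch below applies the classical working-stress (straight-line) formulas for a singly reinforced rectangular section. It is a minimal illustration by the editor: the section dimensions, admissible stresses and modular ratio n are assumed example values, not data taken from the thesis.

```python
# Classical working-stress "critical moment": the bending moment at which
# concrete and steel reach their admissible stresses simultaneously.
# Straight-line theory, singly reinforced rectangular section; the notation
# b, d, fc_adm, fs_adm, n is the editor's, not the thesis'.

def critical_moment(b, d, fc_adm, fs_adm, n):
    """b, d in mm; admissible stresses in MPa; n = Es/Ec (modular ratio)."""
    k = n / (n + fs_adm / fc_adm)             # neutral-axis depth ratio at the critical state
    j = 1.0 - k / 3.0                         # lever-arm coefficient
    M_crit = 0.5 * fc_adm * k * j * b * d**2  # N*mm, concrete side governs
    As_bal = M_crit / (fs_adm * j * d)        # steel area that reaches fs_adm at the same moment
    return M_crit, As_bal

# Example: 300 x 500 mm effective section; admissible stresses taken as
# strength / safety coefficient (e.g. 18/3 = 6 MPa for concrete, 240/2 = 120 MPa for steel).
M_crit, As_bal = critical_moment(b=300, d=500, fc_adm=6.0, fs_adm=120.0, n=15)
print(f"M_crit = {M_crit / 1e6:.1f} kN.m, balanced steel area = {As_bal:.0f} mm2")
```

If the applied moment exceeded this critical value, compression reinforcement was added, as noted above.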
The modus operandi followed for the development of this thesis is the following: -Document study: author documents, recommendations and regulations generated in this area, both from Spain and abroad, have been studied in a systematic way in accordance with the index of the document. In this process, gaps in knowledge have been detected (and, where applicable, their effect on structural safety), and differences from current procedures have been identified and noted. Also, it has been necessary to adapt the notation and terminology of the Classic Theory to current criteria, which has imposed an additional difficulty. -Development of the thesis: starting from this study, the following chapters of the thesis have been developed: o People and institutions relevant for their contributions to the knowledge of reinforced concrete structures (research, regulation, teaching). o Determination of the mechanical properties of the materials (concrete and steel), in relation to their strengths, stress-strain diagrams, moduli of deformation, moment-curvature diagrams, etc. Included are the classic characterizations of concrete, the geometry and nature of the steel, etc. o Safety formats: this is a complex chapter from which it is intended to provide enough information to allow the present-day engineer to understand the criteria used in the Classic Theory and to compare them with the current ones. o Study of sections and members subjected to normal and tangential stresses: it presents the evolution in the treatment of simple and combined bending, shear, etc. Other aspects examined include those that were not very important in the Classic Theory but are today, such as deflections and cracking. o Details of reinforcement: it includes the treatment of bond, anchorage, lap splices, bar curtailment, reinforcement arrangements depending on the geometry of the members and their internal forces, etc. It is a chapter of obvious importance for current engineers. The document includes an annex with references to the most significant experimental studies on which the milestone proposals in the evolution of knowledge were based. Finally, conclusions and suggestions for future studies are included. A deep study of the documentation and researchers of that time has been carried out, juxtaposing their criteria and results with those considered relevant today, and comparing the resulting safety levels according to the Classic Theory criteria and the currently used criteria. This thesis fundamentally intends to be a guide for engineers who have to treat or repair a structure constructed according to the Classic Theory criteria.
Abstract:
This work aims to be a synthetic overview describing the evolution of Catalan dramatic literature from the second half of the 20th century and the first decade of the present one: from the early years of the Franco dictatorship, during which theatre in Catalan was banned, through the independent theatre movement, to the various paths taken during the democratic period and the different directions of today's theatre, a moment in which the Catalan stage has for years been a rising force. Both the most established authors, points of reference for modern playwriting, and the most mature playwrights of the current new generations are covered. Both groups are widely performed and have been translated into several languages.
Abstract:
Context. The current generation of X-ray satellites has discovered many new X-ray sources that are difficult to classify within the well-described subclasses. The hard X-ray source IGR J11215−5952 is a peculiar transient, displaying very short X-ray outbursts every 165 days. Aims. To characterise the source, we obtained high-resolution spectra of the optical counterpart, HD 306414, at different epochs, spanning a total of three months, before and around the 2007 February outburst, with the combined aims of deriving its astrophysical parameters and searching for orbital modulation. Methods. We fit model atmospheres generated with the fastwind code to the spectrum, and used the interstellar lines in the spectrum to estimate its distance. We also cross-correlated each individual spectrum with the best-fit model to derive radial velocities. Results. From its spectral features, we classify HD 306414 as B0.5 Ia. From the model fit, we find Teff ≈ 24 700 K and log g ≈ 2.7, in good agreement with the morphological classification. Using the interstellar lines in its spectrum, we estimate a distance to HD 306414 of d ≳ 7 kpc. Assuming this distance, we derive R∗ ≈ 40 R⊙ and Mspect ≈ 30 M⊙ (consistent, within errors, with Mevol ≈ 38 M⊙, and in good agreement with calibrations for the spectral type). Analysis of the radial velocity curve reveals that the radial velocity changes are not dominated by the orbital motion, and provides an upper limit on the semi-amplitude for the optical component of Kopt ≲ 11 ± 6 km s⁻¹. Large variations in the depth and shape of photospheric lines suggest the presence of strong pulsations, which may be the main cause of the radial velocity changes. Very significant variations, uncorrelated with those of the photospheric lines, are seen in the shape and position of the Hα emission feature around the time of the X-ray outburst, but large excursions are also observed at other times. Conclusions. HD 306414 is a normal B0.5 Ia supergiant. Its radial velocity curve is dominated by an effect that is different from binary motion, most likely stellar pulsations. The available data suggest that the X-ray outbursts are caused by the close passage of the neutron star in a very eccentric orbit, perhaps leading to localised mass outflow.
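As a quick plausibility check on the figures quoted above (not a re-derivation of the paper's analysis), the spectroscopic mass follows from the surface gravity and radius as M = g R² / G:

```python
# Spectroscopic mass from log g = 2.7 (cgs) and R* = 40 R_sun, the values quoted above.
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
R_sun = 6.957e10    # solar radius, cm
M_sun = 1.989e33    # solar mass, g

g = 10 ** 2.7                 # surface gravity, cm s^-2
R = 40 * R_sun                # stellar radius, cm
M_spect = g * R ** 2 / G      # Newtonian surface gravity inverted for mass

print(f"M_spect = {M_spect / M_sun:.0f} M_sun")   # ~29 M_sun, consistent with the ~30 M_sun quoted
```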
Abstract:
This paper argues for the systematic development and presentation of evidence-based guidelines for appropriate use of computers by children. The currently available guidelines are characterised and a proposed conceptual model presented. Five principles are presented as a foundation to the guidelines. The paper concludes with a framework for the guidelines, key evidence for and against guidelines, and gaps in the available evidence, with the aim of facilitating further discussion. Relevance to industry: The current generation of children in affluent countries will typically have over 10 years of computer experience before they enter the workforce. Consequently, the primary prevention of computer-related health disorders and the development of good productivity skills for the next generation of workers need to occur during childhood.
Abstract:
The work described in this thesis is concerned with mechanisms of contact lens lubrication. There are three major driving forces in contact lens design and development: cost, convenience, and comfort. Lubrication, as reflected in the coefficient of friction, is becoming recognised as one of the major factors affecting the comfort of the current generation of contact lenses, which have benefited from several decades of design and production improvements. This work started with the study of the in-eye release of soluble macromolecules from a contact lens matrix. The vehicle for the study was the family of CIBA Vision Focus® DAILIES® daily disposable contact lenses, which is based on polyvinyl alcohol (PVA). The effective release of linear soluble PVA from DAILIES on the surface of the lens was shown to be beneficial in terms of patient comfort. There was a need to develop a novel characterisation technique in order to study these effects at surfaces; this led to the development of a novel tribological technique, which allowed the friction coefficients of different types of contact lenses to be measured reproducibly at genuinely low values. The tribometer needed the ability to accommodate the following features: (a) an approximation to eye lid load, (b) both new and ex-vivo lenses, (c) variations in substrate, (d) different ocular lubricants (including tears). The tribometer and measuring technique developed in this way were used to examine the surface friction and lubrication mechanisms of two different types of contact lenses: daily disposables and silicone hydrogels. The results from the tribometer, in terms of both mean friction coefficient and the friction profiles obtained, allowed the various mechanisms now used for surface enhancement in the daily disposable contact lens sector to be evaluated. The three major methods used are: release of soluble macromolecules (such as PVA) from the lens matrix, irreversible surface binding of a macromolecule (such as polyvinyl pyrrolidone) by charge transfer, and simple polymer adsorption (e.g. Pluronic) at the lens surface. The tribological technique was also used to examine the trends in the development of silicone hydrogel contact lenses. The focus of the principles in the design of silicone hydrogels has now shifted from oxygen permeability to the improvement of surface properties. Presently, tribological studies represent the most effective in vitro method of surface evaluation in relation to in-eye comfort.
Abstract:
In recent years there has been a great effort to combine the technologies and techniques of GIS and process models. This project examines the issues of linking a standard current generation 2½D GIS with several existing model codes. The focus for the project has been the Shropshire Groundwater Scheme, which is being developed to augment flow in the River Severn during drought periods by pumping water from the Shropshire Aquifer. Previous authors have demonstrated that under certain circumstances pumping could reduce the soil moisture available for crops. This project follows earlier work at Aston in which the effects of drawdown were delineated and quantified through the development of a software package that implemented a technique which brought together the significant spatially varying parameters. This technique is repeated here, but using a standard GIS called GRASS. The GIS proved adequate for the task, and the added functionality provided by the general purpose GIS - the data capture, manipulation and visualisation facilities - was of great benefit. The bulk of the project is concerned with examining the issues of the linkage of GIS and environmental process models. To this end a groundwater model (Modflow) and a soil moisture model (SWMS2D) were linked to the GIS and a crop model was implemented within the GIS. A loose-linked approach was adopted and secondary and surrogate data were used wherever possible. The implications of this relate to: justification of a loose-linked versus a closely integrated approach; how, technically, to achieve the linkage; how to reconcile the different data models used by the GIS and the process models; control of the movement of data between models of environmental subsystems, to model the total system; the advantages and disadvantages of using a current generation GIS as a medium for linking environmental process models; generation of input data, including the use of geostatistics, stochastic simulation, remote sensing, regression equations and mapped data; issues of accuracy, uncertainty and simply providing adequate data for the complex models; and how such a modelling system fits into an organisational framework.
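The loose-linked approach described above boils down to exchanging files between the GIS and each process model rather than embedding one inside the other. Below is a minimal sketch of that pattern, assuming the GRASS modules r.out.ascii and r.in.ascii are available in the session; the model executable and the raster/file names are placeholders, not the project's actual setup.

```python
import subprocess

# Loose coupling between a GIS and an external process model:
# 1) export a raster from GRASS as an ASCII grid, 2) run the model on it,
# 3) import the model's gridded output back into the GIS for overlay/visualisation.
# Raster, file and executable names below are illustrative placeholders.

def export_raster(raster_name, path):
    subprocess.run(["r.out.ascii", f"input={raster_name}", f"output={path}"], check=True)

def import_raster(path, raster_name):
    subprocess.run(["r.in.ascii", f"input={path}", f"output={raster_name}"], check=True)

export_raster("water_table", "water_table.asc")
# Hypothetical wrapper around the groundwater model run (e.g. a Modflow driver).
subprocess.run(["./run_groundwater_model", "water_table.asc", "drawdown.asc"], check=True)
import_raster("drawdown.asc", "drawdown")
```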
Abstract:
While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In the emerging Semantic Web, search, interpretation and aggregation can be addressed by ontology-based semantic mark-up. In this paper, we examine semantic annotation, identify a number of requirements, and review the current generation of semantic annotation systems. This analysis shows that, while there is still some way to go before semantic annotation tools will be able to address fully all the knowledge management needs, research in the area is active and making good progress.
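As a small, generic illustration of what ontology-based semantic mark-up looks like (not the output of any of the reviewed systems), the snippet below records an annotation as RDF triples with rdflib; the ontology namespace and property names are invented for the example.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical ontology and document namespaces, for illustration only.
ONT = Namespace("http://example.org/ontology#")
DOC = Namespace("http://example.org/docs/")

g = Graph()
annotation = URIRef(DOC["report-42#ann-1"])

# Mark up a span of text in a document as referring to an ontology concept.
g.add((annotation, RDF.type, ONT.Annotation))
g.add((annotation, ONT.annotates, DOC["report-42"]))
g.add((annotation, ONT.refersTo, ONT.KnowledgeManagement))
g.add((annotation, RDFS.label, Literal("knowledge management")))

print(g.serialize(format="turtle"))
```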
Abstract:
Advancements in retinal imaging technologies have drastically improved the quality of eye care in the past couple of decades. Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) are two examples of critical imaging modalities for the diagnosis of retinal pathologies. However, current-generation SLO and OCT systems have limitations in diagnostic capability due to the following factors: the use of bulky tabletop systems, monochromatic imaging, and resolution degradation due to ocular aberrations and diffraction.
Bulky tabletop SLO and OCT systems are incapable of imaging patients that are supine, under anesthesia, or otherwise unable to maintain the required posture and fixation. Monochromatic SLO and OCT imaging prevents the identification of various color-specific diagnostic markers visible with color fundus photography like those of neovascular age-related macular degeneration. Resolution degradation due to ocular aberrations and diffraction has prevented the imaging of photoreceptors close to the fovea without the use of adaptive optics (AO), which require bulky and expensive components that limit the potential for widespread clinical use.
In this dissertation, techniques for extending the diagnostic capability of SLO and OCT systems are developed. These techniques include design strategies for miniaturizing and combining SLO and OCT to permit multi-modal, lightweight handheld probes to extend high quality retinal imaging to pediatric eye care. In addition, a method for extending true color retinal imaging to SLO to enable high-contrast, depth-resolved, high-fidelity color fundus imaging is demonstrated using a supercontinuum light source. Finally, the development and combination of SLO with a super-resolution confocal microscopy technique known as optical photon reassignment (OPRA) is demonstrated to enable high-resolution imaging of retinal photoreceptors without the use of adaptive optics.
Abstract:
The majority of electrode materials in batteries and related electrochemical energy storage devices are fashioned into slurries via the addition of a conductive additive and a binder. However, aggregation of smaller diameter nanoparticles in current generation electrode compositions can result in non-homogeneous active materials. Inconsistent slurry formulation may lead to inconsistent electrical conductivity throughout the material, local variations in electrochemical response, and degraded overall cell performance. Here we demonstrate the hydrothermal preparation of Ag nanoparticle (NP) decorated α-AgVO3 nanowires (NWs) and their conversion to tunnel structured β-AgVO3 NWs by annealing, to form a uniform blend of intercalation materials that are well connected electrically. The synthesis of nanostructures with chemically bound conductive nanoparticles is an elegant means to overcome the intrinsic issues associated with electrode slurry production, as wire-to-wire conductive pathways are formed within the overall electrode active mass of NWs. The conversion from α-AgVO3 to β-AgVO3 is explained in detail through a comprehensive structural characterization. Meticulous EELS analysis of β-AgVO3 NWs offers insight into the true β-AgVO3 structure and how the annealing process facilitates a higher surface coverage of Ag NPs directly from ionic Ag content within the α-AgVO3 NWs. Variations in vanadium oxidation state across the surface of the nanowires indicate that the β-AgVO3 NWs have a core–shell oxidation state structure, and the vanadium oxidation state under the Ag NPs confirms that the NPs are chemically bound, formed by reduction of ionic silver diffused from the α-AgVO3 NW core material. Electrochemical comparison of α-AgVO3 and β-AgVO3 NWs confirms that β-AgVO3 offers improved electrochemical performance. An ex situ structural characterization of β-AgVO3 NWs after the first galvanostatic discharge and charge offers new insight into the Li+ reaction mechanism for β-AgVO3: Ag+ between the van der Waals layers of the vanadium oxide is reduced during discharge and deposited as metallic Ag, and the vacant sites are then occupied by Li+.
Abstract:
Urban centers all around the world are striving to re-orient themselves to promoting ideals of human engagement, flexibility, openness and synergy that thoughtful architecture can provide. From a time when solitude in one's own backyard was desirable, today's outlook seeks more: to cater to the needs of diverse individuals and those of collaborators. This thesis is an investigation of the role of architecture in realizing how these ideals might be achieved, using Mixed Use Developments as the platform of space on which to test these design ideas. The author also investigates, identifies, and re-imagines how the idea of live-work excites and attracts users and occupants towards investing themselves in Mixed Use Developments (MUDs) in urban cities. On the premise that MUDs historically began with an intention of urban revitalization, lying at the core of this spatial model is the opportunity to investigate what makes the mixing of uses an asset, especially in the eyes of today's generation. Within the frame of reference of the current generation, i.e. the millennial population and alike, whose lifestyle core is urban-centric, the excitement for this topic is in the vision of MUDs that will spatially cater to a variety of lifestyles, demographics, and functions, enabling their users to experience a vibrant 24/7 destination. Where cities are always in flux, the thesis will look to investigate the idea of opportunistic space in a new MUD that can also be perceived as an adaptive reuse of itself. The sustainability factor lies in the foresight of the transformative and responsive character of the different uses in the MUD at large, which provides the possibility of catering to a changing demand of building use over time. Delving into the architectural response, the thesis explores the conflicts, tensions, and excitements, and the nature of relationships between different spatial layers: permanence vs. transformative, public vs. private, commercial vs. residential, in such an MUD. At a larger scale, the investigation delves into the formal meaning and implications of the proposed type of MUDs and the larger landscapes in which they are situated, with attempts to blur the fine line between architecture and urbanism. A unique character of MUDs is the power they have to draw in people at the ground level and lead them into exciting spatial experiences. While the thesis stemmed from a purely objective and theoretical standpoint, the author believes that it is only when context is played into the design thinking process that true architecture may start to flourish. The significance of this thesis lies in the premise that the author believes this re-imagined MUD has immense opportunity to amplify human engagement with designed space, and in the belief that it will better enable the fostering of sustainable communities and, in the process, enhance people's lives.
Abstract:
The Geminga pulsar, one of the brightest gamma-ray sources, is a promising candidate for emission of very-high-energy (VHE > 100 GeV) pulsed gamma rays. Also, detection of a large nebula has been claimed by water Cherenkov instruments. We performed deep observations of Geminga with the MAGIC telescopes, yielding 63 hours of good-quality data, and searched for emission from the pulsar and pulsar wind nebula. We did not find any significant detection, and derived 95% confidence level upper limits. The resulting upper limits of 5.3 × 10^(−13) TeV cm^(−2)s^(−1) for the Geminga pulsar and 3.5 × 10^(−12) TeV cm^(−2)s^(−1) for the surrounding nebula at 50 GeV are the most constraining ones obtained so far at VHE. To complement the VHE observations, we also analyzed 5 years of Fermi-LAT data from Geminga, finding that a sub-exponential cut-off is preferred over the exponential cut-off that has typically been used in the literature. We also find that, above 10 GeV, the gamma-ray spectrum of Geminga can be described with a power law with an index softer than 5. The extrapolation of the power-law Fermi-LAT pulsed spectra to VHE lies well below the MAGIC upper limits, indicating that the detection of pulsed emission from Geminga with the current generation of Cherenkov telescopes is very difficult.
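For reference, the two spectral shapes being compared are the power law with an exponential versus a sub-exponential cut-off, in the form commonly used for Fermi-LAT pulsar spectra:

```latex
\frac{dN}{dE} \;=\; N_0 \left(\frac{E}{E_0}\right)^{-\Gamma}
\exp\!\left[-\left(\frac{E}{E_c}\right)^{b}\right],
\qquad b = 1 \ \text{(exponential)}, \qquad b < 1 \ \text{(sub-exponential)}.
```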
Abstract:
Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.
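A toy version of the third change, joint optimization over pose and correspondences, is sketched below: noisy points are fit to a smooth parametric surface (a sphere standing in for the hand model) by minimizing point-to-surface residuals over a translation and the per-point surface coordinates simultaneously. This is an illustrative reconstruction of the idea under simplified assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy joint optimization over pose and correspondences:
# fit a smooth parametric surface (a sphere as a stand-in for a hand model)
# to noisy points by optimizing a translation t and the surface coordinates
# (u_i, v_i) of every data point's correspondence at the same time.

RADIUS = 1.0
rng = np.random.default_rng(0)

def surface(uv):
    u, v = uv[:, 0], uv[:, 1]
    return RADIUS * np.stack([np.sin(u) * np.cos(v),
                              np.sin(u) * np.sin(v),
                              np.cos(u)], axis=1)

# Synthetic observation: sphere translated by an unknown pose, plus sensor noise.
true_t = np.array([0.3, -0.2, 0.5])
uv_true = rng.uniform([0.1, 0.0], [np.pi - 0.1, 2 * np.pi], size=(50, 2))
data = surface(uv_true) + true_t + 0.01 * rng.standard_normal((50, 3))

def residuals(params):
    t, uv = params[:3], params[3:].reshape(-1, 2)
    return (surface(uv) + t - data).ravel()

# Coarse initialization (the role played by machine learning in the paper),
# then continuous model fitting over pose and correspondences jointly.
t0 = data.mean(axis=0)
d = data - t0
uv0 = np.stack([np.arccos(np.clip(d[:, 2] / np.linalg.norm(d, axis=1), -1, 1)),
                np.arctan2(d[:, 1], d[:, 0])], axis=1)
fit = least_squares(residuals, np.concatenate([t0, uv0.ravel()]))
print("estimated translation:", np.round(fit.x[:3], 3))   # close to [0.3, -0.2, 0.5]
```

Because the surface is smooth, the optimizer gets usable gradients everywhere, which is the property the abstract highlights as enlarging the basin of convergence.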
Abstract:
This study examines the role of visual literacy in learning biology. Biology teachers promote the use of digital images as a learning tool for two reasons: because biology is the most visual of the sciences, and the use of imagery is becoming increasingly important with the advent of bioinformatics; and because studies indicate that this current generation of teenagers have a cognitive structure that is formed through exposure to digital media. On the other hand, there is concern that students are not being exposed enough to the traditional methods of processing biological information - thought to encourage left-brain sequential thinking patterns. Theories of Embodied Cognition point to the importance of hand-drawing for proper assimilation of knowledge, and theories of Multiple Intelligences suggest that some students may learn more easily using traditional pedagogical tools. To test the claim that digital learning tools enhance the acquisition of visual literacy in this generation of biology students, a learning intervention was carried out with 33 students enrolled in an introductory college biology course. The study compared learning outcomes following two types of learning tools. One learning tool was a traditional drawing activity, and the other was an interactive digital activity carried out on a computer. The sample was divided into two random groups, and a crossover design was implemented with two separate interventions. In the first intervention students learned how to draw and label a cell. Group 1 learned the material by computer and Group 2 learned the material by hand-drawing. In the second intervention, students learned how to draw the phases of mitosis, and the two groups were inverted. After each learning activity, students were given a quiz on the material they had learned. Students were also asked to self-evaluate their performance on each quiz, in an attempt to measure their level of metacognition. At the end of the study, they were asked to fill out a questionnaire that was used to measure the level of task engagement the students felt towards the two types of learning activities. In this study, following the first testing phase, the students who learned the material by drawing had a significantly higher average grade on the associated quiz compared to that of those who learned the material by computer. The difference was lost with the second “cross-over” trial. There was no correlation for either group between the grade the students thought they had earned through self-evaluation, and the grade that they received. In terms of different measures of task engagement, there were no significant differences between the two groups. One finding from the study showed a positive correlation between grade and self-reported time spent playing video games, and a negative correlation between grade and self-reported interest in drawing. This study provides little evidence to support claims that the use of digital tools enhances learning, but does provide evidence to support claims that drawing by hand is beneficial for learning biological images. However, the small sample size, limited number and type of learning tasks, and the indirect means of measuring levels of metacognition and task engagement restrict generalisation of these conclusions. Nevertheless, this study indicates that teachers should not use digital learning tools to the exclusion of traditional drawing activities: further studies on the effectiveness of these tools are warranted. 
Students in this study commented that the computer tool seemed more accurate and detailed - even though the two learning tools carried identical information. Thus there was a mismatch between the perception of the usefulness of computers as a learning tool and the reality, which again points to the need for an objective assessment of their usefulness. Students should be given the opportunity to try out a variety of traditional and digital learning tools in order to address their different learning preferences.
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits, driving a large research area (TinyML) to deploy leading Machine Learning (ML) algorithms on the micro-controller class of devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions both at software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvements in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 micro-controller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions to deal with sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
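To illustrate what sub-byte integer formats involve (a generic sketch, not PULP-NN's or XpulpNN's actual kernels), the snippet below packs two signed 4-bit weights per byte and computes a dot product by unpacking them on the fly; it is exactly this unpack-and-sign-extend overhead that dedicated ISA support removes.

```python
import numpy as np

# Generic int4 packing: two signed 4-bit values per byte (low nibble first).
def pack_int4(values):
    v = (np.asarray(values, dtype=np.int8) & 0x0F).astype(np.uint8)  # two's-complement nibbles
    return v[0::2] | (v[1::2] << 4)

def unpack_int4(packed):
    lo = (packed & 0x0F).astype(np.int8)
    hi = (packed >> 4).astype(np.int8)
    nib = np.empty(packed.size * 2, dtype=np.int8)
    nib[0::2], nib[1::2] = lo, hi
    return np.where(nib >= 8, nib - 16, nib)                         # sign-extend 4-bit values

weights = np.array([3, -2, 7, -8, 1, 0, -5, 4], dtype=np.int8)
acts = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int32)

packed = pack_int4(weights)   # 8 weights stored in 4 bytes
dot = int(np.dot(unpack_int4(packed).astype(np.int32), acts))
print(dot, "==", int(np.dot(weights.astype(np.int32), acts)))
```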
Abstract:
The thesis investigates the potential of photoactive organic semiconductors as a new class of materials for developing bioelectronic devices that can convert light into biological signals. The materials can be either small molecules or polymers. When these materials interact with aqueous biological fluids, they give rise to various electrochemical phenomena, including photofaradaic or photocapacitive processes, depending on whether photogenerated charges participate in redox processes or accumulate at an interface. The thesis starts by studying the behavior of the H2Pc/PTCDI molecular p/n thin-film heterojunction in contact with aqueous electrolyte. An equivalent circuit model is developed, explaining the measurements and predicting behavior in wireless mode. A systematic study on p-type polymeric thin-films is presented, comparing rr-P3HT with two low bandgap conjugated polymers: PBDB-T and PTB7. The results demonstrate that PTB7 has superior photocurrent performance due to more effective electron-transfer onto acceptor states in solution. Furthermore, the thesis addresses the issue of photovoltage generation for wireless photoelectrodes. An analytical model based on photoactivated charge-transfer across the organic-semiconductor/water interface is developed, explaining the large photovoltages observed for polymeric p-type semiconductor electrodes in water. Then, flash-precipitated nanoparticles made of the same three photoactive polymers are investigated, assessing the influence of fabrication parameters on the stability, structure, and energetics of the nanoparticles. Photocathodic current generation and consequent positive charge accumulation is also investigated. Additionally, newly developed porous P3HT thin-films are tested, showing that porosity increases both the photocurrent and the semiconductor/water interfacial capacity. Finally, the thesis demonstrates the biocompatibility of the materials in in-vitro experiments and shows safe levels of photoinduced intracellular ROS production with p-type polymeric thin-films and nanoparticles. The findings highlight the potential of photoactive organic semiconductors in the development of optobioelectronic devices, demonstrating their ability to convert light into biological signals and interface with biological fluids.
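As a generic point of reference for the photocapacitive/photofaradaic distinction (a simplified textbook picture, not the equivalent circuit developed in the thesis), a purely photocapacitive electrode answers a light step with a displacement current that decays as the interfacial capacitance C charges through the series resistance R_s, whereas a photofaradaic process sustains a steady-state current i_ss:

```latex
i_{\mathrm{cap}}(t) \;=\; \frac{V_{\mathrm{ph}}}{R_s}\, e^{-t/(R_s C)} \;\xrightarrow{\; t \gg R_s C \;}\; 0,
\qquad
i_{\mathrm{far}}(t) \;\xrightarrow{\; t \gg R_s C \;}\; i_{\mathrm{ss}} \;>\; 0 .
```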