970 results for built-up edge
Abstract:
Open Access funded by the European Research Council. Acknowledgements: RB thanks GRADUS, Faculty 8.4 Natural Sciences, of Saarland University for partially funding his research visit to the University of California, Santa Barbara. RB also thanks Dr. S. Khaderi for his help in setting up the model. He also thanks Dr. R. Hensel and Dr. N. Guimard for fruitful discussions and for their continuous support. EA acknowledges funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Advanced Grant no. 340929.
Abstract:
The crystal structure of raite was solved and refined from data collected at Beamline Insertion Device 13 at the European Synchrotron Radiation Facility, using a 3 × 3 × 65 μm single crystal. The refined lattice constants of the monoclinic unit cell are a = 15.1(1) Å; b = 17.6(1) Å; c = 5.290(4) Å; β = 100.5(2)°; space group C2/m. The structure, including all reflections, refined to a final R = 0.07. Raite occurs in hyperalkaline rocks from the Kola peninsula, Russia. The structure consists of alternating layers of a hexagonal chicken-wire pattern of 6-membered SiO4 rings. Tetrahedral apices of a chain of Si six-rings, parallel to the c-axis, alternate in pointing up and down. Two six-ring Si layers are connected by edge-sharing octahedral bands of Na+ and Mn3+ also parallel to c. The band consists of the alternation of finite Mn–Mn and Na–Mn–Na chains. As a consequence of the misfit between octahedral and tetrahedral elements, regions of the Si–O layers are arched and form one-dimensional channels bounded by 12 Si tetrahedra and 2 Na octahedra. The channels along the short c-axis in raite are filled by isolated Na(OH,H2O)6 octahedra. The distorted octahedrally coordinated Ti4+ also resides in the channel and provides the weak linkage of these isolated Na octahedra and the mixed octahedral tetrahedral framework. Raite is structurally related to intersilite, palygorskite, sepiolite, and amphibole.
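For reference, the residual quoted above is the conventional crystallographic agreement factor (a textbook definition, not specific to this study), computed over the observed and calculated structure-factor amplitudes:

    R = \frac{\sum_{hkl} \bigl|\, |F_{\mathrm{obs}}| - |F_{\mathrm{calc}}| \,\bigr|}{\sum_{hkl} |F_{\mathrm{obs}}|}

so R = 0.07 corresponds to roughly a 7% average mismatch between the refined model and the diffraction data.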
Abstract:
This work presents the performance evaluations, operational demands, and intervening factors in the scale-up of a pilot unit of the Horizontal Anaerobic Fixed-Bed Reactor (RAHLF) treating sanitary sewage after screening through a 1 mm mesh sieve, over two years of operation. The reactor had a total volume of 237.5 L, built from commercial PVC pipes of 14.5 cm diameter (D), arranged in five horizontal modules of 2.88 m in series, giving a total length (L) of 14.4 m and an overall L/D ratio of 100. The biomass immobilization support, polyurethane foam in cubic matrices of 1 cm edge, proved adequate for biofilm development. During start-up, without prior inoculation, the biofilm consolidated after about 70 days, with a predominance of morphologies similar to Methanosaeta sp. over Methanosarcina. At around 90 days, with an influent of 350 mg/L COD, the best effluent quality was observed, with a value of 100 mg/L COD. Over long-term operation, performance declined and design predictions became less reproducible, which was attributed to constant clogging and to the ineffectiveness of the cleaning operations, with loss of reaction volume verified by hydrodynamic studies. Investigation of the origins of the clogging showed it to be a local effect, qualitatively related to the retained biomass, rather than a quantitative one extending along the whole reactor, with continued production of extracellular polymers promoting a synergistic effect with the predominant filamentous organisms and with the particulate solids retained in the bed. Given the potential of this reactor configuration, alternatives for mitigating clogging are indicated, along with the direction of the studies needed for a further scale-up for sanitary sewage treatment.
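As a quick consistency check of the quoted geometry, the reported volume and L/D ratio follow directly from the pipe dimensions given in the abstract (a minimal Python sketch, assuming only the figures above):

    import math

    D = 0.145                        # internal diameter, m
    L = 5 * 2.88                     # five 2.88 m modules in series, m
    volume_litres = math.pi / 4 * D**2 * L * 1000
    print(round(L / D, 1))           # ~99.3, quoted as ~100
    print(round(volume_litres, 1))   # ~237.8 L, close to the quoted 237.5 L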
Abstract:
Following the death of engineer General Jorge Próspero de Verboom in 1744, and after a few years of transition in the management of Spanish fortifications, Juan Martín Zermeño took on the role, initially with a temporary mandate and then definitively during a second period that ran from 1766 until his death in 1772. He began this second period with a certain amount of concern because of what had taken place during the last period of conflict. The Seven Years War (1756–1763), which had brought Spain into conflict with Portugal and England in the Caribbean, had also led to episodes of conflict along the Spanish–Portuguese border. Zermeño’s efforts as a planner and general engineer gave priority to the northern part of the Spanish–Portuguese border. After studying the territory and the existing fortifications on both sides of the border, Zermeño drew up three important projects in 1766. The outposts that needed to be reinforced were located, from north to south, at Puebla de Sanabria, Zamora and Ciudad Rodrigo, which is where he is believed to have come from. This latter township already had a modern installation built immediately after the War of the Spanish Succession and reinforced with the Fort of La Concepción. However, Zamora and Puebla de Sanabria had some obsolete fortifications that needed modernising. Since the middle of the 15th century Puebla de Sanabria had had a modern castle with rounded turrets, that of the Counts of Benavente. During the 16th and 17th centuries it had also been equipped with a walled enclosure with small bastions. During the War of the Spanish Succession the Portuguese had enlarged the enclosure and had erected a tentative offshoot to the west. In order to draw up the ambitious Puebla de Sanabria project Zermeño had the aid of some previous reports and projects, such as those by the Count of Robelin in 1722, the one by Antonio de Gaver in 1752, and Pedro Moreau’s report dated June 1755. This study includes a technical analysis of Zermeño’s project and its strategic position within the system of fortifications along the Spanish–Portuguese border.
Abstract:
Mobile Edge Computing enables the deployment of services, applications, content storage and processing in close proximity to mobile end users. This highly distributed computing environment can be used to provide ultra-low latency, precise positional awareness and agile applications, which could significantly improve user experience. To achieve this, it is necessary to consider next-generation paradigms such as Information-Centric Networking and Cloud Computing, integrated with the upcoming 5th Generation network access. A cohesive end-to-end architecture is proposed, fully exploiting Information-Centric Networking together with the Mobile Follow-Me Cloud approach, to enhance the migration of content caches located at the edge of cloudified mobile networks. The chosen content-relocation algorithm attains content-availability improvements of up to 500 when a mobile user performs a request, compared against other existing solutions. The evaluation considers a realistic core network, with functional and non-functional measurements, including the deployment of the entire system and the computation and allocation/migration of resources. The achieved results reveal that the proposed architecture is beneficial not only from the users’ perspective but also from the providers’ point of view, as providers may be able to optimize their resources and achieve significant bandwidth savings.
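A purely illustrative sketch of the kind of content-relocation decision described above (a generic follow-me heuristic, not the paper's algorithm; the node names, latencies and migration cost below are hypothetical):

    def choose_edge_node(current_node, predicted_latency_ms, migration_cost_ms):
        """Migrate the user's edge cache only if the latency saving beats the cost."""
        best = min(predicted_latency_ms, key=predicted_latency_ms.get)
        saving = predicted_latency_ms[current_node] - predicted_latency_ms[best]
        return best if saving > migration_cost_ms else current_node

    # The user has moved closer to 'edge-B', so the cache follows.
    print(choose_edge_node("edge-A",
                           {"edge-A": 42.0, "edge-B": 7.5, "edge-C": 18.0},
                           migration_cost_ms=10.0))    # -> 'edge-B'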
Abstract:
The procedure for successful scale-up of batchwise emulsion polymerisation has been studied. The relevant literature on liquid-liquid dispersion, on scale-up and on emulsion polymerisation has been critically reviewed. Batchwise emulsion polymerisation of styrene in a specially built 3 litre, unbaffled reactor confirmed that impeller speed had a direct effect on the latex particle size and on the reaction rate. This was noted to be more significant at low soap concentrations, and the phenomenon was related to the depletion of micelle-forming soap by soap adsorption onto the monomer emulsion surface. The scale-up procedure necessary to maintain constant monomer emulsion surface area in an unbaffled batch reactor was therefore investigated. Three geometrically similar vessels of 152, 229 and 305 mm internal diameter, and a range of impeller speeds (190 to 960 r.p.m.), were employed. The droplet sizes were measured either through photomicroscopy or via a Coulter Counter. The power input to the impeller was also measured. A scale-up procedure was proposed based on the governing relationship between droplet diameter, impeller speed and impeller diameter. The relationships between impeller speed, soap concentration, latex particle size and reaction rate were investigated in a series of polymerisations employing an amended commercial recipe for polystyrene. The particle size was determined via a light transmission technique. Two computer models, based on the Smith and Ewart approach but taking into account the adsorption/desorption of soap at the monomer surface, were successful in predicting the particle size and the progress of the reaction up to the end of Stage II, i.e. to the end of the period of constant reaction rate.
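For orientation, the kind of governing relationship referred to above can be written with a classical dilute-dispersion correlation (the exponent and the link to interfacial area are standard textbook forms and are assumptions here, not the thesis' fitted result):

    a = \frac{6\,\phi}{d_{32}}, \qquad
    \frac{d_{32}}{D_{i}} = C\,\mathrm{We}^{-0.6}, \qquad
    \mathrm{We} = \frac{\rho\,N^{2}D_{i}^{3}}{\sigma}

so, at fixed dispersed-phase hold-up \phi, keeping the total emulsion surface area a constant across scales amounts to keeping d_{32} constant, which with the correlation above requires N^{1.2}D_{i}^{0.8} = \text{const}, i.e. the impeller speed scales roughly as N \propto D_{i}^{-2/3}.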
Abstract:
We have previously described ProxiMAX, a technology that enables the fabrication of precise, combinatorial gene libraries via codon-by-codon saturation mutagenesis. ProxiMAX was originally performed using manual, enzymatic transfer of codons via blunt-end ligation. Here we present Colibra™: an automated, proprietary version of ProxiMAX used specifically for antibody library generation, in which double-codon hexamers are transferred during the saturation cycling process. The reduction in process complexity, resulting library quality and an unprecedented saturation of up to 24 contiguous codons are described. Utility of the method is demonstrated via fabrication of complementarity determining regions (CDR) in antibody fragment libraries and next generation sequencing (NGS) analysis of their quality and diversity.
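A toy enumeration of the combinatorics behind codon-by-codon saturation with double-codon (hexamer) blocks; this only illustrates how library diversity grows per transfer cycle, not the ProxiMAX/Colibra chemistry itself (the hexamer menu below is hypothetical):

    from itertools import product

    hexamers = ["GCTAGC", "TGGGAT", "TATCAT"]   # hypothetical double-codon blocks
    cycles = 3                                  # three transfer cycles -> 6 codons

    library = ["".join(parts) for parts in product(hexamers, repeat=cycles)]
    print(len(library))    # 3**3 = 27 variants of an 18 nt (6-codon) stretch
    print(library[0])      # 'GCTAGCGCTAGCGCTAGC'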
Abstract:
The objective of this thesis was to investigate the effects of the built environment on the outcome of young patients. This investigation included recent innovations in children's hospitals that integrated both medical and architectural case studies as part of their design issues. In addition, the intervention responded to man-made conditions and natural elements of the site. The thesis project, a Children's Rehabilitation Hospital, is located at 1500 N.W. River Drive in Miami, Florida. The thesis intervention emerged from a site analysis that focused on the shifting of the urban grid, the variation in scale of the immediate context and the visual-physical connection to the river's edge. Furthermore, it addressed the issues of overnight accommodation for patients' families, as well as sound control through the use of specific materials in space enclosures and open courtyards. The key to the success of this intervention lies in the special attention given to the integration between nature and the built environment. Issues such as the incorporation of nature within a building through the use of vistas, and the exploitation of natural light through windows and skylights, were pivotal in the creation of a pleasant environment for visitors, employees and young patients.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
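For reference, the observer's performance in such task-based assessments is commonly summarized by a detectability index (a standard signal-detection definition, not specific to this dissertation): if \lambda denotes the observer's test statistic on signal-present and signal-absent images,

    d' = \frac{\langle\lambda\rangle_{\mathrm{present}} - \langle\lambda\rangle_{\mathrm{absent}}}
              {\sqrt{\tfrac{1}{2}\left(\sigma^{2}_{\mathrm{present}} + \sigma^{2}_{\mathrm{absent}}\right)}},
    \qquad
    \mathrm{AUC} = \Phi\!\left(\frac{d'}{\sqrt{2}}\right)

where \Phi is the standard normal cumulative distribution function.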
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
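As an illustration of the more sophisticated end of that model family, a minimal channelized Hotelling observer can be written in a few lines of numpy (a generic sketch of the usual CHO formulation; the ROI handling and the channel matrix U, e.g. Gabor or Laguerre-Gauss channels, are assumptions here rather than the dissertation's implementation):

    import numpy as np

    def cho_detectability(present_rois, absent_rois, U):
        """present_rois, absent_rois: (n_images, n_pixels); U: (n_pixels, n_channels)."""
        v_p = present_rois @ U                    # channel outputs, signal present
        v_a = absent_rois @ U                     # channel outputs, signal absent
        dv = v_p.mean(axis=0) - v_a.mean(axis=0)  # mean signal in channel space
        S = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
        w = np.linalg.solve(S, dv)                # Hotelling template in channel space
        return np.sqrt(dv @ w)                    # d' = sqrt(dv^T S^-1 dv)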
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in the uniform phantoms are not representative of those in patients for iterative reconstruction algorithms and texture should be considered when assessing image quality of iterative algorithms.
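For comparison, the conventional NPS estimator for rectangular ROIs taken from repeated scans of a uniform region is sketched below (a minimal numpy version of the standard method, not the irregular-ROI technique developed in this work; dx and dy are the pixel dimensions):

    import numpy as np

    def nps_2d(rois, dx, dy):
        """rois: (n_rois, ny, nx) array of noise-only ROIs (e.g., from image subtraction)."""
        n_rois, ny, nx = rois.shape
        rois = rois - rois.mean(axis=(1, 2), keepdims=True)    # remove each ROI mean
        spectra = np.abs(np.fft.fft2(rois)) ** 2               # per-ROI periodograms
        return (dx * dy / (nx * ny)) * spectra.mean(axis=0)    # ensemble-averaged 2D NPS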
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
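One plausible analytic form for such a model, shown only to make the idea concrete (a hypothetical radially symmetric profile with a logistic edge, not necessarily the dissertation's parameterization), writes the lesion contrast at radius r as

    c(r) = \frac{C_{0}}{1 + \exp\!\left(\dfrac{r - R}{\sigma}\right)}

with C_{0} the peak contrast, R the lesion radius and \sigma controlling the sharpness of the edge profile.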
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Also, lesion-less images were reconstructed. Noise, contrast, CNR, and detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard of care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The paper explores informal settlement upgrading approaches in South Africa and presents a review of top-down vs. bottom-up models, using experience and lessons learned from the Durban metropolitan area. Reflections on past upgrading efforts suggest that top-down policies in South Africa have not been successful to date. By contrast, participatory techniques, such as planning activism, can be used to enhance community empowerment and a sense of local ownership. This paper reveals that although the notion of ‘bottom-up’, participatory methods for community improvement is often discussed in international development discourses, the tools, processes and new knowledge needed to ensure a successful upgrade are under-utilised. Participation and collaboration can mean various things for informal housing upgrading, and often the involvement of local communities is limited to providing feedback on development decisions already agreed by local authorities and construction companies. The paper concludes by suggesting directions for ‘co-producing’ knowledge with communities through participatory, action-research methods and integrating these insights into upgrading mechanisms and policies for housing and infrastructure provision. The cumulative impacts emerging from these approaches could aggregate into local, regional, and national environmental, social and economic benefits able to successfully transform urban areas and ensure self-reliance for local populations.
Abstract:
A dense grid of high- and very high resolution seismic data, together with piston cores and borehole data providing time constraints, enables us to reconstruct the history of the Bourcart canyon head in the western Mediterranean Sea during the last glacial/interglacial cycle. The canyon fill is composed of confined channel–levee systems fed by a series of successively active shelf fluvial systems, originating from the west and north. Most of the preserved infill corresponds to the interval between Marine Isotope Stage (MIS) 3 and the early deglacial (19 cal ka BP). Its deposition was strongly controlled by a relative sea level that impacted the direct fluvial/canyon connection. During a period of around 100 kyr between MIS 6 and MIS 2, the canyon “prograded” by about 3 km. More precisely, several parasequences can be identified within the canyon fill. They correspond to forced-regressed parasequences (linked to punctuated sea-level falls) topped by a progradational-aggradational parasequence (linked to a hypothetical 19-ka meltwater pulse (MWP)). The bounding surfaces between forced-regressed parasequences are condensed intervals formed during intervals of relative sediment starvation due to flooding episodes. The meandering pattern of the axial incision visible within the canyon head, which can be traced landward up to the Agly paleo-river, is interpreted as the result of hyperpycnal flows initiated in the river mouth in a context of increased rainfall and mountain glacier flushing during the early deglacial.
Abstract:
This thesis explores how architecture's sense of place is rooted in the natural environment. The built environment has been constructed to protect and sustain human culture from the weathering of nature. Separating experience from the natural environment removes a sense of place and belonging in the natural world and reinforces architectural dominance. This separation casts the natural world as an article of spectacle and gives the human experience an unnatural voyeurship of natural changes. By examining the fusion of architectural and natural edges, this thesis analyzes how the human experience can reconnect with a naturalistic sense of place through architecture: blending the finite edge where architecture maintains nature, adapting buildings to the cycles of the environment, and removing the dominance of man-made spaces in favour of cohabitation along the edge between built and natural forms.
Abstract:
In the approach of this dissertation, consideration is given to premises about the origins of the town, its industrial influence, and its analysis through the theory of the Social Logic of Space as a working tool for examining the urban plans proposed for the municipality under study and its history, and for elaborating proposals for future interaction. Initially, the proposal arises from the importance of communication routes as an urban-creating element, that is, of a particular piece of infrastructure, the street-road, as the “axis” along which the town consolidates and where most journeys or flows take place, exemplified here by the town of Porriño. Post-modern urbanism, up to the late 1970s and early 1980s, did not address an articulation between the social and the technical, a science of class struggle. In this context, in 1984, Bill Hillier and Julienne Hanson wrote “The Social Logic of Space”, in which they argue that movements or flows obey a rational logic whereby any journey is made along the shortest route, and that urban form therefore influences those flows. The urban configuration generates conditions of accessibility and gives rise to a hierarchical spatial differentiation, with concepts such as connectivity, integration and segregation in a space influenced and constructed by social dynamics. In this way, Space Syntax theory describes the configuration of the urban layout and the relationships between public and private space through quantitative measures, which make it possible to understand important aspects of the urban system such as accessibility and the distribution and uses of consolidated land. The theory therefore establishes a correlation between the properties of the elements presented, space and society, such that each element concerns the other and would not exist without its presence, this being fundamental to the definition of form. This tool, Space Syntax, seeks the integration of space in the city through analysis and evaluation at various scales of the urban network, with a correct distribution of spaces, their uses, and the transport or communication routes needed to reach the different places in the city. The research focuses on the analysis of the road infrastructure in the municipality of Porriño and its neighbouring regions over the historical period analysed, concentrating on three moments, 1956, 1986 and the present day (2015), for which photogrammetric flights are available. The corresponding measurements are obtained with the Depthmap software and are cross-checked and compared with one another within each stage analysed and between the years examined, in order to reach the conclusions established throughout the study regarding the influence of the flows of social interaction on the defined urban environment and the respective struggle between the industrial and residential sectors. In conclusion, the aim is to explain the origin of the town through the comparison of its routes, which were fostered by commerce and industry for its creation, thereby giving industry command of the space to satisfy its needs, creating and expanding its area of intervention, which can be analysed and treated not only independently but also within the urban whole in which it is located, Porriño.
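A minimal sketch of the kind of configurational measures Depthmap computes, here on a toy axial map using networkx (generic definitions of connectivity, mean depth and relative asymmetry; the diamond-value normalisation used for true integration/RRA is omitted, and this is not Depthmap's implementation):

    import networkx as nx

    def syntax_measures(G):
        k = G.number_of_nodes()
        out = {}
        for node in G.nodes:
            depths = nx.single_source_shortest_path_length(G, node)
            mean_depth = sum(depths.values()) / (k - 1)
            ra = 2.0 * (mean_depth - 1.0) / (k - 2.0)      # relative asymmetry
            out[node] = {
                "connectivity": G.degree[node],
                "mean_depth": round(mean_depth, 2),
                "integration_1_over_RA": round(1.0 / ra, 2),
            }
        return out

    # Toy axial map: a street-road axis of five lines, with two side lines meeting line 2.
    G = nx.path_graph(5)
    G.add_edges_from([(2, 5), (2, 6)])
    for line, measures in syntax_measures(G).items():
        print(line, measures)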
Abstract:
Shearing is the process where sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools. Shearing of new sheet materials implies new suitable shearing parameters. Investigations of the shearing parameters through live tests in production are expensive, and separate experiments are time-consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed by finite element based simulations. Accurate experimental data is still a prerequisite to validate such simulations; there is, however, a shortage of such data. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, which changes the shearing conditions by increasing the clearance between the tools. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain. With respect to stability of the tool clearance and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using the above-mentioned experimental set-up, with various tool clearances, sheet clamping conditions and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and with a plane strain approximation showed that effective strains on the surface are representative also of the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles.
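As a sketch of the kind of post-processing that links DIC surface measurements to an effective strain measure (a generic small-strain, plane-strain evaluation of the von Mises equivalent strain from in-plane displacement gradients; the numbers are made up and this is not the thesis' exact evaluation chain):

    import numpy as np

    def effective_strain(dudx, dudy, dvdx, dvdy):
        # small-strain tensor built from the measured in-plane displacement gradients
        eps = np.array([[dudx, 0.5 * (dudy + dvdx), 0.0],
                        [0.5 * (dudy + dvdx), dvdy, 0.0],
                        [0.0, 0.0, 0.0]])
        dev = eps - np.trace(eps) / 3.0 * np.eye(3)     # deviatoric part
        return np.sqrt(2.0 / 3.0 * np.sum(dev * dev))   # von Mises equivalent strain

    print(effective_strain(0.10, 0.02, 0.03, -0.05))    # ~0.09 for these sample gradients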
In general terms, this study provides a stable, symmetric experimental set-up with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements, and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits, driving a large research area (TinyML) to deploy leading Machine Learning (ML) algorithms on micro-controller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions both at the software and the hardware level, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 micro-controller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions to deal with sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
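To make the sub-byte storage idea concrete, a toy numpy illustration of packing signed int4 values two per byte and accumulating their products in 32 bits is given below (this mimics what a QNN kernel library must do in software; it is unrelated to the actual PULP-NN code or the XpulpNN ISA extensions):

    import numpy as np

    def pack_int4(values):
        """Pack signed int4 values (-8..7), two per byte, low nibble first (even count assumed)."""
        v = np.asarray(values, dtype=np.int8).astype(np.uint8) & 0x0F
        return (v[0::2] | (v[1::2] << 4)).astype(np.uint8)

    def unpack_int4(packed):
        lo = (packed & 0x0F).astype(np.int16)
        hi = ((packed >> 4) & 0x0F).astype(np.int16)
        both = np.empty(lo.size * 2, dtype=np.int16)
        both[0::2], both[1::2] = lo, hi
        return np.where(both > 7, both - 16, both)        # sign-extend the 4-bit values

    w = pack_int4([3, -2, 7, -8])                         # 4 weights stored in 2 bytes
    x = pack_int4([1, 5, -3, 2])                          # 4 activations stored in 2 bytes
    acc = int(unpack_int4(w).astype(np.int32) @ unpack_int4(x).astype(np.int32))
    print(acc)                                            # 3*1 - 2*5 - 7*3 - 8*2 = -44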