985 results for Reasonable time


Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

60.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Engenharia Elétrica - FEIS

Relevance:

60.00%

Publisher:

Abstract:

Pós-graduação em Direito - FCHS

Relevance:

60.00%

Publisher:

Abstract:

Network virtualization is a promising technique for building the Internet of the future, since it enables the low-cost introduction of new features into network elements. An open issue in such virtualization is how to effect an efficient mapping of virtual network elements onto those of the existing physical network, also called the substrate network. Mapping is an NP-hard problem, and existing solutions ignore various real network characteristics in order to solve the problem in a reasonable time frame. This paper introduces new algorithms for this problem based on 0–1 integer linear programming, built on a set of network parameters not taken into account by previous proposals. The approximate algorithms proposed here allow the mapping of virtual networks onto large network substrates. Simulation experiments give evidence of the efficiency of the proposed algorithms.
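The paper's exact 0–1 formulation is not given in the abstract. As a hedged illustration only, a generic virtual network embedding ILP (not necessarily the authors' model) can be written with binaries \(x_{iu}\) mapping virtual node \(i\) to substrate node \(u\) and binaries \(f^{ij}_{uv}\) routing virtual link \((i,j)\) over substrate link \((u,v)\):

```latex
\begin{aligned}
\min \quad & \sum_{(i,j)} \sum_{(u,v)} b_{ij}\, f^{ij}_{uv}
  && \text{(bandwidth cost)}\\
\text{s.t.} \quad
& \sum_{u} x_{iu} = 1 && \forall i \quad \text{(each virtual node mapped once)}\\
& \sum_{i} c_i\, x_{iu} \le C_u && \forall u \quad \text{(substrate CPU capacity)}\\
& \sum_{v} \bigl(f^{ij}_{uv} - f^{ij}_{vu}\bigr) = x_{iu} - x_{ju}
  && \forall (i,j),\ \forall u \quad \text{(flow conservation)}\\
& \sum_{(i,j)} b_{ij}\, f^{ij}_{uv} \le B_{uv} && \forall (u,v) \quad \text{(substrate bandwidth)}
\end{aligned}
```

Here \(c_i\) and \(b_{ij}\) are the virtual CPU and bandwidth demands and \(C_u\), \(B_{uv}\) the substrate capacities; the additional real-network parameters the paper claims to model are not identifiable from the abstract.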

Relevance:

60.00%

Publisher:

Abstract:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• Multimedia applications, on the other hand, tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. While Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and industry that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications, which leads to interesting challenges in software development, a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
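As a toy illustration of the allocation subproblem such middleware must solve (the dissertation's models additionally handle precedence constraints, scheduling and frequency selection, all omitted here), a minimal 0–1 program in Python with the PuLP library might look as follows; the task cycle counts, per-unit energy figures and deadline are invented:

```python
import pulp

tasks = {"t1": 4, "t2": 3, "t3": 5}      # execution time units (invented)
procs = {"p0": 1.0, "p1": 0.6}           # relative energy per time unit (invented)
deadline = 8                             # per-processor load bound (invented)

prob = pulp.LpProblem("task_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (list(tasks), list(procs)), cat="Binary")

# Objective: total energy of the chosen task-to-processor assignment.
prob += pulp.lpSum(x[t][p] * tasks[t] * procs[p] for t in tasks for p in procs)

# Each task runs on exactly one processor.
for t in tasks:
    prob += pulp.lpSum(x[t][p] for p in procs) == 1

# No processor is loaded beyond the deadline.
for p in procs:
    prob += pulp.lpSum(x[t][p] * tasks[t] for t in tasks) <= deadline

prob.solve()
print({t: next(p for p in procs if x[t][p].value() == 1) for t in tasks})
```

A complete solver, unlike the heuristics criticized below, either proves optimality of the returned assignment or proves infeasibility, which is the property the dissertation pursues at larger scale through decomposition.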
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. It is a well-known problem in the literature: this kind of optimization problem is very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve it in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy-efficient LCD backlight autoregulation on a real-life multimedia application processor
Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and pixel-matrix driving circuits and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others decrease the backlight level while offsetting the luminance reduction, and the resulting perceived quality degradation, with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation that allows dynamic scaling of the backlight with a negligible impact on QoS.
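The core compensation idea can be sketched as follows; this is a minimal software stand-in (the dissertation runs this step on the SoC's hardware image processing unit, not the CPU, and its exact transfer function is not given in the abstract):

```python
import numpy as np

def compensate_backlight(frame, dim_factor):
    """Scale pixel values to offset a dimmed backlight.

    Perceived luminance ~ backlight level * pixel transmittance, so
    dimming the backlight by dim_factor while multiplying pixel values
    by 1/dim_factor preserves appearance, except where bright pixels
    saturate and must be clipped (the main source of QoS loss).
    """
    out = frame.astype(np.float32) / dim_factor
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: dim the backlight to 70% and compensate an 8-bit RGB frame.
# frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
# compensated = compensate_backlight(frame, 0.7)
```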
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis overview
The remainder of the thesis is organized as follows. The first part focuses on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined stream-oriented applications on top of distributed-memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and prove the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part focuses on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.

Relevance:

60.00%

Publisher:

Abstract:

Satellite measurement validations, climate models, atmospheric radiative transfer models and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that does not have many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as proven by comparison with analytical solutions to the Huygens-Fresnel diffraction integral; it is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as of large, complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another method. This proves the feasibility of a cloud particle instrument with advantages over current standard instruments, including a unique ability to detect shattered particles using three-dimensional positions, a sample volume that does not vary with particle size or airspeed, and the ability to yield two-dimensional particle profiles from the same measurements.
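The reconstruction algorithm itself is not given in the abstract; the distance-invariant sample spacing it describes is characteristic of angular-spectrum (convolution-type) propagation, of which the following numpy sketch is a generic example rather than HOLODEC's actual code:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z.

    The sample spacing dx of the reconstructed field equals that of the
    recorded hologram for every z, unlike single-FFT Fresnel methods
    whose output spacing grows with reconstruction distance.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies, x
    fy = np.fft.fftfreq(ny, d=dx)                 # spatial frequencies, y
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    phase = 2j * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(phase) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Scanning z over the measurement volume and locating where each particle comes into focus is, in outline, how holographic probes recover positions along the optical axis.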

Relevance:

60.00%

Publisher:

Abstract:

A combinatorial protocol (CP) is introduced here and interfaced with multiple linear regression (MLR) for variable selection. The efficiency of CP-MLR rests primarily on restricting the entry of correlated variables into the model development stage. It has been used for the analysis of the Selwood et al. data set [16], and the models obtained are compared with those reported from the GFA [8] and MUSEUM [9] approaches. For this data set, CP-MLR identified three highly independent models (27, 28 and 31) with Q2 values in the range 0.632-0.518; these models are divergent and unique. Even though the present study does not share any models with the GFA [8] and MUSEUM [9] results, several descriptors are common to all these studies, including the present one. A simulation was also carried out on the same data set to explain the model formation in CP-MLR. The results demonstrate that the proposed method should be able to offer solutions to data sets with 50 to 60 descriptors in a reasonable time frame. By carefully selecting the inter-parameter correlation cutoff values in CP-MLR, one can identify divergent models and handle data sets larger than the present one without excessive computer time.
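A hedged sketch of the core idea as described here, restricting the entry of inter-correlated descriptors before model fitting; the cutoff, model size and ranking criterion below are placeholders, not the published protocol:

```python
import numpy as np
from itertools import combinations

def cp_mlr(X, y, r_cutoff=0.6, model_size=3):
    """Correlation-restricted model search: enumerate descriptor subsets
    whose pairwise |r| stays below r_cutoff, fit ordinary least squares,
    and rank the surviving models by R^2."""
    n, p = X.shape
    corr = np.abs(np.corrcoef(X, rowvar=False))
    models = []
    for subset in combinations(range(p), model_size):
        # Reject any subset containing an inter-correlated descriptor pair.
        if any(corr[i, j] >= r_cutoff for i, j in combinations(subset, 2)):
            continue
        A = np.column_stack([np.ones(n), X[:, subset]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        models.append((r2, subset, coef))
    return sorted(models, key=lambda m: m[0], reverse=True)
```

Because the correlation filter prunes most subsets before any regression is run, exhaustive enumeration over 50-60 descriptors stays tractable, which is consistent with the timing claim above.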

Relevance:

60.00%

Publisher:

Abstract:

We report a case of tularemia in a common marmoset (Callithrix jacchus) diagnosed by determination of the isolate's 16S ribosomal RNA (rRNA) gene sequence. Pathological examination of the animal revealed a multifocal acute necrotizing hepatitis, interstitial nephritis, splenitis, and lymphangitis of the mandibular, retropharyngeal, cervical and mesenteric lymph nodes. Moreover, multiple foci of acute necrosis were found in the epithelium of the jejunum and the interstitium of the lung. Bacteriological investigations revealed a septicemia. The isolated infectious agent was uncommon, not routinely diagnosed in our laboratory, and therefore difficult to identify by conventional tools with reasonable time and effort. Thus, we decided to perform a genetic analysis based on the 16S rRNA gene sequence, whereby an infection with Francisella tularensis, the causative agent of tularemia, was unambiguously diagnosed. This shows the great advantage of 16S rRNA gene sequencing as a general identification approach for unusual or rare isolates.

Relevance:

60.00%

Publisher:

Abstract:

The mechanisms responsible for the determination of phenotypes are still not well understood; however, it has become apparent that modifier genes must play a considerable role in the phenotypic heterogeneity of Mendelian disorders. Significant advances in genetic technologies and molecular medicine allow huge amounts of information to be generated from individual samples within a reasonable time frame. This review focuses on the role of modifier genes using the example of cystic fibrosis, the most common lethal autosomal recessive disorder in the white population, and discusses the advantages and limitations of candidate gene approaches versus genome-wide association studies. Moreover, the implications of modifier gene research for other monogenic disorders, as well as its significance for diagnostic, prognostic, and therapeutic approaches are summarized. Increasing insight into modifying mechanisms opens up new perspectives, dispelling the idea of genetic disorders being caused by one single gene.

Relevance:

60.00%

Publisher:

Abstract:

The ROV operations had three objectives: (1) to check whether the "Cherokee" system is suited for advanced benthological work in high-latitude Antarctic shelf areas; (2) to support the disturbance experiment by providing immediate visual information; (3) to continue ecological work started in 1989 at the hilltop situated at the northern margin of the Norsel Bank off the 4-Seasons Inlet (Weddell Sea). The "Cherokee" was equipped with 3 video cameras, 2 of which support the operation. A high-resolution Tritech Typhoon camera is used for the scientific observations to be recorded. In addition, the ROV has a manipulator, a still camera, lights and strobe, a compass, 2 lasers, a Posidonia transponder and an obstacle-avoidance sonar. The size of the vehicle is 160 × 90 × 90 cm. In the present configuration, without a TMS (tether management system), the deployment has to start with paying out the full cable length, laying it in loops on deck and connecting the glass fibres at the tether's spool winch. After a final technical check the vehicle is deployed into the water and actively driven perpendicular to the ship's axis, and floats are fixed to the tether. At a cable length of approx. 50 m, the tether is tightened to the depressor by several cable ties and both components are lowered towards the sea floor, the vehicle by the thrusters' propulsion and the depressor by the ship's winch. At 5 m intervals the tether has to be tied to the single-conductor cable. In good weather conditions the instruments supporting the navigation of the ROV, especially the Posidonia system, allow an operation mode following the ship's course if the ship's speed is slow. Together with the lasers, which act as a scale in the images, they also allow reproducible scientific analysis, since the transect can be plotted in a GIS system; consequently, the area observed can be easily calculated. Operation as a predominantly drifting system, especially in areas with near-bottom currents, is also possible; however, the connection of the tether at the rear of the vehicle is unsuitable for such conditions. The recovery of the system mirrors the deployment. Most important is to reach the sea surface at a safe distance perpendicular to the ship's axis in order not to interfere with the ship's propellers. During this phase the Posidonia transponder system is of high relevance, although it has to be switched off at a water depth of approx. 40 m. The minimum personnel needed is 4 persons to handle the tether on deck, one person to operate the ship's winch, one pilot and one additional technician for the ROV operation itself, one scientist, and one person on the ship's bridge, in addition to one on deck for whale watching when the Posidonia system is in use. The time for deployment of the ROV until it reaches the sea floor depends on the water depth and consequently on the length of cable to be paid out beforehand and tied to the single-conductor cable. Deployment and recovery at intermediate water depths can each last up to 2 hours. A reasonable time for benthological observations close to the sea floor is 1 to 3 hours, but this can be extended if scientifically justified. Preliminary results: after a first test station, the ROV was deployed 3 times for observations related to the disturbance experiment. A first attempt to cross the hilltop at the northern margin of the Norsel Bank close to the 4-Seasons Inlet was successful only for the first hundreds of metres of transect length.

The benthic community was dominated in biomass by the demosponge Cinachyra barbata. Due to the strong current of approx. 1 nm/h, the design of the system, and an expected more difficult current regime between grounded icebergs and the top of the hilltop, the operation was stopped before the hilltop was reached. In a second attempt the hilltop was successfully crossed because the current and wind situation was much more favourable. In contrast to earlier expeditions with the "sprint" ROV, it was the first time that both slopes, the smoother one in the northeast and the steeper one in the southwest, were continuously observed during one cast. A coarse classification of the hilltop fauna shows patches dominated by single taxa: cnidarians, hydrozoans, holothurians, sea urchins and stalked sponges. Approximately 20% of the north-eastern slope was devastated by grounding icebergs. Here the sediments consisted of large boulders, gravel or blocks of finer sediment resembling an irregularly ploughed field. On the Norsel Bank the Cinachyra concentrations were locally associated with high abundances of sea anemones. Total observation time amounted to 11.5 hours, corresponding to almost 6-9 km of transect length.

Relevance:

60.00%

Publisher:

Abstract:

Idea Management Systems are web applications that implement the notion of open innovation through crowdsourcing. Typically, organizations use this kind of system to connect to large communities in order to gather ideas for the improvement of products or services. Originating from simple suggestion boxes, Idea Management Systems have advanced beyond collecting ideas and aspire to be knowledge management solutions capable of selecting the best ideas via collaborative as well as expert assessment methods. In practice, however, contemporary systems still face a number of problems, usually related to information overflow and to recognizing submissions of questionable quality with reasonable time and effort allocation. This thesis focuses on the idea assessment problem area and contributes a number of solutions that allow ideas submitted to an Idea Management System to be filtered, compared and evaluated. With respect to Idea Management System interoperability, the thesis proposes a theoretical model of the Idea Life Cycle and formalizes it as the Gi2MO ontology, which makes it possible to go beyond the boundaries of a single system and compare and assess innovation in an organization-wide or market-wide context. Furthermore, based on the ontology, the thesis builds a number of solutions for improving idea assessment via community opinion analysis (MARL), annotation of idea characteristics (Gi2MO Types) and the study of idea relationships (Gi2MO Links). The main achievements of the thesis are: the application of theoretical innovation models to the practice of Idea Management, successfully recognizing the differentiation between communities; opinion metrics and their recognition as a new tool for idea assessment; and the discovery of new relationship types between ideas and their impact on idea clustering. Finally, an outcome of the thesis is the establishment of the Gi2MO Project, which serves as an incubator for Idea Management solutions and for mature open-source software alternatives to the widely available commercial suites. From the academic point of view, the project delivers resources for undertaking experiments in the Idea Management Systems area and has managed to become a forum that gathered a number of academic and industrial partners.
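Neither MARL's opinion-mining pipeline nor the Gi2MO metrics are specified in this abstract. Purely as a hypothetical illustration of the general idea of using community opinion as an assessment signal, a composite idea score might be blended from votes and comment sentiment like this (all names and weights are invented):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    sentiment: float  # in [-1, 1], supplied by any sentiment classifier

def idea_score(comments, votes_up, votes_down):
    """Hypothetical composite metric: blend community voting with the
    mean sentiment of attached comments. Opinion mining in the MARL
    style would supply the per-comment sentiment values."""
    vote_ratio = (votes_up + 1) / (votes_up + votes_down + 2)  # Laplace-smoothed
    if comments:
        opinion = sum(c.sentiment for c in comments) / len(comments)
    else:
        opinion = 0.0
    return 0.5 * vote_ratio + 0.5 * (opinion + 1) / 2  # both terms in [0, 1]
```

Ranking submissions by such a score is one way to address the information-overflow problem named above, since moderators can then review only the top of the list.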

Relevance:

60.00%

Publisher:

Abstract:

The time of concentration of a basin remains fairly imprecise to engineers. The usual procedure in a hydrological study is to compute it with several of the existing formulae and then use the mean of the obtained values. Most of the remaining hydrologic results are derived from this value, and those results will determine how future infrastructures are designed. This research began with the aim of acquiring a more reliable and objective method to estimate times of concentration. Given the impossibility of carrying out hydrological tests in a real watershed, since neither rainfall nor outflows can be monitored with sufficient accuracy, a model-based approach was proposed using two-dimensional hydraulic simulations of direct rainfall over a 2D finite-volume mesh. Amongst the available software packages, InfoWorks ICM was chosen for its speed and ease of use. As a preliminary phase, the selected hydraulic model was validated by checking the outcomes of several simulations against existing analytical formulae. Next, the resulting time-of-concentration values were compared to those obtained from expressions referenced in the technical literature, and proved highly satisfactory. Once the model was verified, 690 simulations of both natural and synthetic basins were performed, incorporating variations of area, slope, roughness, and rainfall intensity and duration, in order to obtain their concentration and lag times. This job was carried out in a reasonable time span with the aid of the vector-computation acceleration offered by CUDA (Compute Unified Device Architecture) technology. Using dimensional analysis, the time-of-concentration results were grouped into dimensionless monomials. A new formulation for the time of concentration was then obtained using multiple linear regression. This new expression was checked against the classical formulations, giving equivalent results. The presentation of this new methodology is intended to assist the engineer in carrying out hydrological studies: first, because it provides, in a simple and objective way, parameters that can feed global models such as HEC-HMS; and second, because it has proven to be a genuinely valid alternative to the usual hydrological methodology.
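The dissertation's dimensionless monomials are not reproduced in the abstract; the regression step it describes amounts to a log-linear least-squares fit of a power law, sketched here with hypothetical groups Π₁…Πₖ:

```python
import numpy as np

def fit_power_law(tc_star, pis):
    """Fit t* = a * Pi1^b1 * ... * Pik^bk by ordinary least squares
    on logarithms: log t* = log a + b1 log Pi1 + ... + bk log Pik.

    tc_star: (n_samples,) dimensionless times of concentration
    pis:     (n_samples, k) strictly positive dimensionless monomials
    """
    A = np.column_stack([np.ones(len(tc_star)), np.log(pis)])
    coef, *_ = np.linalg.lstsq(A, np.log(tc_star), rcond=None)
    a, exponents = np.exp(coef[0]), coef[1:]
    return a, exponents
```

Classical formulae such as Kirpich's have exactly this multiplicative power-law shape, which is why a fit of this form can be checked directly against them.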

Relevance:

60.00%

Publisher:

Abstract:

There is a wide catalogue of possible solutions to the problem of party-wall footings and, by extension, of corner footings as a special case of the former. Of these, the two most commonly used in building structures are, on the one hand, a centering beam that connects the party-wall footing to the footing of the nearest interior column and, on the other, the collaboration of the first-floor beam working as a tie. In the first solution, the equilibrium of the party-wall footing and the centering of the ground response are achieved thanks to the collaboration of the interior column and its foundation, together with the bending work of the centering beam. Classical modelling considers that complete centering of the ground reaction is achieved, with a uniform distribution of contact stresses under both footings. This approach therefore presupposes that the centering beam prevents any rotation of the party-wall footing, so the column can be considered perfectly fixed in the foundation. In this first model, the leading role falls to the centering beam, whose bending work frequently leads to considerable cross-sections and amounts of reinforcement. The second solution relies on the collaboration of the first-floor beam, working as a tie. Again, conventional methods assume complete success of the stabilizing mechanism of the tie, which prevents any rotation of the party-wall footing and likewise results in a uniform stress distribution. The existing conventional models for the design of this type of foundation thus involve a series of simplifications that allow the calculation to be done by manual means in a reasonable time, but with the drawback that they may depart from the real behaviour of the foundation, with the negative consequences this can have for the dimensioning of these structural elements. This doctoral thesis contrasts the conventional models for the design of party-wall and corner foundations with an alternative analysis using finite element models, with the aim of bringing to light the differences between the results obtained with the two types of modelling, analysing which variables most influence the real behaviour of this type of foundation, and proposing a new calculation model, of a conventional type, that is closer to reality. The research is developed through a virtual experimental stage that uses as its model a typical orthogonal reinforced-concrete building frame with two bays and a variable number of floors. After identifying the possible rotation of the foundation as the key element in the behaviour of party-wall and corner footings, the study variables adopted are those with the greatest potential influence on that rotation and on the stiffness of the structural element as a whole. Thus, spans from 3 m to 7 m were studied, with the number of floors ranging from ground+1 to ground+4, allowable ground pressures from 100 kN/m2 to 300 kN/m2, aspect ratios of the party-wall footing of 1.5:1 and 2:1, increases and reductions of the reinforcement of the centering beam, and variation of the depth of the centering beam from the minimum compatible with the anchorage of the column reinforcement up to a 75% increase over that minimum. The set of frames generated by applying these variables was calculated both by conventional methods and by the finite element method. The results reveal important discrepancies between the two methods that lead to significant differences in the dimensioning of this type of foundation. The use of traditional methods leads, on the one hand, to an overdimensioning of the reinforcement of the centering beam and, on the other, to an underdimensioning of the depth of the centering beam, of the size of the party-wall footing and of the reinforcement of the first-floor beam. After the analysis and discussion of results, the thesis proposes a new alternative method, of a conventional type and therefore applicable to manual calculation in a reasonable time, which yields the key parameters governing the behaviour of party-wall and corner footings and leads to a dimensioning closer to the real needs of this type of foundation.
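As a minimal statement of the classical centering-beam model that the thesis contrasts (assuming a fully centered, uniform soil reaction): with edge-column load \(N_1\) at the property line, footing-centroid eccentricity \(e\), interior-column load \(N_2\), and span \(L\) between column axes, moment equilibrium about the interior support gives

```latex
R_1 = N_1\,\frac{L}{L-e},
\qquad
R_2 = N_1 + N_2 - R_1 = N_2 - N_1\,\frac{e}{L-e},
```

so the centering beam must transfer a moment of roughly \(N_1 e\) to keep the edge footing from rotating. The rotation this model assumes away is precisely the variable the thesis identifies as governing the real behaviour.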

Relevance:

60.00%

Publisher:

Abstract:

This is the search for answers to a constant question: where do we come from, and what have we left along the way? Has everything been clear on this journey, or have we acted through an accumulation of errors inherited from earlier processes? This has been research through the exploration of our past, of our origins in matters of fire protection safety and, above all, of an architecture designed to be traversed with greater safety and evacuated in a reasonable time. The work investigates, at the national level, the evolution of society and its effects on how the problem of fire safety in buildings has been interpreted. Its fundamental interest is to clarify those aspects that affect the evacuation of people. To this end, the main milestones of action, the principal concerns that arose at each moment, and the solutions adopted have been studied, checking whether they were applied or delayed until the next major event forced a new revision of design and control procedures. First, this was done by relating the major fires that have influenced how the problem was addressed at different moments in the history of Spain. Second, by examining the figure of the architect and his participation in the means of control and legislation, in the fire service, and in resolving buildings in terms of fire protection and the evacuation of their occupants. Finally, by exploring the writings of certain specialists, essential for understanding our way of addressing the problem of protection in buildings throughout history. The most significant disasters in theatres and other public premises have been reviewed, analysing how the architects involved tried to resolve the deficiencies exposed by the risk. This is the building type where, for the first time, the concern arose to adopt safety measures and procedures in case of fire: premises with a significant accident record, hosting the main leisure activity of the time and, because of the large number of people they accommodated, a source of concern among the public and the authorities. Other questions in such a broad subject, which are only outlined in this research, are the procedures of extinguishing systems, the organizational structure of the city, the first fire insurance companies, and the appearance of patents following the industrial development of the 19th century. All of this follows the common thread of the regulations progressively drawn up on the matter. At the beginning, these regulations concerned public shows, marking the starting point of our regulation in this field and anticipating construction systems and dimensional data for evacuation. By the mid-20th century, other uses were being addressed, following the modernization of building processes and sectoral regulations, and gathering information from the professional organizations that began to demand national coordination of preventive systems, which led to the draft Regulation on fire prevention that would never be published. Throughout this stage, full of voluntary and compulsory documents at both local and national level, the dimensional criteria were defined with which the architectural elements serving evacuation should be resolved. It was a period rich in documentation, changeable, and subject to the criteria established by the most advanced neighbouring countries in the field. The last two decades of the century, bounded by the political transition and several disasters with grave consequences, defined the regulatory process that culminated in the building technical code we know today. It was a period of learning and assimilation of the safety chapter, with varied methods, whose ultimate intention was to move from a prescriptive system to a performance-based model, reflecting maturity in the analysis of the subject and the trends of neighbouring countries.