25 results for "New approaches"
Abstract:
Early ancestors of crop simulation models (De Wit, 1965; Monteith, 1965; Duncan et al., 1967) were born before even primitive personal computers were available (the Apple II was released in 1977, the IBM PC in 1981). These paleo-computer programs ran on mainframes fed with punch cards. As computers became more available and powerful, crop models evolved into sophisticated tools summarizing our understanding of how crops operate. This evolution was driven by the need to answer new scientific questions and to improve the accuracy of model simulations, especially under limiting conditions.
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used over the last two decades in the stability analysis and controller design of fuzzy systems and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a noniterative method based on a weighting-of-parameters approach and an iterative algorithm applying the extended Kalman filter, built on the same parameter-weighting idea. We show that the Kalman filter is an effective tool for identifying the T-S fuzzy model. A fuzzy controller based on a linear quadratic regulator is proposed to demonstrate the effectiveness of the estimation method in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity, and generality of the algorithm, and we prove that these algorithms converge quickly, making them practical to use.
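Because a T-S model's output is linear in its consequent parameters, the Kalman-filter identification step the abstract mentions can be illustrated with a minimal sketch. The Python below is a hypothetical reconstruction, not the authors' algorithm: it fits the consequents of a two-rule model with Gaussian memberships to a scalar nonlinear function, and the rule count, membership placement, and noise value are all assumptions.

```python
import numpy as np

def firing_strengths(x, centers, sigmas):
    """Normalized activations of Gaussian membership functions (one per rule)."""
    w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)
    return w / w.sum()

def kf_step(theta, P, x, y, centers, sigmas, r=1e-2):
    """One filtering step for the consequents theta = [a1, b1, a2, b2].
    The T-S output is linear in theta, so the 'extended' filter reduces
    to a standard Kalman filter with a state-dependent regressor H."""
    w = firing_strengths(x, centers, sigmas)
    H = np.concatenate([wi * np.array([x, 1.0]) for wi in w])  # weighted regressor
    innovation = y - H @ theta
    S = H @ P @ H + r              # innovation variance
    K = P @ H / S                  # Kalman gain
    theta = theta + K * innovation
    P = P - np.outer(K, H) @ P
    return theta, P

# Fit y = sin(x) on [-2, 2] with a two-rule T-S model (settings invented).
rng = np.random.default_rng(0)
centers, sigmas = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
theta, P = np.zeros(4), 10.0 * np.eye(4)
for _ in range(2000):
    x = rng.uniform(-2.0, 2.0)
    theta, P = kf_step(theta, P, x, np.sin(x), centers, sigmas)
```

Since the measurement map is already linear in the parameters, the recursion above is exact; the extended form only matters when the membership parameters themselves are also estimated.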
Abstract:
Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility of developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended 'nanotype' to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, much as happened with the Human Genome and other -omics projects, on issues such as exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.
Abstract:
Virtual reality (VR) techniques are being used by the scientific community to understand data and draw conclusions from it in an accessible way. However, these techniques are not used frequently for analyzing large amounts of data in the life sciences, particularly in genomics, due to the high complexity of the data (the curse of dimensionality). Nevertheless, new approaches that bring out the truly important characteristics of the data raise the possibility of constructing VR spaces in which to visually understand its intrinsic nature. The benefits of representing high-dimensional data in three-dimensional spaces by means of dimensionality reduction and transformation techniques, complemented by a strong component of interaction methods, are well known. Thus, a novel framework designed to help visualize and interact with data about diseases is presented. In this paper, the framework is applied to the Van't Veer breast cancer dataset, while oncologists from La Paz Hospital (Madrid) interact with the obtained results. That is to say, a first attempt to generate a visually tangible model of breast cancer disease, in order to support the experience of oncologists, is presented.
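The abstract does not name its reduction technique, so as a hedged illustration the sketch below uses one common choice, PCA via SVD, to map high-dimensional samples to the three coordinates a VR space can display; the data shape is invented for the example.

```python
import numpy as np

def project_to_3d(X):
    """Center the samples (rows) and project them onto the first three
    principal axes, the kind of embedding a VR space can display."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:3].T

# Invented stand-in for an expression matrix: 100 samples x 5000 features.
X = np.random.default_rng(1).normal(size=(100, 5000))
coords = project_to_3d(X)   # 100 x 3 coordinates for interactive display
```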
Abstract:
Computational Fluid Dynamics tools have already become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the available computing power. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not reached a mature stage: the accuracy of the turbulence models and of the free-surface representation are the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analysis, and URANS equation solvers are used. The present work shows the applicability and the benefits derived from the use of new approaches for turbulence modeling (Detached Eddy Simulation, DES) and free-surface representation (Level Set) in the URANS solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict a much broader frequency content and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested in a set of fairly complex flows generated by a Wigley hull in pure drift motion, with drift angle ranging from 10 to 60 degrees and at several Froude numbers to study the impact of its variation. Quantitative verification and validation are performed on the obtained results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of complex flows of extreme unsteady ship maneuvers. The Level Set method is able to capture very complex free-surface geometries, and the use of DES in unsteady simulations greatly improves the results obtained. Vortical structures and instabilities as a function of the drift angle and Fr are qualitatively identified. Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities are analyzed. St scaled using the hull length decreases with increasing distance along the vortex core (x), which is similar to results from other simulations. However, St scaled using the distance along the vortex core shows strong oscillations, compared to the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, grid resolution, and interaction between the tip vortex and other vortical structures, which needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data is lacking for large α, especially for the local flow. More recently, the high-performance computing capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20º was conducted using three grids with 10M, 48M and 250M points. DES for the flow around KVLCC2 at α = 30º is analyzed using a 13M grid and compared with earlier DES results on a 1.6M grid. Both studies are consistent with what was concluded on grid resolution herein, since the dominant frequencies of the shear-layer, Karman-like, horse-shoe and helical instabilities show only marginal variation under grid refinement.
The penalties of using coarse grids are smaller frequency amplitudes and less resolved TKE. Therefore, finer grids should be used to improve V&V and to resolve most of the active turbulent scales for all Fr and α, and the results can hopefully be compared with additional EFD data for large α when they become available.
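As a hedged aside on the St scalings this abstract compares, the snippet below only illustrates the nondimensionalization St = f L / U and the rescaling of one shedding frequency with a different reference length; all numbers are invented.

```python
def strouhal(freq_hz, ref_length_m, speed_mps):
    """Strouhal number St = f * L / U for a shedding frequency f."""
    return freq_hz * ref_length_m / speed_mps

# Invented values: shedding frequency f, ship speed U, hull length L.
f, U, L = 2.0, 1.5, 3.0
st_hull = strouhal(f, L, U)        # St scaled with the hull length
x = 0.5                            # distance along the vortex core
st_core = st_hull * x / L          # the same frequency rescaled with x
```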
Abstract:
This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown, in terms of speed-up and scalability compared to sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these specific algorithms raise many difficulties for data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient CPU-GPU coordination, using GPU algorithms for the stages that allow a straightforward parallelization and sequential CPU algorithms for the instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of 47x in this stage. However, since the construction of the neighbor lists is quite expensive, the overall speed-up achieved is 41x. The second strategy seeks to maximize the use of the GPU in the neighbor-location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy succeeds in improving the speed-up of the neighbor-location stage, the global speed-up falls because of inefficient reading of the neighbor vectors in the interactions stage. Changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Several practical applications have been added to the mentioned GPU codes. First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is simulated as a practical example of particle methods.
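To make the neighbor-location stage concrete, here is a minimal CPU reference sketch of the classical cell-list search that SPH codes rely on, written in Python for readability; the GPU variants discussed in the paper differ precisely in how these per-cell lists are written and read to avoid bank conflicts. Grid size, particle count, and the cell hash are assumptions for illustration only.

```python
import numpy as np
from collections import defaultdict

def build_cells(pos, h):
    """Hash each particle index into a uniform grid of cell size h
    (h being the kernel support radius)."""
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[tuple((p // h).astype(int))].append(i)
    return cells

def neighbor_lists(pos, h):
    """Collect, per particle, all particles closer than h by scanning
    the 27 cells surrounding its own cell."""
    cells = build_cells(pos, h)
    lists = [[] for _ in pos]
    for i, p in enumerate(pos):
        cx, cy, cz = (p // h).astype(int)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        d = p - pos[j]
                        if j != i and d @ d < h * h:
                            lists[i].append(j)
    return lists

pos = np.random.default_rng(2).random((500, 3))  # invented particle cloud
nbrs = neighbor_lists(pos, 0.1)
```

On a GPU, the per-cell append and the later per-particle reads become scattered memory accesses, which is why the paper's two strategies reorganize exactly these writes and reads.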
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, aimed at performing as much work as possible at compile-time. The approach is based on the knowledge of certain properties regarding the run-time instantiations of program variables —sharing and freeness— for which compile-time technology is available, with new approaches being currently proposed. Thus, the paper does not deal with the analysis itself, but rather with how the analysis results can be used to parallelize programs.
Abstract:
Logic programming systems which exploit and-parallelism among non-deterministic goals rely on notions of independence among those goals in order to ensure certain efficiency properties. "Non-strict" independence (NSI) is a more relaxed notion than the traditional notion of "strict" independence (SI) which still ensures the relevant efficiency properties and can allow considerably more parallelism than SI. However, all compilation technology developed to date has been based on SI, presumably because of the intrinsic complexity of exploiting NSI. This is related to the fact that NSI cannot be determined "a priori" as SI can. This paper fills this gap by developing a technique for compile-time detection and annotation of NSI. It also proposes algorithms for combined compile-time/run-time detection, presenting novel run-time checks for this type of parallelism. Also, a transformation procedure to eliminate shared variables among parallel goals is presented, attempting to perform as much work as possible at compile-time. The approach is based on the knowledge of certain properties about run-time instantiations of program variables —sharing and freeness— for which compile-time technology is available, with new approaches being currently proposed.
Abstract:
This paper presents new teaching and learning strategies implemented through new technologies, mainly virtual distance learning (e-learning), in courses related to Geology. The main objective was to bring geological aspects closer to students by using these technologies. These new approaches have produced an increase in student motivation and in the acquisition of geological knowledge, accompanied by an improvement in their grades.
Abstract:
The perceived changes towards the end of the XXth century and at the beginning of the new millennium have shown us that the cultural crisis in which we participate also reflects a crisis of universal models. The difference between our contemporary situation and the typical situations of modern orthodoxy and post-modernist fragmentation seems to indicate that it is no longer possible to formulate a valid aesthetic system and assign it a universal and timeless validity beyond its strictly punctual effectiveness; a validity that is itself subject to question because of the continuous transformations that take place in time and in the sensibility of the subject each time it takes over the place. The organized, delimited, invariable and specific reference that any pre-existing location offered reflected a hierarchy of the formal system based on the extensive: measure, standards, movement, time, modulation, codes and rules. Authors like Marshall McLuhan, Paul Virilio and Marc Augé anticipated a reality in which the conventional system no longer seemed to respond to the new architectural demands, in which information, speed, disappearance and the virtual had blurred the traditional limits of place; pre-existence no longer possessed a specific delimitation and, on the contrary, aspired to reach a global scale.
Currently, some aspects that remained latent in the constructed emerge under intensive connotations, transgressing the simple visual and expressive manifestation in order to focus on the behaviour of matter and energy as determinants of a process of adaptation to the surroundings. Throughout the entire XXth century, the development of the relation of the project to the constructed was addressed almost exclusively through actions of preservation or intervention. Both perspectives showed efforts to articulate a thought that would give theoretical consistency as a base for the production of the additive action. Nevertheless, in the last decades of the century, architectural theory ended up including thought from other fields that seems to contaminate the biased vision through which the constructed had been referred to us. Ecology, planning, philosophy, global economy, etc., suggest new approaches to the construction of the contemporary city, but this time with a determined idea of change and continuous transformation that enriches the panorama of thought and architectural practice while, according to some, putting disciplinary specificity at risk, given that there is no architecture without destruction: the constructed organism requires mutation in order to adjust to the change of shape. All of this previous conceptual framework gathered valuable attempts to give substance to a theory that could be understood from a single argumental position. Thus, in 1979 Ignasi Solá-Morales integrated all of the imprecisions that referred to an action on existing architecture under the term "intervention", which was explained in two ways. The first refers to any type of action that can be carried out in a building: protection, conservation, reuse, and so on. It is a scope where the sense of intensity stays latent as a common factor in the understanding of a single action. The second, more restricted, establishes the idea of intervention as the critical act towards the previous ideas, such as restoration, conservation and reuse. Both ultimately represent ways of interpreting a new discourse. "An intervention is as much as trying to make the building say something again, or say it in a certain direction." In mid-1985, motivated by the current of historiographical revision and the concerns regarding the deterioration of historical centres that traversed Europe, Solá-Morales decided to reflect on "the relationship" between an intervention of new architecture and the previously existing architecture; a relationship determined strictly by linguistic considerations, to his understanding, in harmony with all the architectural production of the XXth century. From Contrast to Analogy would summarise the transformations in the discursive conception of architectural intervention as a changing phenomenon depending on cultural values, while at the same time showing a clear dialogical tendency between two formal categories: Contrast, emphasising the possibilities of novelty and difference; and the emerging Analogy, a new sensibility of interpretation of the ancient building, in which similarity and diversity are manifested simultaneously. For Solá-Morales the analogical procedure is not based on the visible simultaneity of formal orders, but on associations that the subject establishes throughout time.
Through analogy, the aim is to overcome the simple visual relationship with the antique in order to focus on its spatial, physical and geographical nature. If the analogical attempt guides an opening towards a new continuity, it still persists in the connection of dimensional, typological and figurative factors, subordinate to the formal hierarchy of the pre-existing subjects. The reflexive contribution of Solá-Morales' works could have been final if, in the last decades before the end of the century, certain changes had not been perceived in the continuity of the linguistic expression encouraged by architecture, towards a kind of figurative hypertrophy. Amongst many arguments, three moments interest us here: the dissolution of compositional consistency and unitary style, the volumetric incorporation of the project as a reactive mechanism, and the change of vision from the retrospective towards the prospective that the new conservation suggests. The recurrence to the history of architecture and its recognisable forms, as a way of perpetuating memory and establishing a reference, dissolved any instinct of compositional unity and style, causing permanent relationships to tend to disappear. Composition and coherence gave way to a kind of discontinuity of isolated objects in which only possible relationships could appear; no longer as an order of certain formal and compositional rules, but as a particular way of setting elements in a specific work. The new globalised field required new forms of consistency between the project and the pre-existing subject, motivated amongst other things by the higher pace of market evolution, the increase of consumption rates and the level of information and competition between different locations; aspects which finally made stylistic consistency inefficient. In this context of disintegration, the project, whether an incorporation or an addition to a constructed building, stops being considered a volumetric appendix subordinate to the compositional and formal rules of the old, and comes to be considered an organism of reactive order that causes a change in the structural and systemic configuration of the existing foundation. The extension, previously spatial, is now considered a sensorial and morphological extension, with the implementation of technology and hyper-information, but at the same time marked by a strong tendency towards energetic optimisation in its operational role, facing the emergence of the ecological factor in contemporary production. The technological world turns into a new nature, a nature that should be analysed in ecological terms; in other words, as an event of transition in the continuous redistribution of energy. In this area, effectiveness is determined not only by the capacity of adaptation to changing conditions, but also by the capacity to transform "expressly" in order to change an environment. In a society like ours, which is modernising intensively, it is difficult to share an adequate harmony with the forms of the past. Since 1790, the date of the first French convention for the conservation of monuments, the scale of what is expected to be preserved has become more and more ambitious, so much so that nowadays the repertoire of what is conserved includes practically all typologies of the constructed surroundings. For Koolhaas, the interval between the object and the moment when its conservation is decided has been reduced, from two millennia in 1882 to a few decades nowadays.
Shortly this lapse will disappear, showing a radical change from the retrospective towards the prospective; that is to say, soon it will be necessary to decide what to conserve before constructing. The shapes of cities are the result of the continuous incorporation of architecture, and perhaps only through architecture can the response to the universe be understood: the continuity of what has already been constructed. Our work is understood also within that system, modifying the field of action and leaving the road ready for the next movement of those that will follow after us. Continuity does not mean conservatism; continuity means being conscious of the transitory value of our answers to specific needs, accepting the change that we have received. That which has been constructed to remain and last should allow future interventions to be integrated in it. It is necessary to accept continuity as a rule. Solá-Morales, in his time, distinguished the relationship between the new and the old, between contrast and analogy. Today, almost three decades later, the objective consists of evaluating whether the model of architectural intervention on the constructed has been maintained since then, or whether new ways of positioning the project with respect to the constructed have appeared. Our work aims to show the change in the projectual approach to the pre-existing, and that this change is closely related to the incorporation of new concepts, techniques, tools and necessities that the cultural context produced by the turn of the century imprints. This assumption guides us to establish an architectural parallelism between the modes of relation in which the new is manifested: between a commonly assumed (topical), generic and orthodox position, based on the visual and expressive, of the last decades of the XXth century, and an emerging (heterotopical), extraordinary and heterodox reality that stimulates the immaterial and seems to emerge with growing intensity in the XXIst century. If throughout the XXth century the project of architectural intervention was debated between the continuity and discontinuity of formal categories marked by the expression of the pre-existing building, the new contemporary intervention, as a reactive device in the landscape and territory, demands an absolute continuity: no longer visual, expressive or functional, but a physiological continuity of adaptation and change with the territory's own dynamics, under new rules of the game and unfolding operative (projective) plans and strategies from its own logic and contingency. The aim of this research is to determine the new modes of continuity and the possible logics of production that are expressed within Architectural Intervention, trying to overcome the apparentness of its physical and visual relationship, as a result of the incorporation of the operative factor unfolded by the new contemporary device. We think it is right to maintain the connotative path marked by the name "architectural intervention", since it brings together previous concepts and theoretical approaches that have evolved through time. Although the term has lacked operative scope since its formulation, a quality that our contemporary logics infer could be the reformulation and consolidation of a concept of intervention more suitable for our times, giving precedence to a logical procedure arising from its own necessity and contingency.
It seems that now time shapes the topics; it is no longer about materialising a certain time but about expressing the changes that its new temporality generates. Finally, our initial approach aspires to constitute a new form of reflection that allows us to understand the complex implications that the new architecture infers on the pre-existing, motivated by the incorporation of factors external to the simple formal and expressive judgement prevailing at the end of the XXth century. In the same way, the path we propose, as an alternative, allows us to project possible paths of prospection, considering the pre-existing as an area that spans the whole territory, with emerging dynamics of change and, with them, their logics of intervention.
Abstract:
Survey Engineering curricula involve the integration of many formal disciplines at a high level of proficiency. The Escuela de Ingenieros en Topografía, Cartografía y Geodesia at Universidad Politécnica de Madrid (Survey Engineering) has developed intense and deep teaching in the so-called Applied Land Sciences and Technologies, or Land Engineering. However, new approaches are encouraged by the European Higher Education Area (EHEA). This fact requires a review of traditional teaching and methods. Furthermore, the new globalization and international outlook gives this discipline new ways to teach and learn how to bridge the gap between cultures and regions. This work is based on two main needs. On the one hand, the integration of the basic knowledge and disciplines involved in typical Survey Engineering within Land Management. On the other, the urgent need to consider territory on a social and ethical basis, as part of society, culture, idiosyncrasy and economy. The integration of appropriate knowledge of Land Management is typically dominated by civil engineers and urban planners; it would be quite possible to integrate Survey Engineering and Cooperation for Development in the framework of Land Management disciplines. Cooperation for Development is a concept that has changed since it first came into use. Development projects leave an impact on society in response to their beneficiaries and are directed towards self-sustainability. Furthermore, it is the true bridge for narrowing the gap between societies when differences are immeasurable. The concept of development has also been changing, and nowadays it is not a purely economic concept: education, science and technology are increasingly taking a larger role in what is meant by development. Moreover, it is commonly accepted that universities should transfer knowledge to society, and that this transfer should be open to the developing countries most in need. If a country's development is driven by education, science and technology, knowledge transfer is one of the clearest ways of Cooperation for Development. Therefore, university cooperation is one of the most powerful tools to achieve it, placing universities as agents of development. In Spain, the role of universities as agents of development and cooperation has been largely strengthened. This work deals with how to implement both Cooperation for Development and Land Management within Survey Engineering in the EHEA framework.
Abstract:
This thesis' hypothesis states that architectural modernity in Mexico is not, as has sometimes been pretended, a homogeneous history focused on a handful of key figures, but rather a multiple and complex narrative in which art and print media have played an essential role. It therefore proposes a new perspective on 20th-century architecture in Mexico, analyzed through the relationship between architecture and photography, art, theory and media. Its aim is to link architecture and artistic movements, authors and publications, forms and manifestos. What is intended here is to explore the concepts of 'modernity' and 'identity' as part of the construction of architecture itself and of the concept of 'Mexicanity'. Despite the emphasis that has been given to the construction of an architectural canon —mostly related to the notions of monumentality, regionalism and mestizaje/métissage— this thesis focuses mainly on processes rather than forms. Through connections between diverse layers of information, new ways of dealing with the architectural project are explored. The Brazilian architecture critic Hugo Segawa has described research on Latin American architecture as «more a task of archaeology than of historiography». Nonetheless, he has also described Mexico as «the most vigorous center of theoretical debates in Latin America throughout the 20th century». This acute discrepancy between decay and vigor, between abundance of production and the precarious state of its conservation, has determined not only the ways in which architecture is studied and understood but also the process of architectural creation. This work is therefore outlined as a new platform from which to reformulate the discipline as a system based on a common will to research and create, far from its amnesiac condition. Following the British critic Anthony Vidler, the interest lies in the attempt to 'relate' history to project. In order to reduce the elusiveness of an incomplete history and, especially, to understand how ideas become forms and objects, this thesis is composed of 22 timelines organized in three intersecting itineraries: Architecture, Art and Theory. Drawing inspiration from Aby Warburg's Atlas Mnemosyne and Gabriel Orozco's series Asterisms, new devices for seeing are created. In this way, diverse topics unfold to draw connections between the city, buildings, utopian projects, publications and publicity. The work is developed as a new tool for exploration, articulated by layers, like an evolutionary genealogical map. Its objective is to encompass not only the architecture built in cities but also that produced on paper. Starting with the work of the generation that led Mexican architecture into the 20th century, the study extends, by way of epilogue, to the first decade of the 21st century, gathering together works which have usually been seen in isolation and thereby making it possible to understand them in a broader context. As a scenario for exploration, this work tries to prompt the crossing of meanings, in the belief that new reflection on the discipline and the contexts in which it is inscribed is indispensable.
Architecture in Mexico — a country whose population grew in the 20th century from 13 to 100 million — corresponds essentially to an anonymous production, or else one fabricated from stereotypes. However, between the sprawl of informal urban development and the recognisable landmark lies an architectural production as extensive as it is unexamined. This thesis introduces a series of new constellations, ranging from the Revolution of 1910 to the Olympic Games of 1968, and from the 1985 Mexico City earthquake to the international competitions of recent decades. These myriad perspectives present buildings that were never built, forgotten writings, iconic images and unpublished material.
Abstract:
Advances in hardware make it possible to collect huge volumes of data, and applications are emerging that must provide information in near-real time, e.g., patient monitoring or the health monitoring of water pipes. The data streaming model emerges to meet the needs of these applications, overcoming the traditional store-then-process model. In the store-then-process model, data is stored before being queried; in streaming, data is processed on the fly, producing continuous responses without ever being stored in its entirety. The challenges of processing data on the fly are the following: 1) responses must be produced continuously whenever new data arrives in the system; 2) data is accessed only once and is generally not kept in its entirety; and 3) the per-item processing time to produce a response must be low. Two models exist for computing continuous responses: the evolving model and the sliding-window model; the latter fits best those applications whose answers must be computed over the most recent data rather than over the whole history. In recent years, research in data stream mining has focused mainly on the evolving model. In the sliding-window model the body of work is smaller, since these algorithms must not only be incremental but must also delete the information that expires as the window slides, while still meeting the three challenges above. Clustering is one of the fundamental techniques of data mining: given a data set, the objective is to find representative groups that provide a concise description of the data being processed. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Due to the huge amounts of data that must be processed by such applications (up to millions of events per second), centralized solutions are often unable to cope with the timing restrictions and resort to shedding techniques, discarding data during load peaks. To avoid discarding data, the processing of streams (such as clustering) must be distributed and adapted to environments where the information itself is distributed. In streaming, research does not only focus on designs for general tasks, such as clustering, but also on finding new approaches that fit particular scenarios best. As an example, an ad-hoc grouping mechanism turns out to be more adequate than traditional k-means for defense against Distributed Denial of Service (DDoS) attacks. This thesis contributes to the clustering problem in streaming, in both centralized and distributed environments. We present a centralized clustering algorithm and show, in an extensive evaluation against state-of-the-art solutions, its ability to discover high-quality clusters in low time. We have also worked on a data structure that significantly reduces memory requirements while keeping the error of the cluster statistics under control at all times. We also provide two distributed clustering protocols.
We focus on the analysis of two key features: the impact on clustering quality when the computation is distributed, and the conditions required to reduce processing time with respect to the centralized solution. Finally, with respect to ad-hoc grouping techniques, we have developed a clustering-based DDoS detection framework. We have characterized the attacks detected and evaluated the efficiency and effectiveness of mitigating the attack impact.
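The sliding-window model's defining requirement, incremental insertion plus deletion of expired points, can be shown with a minimal sketch. The Python below is not the thesis's algorithm, only an illustration of the model: a toy k-means-style clusterer that undoes the contribution of each point as it slides out of the window; k, window size, and the data stream are invented.

```python
import numpy as np
from collections import deque

class SlidingWindowClusters:
    """Maintain k centroids over the last `window` points: each arrival
    updates its nearest centroid incrementally, and the point that slides
    out of the window has its contribution removed (the deletion step that
    distinguishes the sliding-window model from the evolving model)."""

    def __init__(self, k, dim, window, rng):
        self.window = window
        self.buf = deque()                      # (point, cluster) in arrival order
        self.sums = rng.normal(size=(k, dim))   # per-cluster linear sums
        self.counts = np.ones(k)                # pseudo-counts avoid empty clusters

    def centroids(self):
        return self.sums / self.counts[:, None]

    def add(self, x):
        c = int(np.argmin(((self.centroids() - x) ** 2).sum(axis=1)))
        self.buf.append((x, c))
        self.sums[c] += x
        self.counts[c] += 1
        if len(self.buf) > self.window:         # expire the oldest point
            old_x, old_c = self.buf.popleft()
            self.sums[old_c] -= old_x
            self.counts[old_c] -= 1

rng = np.random.default_rng(3)
sw = SlidingWindowClusters(k=3, dim=2, window=1000, rng=rng)
for _ in range(5000):                           # invented 2-D stream
    sw.add(rng.normal(size=2) + rng.choice([-5.0, 0.0, 5.0]))
```

Each point is touched once on arrival and once on expiry, which is how such algorithms keep per-item processing time low.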
Abstract:
Autonomous systems require, in most cases, reasoning and decision-making capabilities. Moreover, the decision process has to occur in real time. Real-time computing means that every situation or event has to have an answer before a temporal deadline. In complex applications, these deadlines are usually on the order of milliseconds, or even microseconds if the application is very demanding. In order to comply with these timing requirements, computing tasks have to be performed as fast as possible. The problem arises when the computations are no longer simple, but very time-consuming operations. A good example can be found in autonomous navigation systems with visual-tracking submodules, where Kalman filtering is the most widespread solution. However, in recent years, some interesting new approaches have been developed. Particle filtering, given its more general problem-solving features, has reached an important position in the field. The aim of this thesis is to design, implement and validate a hardware platform that constitutes, in itself, an embedded intelligent system. The proposed system combines particle filtering and evolutionary computation algorithms to generate intelligent behavior. Traditional approaches to particle filtering or evolutionary computation have been developed on software platforms, including parallel capabilities to some extent. In this work, an additional goal is to fully exploit the advantages of hardware implementation. By using the computational resources available in an FPGA device, better performance in terms of computation time is expected. These hardware resources take charge of the extensive repetitive computations. With this hardware-based implementation, real-time behavior is also expected.
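For readers unfamiliar with the algorithm the thesis maps onto hardware, here is a minimal bootstrap particle-filter loop. It is a sketch in Python of the generic predict-weight-resample cycle, not the thesis's FPGA design; the 1-D random-walk model, noise levels, and observations are invented.

```python
import numpy as np

def pf_step(particles, z, motion_std, meas_std, rng):
    """One bootstrap particle-filter iteration on a 1-D state:
    predict with a random-walk motion model, weight each particle by the
    measurement likelihood, then resample to concentrate particles on
    likely states."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)                  # weight
    w /= w.sum()
    idx = rng.choice(particles.size, size=particles.size, p=w)            # resample
    return particles[idx]

rng = np.random.default_rng(4)
particles = rng.normal(0.0, 1.0, 500)
for z in [0.2, 0.4, 0.7, 1.1]:          # invented noisy observations of a target
    particles = pf_step(particles, z, motion_std=0.1, meas_std=0.3, rng=rng)
estimate = particles.mean()              # point estimate of the tracked state
```

The per-particle predict and weight steps are independent, which is exactly the repetitive computation that an FPGA implementation can parallelize.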
Abstract:
This paper analyses empirical evidence of efforts to enable Spanish micro and small manufacturing companies to boost their labour productivity rates through the development of the main pillars of their corporate social responsibility (CSR) policies. This study aims to develop new approaches and sensibilities towards work from an ethical, values (virtues) and CSR perspective, showing how internal dimensions of CSR, such as those related to relationships with employees and responsibility in processes and product quality, can improve labour performance and labour efficiency, thereby contributing to a better society. The results from a sample of 929 small businesses indicate that the social responsibility policies that most contributed to a short-term increase in labour productivity are those related to internal aspects of the company, in particular its involvement in the quality of processes and products, the promotion of innovation, and employee care. However, the impact on labour productivity of CSR policies related to external factors, such as relationships with stakeholders and environmental concern, could not be empirically proven in this paper.