947 results for: Negative stiffness structure, snap-through, elastomers, hyperelastic model, root cause analysis
Abstract:
In recent years the seismic safety of historic masonry buildings has gained particular prominence. Starting chiefly with Ordinance 3274 of 2003, issued after the earthquake that struck Molise in 2002, legislation has required the monitoring and classification of protected historic buildings with respect to seismic vulnerability (2008, this year, is the deadline for completing this classification). The study of the behaviour of historic buildings (not only monuments, but also, and above all, minor buildings) and of their safety has therefore become more urgent. The Guidelines for the application of Ordinance 3274 were conceived to provide simple and effective tools and methodologies for carrying out this study within the prescribed time. The problem is especially acute for churches, which are present in great numbers throughout Italy and make up a large part of its cultural heritage. These buildings, usually composed of large masonry elements, do not exhibit box-like behaviour, since they lack floor diaphragms, effective connecting elements and internal spine walls, and they are particularly vulnerable to seismic actions. Moreover, their structural response to horizontal loads cannot be captured by a global approach based, for example, on linear modal analysis: no vibration mode involves a sufficient fraction of the structural mass, and the participation factors of the various modes are below 10% (generally much lower). For this reason, experience and the observation of real cases suggest studying the seismic safety of historic masonry churches through the analysis of the so-called "macro-elements" into which a masonry building can be subdivided, each of which exhibits autonomous structural behaviour.
This work is part of a broader study begun with the degree thesis "Limit Analysis of Masonry Structures. Theory and Application to the Triumphal Arch" (M. Temprati), which studied the behaviour of the triumphal arch of the collegiate church of Santa Maria del Borgo in San Nicandro Garganico (FG). Subdividing a masonry building into several elements is the method proposed in the Guidelines, discussed in the first chapter of the present work: the vulnerability of the structures can be studied through the collapse multiplier, a parameter able to express the level of seismic safety. The second chapter illustrates the calculation of the vulnerability indices and of the damage accelerations for the church of Santa Maria del Borgo, through the compilation of the so-called "level II" forms, as indicated in the Guidelines. The third chapter reports the calculation of the overturning collapse multiplier of the church façade, the element on which this work focuses. Because of the complexity of the structural scheme of the façade, which is connected to other elements of the building, the finite element code ABAQUS was used. The modelling of the material and the setting of the software parameters are discussed in the fourth chapter. The fifth chapter illustrates the ABAQUS analysis of the same façade scheme used for the hand calculation in chapter three: the combined use of kinematic analysis and the finite element method makes it possible, for simple examples, to validate the results obtainable with a non-linear finite element analysis and to extend their validity to more complete and complex schemes. Accordingly, the sixth chapter reports the results of the ABAQUS analyses of structural schemes that also include the elements connected to the façade.
In this way the collapse mechanism most easily activated for the façade can be clearly identified, and important information on the structural behaviour of the various parts can be obtained, also in view of a retrofitting and seismic improvement intervention.
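As a hedged illustration of the kinematic limit analysis mentioned above (a generic textbook calculation, not the actual scheme or data used in the thesis), the overturning collapse multiplier of a monolithic façade panel rotating about its base edge reduces to a simple equilibrium ratio; the dimensions below are hypothetical:

```python
# Kinematic limit analysis sketch: rigid-block overturning of a facade panel.
# The horizontal collapse multiplier lambda0 balances the stabilizing moment
# (W * t/2 about the base toe) against the overturning moment (lambda0 * W * h/2).
# Dimensions are hypothetical placeholders, not the thesis geometry.

def collapse_multiplier(thickness: float, height: float) -> float:
    """lambda0 = (W * t/2) / (W * h/2) = t / h for a uniform rigid wall."""
    return thickness / height

lam = collapse_multiplier(thickness=0.8, height=12.0)
print(f"collapse multiplier lambda0 = {lam:.3f}")
```

A multiplier below the design seismic coefficient would flag the overturning mechanism as critical; the thesis refines this kind of hand calculation with non-linear FEM analyses that include the connected structural elements.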
Abstract:
I. Max Bill is an intense giornata of a big fresco. An analysis of the main social, artistic and cultural events of the twentieth century is needed in order to trace his career through his masterpieces and architectures. Some of the faces in this hypothetical mural are, among others, Le Corbusier, Walter Gropius, Ernesto Nathan Rogers, Kandinsky, Klee, Mondrian, Vantongerloo and Ignazio Silone, while the backdrop is formed by the artistic avant-gardes, the Bauhaus, the International Exhibitions, the CIAM, the war, the reconstruction, the Milan Triennali, the Venice Biennali and the School of Ulm. An architect, though better known as a painter, sculptor, designer and graphic artist, Max Bill attended the Bauhaus as a student in the years 1927-1929, and from this experience derived the main features of a rational, objective, constructive and non-figurative art. His research was devoted to giving his art a scientific methodology: each work proceeds from the analysis of a problem to its logical, and always verifiable, solution. By means of compositional elements (such as rhythm, seriality, theme and variation, harmony and dissonance), he tackled, with consistent results, themes apparently very distant from one another, such as the project for the H.f.G. or the design of a typeface. Mathematics is a constant frame of reference, a field of certainty, order and objectivity: "for Bill mathematics are never confined to a simple function: they represent a climate of spiritual certainties, and also the theme of the non-attempted in its purest state, objectivity of the sign and of the geometrical place, and at the same time restlessness of the infinite: Limited and Unlimited". In almost sixty years of activity, spanning every artistic field, Max Bill worked, designed, held conferences and exhibitions in Europe, Asia and the Americas, engaging with the most influential personalities of the twentieth century.
In such a vast scenery, the need to limit the field of investigation was combined with the decision to address and analyse the unpublished and original aspect of Bill's relations with Italy. The original contribution of the present research lies in this "geographic delimitation"; in particular, beyond the deep cultural exchanges between Bill and a series of Milanese architects, above all Rogers, two main projects have been addressed: the realtà nuova at the Milan Triennale in 1947 and the Contemporary Art Museum in Florence in 1980. It is important to note that these projects have not been previously investigated, and the former does not even appear in the sources. These works, together with better-known ones, such as the projects for the VI and IX Triennale and the Swiss pavilion for the Biennale, add important details to the frame of the relations between Zurich and Milan. Most of the occasions for exchange took place between the Thirties and the Fifties, years during which Bill underwent a significant period of artistic growth. He met the Swiss progressive architects and the Paris artists of the Abstraction-Création movement, entered the CIAM, collaborated with Le Corbusier on the third volume of his Complete Works, and in Milan worked on, and confronted himself with, the events of post-war reconstruction. In these years Bill defined his own working methodology, attaining artistic maturity in his work. The present research investigates this period, with some necessary exceptions. II. The official Max Bill bibliography is naturally wide, ranging from popularizing works to more analytical studies, mainly written in German and often translated into French and English (Max Bill himself published his works in three languages). Few works have been published in Italian and, excluding the catalogue of the 1977 Parma exhibition, they cannot be considered comprehensive.
Many publications are exhibition catalogues, some of which include essays written by Max Bill himself; others carry Bill's comments in an educational-pedagogical vein, accompanying the observer towards a full understanding of the compositional processes of his art works. Bill also left a great amount of theoretical reflection intended to encourage a critical reading of his works, in the form of books edited or written by him and essays published in "Werk", the magazine of the Swiss Werkbund, and in other international reviews, among them Domus and Casabella. These three reviews have been important tools of analysis, since they contain traces of some of Max Bill's architectural works. The architectural side is less investigated than the plastic and pictorial ones in all the main reference manuals on the subject: Benevolo, Tafuri and Dal Co, Frampton and Allenspach treat Max Bill as an artist whose work proceeds from the Bauhaus to the Ulm experience. A first catalogue of his works was published in 2004 in the monographic issue of the Spanish magazine 2G, together with critical essays by Karin Gimmi, Stanislaus von Moos, Arthur Rüegg and Hans Frei, and in "Konkrete Architektur?", again by Hans Frei. There are, moreover, the monographic essay on the Atelier Haus building by Arthur Rüegg from 1997, and the DPA 17 issue of the Catalonia Polytechnic with contributions by Carlos Martì, Bruno Reichlin and Ton Salvadò, the latter publication concentrating on a few of Bill's themes and architectures. A renewed impulse to study Max Bill's work in depth came in 2008 with the centenary of his birth and with a recent rediscovery of Bill as an initiator of the "minimalist" tradition in Swiss architecture. Bill's heirs are very active in promoting exhibitions, research and publications. Jakob Bill, Max Bill's son and a painter himself, recently published a work on Bill's experience at the Bauhaus, having earlier published an in-depth study of the "Endless Ribbons" sculptures.
Angela Thomas Schmid, Bill's wife and an art historian, published at the end of 2008 the first volume of a biography of Max Bill and, together with the film maker Eric Schmid, produced a documentary film which was presented at the last Locarno Film Festival. Both biography and documentary concentrate on Max Bill's political involvement, from antifascism and the 1968 protest movements to Bill's experiences as a Zurich Municipality councilman and a member of the Swiss Confederation Parliament. In the present research, the bibliography also includes direct sources, such as interviews and original materials in the form of correspondence and graphic works, together with related essays, kept in the max+binia+jakob bill stiftung archive in Zurich. III. The results of the present research are organized into four main chapters, each subdivided into four parts. The first chapter concentrates on the research field, the reasons for the study and the tools and methodologies employed, whereas the second consists of a short biographical note organized by topics, introducing the subject of the research. The third chapter, which includes unpublished events, traces the historical and cultural frame, with particular reference to the relations between Max Bill and the Italian scene, especially Milan and the architects Rogers and Baldessari around the Fifties, searching for the themes and the interpretive keys of Bill's architectures and investigating the critical debate in the reviews and the plastic research carried out through sculpture. The fourth and last chapter examines four main architectures, chosen on a geographical basis and all devoted to exhibition spaces, investigating Max Bill's compositional process in relation to the pictorial field. The paintings were certainly easier and faster to investigate and verify than the buildings.
A doctoral thesis discussed in Lausanne in 1977, investigating Max Bill's plastic and pictorial works, provided a series of devices which were corrected and adapted to define the interpretation grid for the compositional structures of Bill's main architectures. Four different tools are employed in the investigation of each work: a context analysis related to the results of chapter three; a specific theoretical essay by Max Bill briefly explaining his main theses, even when not directly linked to the work of art considered; the interpretation grid for the compositional themes, derived from a related pictorial work; and the architectural drawing and digital three-dimensional model. The double analysis of the architectural and pictorial fields serves to underline the relation among the different elements of the compositional process; the two fields, however, cannot be compared, and they remain, in Max Bill's works as in the present research, interdependent though self-sufficient. IV. An important aspect of Max Bill's production is self-referentiality: talking of Max Bill, also through Max Bill, as a need for coherence rather than a methodological limitation. Ernesto Nathan Rogers described Bill as the last humanist, whose horizon is the known world; but, like the "Concrete Art" of which he is one of the main representatives, his production justifies itself: Max Bill not only found a method, but autonomously re-wrote the "rules of the game", derived timeless theoretical principles and verified them through a rich and interdisciplinary artistic production. The most recurrent words in the present research are synthesis, unity, space and logic. These terms belong to Max Bill's vocabulary and can be referred to his works. Similarly, the graphic settings and analytical schemes in this text that refer to or comment on Bill's architectural projects were drawn up with the concise precision of his architectural design in mind.
As was written of Mies van der Rohe, Max Bill took art to "degree zero", thereby reaching a high complexity. His works are a synthesis of art: they conceptually encompass all previous and, considering their developments, most contemporary pictures. Contents and message are generally explicitly declared in the title or in Bill's essays on his artistic works and architectural projects: the beneficiary is invited to go through and re-build the process of synthesis that generates the shape. In the course of an interview, the Milanese artist Getulio Alviani told how he would not write more than a page for an essay on Josef Albers: everything was already evident "on the surface", and any additional sentence would be redundant. Two years after that interview, these pages attempt to decompose and single out the elements and processes connected with some of Max Bill's works which, by their very origin, already contain all possible explanations and interpretations. Formal reduction in favour of the maximization of content is, perhaps, Max Bill's main lesson.
Abstract:
There is widening consensus that, in many developed countries, food production-consumption patterns have in recent years been undergoing a process of deep change towards diversification and re-localisation practices, as a counter-tendency to the increasing disconnection between farming and food, producers and consumers. The relevance of these initiatives certainly does not lie in their economic dimension, but rather in their intense diffusion and growth rate, their spontaneous and autonomous nature and, especially, their intrinsic innovative potential. These dynamics involve a wide range of actors around local food patterns, embedding short food supply chain initiatives within a more complex and wider process of rural development, based on principles of sustainability, multifunctionality and valorisation of endogenous resources. In this work we analyse these features through a multi-level perspective, with reference to the dynamics between niche and regime and the inherent characteristics of the innovation paths. We apply this approach, through a qualitative methodology, to the analysis of the experience of farmers' markets and Solidarity-Based Consumer Groups (Gruppi di Acquisto Solidale) ongoing in Tuscany, seeking to highlight the dynamics that are affecting the establishment of this alternative food production-consumption model (and its related innovative potential) from within and from without. To verify whether and under which conditions they can constitute a niche, a protected space where radical innovations can develop, we refer to the three interrelated analytic dimensions of socio-technical systems: the actors (i.e. individuals, social groups, organisations), the system of rules and institutions, and the artefacts (i.e. the material and immaterial contexts in which the actors move).
Through this lens, we analyse the innovative potential of niches and their level of structuration and, then, the mechanisms of system transition, focusing on the new dynamics within the niche and between the niche and the policy regime that emerged after the growth of interest by mass media and public institutions and their direct involvement in the initiatives. Following the development of these significant experiences, we explore more deeply the social, economic, cultural, political and organisational factors affecting innovation in face-to-face interactions, highlighting the critical aspects (sharing of alternative values, coherence at the level of individual choices, frictions over organisational aspects, inclusion/exclusion, attitudes towards integration at territorial level), up to the emergence of tensions and the risks of opportunistic behaviour that might arise from their growth. Finally, a comparison with similar experiences abroad is drawn (specifically with Provence), in order to draw food for thought potentially useful for leading regional initiatives towards a transition path.
Abstract:
In recent years of research, I have focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients. I. ICP-model Designed for Medical Education We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. The normal physiological modelling included intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as head-up position and intracranial haemorrhage. The model performed in a clinically realistic way given inputs from published data on traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The simulated manoeuvres include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding, and obstruction of cerebrospinal fluid outflow).
Based on the results, we believe the model would be useful for teaching the complex relationships of brain haemodynamics and for studying clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres). II. A Heterogeneous Cerebrovascular Mathematical Model Cerebrovascular pathologies are extremely complex, owing to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays the major role during unilateral occlusion of the middle cerebral artery.
In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics; it can not only explain generalized results in terms of the physiological mechanisms involved, but may also, by individualizing parameters, represent a valuable tool to help with difficult clinical decisions. III. Effect of the Cushing Response on Systemic Arterial Pressure During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and the intracranial pressure dynamics (the same model as in point I). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase).
Patients with severe head injury were then simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. Under these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings. IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data for normoxic and hyperoxic hypercapnia simulations. In particular, the simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions. In conclusion, models are capable not only of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments.
In the latter case they generate experiments to be performed to gather the missing data.
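As a hedged, minimal illustration of the lumped-parameter style of modelling described above (a classic textbook monoexponential pressure-volume relationship, not the comprehensive thesis model; all parameter values are hypothetical), the rise of intracranial pressure under a net volume load can be simulated with a simple Euler integration:

```python
# Minimal sketch of intracranial pressure-volume dynamics using a
# monoexponential P-V curve: dP/dt = E * P * dV/dt, where E is an
# elastance coefficient (1/mL). This is a generic illustration, NOT the
# comprehensive ICP model described in the abstract; values are invented.

def simulate_icp(p0=10.0, elastance=0.11, inflow_ml_s=0.01,
                 dt=0.1, n_steps=600):
    """Euler integration of ICP under a constant net CSF/blood volume inflow."""
    p = p0
    trace = [p]
    for _ in range(n_steps):
        p += elastance * p * inflow_ml_s * dt  # exponential P-V behaviour
        trace.append(p)
    return trace

trace = simulate_icp()
print(f"ICP rises from {trace[0]:.1f} to {trace[-1]:.1f} mmHg over 60 s")
```

The exponential shape is what makes small volume additions harmless at low ICP but dangerous at high ICP, which is the regime the plateau-wave simulations above explore with far richer physiology.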
Abstract:
Somatostatin is a molecule of multifunctional character, credited with neurotransmitter, neuromodulator and (neuro)hormone properties. In keeping with its ubiquitous distribution in tissues, it influences metabolic and developmental processes, up to learning and memory performance. These effects result from the local and temporal interplay of one ligand and five G-protein-coupled receptors (SSTR1-5). To characterize the biological significance of the somatostatin system in the whole organism, a mutational analysis of individual system components was carried out. It comprised the inactivation of the genes for the somatostatin prepropeptide and for the receptors SSTR3 and SSTR4 by gene targeting. The corresponding loss-of-function mutations demonstrate that neither receptors 3 and 4 nor somatostatin are necessary for the survival of the organism under standard housing conditions. The corresponding mouse lines show no immediately apparent limitations of their biology. The somatostatin null mouse became the main subject of a detailed investigation, owing to the ligand's superordinate position in the signalling cascade and to the available evidence on its function. After thorough analysis the following conclusions could be drawn: loss of the somatostatin gene results in elevated plasma concentrations of growth hormone (GH). This is consistent with somatostatin's role as an inhibitory factor of growth hormone release, which is abolished in the mutant. The somatostatin null mouse also made clear that somatostatin acts as an essential link between the growth and stress axes. Permanently elevated corticosterone levels in the mutants imply a negative tonic influence on glucocorticoid secretion in vivo. The knockout mouse thus shows that somatostatin normally functions as a decisive inhibitory control element of steroid release.
Behavioural experiments revealed a deficit in motor learning. Somatostatin null mice lag behind their littermates in the rotarod ("rotating rod") learning paradigm, without being generally impaired in motor function or coordination. These motor learning processes depend on a functioning cerebellum. Since somatostatin and its receptors are scarcely present in the adult cerebellum but are expressed in the developing one, this result demonstrates a function for neuropeptides transiently expressed during development, a long-standing but hitherto experimentally unproven hypothesis. Examination of further physiological parameters and behavioural categories under standard laboratory conditions revealed no visible deviations from wild-type mice. An animal model is thus available for further somatostatin research: in endocrinological, electrophysiological and behavioural experiments, a direct correlation is now possible selectively with the somatostatin peptide or with receptors 3 and 4, and also with combinations of the loss-of-function mutations after appropriate crosses.
Abstract:
In this research work I analyzed the instrumental seismicity of Southern Italy in the area including the Lucanian Apennines and the Bradano foredeep, making use of the most recent seismological database available so far. I examined the seismicity that occurred between 2001 and 2006, considering 514 events with magnitudes M ≥ 2.0. In the first part of the work, P- and S-wave arrival times recorded by the Italian National Seismic Network (RSNC), operated by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), were re-picked along with those of the SAPTEX temporary array (2001-2004). For some events located in the Upper Val d'Agri, I also used data from the Eni-Agip oil company seismic network. I computed the VP/VS ratio, obtaining a value of 1.83, and I carried out an analysis for the one-dimensional (1D) velocity model that approximates the seismic structure of the study area. After this preliminary analysis, making use of the records obtained in the SeSCAL experiment, I augmented the database by hand-picking new arrival times. My final dataset consists of 15,666 P- and 9,228 S-arrival times associated with 1,047 earthquakes with magnitude ML ≥ 1.5. I computed 162 fault-plane solutions and composite focal mechanisms for closely located events. I investigated the stress field orientation by inverting focal mechanisms belonging to the Lucanian Apennines and the Pollino Range, both areas characterized by more concentrated background seismicity. Moreover, I applied the double-difference (DD) technique to improve the earthquake locations. Considering these results and different datasets available in the literature, I carried out a detailed analysis of individual sub-areas and of a swarm (November 2008) recorded by the SeSCAL array. The relocated seismicity appears more concentrated within the upper crust and is mostly clustered along the Lucanian Apennine chain.
In particular, two well-defined clusters were located in the Potentino and in the Abriola-Pietrapertosa sector (central Lucanian region). Their hypocentral depths are slightly greater than those observed beneath the chain. I suggest that these two seismic features are representative of the transition from the inner portion of the chain, with NE-SW extension, to the external margin, characterized by dextral strike-slip kinematics. In the easternmost part of the study area, below the Bradano foredeep and the Apulia foreland, the seismicity is generally deeper and more scattered, and is associated with the Murge uplift and with the small structures present in the area. I also observed a small NE-SW-oriented structure in the Abriola-Pietrapertosa area (activated by a swarm in November 2008) that could act as a barrier to the propagation of a potential rupture along an active NW-SE-striking fault system. The focal mechanisms computed in this study are largely normal and strike-slip solutions, and their tensional axes (T-axes) have a general NE-SW orientation. Thanks to the denser coverage of seismic stations and the detailed analysis, this study is a further contribution to the understanding of the seismogenesis and state of stress of the Southern Apennines region, providing important input for seismotectonic zoning and seismic hazard assessment.
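As a hedged illustration of one step mentioned above, a VP/VS ratio can be estimated from paired P- and S-arrival times with a classical Wadati diagram, in which the slope of (tS - tP) against tP equals VP/VS - 1 (assuming a common origin time and a constant velocity ratio). Whether the thesis used exactly this method is not stated; the arrival times below are synthetic, chosen to reproduce the reported value of 1.83:

```python
# Wadati-diagram sketch: estimate VP/VS from P and S arrival times.
# For a common origin time, (tS - tP) grows linearly with tP, and the
# least-squares slope is VP/VS - 1. Data are synthetic, not the thesis catalogue.

def vp_vs_from_wadati(tp, ts):
    """Least-squares slope of (ts - tp) vs tp, returned as VP/VS."""
    n = len(tp)
    y = [s - p for p, s in zip(tp, ts)]          # S-P times
    mx, my = sum(tp) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(tp, y))
    den = sum((xi - mx) ** 2 for xi in tp)
    return num / den + 1.0

tp = [2.0, 3.5, 5.0, 8.0, 12.0]                  # P travel times (s)
ts = [t * 1.83 for t in tp]                      # S arrivals for VP/VS = 1.83
print(f"estimated VP/VS = {vp_vs_from_wadati(tp, ts):.2f}")
```

In practice such fits are made over many station-event pairs, and outlier picks are rejected before the regression.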
Abstract:
This research activity studied how uncertainties arise and are interrelated in the multi-model approach, since model uncertainty appears to be the biggest challenge of ocean and weather forecasting. Moreover, we tried to reduce the model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, it is not possible to obtain a reasonable estimate of the error while neglecting the grid size of the ocean model, because of the large number of sub-grid phenomena embedded in the spatial discretization that can only be roughly parametrized rather than evaluated explicitly. For this reason we also developed a high-resolution model, in order to calculate for the first time the impact of grid resolution on model error.
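A common form of the superensemble (in the style of Krishnamurti's multimodel superensemble; the abstract does not specify the exact algorithm) regresses observations on bias-corrected model forecasts over a training period and then combines new forecasts with the fitted weights. A hedged sketch with synthetic data:

```python
# Hedged superensemble sketch: least-squares weights over a training period.
# All data are synthetic stand-ins, not the thesis's forecast datasets.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_models = 100, 3
truth = rng.normal(size=n_train)
# Each model = truth + its own bias and noise
forecasts = truth[:, None] + rng.normal(0.5, 0.3, (n_train, n_models))

# Remove each model's mean bias over the training period
anoms = forecasts - forecasts.mean(axis=0)
target = truth - truth.mean()

# Weights minimizing the squared training error
weights, *_ = np.linalg.lstsq(anoms, target, rcond=None)

def superensemble(new_forecasts):
    """Combine bias-corrected forecasts with the trained weights."""
    return truth.mean() + (new_forecasts - forecasts.mean(axis=0)) @ weights

print(weights.shape)  # -> (3,)
```

By construction, the weighted combination cannot do worse on the training period than any single bias-corrected model; the sensitivity study mentioned above then varies parameters such as the training-period length.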
Abstract:
Cities nowadays face complex challenges in meeting objectives of socio-economic development and quality of life. The concept of the "smart city" is a response to these challenges. Although common practices are being developed all over the world, different priorities are defined and different architectures are followed. In this master's thesis I focus on the applied architecture of the Riverside case study, through a progression model that outlines the main steps that moved the city from a situation of crisis to being named "Intelligent Community" of 2012 by the Intelligent Community Forum. I discuss the problem of integration among the physical, institutional, and digital dimensions of smart cities and the "bridges" that connect these three spatialities. Riverside's progression model takes as a reference a comprehensive framework built by unifying the key components of the three most cited frameworks in this field: a technology-oriented vision (strongly promoted by IBM [Dirks et al. 2009]), an approach-oriented one [Schaffers et al. 2011] sponsored by many initiatives within the European Commission, and a purely service-oriented one [Giffinger et al. 2007][Toppeta, 2010].
Abstract:
The various light-harvesting proteins (Lhc proteins) of higher plants differ in their oligomerization behavior. Photosystem II contains six Lhc proteins, which form either the monomeric light-harvesting complexes (LHC) CP24 (Lhcb6), CP26 (Lhcb5), and CP29 (Lhcb4) or the trimeric LHCII (Lhcb1, Lhcb2, and Lhcb3). According to the crystal structure, four Lhc proteins are located in photosystem I, organized as heterodimers. LHCI-730, the main focus of this work, is composed of Lhca1 and Lhca4, while LHCI-680 consists of Lhca2 and Lhca3. The aim of the work was to identify the protein regions and amino acids responsible for the different oligomerization behavior. The consensus sequence alignments of various Lhca and Lhcb proteins from many species generated for this work support the conclusion drawn from structural data and other sequence alignments that the LHCs share a common monomer structure. Helices 1 and 3 largely show very high sequence identity, whereas the N- and C-termini, the two loop regions, and helix 2 are only weakly conserved. If the regions of high sequence agreement are responsible for producing similar monomeric LHC structures, the causes of the different oligomerization behavior could lie in the weakly conserved domains. Accordingly, the weakly conserved domains of the monomeric Lhcb4, of Lhca4 (which dimerizes with Lhca1), and of the trimer-forming Lhcb1 were exchanged for the corresponding domains of the other proteins, and the resulting mutants were examined with respect to their oligomerization behavior. In Lhca4, helix 2 and the stromal loop were identified as two domains essential for heterodimerization. In Lhcb1, besides the N-terminus, the second helix and the stromal loop domain were also indispensable for trimerization.
In addition, dimerization and trimerization were impaired when the lumenal loop was exchanged. A minor contribution of the C-terminus to Lhcb1 trimerization was also demonstrated. A further aim of the work was the transfer of oligomerization properties from one protein to another by extensive domain exchange. Transferring the ability to form dimers by substitution with the essential Lhca4 domains (50% of the lumenal loop, 100% of helix 2, and 100% of the stromal loop) succeeded for Lhcb4, but not for Lhcb1. Transferring the ability to trimerize to Lhca4 and Lhcb4 failed. An Lhca1 mutant carrying all Lhca4 domains essential for dimerization, which was expected to form multimeric LHCs through interactions between individual molecules, was already impaired in monomer formation. Transferring oligomerization ability to other proteins by massive domain transfer thus proved difficult, presumably because the mutated protein still contained original tertiary-structure elements that were incompatible with the transferred protein segments. Future experiments addressing the transferability of oligomerization properties should therefore consider not only the hitherto neglected first part of the lumenal loop but also weakly conserved amino acids in helices 1 and 3. A further aim of this work was to examine LHCI-730 dimerization in detail. Mutational analyses confirmed the influence of isoleucine 103 and histidine 99 known from earlier studies; the latter possibly interacts with Lhca1 via its bound chlorophyll. Phenylalanine 95 also turned out to be an important interaction partner and could interact with a phosphatidylglycerol located between Lhca1 and Lhca4.
Serine 88 of Lhca4, which is likewise involved in dimer formation, could, given the spatial proximity seen in modeling, interact directly with glycine 190 at the C-terminus of Lhca1. Furthermore, phenylalanine 84 in the lumenal loop of Lhca4 was identified as an interaction partner of tryptophan 185 in the C-terminus of Lhca1. The simultaneous exchange of isoleucine 109 and lysine 110 in the stromal loop of Lhca4 demonstrated their influence on dimerization. While earlier studies had identified amino acids at the N- and C-termini of Lhca1 and Lhca4 involved in dimer formation, this work identified many protein regions and amino acids in helix 2 and the loop regions of Lhca4 that participate in dimer formation. To elucidate all amino acids involved in the Lhca1-Lhca4 interaction, mutational analyses would still have to identify the interaction partners, suspected in the stromal loop of Lhca4, of tryptophan 4 at the N-terminus of Lhca1 (which is important for dimerization), as well as the interaction partners, suspected in helix 3 of Lhca1, of helix 2 of Lhca4.
Abstract:
Membranes play an essential role in many important cellular processes. They enable the generation of chemical gradients between the cell interior and its surroundings. The cell membrane performs key tasks in intra- and extracellular signal transduction and in adhesion to surfaces. Through processes such as endocytosis and exocytosis, substances are transported into or out of the cell, enclosed in vesicles formed from the cell membrane. In addition, the membrane protects the cell interior. The main component of a cell membrane is the lipid bilayer, a two-dimensional fluid matrix with a heterogeneous composition of different lipids. Further building blocks, such as proteins, are embedded in this matrix. On the inner face of the cell, the membrane is coupled to the cytoskeleton via anchor proteins. This polymer network increases stability, influences the shape of the cell, and plays a role in cell motility, among other functions. Cell membranes are not homogeneous structures; depending on function, different lipids and proteins are enriched in microscopic domains. To understand the fundamental mechanical properties of the cell membrane, the model system of pore-spanning membranes was used in this work. The development of pore-spanning membranes makes it possible to investigate the mechanical properties of membranes on the micro- to nanoscopic scale with atomic force microscopy methods. Here, the porosity and pore size of the substrate determine the spatial resolution with which the mechanical parameters can be probed. Pore-spanning lipid bilayers and cell membranes on novel porous silicon substrates with pore radii from 225 nm to 600 nm and porosities of up to 30% were investigated.
A route toward a comprehensive theoretical modeling of the local indentation experiments and the identification of the dominant energetic contributions to the mechanics of pore-spanning membranes is presented. Pore-spanning membranes show a linearly increasing force with increasing indentation depth. By investigating different surfaces, pore sizes, and membranes of different composition, it was possible for free-standing lipid bilayers to determine the influence of the surface properties and geometry of the substrate, as well as of the membrane phase and the solvent, on the mechanical properties. The experimental data can be described with a theoretical model, yielding parameters such as the lateral tension and the bending modulus of the membrane. Depending on the substrate properties, lateral tensions from 150 μN/m up to 31 mN/m were found for free-standing lipid bilayers, with bending moduli between 10^(−19) J and 10^(−18) J. Force-indentation experiments on pore-spanning cell membranes allowed a comparison between the model of free-standing lipid bilayers and native membranes. The lateral tensions for native free-standing membranes were determined to be 50 μN/m. Furthermore, the influence of the cytoskeleton and the extracellular matrix on the mechanical properties was determined and mapped within a basolateral cell-membrane fragment, with the periodicity and pore diameter of the substrate setting the spatial resolution. Fixation of the free-standing cell membrane increased the bending modulus of the membrane by up to a factor of 10. This work shows how locally resolved mechanical properties can be measured and quantified using the model system of pore-spanning membranes.
Furthermore, the dominant energetic contributions are discussed and a comparison to natural membranes is established.
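The competing energetic contributions of lateral tension and bending in such experiments are commonly written as a Helfrich-type energy functional; as a hedged sketch (not the thesis's specific model),

```latex
E \;=\; \int_A \left[\, \sigma \;+\; \frac{\kappa}{2}\,(2H)^2 \,\right] \mathrm{d}A ,
```

where σ is the lateral tension, κ the bending modulus, and H the local mean curvature. In the tension-dominated regime this yields the approximately linear force-indentation response F(δ) ≈ k·δ with a stiffness k set mainly by σ, consistent with the linear behavior reported above.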
Abstract:
The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a stringent test of theoretical predictions.
For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the 3-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720, and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and allow, together with several additions to the standard experimental setup, for tight control of systematic uncertainties.
To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without the peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.
To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form-factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations.
The results were compared to an extraction via the standard Rosenbluth technique.
The dip structure in G_E seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form-factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.
The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb-shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
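For reference, the Rosenbluth technique mentioned above separates the two form factors through the angular dependence of the reduced cross section at fixed Q^2:

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}
= \left(\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\right)_{\mathrm{Mott}}
\frac{\varepsilon\, G_E^2(Q^2) + \tau\, G_M^2(Q^2)}{\varepsilon\,(1+\tau)},
\qquad
\tau = \frac{Q^2}{4 m_p^2 c^2},
\qquad
\varepsilon = \left[\,1 + 2(1+\tau)\tan^2\!\frac{\theta}{2}\,\right]^{-1} .
```

A linear fit of the reduced cross section versus the virtual-photon polarization ε at fixed Q^2 then gives G_E^2 from the slope and τ G_M^2 from the intercept, which is the extraction the direct cross-section fits of this work are compared against.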
Abstract:
This study examines process-outcome relationships of a cognitive-behavioral group therapy for diabetes and depression within the DAD study.
Owing to the lack of suitable instruments for valid, economical, and complementary session evaluation by group patients and therapists, two therapist session questionnaires were developed, modeled on a patient questionnaire (GTS-P): the GTS-T for rating the group as a whole and the GTS-TP for rating individual patients. The GTS questionnaires show overall good item parameters and reliabilities. The two-factor model identified in the exploratory factor analyses of the GTS-P (1. perceived confidence regarding the group therapy, 2. perceived personal involvement) was confirmed in the confirmatory factor analyses; for this purpose, GTS-P data from a study of patients with somatoform disorders (Schulte, 2001) were included. Following the results of the item and factor analyses, two items of the GTS-P and two further items of the GTS-T were removed from the instruments. The GTS-T shows a one-factor structure, while the GTS-TP shows a two-factor structure parallel to that of the GTS-P.
In the multilevel analyses predicting the therapy outcome (post-treatment depressive symptoms), the Confidence scale of the GTS-P at the beginning of therapy (sessions 1-4), controlled for the Involvement scale and pre-treatment symptoms, proved to be a valid predictor. Item 5 "suggestions" (Confidence scale) and item 2 "active participation" (Involvement scale) contribute most strongly to this effect, as this item combination can likewise validly predict the therapy outcome. The prediction is already possible from the ratings of the first group-therapy sessions in the remoralization phase (Howard et al., 1993) and does not improve when all 10 group sessions are taken into account. The therapist questionnaires show no predictive validity.
Meaningful correlations between patient and therapist ratings were found only for the GTS-P and GTS-TP. Further predictors, such as diabetes type, diabetes complications, and adherence, did not improve the prediction. Among the secondary criteria, prediction succeeded only for a further measure of depressive symptoms and for an overall patient rating of the group therapy at the end of treatment. Descriptive examination of the process quality of the DAD group therapies shows positive rating trajectories that increase over the course of the group and can be differentiated by therapy phase.
The results of the study support the relevance of nonspecific therapeutic factors for the outcome of cognitive-behavioral group therapies. The confidence and involvement perceived by group patients, as a sign of responsiveness to therapy, should be monitored by group therapists with session questionnaires such as the GTS instruments, in order to optimize the therapeutic process and prevent dropouts and treatment failures.
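The core prediction step above (early-session Confidence predicting post-treatment symptoms, controlling for Involvement and baseline) can be sketched as a plain linear regression; this is a simplified stand-in with synthetic data, omitting the multilevel (therapy-group) structure of the actual analysis:

```python
# Simplified sketch of the prediction analysis (synthetic data, not the
# DAD study data; the real analysis used multilevel models).
import numpy as np

rng = np.random.default_rng(1)
n = 120
baseline = rng.normal(20, 5, n)       # pre-treatment depression score
confidence = rng.normal(0, 1, n)      # mean Confidence rating, sessions 1-4
involvement = rng.normal(0, 1, n)     # mean Involvement rating, sessions 1-4
# Simulate outcome: higher early confidence -> lower post score
post = baseline - 3.0 * confidence + rng.normal(0, 2, n)

# OLS: post ~ intercept + baseline + confidence + involvement
X = np.column_stack([np.ones(n), baseline, confidence, involvement])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
print(coef[2] < 0)  # negative coefficient: confidence predicts improvement
```

The point of the sketch is only the structure of the model: outcome regressed on the early-process scale while holding the competing scale and pre-treatment severity constant.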
Abstract:
This work addresses the fluid-crystalline phase transition and the glass transition in colloidal hard-sphere (HS) model systems. The investigations are carried out mainly with different light-scattering methods and therefore in reciprocal space.
The analysis of the crystallization kinetics shows significant deviations from the picture of classical nucleation theory (CNT). CNT assumes a one-step nucleation process, whereas a multi-step process is observed in the experiments performed here. Before the actual crystallization, a metastable intermediate phase nucleates first, forming so-called precursors. In a second step, the actual nucleation of the crystallites takes place within the precursors.
Further analysis and a comparison of the crystallization and vitrification scenarios allowed the concept of precursor nucleation to be extended to vitrification. While crystal nucleation ceases above the glass-transition point, precursor nucleation persists even in vitrifying samples. A glass thus solidifies in an amorphous state with local precursor structures. The correlation of the measured temporal evolution of the structural and dynamic properties further shows that the hitherto poorly understood aging phenomenon of HS glasses is connected with the nucleation of precursors.
Such a multi-step scenario has already been observed in earlier publications. The measurements performed in this work enabled, for the first time, the determination of crystal nucleation rate densities (crystal NRDs) and of rate densities for precursor nucleation up to and beyond the glass-transition point. The crystal NRDs confirm the results of other experimental work.
Further analyses of the crystal NRDs show that, contrary to the assumptions of CNT, the fluid-crystalline interfacial tension during nucleation is not constant but increases linearly with increasing volume fraction. Extending CNT by a linearly increasing interfacial tension allowed a quantitative description of the measured crystal and precursor NRDs, which suggests that both are Boltzmann-activated processes.
To examine the observed deviations of the nucleation process from the CNT picture more closely, the collective particle dynamics in stable fluids and metastable melts were analyzed. In the classical picture, collective particle dynamics are assumed to play no role in nucleation. The results reveal differences between the dynamics of stable fluids and metastable melts. While the collective particle dynamics in the stable melt are decoupled from the structure, a coupling of structure and dynamics appears above the phase-transition point. The deviations appear first in the vicinity of the first maximum of the structure factor, and thus in the most strongly occupied modes. With increasing undercooling, both the number of deviating modes and the strength of the deviations increase. This phenomenon could have a significant influence on the nucleation process and thus on the crystallization kinetics. The analysis of the dynamics in the stable fluid additionally hints at a singularity on approaching the fluid-crystalline phase-transition point.
Furthermore, rate densities for the heterogeneous nucleation of an HS system at a flat wall were determined for the first time in this work by means of static light scattering (SLS).
The results of these measurements show that the nucleation barrier for heterogeneous nucleation is approximately zero, so that the wall is completely wetted by a crystalline monolayer. Extending the investigations to curved surfaces in the form of spherical particles (seeds) represents the first experimental work to examine the influence of an ensemble of seeds on the crystallization kinetics in HS systems. The crystallization kinetics and the microstructure are significantly affected, depending on the size and number density of the seed particles. In agreement with confocal-microscopy experiments and simulations, the radius ratio of the majority to the minority component plays a decisive role.
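The Boltzmann-activated picture invoked above is the standard CNT rate expression; the modification described in the work replaces the constant interfacial tension by one that grows linearly with volume fraction. As a schematic sketch (symbols and the linear form are generic, not the thesis's fitted parametrization):

```latex
J \;=\; J_0 \exp\!\left(-\frac{\Delta G^{*}}{k_B T}\right),
\qquad
\Delta G^{*} \;=\; \frac{16\pi\,\gamma^{3}}{3\,(\rho\,\Delta\mu)^{2}},
\qquad
\gamma(\phi) \;\approx\; \gamma_0 + \gamma_1\,\phi ,
```

where J is the nucleation rate density, γ the fluid-crystal interfacial tension, ρ the number density of the crystal phase, Δμ the chemical-potential difference driving crystallization, and φ the colloid volume fraction. A γ that increases with φ raises the barrier ΔG* at high supersaturation, which is how the extended model can describe the measured suppression of crystal nucleation near the glass transition while precursor nucleation persists.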
Abstract:
Statically balanced compliant mechanisms require no holding force throughout their range of motion while maintaining the advantages of compliant mechanisms. In this paper, a postbuckled fixed-guided beam is proposed to provide the negative stiffness to balance the positive stiffness of a compliant mechanism. To that end, a curve decomposition modeling method is presented to simplify the large deflection analysis. The modeling method facilitates parametric design insight and elucidates key points on the force-deflection curve. Experimental results validate the analysis. Furthermore, static balancing with fixed-guided beams is demonstrated for a rectilinear proof-of-concept prototype.
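The balancing idea can be illustrated numerically: over the balanced range, the negative-stiffness branch of the postbuckled beam cancels the positive stiffness of the compliant mechanism, so the net holding force is near zero. A minimal sketch with illustrative (not the paper's) stiffness values, linearized over the balanced range:

```python
# Static balancing sketch: positive-stiffness mechanism + negative-stiffness
# postbuckled-beam branch. Values are illustrative, not from the paper.
import numpy as np

k_pos = 120.0    # N/m, compliant-mechanism stiffness
k_neg = -120.0   # N/m, negative-stiffness branch of the postbuckled beam

x = np.linspace(-0.005, 0.005, 11)  # deflection (m)
f_mech = k_pos * x                  # restoring force of the mechanism
f_beam = k_neg * x                  # snap-through branch, linearized
f_total = f_mech + f_beam           # net force ~ 0 over the balanced range

print(np.allclose(f_total, 0.0))  # -> True
```

In practice the beam's force-deflection curve is nonlinear, which is why the paper's curve-decomposition model is needed to place the negative-stiffness region over the mechanism's operating range.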
Abstract:
There is a consensus in China that industrialization, urbanization, globalization and information technology will enhance China's urban competitiveness. We have developed a methodology for the analysis of urban competitiveness that we have applied to China's 25 principal cities during three periods from 1990 through 2009. Our model uses data for 12 variables, to which we apply appropriate statistical techniques. We are able to examine the competitiveness of inland cities and those on the coast, how this has changed during the two decades of the study, the competitiveness of Mega Cities and of administrative centres, and the importance of each variable in explaining urban competitiveness and its development over time. This analysis will be of benefit to Chinese planners as they seek to enhance the competitiveness of China and its major cities in the future.
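One common way to turn a cities-by-variables table like the one described (25 cities, 12 variables) into a competitiveness index is to standardize the variables and take the first principal component as a composite score. This is only an illustration with random stand-in data; the abstract does not specify which statistical techniques the study actually applied:

```python
# Hedged illustration of a composite competitiveness index via PCA.
# Random stand-in data; PCA is an assumption, not the study's stated method.
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(25, 12))    # 25 cities x 12 indicators

# Standardize each indicator (zero mean, unit variance)
z = (data - data.mean(axis=0)) / data.std(axis=0)

# First principal component via SVD of the standardized matrix
u, s, vt = np.linalg.svd(z, full_matrices=False)
index = z @ vt[0]                   # composite score per city

ranking = np.argsort(index)[::-1]   # cities ordered by score
print(index.shape)  # -> (25,)
```

Repeating such a computation for each of the three periods (1990-2009) would let one track how a city's rank and each variable's loading evolve over time, which is the kind of comparison the study reports.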