906 results for Returns to scale
Abstract:
The corporate world is increasingly competitive, and companies need to examine their routines in depth in order to understand them fully. The market demands more than simple improvements: changes that bring advances, whether small or large, will in the longer term no longer satisfy it. Companies that aim at world-class status must focus on projects that continually bring returns to the company. As previously mentioned, understanding processes in minute detail is of paramount importance, and this knowledge can be acquired by analyzing the decisions that must be made during the process. As complexity increases, the number and difficulty of the criteria that influence these decisions grow accordingly. At this point, methods and tools that assist decision-making can be employed: besides pointing to the best decision, MCDA (Multiple Criteria Decision Aid) methods provide a clear and assertive understanding of the whole decision process. In this study, we explore the AHP (Analytic Hierarchy Process) method, an MCDA method, in the choice of an access service, i.e., the support service used to reach, and serve as the base for, repairs in places of difficult access. This work proposes a study of the quantitative modeling approach in a real routine activity of a Brazilian petrochemical company. Decision-making processes are explored by analyzing not only the decision makers but also what directly influences them in the use of the AIJ method. Once this is achieved, the understanding of decision-making is substantiated.
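For readers unfamiliar with AHP, the standard priority computation can be sketched in a few lines. The sketch below is a minimal illustration of the classical eigenvector method, not the study's actual model: the criteria and matrix entries are hypothetical placeholders on Saaty's 1-9 comparison scale.

```python
import numpy as np

# A: reciprocal pairwise-comparison matrix for three hypothetical criteria
# (e.g., cost, safety, access time); entries are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP priority weights = normalized principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with Saaty's random index RI = 0.58 for n = 3; CR < 0.1 is acceptable.
n = A.shape[0]
lam = eigvals.real[k]
CR = ((lam - n) / (n - 1)) / 0.58
print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))
```

The same computation is applied at every level of the AHP hierarchy; alternative scores are then aggregated with the criteria weights.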
Abstract:
Over the past two years, Brazil has been facing one of the greatest water crises in its history, and the state of São Paulo is the one going through the worst difficulties. In this scenario, all water users should do everything possible to ensure that water resources are consumed in a sustainable manner. In this context, the companies responsible for the public water supply must increase the efficiency of water resource management; combating losses in the public supply system is indispensable. When there is a non-visible leak in a pipe, the volume of wasted water can be high, but in this case the water returns to nature and continues to participate in the hydrological cycle. The economic loss corresponds to the value added to water as a product, which includes the intrinsic costs of extraction, treatment and distribution. This damage reduces the financial resources available to sanitation companies to invest in environmentally friendly solutions. This study aimed to diagnose the water distribution system in the city of Guaratinguetá (SP), operated by the Companhia de Serviços de Água, Esgoto e Resíduos de Guaratinguetá (SAEG), and to propose measures to combat water loss. Among the proposed measures are the monitoring of losses, planning for the replacement of old pipes, and raising awareness throughout the company about combating water losses.
Abstract:
The viscosity of AOT/water/decane water-in-oil microemulsions exhibits a well-known maximum as a function of water/AOT molar ratio, which is usually attributed to increased attractions among nearly spherical droplets. The maximum can be removed by adding salt or by changing the oil to CCl4. Systematic small-angle X-ray scattering (SAXS) measurements have been used to monitor the structure of the microemulsion droplets in the composition regime where the maximum appears. On increasing the droplet concentration, the scattering intensity is found to scale with the inverse of the wavevector, a behavior which is consistent with cylindrical structures. The inverse wavevector scaling is not observed when the molar ratio is changed, moving the system away from the value corresponding to the viscosity maximum. It is also not present in the scattering from systems containing enough added salt to essentially eliminate the viscosity maximum. An asymptotic analysis of the SAXS data, complemented by some quantitative modeling, is consistent with cylindrical growth of droplets as their concentration is increased. Such elongated structures are familiar from related AOT systems in which the sodium counterion has been exchanged for a divalent one. However, the results of this study suggest that the formation of non-spherical aggregates at low molar ratios is an intrinsic property of AOT.
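For orientation, the inverse-wavevector signature invoked here follows from standard small-angle scattering asymptotics (a textbook relation, not an expression quoted from this paper): for a rigid rod of length L and cross-sectional radius R_cs, in the intermediate q regime the form factor decays as 1/q.

```latex
% Intermediate-q asymptote of the form factor of a thin rigid rod of
% length $L$ and cross-sectional radius $R_{\mathrm{cs}}$:
\[
  P(q) \simeq \frac{\pi}{qL}, \qquad \frac{1}{L} \ll q \ll \frac{1}{R_{\mathrm{cs}}}
  \qquad\Longrightarrow\qquad I(q) \propto q^{-1},
\]
% whereas dilute globular (spherical) droplets show a flat $I(q)$ plateau
% at low $q$. The $q^{-1}$ scaling is therefore the fingerprint of locally
% cylindrical aggregates.
```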
Abstract:
By means of the nuclear spin-lattice relaxation rate T1^-1, we follow the spin dynamics as a function of the applied magnetic field in two gapped quasi-one-dimensional quantum antiferromagnets: the anisotropic spin-chain system NiCl2·4SC(NH2)2 and the spin-ladder system (C5H12N)2CuBr4. In both systems, spin excitations are confirmed to evolve from magnons in the gapped state to spinons in the gapless Tomonaga-Luttinger-liquid state. In between, T1^-1 exhibits a pronounced, continuous variation, which is shown to scale in accordance with quantum criticality. We extract the critical exponent for T1^-1, compare it to the theory, and show that this behavior is identical in both studied systems, thus demonstrating the universality of quantum-critical behavior.
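As background for the magnon-to-spinon evolution, the standard limiting forms from the literature on gapped one-dimensional antiferromagnets (not expressions taken from this paper) bracket the quantum-critical regime:

```latex
% Activated relaxation in the gapped magnon phase below the critical field:
\[
  T_1^{-1} \propto e^{-\Delta(H)/k_B T} \qquad (H < H_c),
\]
% and a power law in the gapless Tomonaga-Luttinger-liquid phase:
\[
  T_1^{-1} \propto T^{\,1/(2K) - 1} \qquad (H_c < H < H_s),
\]
% where $\Delta(H)$ is the field-dependent gap and $K$ the Luttinger
% parameter. The quantum-critical scaling studied in the paper governs
% the crossover between these two regimes.
```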
Abstract:
The aim of this Account is to provide an overview of our current research activities on the design and modification of superparamagnetic nanomaterials for application in the field of magnetic separation and catalysis. First, an introduction to magnetism and magnetic separation is given. Then, the synthetic strategies that have been developed for generating superparamagnetic nanoparticles spherically coated with silica and other oxides are discussed, with a focus on well-characterized systems prepared by methods that generate samples of high quality and are easy to scale up. A set of magnetically recoverable catalysts prepared in our research group by the unique combination of superparamagnetic supports and metal nanoparticles is highlighted. This Account concludes with personal remarks and perspectives on this research field.
Abstract:
Purpose: To evaluate the effect of a single intravitreal bevacizumab injection on visual acuity, contrast sensitivity and optical coherence tomography-measured central macular thickness in eyes with macular edema from branch retinal vein occlusion. Methods: Seventeen eyes of 17 patients with macular edema from unilateral branch retinal vein occlusion were treated with a single bevacizumab injection. Patients underwent a complete evaluation including best-corrected visual acuity, contrast sensitivity and optical coherence tomography measurements before treatment and one and three months after injection. Visual acuity, contrast sensitivity and optical coherence tomography measurements were compared to baseline values. Results: Mean visual acuity improved from 0.77 logMAR at baseline to 0.613 logMAR one month after injection (P=0.0001) but worsened to 0.75 logMAR after three months. The contrast sensitivity test demonstrated significant improvement at spatial frequencies of 3, 6, 12 and 18 cycles/degree one month after injection and at the spatial frequency of 12 cycles/degree three months after treatment. Mean ± standard deviation baseline central macular thickness (552 ± 150 µm) was significantly reduced one month (322 ± 127 µm, P=0.0001) and three months (439 ± 179 µm, P=0.01) after treatment. Conclusions: Bevacizumab injection improves visual acuity and contrast sensitivity and reduces central macular thickness one month after treatment. Visual acuity returns to baseline levels at the 3-month follow-up, but some beneficial effect of the treatment is still present at that time, as evidenced by optical coherence tomography-measured central macular thickness and contrast sensitivity measurements.
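To make the reported logMAR means easier to interpret, the standard conversion decimal acuity = 10^(-logMAR) (and Snellen denominator = 20 × 10^logMAR) can be applied; this is a generic relation, not part of the study's methods.

```python
# Convert the reported mean logMAR values to decimal and approximate
# Snellen notation using the standard definitions.
for label, logmar in [("baseline", 0.77), ("1 month", 0.613), ("3 months", 0.75)]:
    decimal = 10 ** (-logmar)
    snellen_denominator = 20 * 10 ** logmar
    print(f"{label}: logMAR {logmar:.3f} -> decimal {decimal:.2f} "
          f"(~20/{snellen_denominator:.0f} Snellen)")
```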
Abstract:
Bromelain is an aqueous extract of pineapple that contains a complex mixture of proteases and non-protease components. These enzymes play an important role in the proteolytic modulation of the cellular matrix in numerous physiological processes, including anti-inflammatory, anti-thrombotic and fibrinolytic functions. Given the scale of global pineapple (Ananas comosus L.) production, and the high percentage of waste generated in its cultivation and processing, several studies have been conducted on the recovery of bromelain. The aim of this study was to purify bromelain from pineapple wastes using an easy-to-scale-up process of precipitation by ethanol. The results showed that bromelain was recovered using ethanol at concentrations of 30% and 70%, achieving a purification factor of 2.28-fold and a yield of more than 98% of the total enzymatic activity. The enzyme proved susceptible to denaturation during lyophilization; however, by using 10% (w/v) glucose as a cryoprotectant, it was possible to preserve 90% of the original enzymatic activity. The efficiency of the purification process was confirmed by SDS-PAGE and native-PAGE electrophoresis, fluorimetry, circular dichroism and FTIR analyses, showing that this method can be used to obtain highly purified and structurally stable bromelain.
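The two figures of merit quoted here are defined in the standard way for enzyme purification: the purification factor is the ratio of specific activities (activity per unit protein) after and before the step, and the yield is the fraction of total activity recovered. The sketch below uses placeholder numbers chosen only to reproduce values of the same magnitude; they are not data from the study.

```python
# Purification factor = specific activity (purified) / specific activity (crude);
# yield (%) = 100 * total activity (purified) / total activity (crude).
def purification_summary(act_crude, prot_crude, act_pure, prot_pure):
    sa_crude = act_crude / prot_crude   # specific activity of crude extract
    sa_pure = act_pure / prot_pure      # specific activity of purified fraction
    return sa_pure / sa_crude, 100.0 * act_pure / act_crude

# Placeholder values (units: activity in U, protein in mg).
pf, recovery = purification_summary(1000.0, 50.0, 980.0, 21.5)
print(f"purification factor: {pf:.2f}-fold, yield: {recovery:.1f}%")
```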
Abstract:
OBJECTIVE: The participation of humans in clinical cardiology trials remains essential, but little is known regarding participant perceptions of such studies. We examined the factors that motivated participation in such studies, as well as those that led to participant frustration. METHODS: Patients who had participated in hypertension and coronary arterial disease (phases II, III, and IV) clinical trials were invited to answer a questionnaire. They were divided into two groups: Group I, which included participants in placebo-controlled clinical trials after randomization, and Group II, which included participants in clinical trials in which the tested treatment was compared to another drug after randomization and in which a placebo was used in the washout period. RESULTS: Eighty patients (47 patients in Group I and 33 patients in Group II) with different socio-demographic characteristics were interviewed. Approximately 60% of the patients were motivated to participate in the trial with the expectation of personal benefit. Nine participants (11.2%) expressed the desire to withdraw, which was due to their perception of risk during the testing in the clinical trial (Group I) and to the necessity of repeated returns to the institution (Group II). However, the patients did not withdraw due to fear of termination of hospital treatment. CONCLUSIONS: Although this study had a small patient sample, the possibility of receiving a benefit from the new tested treatment was consistently reported as a motivation to participate in the trials.
Abstract:
This thesis is a collection of five independent but closely related studies. The overall purpose is to approach the analysis of learning outcomes from a perspective that combines three major elements, namely lifelong-lifewide learning, human capital, and the benefits of learning. The approach is based on an interdisciplinary perspective of the human capital paradigm. It considers the multiple learning contexts that are responsible for the development of embodied potential (including formal, non-formal and informal learning) and the multiple outcomes (including knowledge, skills, and economic, social and other benefits) that result from learning. The studies also seek to examine the extent and relative influence of learning in different contexts on the formation of embodied potential, and how that in turn affects economic and social well-being. The first study combines the three major elements, lifelong-lifewide learning, human capital, and the benefits of learning, into one common conceptual framework; it forms a common basis for the four empirical studies that follow. All four empirical studies use data from the International Adult Literacy Survey (IALS) to investigate the relationships among the major elements of the conceptual framework presented in the first study.

Study I. A conceptual framework for the analysis of learning outcomes. This study brings together some key concepts and theories that are relevant for the analysis of learning outcomes. Many of the concepts and theories have emerged from varied disciplines including economics, educational psychology, cognitive science and sociology, to name only a few. Accordingly, some of the research questions inherent in the framework relate to different disciplinary perspectives. The primary purpose is to create a common basis for formulating and testing hypotheses, as well as to interpret the findings of the empirical studies that follow. In particular, the framework facilitates the process of theorizing and hypothesizing on the relationships and processes concerning lifelong learning, as well as their antecedents and consequences.

Study II. Determinants of literacy proficiency: a lifelong-lifewide learning perspective. This study investigates lifelong and lifewide processes of skill formation. In particular, it seeks to estimate the substitutability and complementarity effects of learning in multiple settings over the lifespan on literacy skill formation. This is done by investigating the predictive capacity of major determinants of literacy proficiency that are associated with a variety of learning contexts including school, home, work, community and leisure. An identical structural model based on previous research is fitted to the IALS data for 18 countries. The results show that even after accounting for all factors, education remains the most important predictor of literacy proficiency. In all countries, however, the total effect of education is significantly mediated through further learning occurring at work, at home and in the community. Therefore, the job and other literacy-related factors complement education in predicting literacy proficiency. This result points to a virtuous cycle of lifelong learning, particularly to how educational attainment influences other learning behaviours throughout life. In addition, the results show that home background, as measured by parents' education, is also a strong predictor of literacy proficiency, but in many countries this occurs only if a favourable home background is complemented with some post-secondary education.

Study III. The effect of literacy proficiency on earnings: an aggregated occupational approach using the Canadian IALS data. This study uses data from the Canadian Adult Literacy Survey to estimate the earnings returns to literacy skills. The approach adopts a segmented view of the labour market by aggregating occupations into seven types, enabling the estimation of the variable impact of literacy proficiency on earnings, both within and between different types of occupations. This is done using Hierarchical Linear Modeling (HLM). The method used to construct the aggregated occupational classification is based on an analysis that considers the role of cognitive and other skills in relation to the nature of occupational tasks. Substantial premiums are found to be associated with some occupational types, even after adjusting for within-occupation differences in individual characteristics such as schooling, literacy proficiency, labour force experience and gender. Average years of schooling and average levels of literacy proficiency at the between level account for over two-thirds of the premiums. Within occupations there are significant returns to schooling, but they vary depending on the type of occupation. In contrast, the within-occupation return to literacy proficiency is not necessarily significant; it depends on the type of occupation.

Study IV. Determinants of economic and social outcomes from a lifewide learning perspective in Canada. In this study, the relationships between learning in different contexts, which span the lifewide learning dimension, and individual earnings on the one hand and community participation on the other, are examined in separate but comparable models. Data from the Canadian Adult Literacy Survey are used to estimate structural models which correspond closely to the common conceptual framework outlined in Study I. The findings suggest that the relationship between formal education and economic and social outcomes is complex, with confounding effects. The results indicate that learning occurring in different contexts and for different reasons leads to different kinds of benefits. The latter finding suggests a potential trade-off between realizing economic and social benefits through learning undertaken for job-related or personal-interest-related reasons.

Study V. The effects of learning on economic and social well-being: a comparative analysis. Using the same structural model as in Study IV, hypotheses are comparatively examined using the International Adult Literacy Survey data for Canada, Denmark, the Netherlands, Norway, the United Kingdom, and the United States. The main finding from Study IV is confirmed for an additional five countries, namely that the effect of initial schooling on well-being is more complex than a direct one, being significantly mediated by subsequent learning. Additionally, the findings suggest that people who devote more time to learning for job-related reasons than for personal-interest-related reasons experience higher levels of economic well-being. Moreover, devoting too much time to learning for personal-interest-related reasons has a negative effect on earnings, except in Denmark. But more time devoted to learning for personal-interest-related reasons tends to contribute to higher levels of social well-being. These results again suggest a trade-off in learning for different reasons and in different contexts.
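The two-level setup of Study III can be illustrated with a small sketch: a random intercept over the seven aggregated occupation types captures between-occupation premiums, while the fixed effects give within-occupation returns. The sketch below uses Python's statsmodels mixed-effects estimator as a stand-in for dedicated HLM software; the file name and column names are hypothetical placeholders, not the IALS variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per worker; columns are hypothetical stand-ins for the survey
# variables: log_earnings, years_schooling, literacy_score, experience,
# female, and occ_type (one of seven aggregated occupation types).
df = pd.read_csv("ials_canada.csv")  # hypothetical file name

# Random intercept by occupation type (between-occupation premiums);
# fixed effects estimate the within-occupation returns.
model = smf.mixedlm(
    "log_earnings ~ years_schooling + literacy_score + experience + female",
    data=df,
    groups=df["occ_type"],
)
result = model.fit()
print(result.summary())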
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood-preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for final disposal. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination, UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years. In situ bioremediation is applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, and is based on providing optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that has been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAHs in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing the mass transfer of PAHs from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
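The desorption-limited picture of Section 1.4 can be made concrete with a toy two-compartment model (an illustration, not a model from the text): sorbed PAH is released to pore water at a first-order desorption rate, and only the dissolved fraction is biodegraded. When desorption is much slower than biodegradation, overall removal is set by the desorption rate, which is exactly the bioavailability limitation described above. All rate constants below are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# S: PAH mass sorbed to soil; C: PAH mass dissolved in pore water.
# k_des: first-order desorption rate; k_bio: first-order biodegradation rate
# acting on the dissolved fraction only. Values are illustrative.
k_des, k_bio = 0.05, 1.0  # 1/day

def rhs(t, y):
    S, C = y
    return [-k_des * S, k_des * S - k_bio * C]

sol = solve_ivp(rhs, (0.0, 120.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 120.0, 7)
S, C = sol.sol(t)
for ti, Si, Ci in zip(t, S, C):
    print(f"day {ti:5.1f}: sorbed {Si:6.2f}, dissolved {Ci:5.3f}")
```

With k_des << k_bio, the dissolved pool stays small and removal proceeds at roughly the desorption rate, so raising k_bio further (e.g., by adding degraders) barely helps, while enhancing mass transfer does.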
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for the determination of the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization performed by means of wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of study of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth, or between different profiles of the same model.
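The time-frequency localization that distinguishes wavelets from a Fourier expansion is easy to demonstrate. The sketch below (a generic illustration using the PyWavelets library, not the thesis's code) builds a toy non-stationary signal with a short high-frequency burst and computes its continuous wavelet transform; each row of the coefficient array describes the signal at one scale, so the burst is localized in both time and frequency, whereas a Fourier spectrum would smear it over all time.

```python
import numpy as np
import pywt

# Toy non-stationary signal: a slow oscillation with a brief
# high-frequency burst inserted in the middle.
t = np.linspace(0.0, 10.0, 2000)
signal = np.sin(2 * np.pi * 1.0 * t)
signal[1000:1100] += np.sin(2 * np.pi * 25.0 * t[1000:1100])

# Continuous wavelet transform with a Morlet wavelet.
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=t[1] - t[0])
print(coefs.shape)  # (n_scales, n_samples): one time series per scale
```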
Abstract:
The rational construction of the house: the writings and projects of Giuseppe Pagano

Description, themes and research objectives

The research aims at analysing the architecture of Giuseppe Pagano, which focuses on the theme of dwelling, through the reading of three of his house projects. On the one hand, these projects represent "minor" works not thoroughly known by Pagano's contemporary critics; on the other, they emphasise a particular methodological approach, which serves the author to explore a theme closely linked to his theoretical thought. The house project is a key to Pagano's research, given its ties to the socio-cultural and political conditions in which the architect was working, so that it becomes a mirror of his specific theoretical path, always in a state of becoming. Pagano understands architecture as a "servant of the human being", subject to a "utilitarian slavery", since it is a clear, essential and "modest" answer to specific human needs, free from aprioristic aesthetic and formal choices. It is a rational architecture in the strict sense; it constitutes a perfect synthesis between cause and effect and between function and form. The house needs to accommodate these principles because it is closely intertwined with human needs and intimately linked to a specific place, climatic conditions and technical and economic possibilities. Besides, differently from his public masterpieces such as the Palazzo Gualino, the Istituto di Fisica and the Università Commerciale Bocconi, the house projects are representative of a precise design will, expressed in a more authentic way, partially freed from political influences and dogmatic preoccupations and, therefore, far from the attempt to pursue a specific expressive language. I believe that the house project better represents that "ingenuity", freshness and "sincerity" that Pagano identifies with minor architecture, thereby revealing a more authentic expression of his understanding of a project. Therefore the thesis, by tracing the theoretical research of Pagano through the analysis of some of his designed and built works, attempts to identify a specific methodological approach to Pagano's projects which, developed through time, achieves a certain clarity in the 1930s. In fact, this methodological approach becomes more evident in his last projects, mainly regarding the house and the urban space. These reflect the attempt to respond to new social needs and, at the same time, express a freer idea of built architecture, closely linked with the place and with the human being who dwells in it. The three chosen projects (Villa Colli, the Casa a struttura d'acciaio and Villa Caraccio) confront Pagano with different places, different clients and different economic and technical conditions, which, given the author's biography, correspond to important historical and political circumstances. This is the reason why the projects become apparently distant works, both linguistically and conceptually, to the point that one can define them as "eclectic". However, I argue that this eclecticism is actually an added value in the architectural work of Pagano, stemming from the use of a method which, having as its basis the postulate of a rational architecture as the essence and logic of building, finds specific variations depending on the multiple variables to be addressed by the project. This is the methodological heritage that Pagano learns from tradition, especially that of rural residential architecture, defined by Pagano as a "dictionary of the building logic of man", an "a-stylistic background". For Pagano this traditional architecture is a clear expression of the relationship between a theme and its development, an architectural "fact" that is resolved with purely technical and utilitarian aims and with a spontaneous development far from any aprioristic theoretical principle. Architecture, therefore, cannot be an invention for Pagano, and the personal contribution of each architect has to consider his or her close relationship with the specific historical context, the place and the new building methods. These are basic principles in the methodological approach that drives a great deal of his research and that also allows his thought to be modern. I argue that both ongoing and new collaborations with younger protagonists of the culture and architecture of the period were significant for the development of his methodology. These encounters represent the will to spread his own understanding of the "new architecture", as well as a way of self-renewal by confronting himself with new themes and realities and by learning from his collaborators.

Thesis outline

The thesis is divided into two principal parts, each articulated in four chapters, attempting to offer a new reading of the theory and work of Pagano by emphasising the central themes of the research. The first chapter is an introduction to the thesis and to the theme of the rational house, as understood and developed in its typological and technical aspects by Pagano and by other protagonists of Italian rationalism in the 1930s. Here the attention is on two different aspects defining, according to Pagano, the house project: on the one hand, the typological renewal, aimed at defining a "standard form" as a clear and essential answer to certain needs and variables of the project, leading to different formal expressions; on the other, the building, understood as a technique to "produce" architecture, where new technologies and new materials are not merely tools but also essential elements of the architectural work. In this way the villa becomes distinct from the theme of the common house or from that of the minimalist house, through rules in the choice of materials and techniques that differ each time depending on the theme under exploration and on the contingencies of the place. The rigorous rationalism that distinguishes the author's appropriation of certain themes of rural architecture is also visible here. The pages of «Casabella» and the events of the contemporary Triennali form the preliminary material for this chapter, given that they are primary sources for identifying the projects and writings produced by Pagano and contemporary architects on this theme. These writings and projects, when compared, reconstruct the evolution of the idea of the rational house and, specifically, of Pagano's personal research. The second part regards the reading of three of Pagano's house projects as a built verification of his theories. This section constitutes the central part of the thesis, since it is aimed at detecting a specific methodological approach showing the theoretical and ideological evolution expressed in the vast edited literature. The three chosen projects explore the theme of the house, looking at the various research themes that the author proposes and that find continuity in the affirmation of a specific rationalism, focused on concepts such as essentiality, utility, functionality and building honesty. These concepts guide the thought and the activities of Pagano, also reflecting a social and cultural period. The projects span from the theme of the modern villa, Villa Colli, which, inspired by the architecture of Northern Europe, anticipates Pagano's specific rationalism based on rigour, simplicity and essentiality, to the theme of the common house, the Casa a struttura d'acciaio, la casa del domani, which ponders the definition of new living spaces and, moreover, new concepts of standardisation, economic efficiency and new materials responding to the changing needs of modern society. Finally, the third project, Villa Caraccio, returns to the theme of the villa, revisiting it with new perspectives. These perspectives find, in the open-plan solution, in the openness to nature and landscape, and in the revisiting of local materials and building systems, that idea of the freed house which clearly expresses a new theoretical thought.

Methodology

It needs to be noted that, due to the lack of an official archive of Pagano's work, the analysis of his work has been difficult, and this explains the necessity of reading the articles and drawings published in the pages of «Casabella» and «Domus». As for the projects of Villa Colli and the Casa a struttura d'acciaio, parts of the original drawings have been consulted. These drawings are not published and are kept in the private archives of Pagano's collaborators. The consultation of these documents has permitted the analysis of the cited works, which have been subject to a more complete reading following the different proposed solutions, allowing the design path to be understood. The projects are analysed through the method of comparison and critical reading, which, specifically, means graphical elaborations and analytical schemes, mostly reconstructed on the basis of the original projects but, where possible, also on a photographic investigation. The focus is on the project theme which, beginning with a specific dwelling typology, finds variations because of the historico-political context in which Pagano is embedded and which partially shapes his research and theoretical thought, then translated into the built work. The analysis of each work follows, beginning, where possible, with a reconstruction of the evolution of the project as elaborated on the basis of the original documents, and ending with an analysis of the constructive and compositional principles. This second phase employs a methodology proposed by Pagano in his article Piante di ville, which focuses on the plan as an essential tool to identify the "true practical and poetic qualities of the construction" (Pagano, «Costruzioni-Casabella», 1940, p. 2). The reading of the projects is integrated with constructive analyses related to the technical aspects of the house which, in the case of the Casa a struttura d'acciaio, play an important role in the project, while in Villa Colli and Villa Caraccio they are principally linked to the choice of materials for the construction of the different architectural elements. These are nonetheless key factors in the composition of the work. Future work could extend this reading to other house projects, deepening a research that could be completed with the consultation of archival materials which are missing at present. Finally, in the appendix I present a critical selection of Pagano's writings, which recall the themes discussed and embodied by the three projects. The texts have been selected among the articles published in Casabella and other journals, completing the reading of the project work, which cannot be detached from his theoretical thought. Moving from theory to project, we follow a path that leads us to define and deepen the central theme of the thesis: rational building as the principal feature of Pagano's architectural research, expressed in multiple ways in his designed and built works.
Abstract:
The AMANDA-II detector is primarily designed for the directionally resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae and experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were redetermined. For this purpose, the energy losses of charged particles in South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. Within the scope of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules with unsuitable behavior happens in real time; however, for this purpose the behavior of the modules must be assessed on the basis of buffered data, so the analysis of the data from the qualified modules cannot take place without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals over several minutes for later evaluation. Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis is freely selectable in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the 500 ms time base of the data acquisition, one with a 4 s time base, and one with a 10 s time base. This maximizes the sensitivity for signals with a characteristic exponential decay time of 3 s while preserving good sensitivity over a wide range of exponential decay times. These analyses were investigated in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. On the basis of the measured data, the expected signals from supernovae were simulated. From a comparison between this simulation, the measured data from 2000 to 2003, and the simulation of the expected statistical background, it can be concluded with a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of events expected from the statistical background at this level is less than one in a million; nevertheless, one such event was measured.

With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the last result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is obtained. Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate increase before a message about the detection of a supernova candidate is sent. With this threshold, the monitored fraction of stars in the Galaxy is 81%, but the rate of false alarms also rises to about two events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
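The core of the real-time search described above is a test for a collective rate increase across all optical modules. The sketch below is a toy illustration of that idea, not the AMANDA code: all numbers (module count, noise rate, injected signal) are invented. It sums per-module noise counts in 500 ms intervals, estimates the background from the series, and flags intervals exceeding the 5.5 standard deviation alarm threshold quoted in the abstract.

```python
import numpy as np

# Simulated per-module noise counts per 500 ms interval (invented values).
rng = np.random.default_rng(0)
n_intervals, n_modules = 20000, 500
counts = rng.poisson(lam=150.0, size=(n_intervals, n_modules))
counts[12000:12006] += 4  # injected collective rate increase over ~3 s

# Collective test statistic: significance of the summed rate per interval.
summed = counts.sum(axis=1).astype(float)
mu, sigma = summed.mean(), summed.std()
z = (summed - mu) / sigma

threshold = 5.5  # alarm threshold from the abstract
alarms = np.flatnonzero(z > threshold)
print("alarm intervals:", alarms)
```

Running the same test on rebinned counts (4 s or 10 s windows) reproduces the multiple-time-base strategy: longer windows gain sensitivity to slowly decaying signals at the cost of time resolution.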
Abstract:
Topological constraints influence the properties of polymers. In this work, computer simulations are used to investigate in detail the extent to which the static properties of collapsed polymer rings, polymer rings in concentrated solutions, and brushes built from polymer rings differ between systems with and without topological constraints. Furthermore, the influence of geometric confinement on the topological properties of single polymer chains is analyzed. The first part of the work concerns the influence of topology on the properties of single polymer chains in various situations. Since the efficient execution of Monte Carlo simulations of collapsed polymer chains poses a major challenge, three bridging Monte Carlo moves are first transferred from lattice models to continuum models. A measurement of the efficiency of these moves yields a speedup factor of up to 100 compared with the conventional slithering-snake algorithm. This is followed by the analysis of a single coarse-grained polystyrene chain in spherical confinement with respect to entanglements and knots. It is shown that significant knotting of the polystyrene chain occurs only when the radius of the surrounding capsid is smaller than the gyration radius of the chain. Furthermore, both Monte Carlo and molecular dynamics simulations of very large rings with up to one million monomers in the collapsed state are carried out. While the configurations from the Monte Carlo simulations are very strongly knotted owing to the use of the bridging moves, the configurations from the molecular dynamics simulations remain unknotted. Significant differences appear both in the local and in the global structure of the ring polymers. In the second part of the work, the scaling behavior of the gyration radius of individual polymer rings in a concentrated solution of fully flexible polymer rings in the continuum is investigated. The onset of the asymptotic scaling behavior, which is consistent with the fractal globule model, is reached. In the concluding third part of this work, the behavior of brushes made of linear polymers is compared with that of ring-polymer brushes. It turns out that the structure and the scaling behavior of the two systems, at identical density profiles parallel to the substrate, deviate clearly from each other, although the properties of the two systems agree in the direction perpendicular to the substrate. A comparison of the relaxation behavior of individual chains in conventional polymer brushes and in ring brushes reveals no major differences. However, it also turns out that the explanations of the relaxation behavior of conventional brushes used so far are insufficient, since they only take into account the initial decay of the correlation function. The investigation of the dynamics of individual monomers in a conventional brush of open chains, from the substrate to the open end, shows that the monomers in the middle of the chain exhibit the slowest relaxation, although their mean displacement is clearly smaller than that of the free end monomers.
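The central observable of the second part, the scaling of the gyration radius with ring length, can be sketched generically (this is an illustration of the analysis, not the thesis's code; the fitted values are placeholders). For the fractal (crumpled) globule the expected exponent in R_g ~ N^nu is 1/3, versus 1/2 for an ideal chain.

```python
import numpy as np

def radius_of_gyration(coords):
    """Radius of gyration of one chain conformation; coords has shape (N, 3)."""
    centered = coords - coords.mean(axis=0)
    return np.sqrt((centered ** 2).sum(axis=1).mean())

# Sanity check on an ideal random-walk chain (expected R_g ~ N**0.5).
rng = np.random.default_rng(1)
walk = rng.standard_normal((10000, 3)).cumsum(axis=0)
print(f"R_g of an ideal chain, N = 10000: {radius_of_gyration(walk):.1f}")

# Scaling exponent nu from a log-log fit of R_g ~ N**nu, here on
# placeholder ensemble averages that follow the fractal-globule law.
N = np.array([1000, 4000, 16000, 64000, 256000])
Rg = 1.1 * N ** (1.0 / 3.0)  # illustrative values, not simulation data
nu = np.polyfit(np.log(N), np.log(Rg), 1)[0]
print(f"fitted scaling exponent nu = {nu:.3f}")
```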