867 results for: history topology intuitionism constructivism philosophy of geometry physical continuum topological space descriptive definitions axiomatic


Relevance:

100.00%

Publisher:

Abstract:

One influential image that is popular among scientists is the view that mathematics is the language of nature. The present article discusses another possible way to approach the relation between mathematics and nature, which is by using the idea of information and the conceptual vocabulary of cryptography. This approach allows us to understand the possibility that secrets of nature need not be written in mathematics and yet mathematics is necessary as a cryptographic key to unlock these secrets. Various advantages of such a view are described in this article.


With the objective of understanding the nature of the forces that contribute to the disjoining pressure of a thin water film on a steel substrate being pressed by an oil droplet, two independent sets of experiments were performed. (i) A spherical silica probe approaches three substrates: mica, PTFE and steel, in a 10 mM electrolyte solution at two different pHs (3 and 10). (ii) The silica probe, with and without a smeared oil film, approaches the same three substrates in water (pH = 6). The surface potential of the oil film/water interface was measured using a dynamic light scattering experiment. Assuming a given capacity of each substrate for ion exchange, the total interaction force for each experiment was estimated to include the Derjaguin-Landau-Verwey-Overbeek (DLVO) force, hydration repulsion, hydrophobic attraction and oil-capillary attraction. The best fit of these estimates to the force-displacement characteristics obtained from the two sets of experiments gives the appropriate surface potentials of the substrates. The procedure allows an assessment of the relevance of a specific physical interaction to an experimental configuration. Two of the principal observations of this work are: (i) The presence of a surface at constant charge, as in the presence of an oil film on the probe, significantly enhances the counterion density over what is achieved when both surfaces allow ion exchange, which greatly raises the corresponding repulsion barrier. (ii) When the substrate surface is wettable by oil, oil-capillary attraction contributes substantially to the total interaction; if it is not wettable, the oil film is deformed and squeezed out. (C) 2010 Elsevier Inc. All rights reserved.
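As a point of orientation only, the force decomposition described in this abstract can be written schematically using generic functional forms from colloid science; the coefficients, decay lengths, Hamaker constant A and probe radius R below are placeholders, not the fitted expressions used in the paper:

```latex
F_{\mathrm{tot}}(D) \;\approx\;
\underbrace{-\frac{A R}{6 D^{2}}}_{\text{van der Waals}}
\;+\;
\underbrace{C_{\mathrm{edl}}\, e^{-\kappa D}}_{\text{double layer (DLVO)}}
\;+\;
\underbrace{C_{\mathrm{hyd}}\, e^{-D/\lambda_{\mathrm{hyd}}}}_{\text{hydration repulsion}}
\;-\;
\underbrace{C_{\mathrm{hph}}\, e^{-D/\lambda_{\mathrm{hph}}}}_{\text{hydrophobic attraction}}
\;+\; F_{\mathrm{cap}}(D)
```

where D is the film thickness, kappa is the inverse Debye length, and F_cap is the oil-capillary term whose sign and magnitude depend on wettability.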


An experimental setup using radiative heating has been used to understand the thermo-physical phenomena and chemical transformations inside acoustically levitated cerium nitrate precursor droplets. Through infrared thermography and high-speed imaging, events such as vaporization, precipitation and chemical reaction were recorded at high temporal resolution, leading to nanoceria formation with a porous morphology. The cerium nitrate droplet undergoes phase and shape changes throughout the vaporization process. Four distinct stages were delineated during the entire vaporization process: pure evaporation, evaporation with precipitate formation, chemical reaction with phase change, and formation of the final porous precipitate. The composition was examined using scanning and transmission electron microscopy, which revealed nanostructures and confirmed a highly porous morphology with trapped gas pockets. Transmission electron microscopy (TEM) and high-speed imaging of the final precipitate revealed the presence of trapped gases in the form of bubbles. TEM also showed the presence of crystalline nanoceria structures at 70 degrees C. The current study also examined the effect of different heating powers on the process. At higher power, each stage is sustained for a shorter duration and reaches a higher maximum temperature. In addition, the porosity of the final precipitate increased with power. A non-dimensional time scale is proposed to correlate the effect of laser intensity and the vaporization rate of the solvent (water). The effect of acoustic levitation was also studied. Due to acoustic streaming, strong circulation selectively transports the solute to the bottom portion of the droplet, providing rigidity and allowing the droplet to become bowl-shaped. (C) 2010 Elsevier Ltd. All rights reserved.


The study analyzes the effort to build political legitimacy in the Republic of Turkey by exploring a group of influential texts produced by Kemalist writers. The study explores how the Kemalist regime reproduced a long-lasting enlightenment meta-narrative in its effort to build political legitimacy. Central to this process was a hegemonic representation of history, namely the interpretation of the Anatolian Resistance Struggle of 1919-1922 as a Turkish Revolution executing the enlightenment in the Turkish nation-state. The method employed in the study is contextualizing narratological analysis. The Kemalist texts are analyzed with a repertoire of concepts originally developed in the theory of narrative. By bringing these concepts together with the epistemological foundations of the historical sciences, the study creates a theoretical frame within which it is possible to highlight how initially very controversial historical representations in the end managed to construct long-lasting, emotionally and intellectually convincing bases of national identity for the secular middle classes in Turkey. The two most important explanatory concepts in this sense are diegesis and implied reader. Diegesis refers to the ability of narrative representation to create an inherently credible story-world that works as the basis of national community. The implied reader refers to the process whereby a certain hegemonic narrative creates a formula of identification and a position through which any individual real-world reader of a story can step inside the narrative story-world and identify as "one of us" of the national narrative. The study demonstrates that the Kemalist enlightenment meta-narrative created a group of narrative accruals which enabled generations of secular middle classes to internalize Kemalist ideology.
In this sense, the narrative in question has not only worked as a tool utilized by the so-called Kemalist state-elite to justify its leadership, but has been internalized by various groups in Turkey, working as their genuine world-view. It is shown in the study that secularism must be seen as the core ingredient of these groups' national identity. The study proposes that the enlightenment narrative reproduced in Kemalist ideology had its origin in a similar totalizing cultural narrative created in and for Europe. Currently this enlightenment project is challenged in Turkey by those who seek to give religion a greater role in Turkish society. The study argues that the enduring practice of legitimizing political power through the enlightenment meta-narrative has not only become a major factor contributing to social polarization in Turkey, but has also, in contradiction to the very real potential for critical approaches inherent in the Enlightenment tradition, crucially restricted the development of critical and rational modes of thinking in the Republic of Turkey.


A performance analysis of the adaptive physical-layer network-coded two-way relaying scenario is presented, in which transmission takes place in two phases: the multiple access (MA) phase and the broadcast (BC) phase. The deep channel-fade conditions that occur at the relay, referred to as singular fade states, fall into two classes: (i) removable and (ii) non-removable singular fade states. With every singular fade state, we associate an error probability that the relay transmits a wrong network-coded symbol during the BC phase. It is shown that adaptive network coding provides a coding gain over fixed network coding by making the error probabilities associated with the removable singular fade states that contribute to the average Symbol Error Rate (SER) fall as SNR^-2 instead of SNR^-1. A high-SNR upper bound on the average end-to-end SER for the adaptive network coding scheme is derived for a Rician fading scenario and is found through simulations to be tight. Specifically, it is shown that for the adaptive network coding scheme, the probability that the relay node transmits a wrong network-coded symbol is upper-bounded, at high SNR, by twice the average SER of a point-to-point fading channel. Also, it is shown that in a Rician fading scenario it suffices to remove the effect of only those singular fade states which contribute dominantly to the average SER.
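The SNR^-1 versus SNR^-2 behaviour described in this abstract concerns the slope of the SER curve at high SNR. As a minimal, self-contained sketch (a point-to-point BPSK link, not the paper's two-way relaying protocol), the following Monte Carlo estimate over a Rician fading channel illustrates how such average error rates are obtained by simulation; the Rician factor K, the trial count and the function names are illustrative assumptions:

```python
import math
import random

def rician_sample(K):
    """One Rician fading coefficient with E[|h|^2] = 1:
    a fixed line-of-sight part plus a Rayleigh scattered part."""
    los = math.sqrt(K / (K + 1.0))
    sigma = math.sqrt(1.0 / (2.0 * (K + 1.0)))
    return complex(los + sigma * random.gauss(0, 1),
                   sigma * random.gauss(0, 1))

def bpsk_ser(snr_db, K=5.0, trials=100_000):
    """Monte Carlo SER of coherently detected BPSK over Rician fading."""
    snr = 10.0 ** (snr_db / 10.0)
    noise_std = math.sqrt(1.0 / (2.0 * snr))  # per real dimension
    errors = 0
    for _ in range(trials):
        h = rician_sample(K)
        x = random.choice((-1.0, 1.0))
        n = complex(random.gauss(0, noise_std), random.gauss(0, noise_std))
        y = h * x + n
        # Matched-filter (coherent) detection: project onto the channel.
        if (y * h.conjugate()).real * x < 0:
            errors += 1
    return errors / trials

random.seed(0)
for snr_db in (5, 10, 15):
    print(f"SNR = {snr_db:2d} dB, estimated SER = {bpsk_ser(snr_db):.4f}")
```

On such a single-antenna link the estimated SER falls roughly as SNR^-1 at high SNR; schemes with diversity order two, such as the adaptive coding above for the removable singular fade states, steepen that slope to SNR^-2.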


In this paper, we present a methodology for designing a compliant aircraft wing, which can morph from a given airfoil shape to another given shape under the actuation of internal forces and can offer sufficient stiffness in both configurations under the respective aerodynamic loads. The least square error in displacements, Fourier descriptors, geometric moments, and moment invariants are studied to compare candidate shapes and to pose the optimization problem. Their relative merits and demerits are discussed in this paper. The `frame finite element ground structure' approach is used for topology optimization and the resulting solutions are converted to continuum solutions. The introduction of a notch-like feature is the key to the success of the design. It not only gives a good match for the target morphed shape for the leading and trailing edges but also minimizes the extension of the flexible skin that is to be put on the airfoil frame. Even though linear small-displacement elastic analysis is used in optimization, the obtained designs are analysed for large displacement behavior. The methodology developed here is not restricted to aircraft wings; it can be used to solve any shape-morphing requirement in flexible structures and compliant mechanisms.
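Among the shape-comparison metrics mentioned above, Fourier descriptors are the easiest to illustrate compactly. The sketch below is a generic construction rather than the paper's specific formulation: it samples a closed boundary as complex numbers, computes a small set of Fourier coefficients, and normalizes their magnitudes so that translation, rotation, starting point and scale drop out; the harmonic count and helper names are illustrative assumptions:

```python
import cmath
import math

def fourier_descriptors(points, n_harm=8):
    """Magnitude-normalized Fourier descriptors of a closed boundary.
    Dropping the k = 0 term removes translation, taking magnitudes
    removes rotation/start point, and dividing by |c(1)| removes scale."""
    z = [complex(x, y) for x, y in points]
    N = len(z)
    def c(k):
        return sum(z[n] * cmath.exp(-2j * math.pi * k * n / N)
                   for n in range(N)) / N
    scale = abs(c(1)) or 1.0
    return [abs(c(k)) / scale
            for k in range(-n_harm, n_harm + 1) if k != 0]

def descriptor_distance(a, b):
    """Least-squares error between two descriptor vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def ellipse(a, b, N=64):
    """Closed test boundary: an ellipse with semi-axes a and b."""
    return [(a * math.cos(2 * math.pi * n / N),
             b * math.sin(2 * math.pi * n / N)) for n in range(N)]

# A slightly thickened profile should score closer to the original
# than a drastically thickened one.
base = fourier_descriptors(ellipse(1.0, 0.20))
near = fourier_descriptors(ellipse(1.0, 0.25))
far = fourier_descriptors(ellipse(1.0, 0.90))
print(descriptor_distance(base, near) < descriptor_distance(base, far))  # → True
```

The same distance function can serve as the "least square error" objective when posing the shape-matching optimization, with the descriptor vector playing the role of the target morphed shape.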


An analysis of modulation schemes for the physical-layer network-coded two-way relaying scenario is presented, in which transmission takes place in two phases: the multiple access (MA) phase and the broadcast (BC) phase. Depending on the signal set used at the end nodes, the minimum distance of the effective constellation seen at the relay becomes zero for a finite number of channel fade states, referred to as singular fade states. The singular fade states fall into two classes: (i) those caused by channel outage, whose harmful effect cannot be mitigated by adaptive network coding, called non-removable singular fade states, and (ii) those which occur due to the choice of the signal set and whose harmful effects can be removed, called removable singular fade states. In this paper, we derive an upper bound on the average end-to-end Symbol Error Rate (SER), with and without adaptive network coding at the relay, for a Rician fading scenario. It is shown that without adaptive network coding, at high Signal to Noise Ratio (SNR), the contribution to the end-to-end SER comes from the following error events, which fall as SNR^-1: the error events associated with the removable and non-removable singular fade states, and the error event during the BC phase. In contrast, for the adaptive network coding scheme, the error events associated with the removable singular fade states fall as SNR^-2, thereby providing a coding gain over the case when adaptive network coding is not used. Also, it is shown that for a Rician fading channel, the error during the MA phase dominates over the error during the BC phase. Hence, adaptive network coding, which improves the performance during the MA phase, provides more gain in a Rician fading scenario than in a Rayleigh fading scenario.
Furthermore, it is shown that for large Rician factors, among those removable singular fade states which have the same magnitude, only those with the least absolute value of the phase angle contribute dominantly to the end-to-end SER, and it is sufficient to remove the effect of only such singular fade states.


Over 100 molluscan species are landed in Mexico. About 30% are harvested on the Pacific coast and 70% on the Atlantic coast. Clams, scallops, and squid predominate on the Pacific coast (abalone, limpets, and mussels are landed there exclusively). Conchs and oysters predominate on the Atlantic coast. In 1988, some 95,000 metric tons (t) of mollusks were landed, with a value of $33 million. Mollusks were used extensively in prehispanic Mexico as food, tools, and jewelry. Their use as food and jewelry continues. Except in the States of Baja California and Baja California Sur, where abalone, clams, and scallops provide fishermen with year-round employment, mollusk fishing is done part-time. On both the Pacific and Atlantic coasts, many fishermen are nomads, harvesting mollusks wherever they find abundant stocks. Upon finding such beds, they build camps, begin harvesting, and continue until the mollusks become so scarce that it no longer pays to continue. They then look for productive beds in other areas and rebuild their camps. Fishermen harvest abalones, mussels, scallops, and clams by free-diving and by using scuba and hookah gear. Landings of clams and cockles have been growing, and 22,000 t were landed in 1988. Fishermen harvest intertidal clams by hand at wading depths, finding them with their feet. In waters up to 5 m, they harvest them by free-diving. In deeper water, they use scuba and hookah gear. Many species of gastropods have commercial importance on both coasts. All species with a large detachable muscle are sold as scallops. On the Pacific coast, hatchery culture of oysters prevails. Oyster culture in Atlantic coast lagoons began in the 1950's, when beds were enhanced by spreading shells as cultch for spat. (PDF file contains 228 pages.)


This three-volume monograph represents the first major attempt in over a century to provide, on regional bases, broad surveys of the history, present condition, and future of the important shellfisheries of North and Central America and Europe. It was about 100 years ago that Ernest Ingersoll wrote extensively about several molluscan fisheries of North America (1881, 1887) and about 100 years ago that Bashford Dean wrote comprehensively about methods of oyster culture in Europe (1893). Since those works were published, several reports, books, and pamphlets have been written about the biology and management of individual species or groups of closely related mollusk species (Galtsoff, 1964; Korringa, 1976 a, b, c; Lutz, 1980; Manzi and Castagna, 1989; Shumway, 1991). However, nothing has been written during the past century that is comparable to the approach used by Ingersoll in describing the molluscan fisheries as they existed in his day in North America or, for that matter, in Europe. (PDF file contains 224 pages.)


<p>The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.</p> <p>The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. 
The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.</p> <p>We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. 
Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.</p>
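Quadratic invariance, as described in the abstract above, takes a particularly simple form when the information constraint is a sparsity pattern: the constraint set S is quadratically invariant under the plant G exactly when the boolean product S·G·S stays inside S. The sketch below is an illustrative boolean check for square 0/1 patterns, not code from the thesis; the matrix encodings and function names are assumptions:

```python
def bool_mat_mul(A, B):
    """Boolean matrix product: entry (i, j) is True iff some index k
    links row i of A to column j of B."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[any(A[i][k] and B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def is_quadratically_invariant(S, G):
    """Sparsity test: S is quadratically invariant under G iff
    K G K' keeps the pattern S for all K, K' with pattern S,
    which for 0/1 patterns reduces to S.G.S being contained in S."""
    SGS = bool_mat_mul(bool_mat_mul(S, G), S)
    return all(S[i][j] or not SGS[i][j]
               for i in range(len(S)) for j in range(len(S[0])))

# A chain where information flows downstream at least as fast as the
# dynamics: lower-triangular controller pattern over a lower-triangular
# plant is quadratically invariant.
lower = [[1, 0], [1, 1]]
print(is_quadratically_invariant(lower, lower))  # → True

# Fully decentralized (diagonal) control of a fully coupled plant is not:
# the sub-controllers cannot exchange information at all, yet their
# actions propagate to each other through the plant.
diag = [[1, 0], [0, 1]]
full = [[1, 1], [1, 1]]
print(is_quadratically_invariant(diag, full))  # → False
```

The failing case is exactly the informal condition quoted above: when control actions propagate through the plant faster than the sub-controllers can share information, convexity of the distributed problem is lost.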