213 results for CDA
Abstract:
The miscibility of blends of cellulose diacetate (CDA) and poly(vinyl pyrrolidone) (PVP) was studied extensively by differential thermal analysis, dynamic mechanical thermal analysis, tensile testing, viscosity measurements on dilute and concentrated solutions of the blends in an acetone-ethanol mixture, and morphological observation. A single glass transition temperature is observed, intermediate between the glass transition temperatures of the two components and dependent on composition. A synergism in the mechanical properties of the blends was found. The absolute and intrinsic viscosities of the blend solutions are much higher than the weight-averaged values for solutions of CDA and PVP. Optically clear and thermodynamically stable films were formed over the composition range CDA/PVP = 100/0 to 50/50 w/w. Fourier transform infrared spectroscopy was used to investigate the nature of the CDA-PVP interaction: hydrogen bonds form between the hydroxyl groups of CDA and the carbonyl groups of PVP. Heats of solution of CDA/PVP blends and of their mechanical mixtures were measured with a calorimeter. The mixing enthalpy, obtained via a Hess's law approach, was used to quantify the interaction parameters of this blend system. Both the mixing enthalpies and the interaction parameters were found to be negative and composition dependent. (C) 1997 Elsevier Science Ltd.
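A minimal sketch of the Hess's-law bookkeeping referred to above (the weight fractions w_CDA, w_PVP are generic symbols, not values from the paper): dissolving a blend and dissolving a mechanical mixture of the same composition end in the same final solution, so the two heats of solution differ by exactly the enthalpy of mixing,

    \Delta_{mix}H = \Delta H_{soln}(\text{mechanical mixture}) - \Delta H_{soln}(\text{blend}),
    \Delta H_{soln}(\text{mechanical mixture}) \approx w_{CDA}\,\Delta H_{soln}(CDA) + w_{PVP}\,\Delta H_{soln}(PVP),

and a blend that dissolves less exothermically than the corresponding mixture therefore has \Delta_{mix}H < 0, consistent with the favorable hydrogen bonding reported.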
Abstract:
The aim of this work is to describe the most recent achievements in the physical chemistry of polymer mixing. The systems studied are classified according to the magnitude of the thermal effect of blending and its interpretation. When polystyrene (PS) and poly(alpha-methylstyrene) (PαMS) are blended, the interaction is weak and Δ_mixH is close to zero. The presence of polar atoms and/or groups increases the stability of the blend, and Δ_mixH therefore becomes more negative. Poly(ethylene oxide) (PEO), poly(methyl acrylate) (PMA), poly(methyl methacrylate) (PMMA) and poly(vinyl acetate) (PVAc), when mixed to form binary systems, show large differences from their properties in the pure state. If hydrogen bonding takes place, the interactions are readily detected and a large effect is measured calorimetrically. Cellulose diacetate (CDA) and poly(vinylpyrrolidone) (PVP) have been studied as an example of a strongly interacting system.
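For scale, a calorimetric heat of mixing is commonly mapped onto a Flory-Huggins-type interaction parameter; a minimal sketch, with V_r an assumed reference volume and φ_i the volume fractions (none of these quantities are given in the text):

    \Delta_{mix}H \approx \frac{RT\,\chi}{V_r}\,\varphi_1\,\varphi_2\,V

Under this reading, Δ_mixH ≈ 0 for PS/PαMS corresponds to χ ≈ 0, while the exothermic mixing of hydrogen-bonded pairs such as CDA/PVP implies χ < 0.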
Abstract:
3 leaves.
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput the network makes available. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems, amenable to solution by parallel iterative methods. This yields a set of interface and architectural features sufficient for the efficient implementation of such applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layers, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
Abstract:
Parallel computing on a network of workstations can saturate the communication network, leading to excessive message delays and consequently poor application performance. We examine empirically the consequences of integrating a flow control protocol, called Warp control [Par93], into Mermera, a software shared memory system that supports parallel computing on distributed systems [HS93]. For an asynchronous iterative program that solves a system of linear equations, our measurements show that Warp succeeds in stabilizing the network's behavior even under high levels of contention. As a result, the application achieves higher effective communication throughput and reduced completion time. In some cases, however, Warp control does not achieve the performance attainable with fixed-size buffering at a statically optimal buffer size. Our use of Warp to regulate the allocation of network bandwidth points to the possibility of integrating it with the allocation of other resources, such as CPU cycles and disk bandwidth, so as to optimize overall system throughput and enable fully shared execution of parallel programs.
Abstract:
Predictability -- the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements -- is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems that enhances predictability by eliminating potential hazards resulting from physically unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems -- possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing -- cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems, not to mention eliminating potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the Cleopatra programming language. Cleopatra features a C-like imperative syntax for the description of computation, which makes it easy to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process-control applications. It is object-oriented and compositional, thus promoting modularity and reusability. Cleopatra is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of Cleopatra has been in use as a specification and simulation language for embedded time-critical robotic processes.
Abstract:
Programmers of parallel processes that communicate through shared, globally distributed data structures (DDS) face a difficult choice: either they must explicitly program DDS management, by partitioning or replicating the DDS over multiple distributed memory modules, or they must be content with a high-latency coherent (sequentially consistent) memory abstraction that hides the DDS's distribution. We present Mermera, a new formalism and system that enable a smooth spectrum of noncoherent shared memory behaviors to coexist between these two extremes. Our approach allows us to define known noncoherent memories in a new, simple way, to identify new memory behaviors, and to characterize generic mixed-behavior computations. The latter are useful for programming with multiple behaviors that complement each other's advantages. On the practical side, we show that the large class of programs that use asynchronous iterative methods (AIM) can run correctly on slow memory, one of the weakest, and hence most efficient and fault-tolerant, noncoherence conditions. An example AIM program that solves linear equations is developed to illustrate: (1) the need for concurrently mixing memory behaviors, and (2) the performance gains attainable via noncoherence. Other program classes tolerate weak memory consistency by synchronizing in such a way as to yield executions indistinguishable from coherent ones. AIM computations on noncoherent memory yield noncoherent, yet correct, computations. We report performance data that exemplifies the potential benefits of noncoherence, in terms of raw memory performance as well as application speed.
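To make the AIM idea concrete, here is a minimal single-machine sketch of an asynchronous Jacobi iteration for Ax = b (the threading layout, sweep count, and matrix are illustrative assumptions, not Mermera's interface): each worker refreshes its own component from a shared vector whose other entries may be arbitrarily stale, yet the iteration still converges for a diagonally dominant A.

    import threading
    import numpy as np

    # A diagonally dominant system, for which asynchronous Jacobi converges.
    A = np.array([[4.0, 1.0, 1.0],
                  [1.0, 5.0, 2.0],
                  [1.0, 2.0, 6.0]])
    b = np.array([6.0, 8.0, 9.0])

    x = np.zeros(len(b))   # shared state: reads may observe stale values
    SWEEPS = 200

    def worker(i):
        # Repeatedly refresh x[i] from whatever values of x are currently
        # visible -- no locks, no barriers (slow-memory style).
        for _ in range(SWEEPS):
            s = sum(A[i, j] * x[j] for j in range(len(b)) if j != i)
            x[i] = (b[i] - s) / A[i, i]

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(len(b))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("x =", x, "residual =", np.linalg.norm(A @ x - b))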
Abstract:
ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem uses a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, compute the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search by selecting relevant examples. Search is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
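The paper's approximate, optimized k-d tree is its own algorithm; as a rough stand-in, this is how an approximate nearest-neighbor search over image feature vectors looks with SciPy's stock k-d tree (the 64-dimensional features, database size, and eps value are made up for illustration):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    features = rng.random((10_000, 64))   # one feature vector per indexed image

    tree = cKDTree(features)              # built once, at collection time

    query = rng.random(64)                # feature vector of the query image
    # eps > 0 trades exactness for speed: each returned neighbor is within
    # (1 + eps) times the distance to the true k-th nearest neighbor.
    dist, idx = tree.query(query, k=5, eps=0.5)
    print("top-5 image ids:", idx, "distances:", dist)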
Abstract:
A new region-based approach to nonrigid motion tracking is described. Shape is defined in terms of a deformable triangular mesh that captures object shape, plus a color texture map that captures object appearance. Photometric variations are also modeled. Nonrigid shape registration and motion tracking are achieved by posing the problem as an energy-based, robust minimization procedure. The approach provides robustness to occlusions, wrinkles, shadows, and specular highlights. The formulation is tailored to take advantage of the texture-mapping hardware available in many workstations, PCs, and game consoles. This enables nonrigid tracking at speeds approaching video rate.
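One generic form such a robust registration energy can take (a textbook formulation, not necessarily the paper's exact functional): with T the texture map, I_t the current frame, u the mesh-induced warp, and ρ a robust penalty that downweights occluded or specular pixels,

    E(u) = \sum_{x} \rho\big( I_t(x + u(x)) - \alpha\,T(x) - \beta \big) + \lambda\,E_{smooth}(u)

where α and β absorb the modeled photometric variation and E_{smooth} penalizes non-smooth mesh deformation.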
Abstract:
This paper examines how and why web server performance changes as the workload at the server varies. We measure the performance of a PC acting as a standalone web server, running Apache on top of Linux. We use two important tools to understand what aspects of software architecture and implementation determine performance at the server. The first is a tool we developed, called WebMonitor, which measures activity and resource consumption both in the operating system and in the web server. The second is the kernel profiling facility distributed as part of Linux. We vary the workload at the server along two important dimensions: the number of clients concurrently accessing the server, and the size of the documents stored on the server. Our results quantify how more clients and larger files stress the web server and operating system in different and surprising ways. Our results also show the importance of fixed costs (i.e., opening and closing TCP connections, and updating the server log) in determining web server performance.
Abstract:
ImageRover is a search-by-image-content navigation tool for the World Wide Web. The staggering size of the WWW dictates certain strategies and algorithms for image collection, digestion, indexing, and user interface. This paper describes two key components of the ImageRover strategy: image digestion and relevance feedback. Image digestion occurs during image collection: robots digest the images they find, computing image decompositions and indices and storing this extracted information in vector form for searches based on image content. Relevance feedback occurs during index search: users can iteratively guide the search by selecting relevant examples. ImageRover employs a novel relevance feedback algorithm to determine the weighted combination of image similarity metrics appropriate for a particular query. ImageRover is available and running on the web site.
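A minimal sketch of one common relevance-feedback weighting scheme, in which a metric on which the user-selected relevant examples agree (low spread) receives more weight; this mirrors the idea, not necessarily ImageRover's exact algorithm:

    import numpy as np

    def update_weights(relevant, eps=1e-6):
        # relevant: (n_examples, n_metrics) per-metric distances between the
        # user-selected relevant images and the query.
        spread = relevant.std(axis=0) + eps
        w = 1.0 / spread
        return w / w.sum()          # normalize to a convex combination

    def combined_distance(d, w):
        # d: per-metric distances of a candidate image; w: learned weights.
        return float(np.dot(w, d))

    # Example: 3 relevant images scored under 4 similarity metrics.
    relevant = np.array([[0.10, 0.50, 0.11, 0.90],
                         [0.12, 0.20, 0.10, 0.10],
                         [0.11, 0.80, 0.12, 0.50]])
    w = update_weights(relevant)    # metrics 0 and 2 dominate (consistent)
    print(w, combined_distance(np.array([0.2, 0.4, 0.1, 0.7]), w))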
Abstract:
Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from the HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retrofitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than with either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query-by-image-content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of a document (latent semantic content) and the contents of its images (visual statistics). This allows improved performance in content-based search. The approach has been implemented in a WWW image search engine prototype.
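A minimal sketch of such a unified index vector (the dimensions, the balance factor, and the random stand-ins for real LSI output and visual statistics are illustrative assumptions): each sub-vector is normalized so that neither modality dominates, then the two are concatenated into the single vector used for search.

    import numpy as np

    def unified_index(lsi_vec, visual_vec, text_weight=0.5):
        # Combine an LSI vector (from the image's host HTML document) with
        # visual statistics (e.g., color/orientedness) into one index vector.
        t = lsi_vec / (np.linalg.norm(lsi_vec) + 1e-12)
        v = visual_vec / (np.linalg.norm(visual_vec) + 1e-12)
        return np.concatenate([text_weight * t, (1.0 - text_weight) * v])

    lsi = np.random.default_rng(1).random(16)     # stand-in LSI vector
    visual = np.random.default_rng(2).random(32)  # stand-in visual statistics
    q = unified_index(lsi, visual)
    print(q.shape)                                # (48,): combined index vector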
Abstract:
A combined 2D/3D approach is presented that allows for robust tracking of moving bodies in a given environment, as observed via a single, uncalibrated video camera. Tracking is robust even in the presence of occlusions. Low-level features are often insufficient for detection, segmentation, and tracking of nonrigid moving objects. Therefore, an improved mechanism is proposed that combines low-level (image processing) and mid-level (recursive trajectory estimation) information obtained during the tracking process. The resulting system can segment and maintain the tracking of moving objects before, during, and after occlusion. At each frame, the system also extracts a stabilized coordinate frame of the moving objects. This stabilized frame is used to resize and resample the moving blob so that it can serve as input to motion recognition modules. The approach enables robust tracking without requiring the system to know the shapes of the tracked objects beforehand, although some assumptions are made about the characteristics of those shapes and how they evolve with time. Experiments in tracking moving people are described.
Abstract:
The southward shift of the planting area and market requirements forced the introduction of new peanut varieties to replace the traditional ones. However, the three most widely grown varieties had not been evaluated for their field performance. The objective of this work was to compare three commercial peanut (Arachis hypogaea L.) varieties in terms of phenology, yield, and kernel-size quality under field conditions in the central zone of Córdoba. The varieties GRANOLEICO, ASEM 484 and ASEM 485 were sown in the field. The following were evaluated: a) phenological parameters: seedling emergence, variety cycle (R1 and R5), and maturity status of the varieties; b) production parameters: yield (kg/ha) and kernel-size quality; c) economic analysis. According to the results, the GRANOLEICO variety requires more thermal time (°C day) to complete the same phenological stages, although its yield is similar to that of ASEM 484 and ASEM 485. These latter varieties, however, show higher kernel-size quality. Relative to ASEM 484, income ($/ha) was 16.30 percent lower for GRANOLEICO and 0.46 percent lower for ASEM 485. It is therefore suggested that peanut varieties be chosen carefully according to the planting area and sowing date.
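For reference, thermal time in °C day is conventionally accumulated as growing degree days (the crop-specific base temperature T_base is not stated in the abstract):

    TT = \sum_{days} \max\!\left(0,\; \frac{T_{max} + T_{min}}{2} - T_{base}\right) \quad [^{\circ}\mathrm{C\,day}]

so a variety that "requires more thermal time" needs a larger accumulated sum to reach the same stage (e.g., R1, R5).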
Abstract:
In order to evaluate the possibility of reducing feeding costs in the culture of Rhamdia quelen, two experimental trials were carried out. One was aimed at calculating the in vivo digestibility of different diets, to analyze how replacing fish meal affects their protein digestibility. The other was a cage growth trial to assess productive performance. Both experiments were conducted at the Centro Nacional de Desarrollo Acuícola (Corrientes province, 27°32'S, 58°30'W) using two experimental diets (15 and 11 percent fish meal) together with a Control (20 percent). For the digestibility studies, Cr2O3 was used as an inert marker, with feces collected in 150 L cylindro-conical tanks connected to a settling column. Significant differences in the apparent digestibility coefficient (CDA) of protein were observed only at p = 0.1 (P = 0.0764), between the Control and D2, with no differences between either of these and D1. The field experiment was carried out in 1 m3 cages, with fish of an average initial weight of approximately 28 g, at a density of 300 individuals per cage, over 197 days of culture. Final weights averaged 302.81, 287.07 and 273.39 g for the Control, D1 and D2 diets, respectively, with significant differences (P < 0.05) in the IPD; the TEP obtained with the Control diet exceeded that of D2 (P < 0.05), and no significant differences (P > 0.05) were observed in the FCR achieved with the different diets. Analysis of the yields obtained and the cost of the rations supplied shows that although the price per tonne of formulated diet decreases slightly as animal protein is replaced, the feed cost per tonne of fish produced increases, owing to the poorer productive performance of the fish.
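For reference, an apparent digestibility coefficient determined with an inert marker such as Cr2O3 conventionally follows the standard indirect formula (a textbook relation, not a detail given in the abstract):

    CDA_{protein} = 1 - \frac{\%\,Cr_2O_3\ \text{in diet}}{\%\,Cr_2O_3\ \text{in feces}} \times \frac{\%\,\text{protein in feces}}{\%\,\text{protein in diet}}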