909 results for Microwaves, Curing Time, Epoxy Resins, Rapid Product Development
Abstract:
The studies described here are based mainly on sedimentary material collected during the "Indian Ocean Expedition" of the German research vessel "Meteor" in the region of the Indian-Pakistan continental margin in February and March 1965. In addition, samples from the mouth of the Indus River were available, which were collected by the Pakistani fishing vessel "Machhera" in March 1965. Altogether, the following quantities of sedimentary material were collected: 59.73 m of piston cores, 54.52 m of gravity cores, 33 box grab samples, and 68 bottom grab samples. Component analyses of the coarse fraction were made on these samples and the sedimentary fabric was examined; the CaCO3 and Corg contents were also discussed. From these investigations the following history of sedimentation can be derived: Recent sedimentation on the shelf is characterized mainly by hydrodynamic processes and terrigenous supply of material. In shallow water, wave action and currents running parallel to the coast cause repeated reworking, which induces sorting of the grains and layering of the sediments as well as a lack of bioturbation. The sedimentation rate is very high here. From the coastline down to approx. 50 m the sediment becomes progressively finer and the conditions of deposition become less turbulent. On the outer shelf the sediment is again considerably coarser. It contains many relicts of planktonic organisms and shows traces of burrowing. Indications of redeposition are nearly absent; a considerable part of the fine fraction of the sediments is, however, stirred up and carried away. In wide areas of the outer shelf this stirring has reached such a degree that recent deposits are almost completely missing. Here, coarse relict sands rich in ooids are exposed, which were formed in very shallow, agitated water when the sea reached its lowest level, i.e. at the turn of the Pleistocene to the Holocene. Below the relict sand, white, very fine-grained aragonite mud was found at one location (core 228). This aragonite mud was evidently deposited in very calm water at somewhat greater depth, possibly behind a reef barrier. Biochemical carbonate precipitation played an important part in the formation of the relict sands and aragonite muds. In postglacial times the relict sands were exposed for long periods to violent wave action and to areal erosion. At present they are gradually being covered by recent sediments advancing from the sides. On the continental margin beyond the shelf edge the distribution of the sediments is determined to a considerable extent by the morphology of the sea bottom. The material originating from the continent and/or the shelf is transported less by the action of the water than by the force of gravity. Within the uppermost part of the continental slope recent sedimentation reaches its maximum. Here the fine material that was stirred up in the zone of the relict sands is deposited. A laminated fine-grained sediment is formed here, due to the very high sedimentation rate as well as to the extremely low O2 content of the bottom water, which prevents life on the sea bottom and thus also impedes bioturbation. The lamination probably reflects annual variations in deposition and can be attributed to the rhythm of the monsoon, with its effects on the water and the weather conditions. In the lower part of the upper continental slope, sediments are found which show, with varying intensity, intercalations of fine material (silt) from the shelf in large sections of the core.
These fine intercalations of allochthonous material are closely related to the autochthonous normal sediment, so that a great number of small individual depositional events can be inferred. In general the intercalations are missing in the uppermost part of the cores; in the lower part they occur in varying quantities, and they reach their maximum frequency in the upper part of the lower core section. The deposits described here are designated turbid layer sediments, since they receive their material from turbid layers that carry components stirred up from the shelf to the continental slope. Turbidites are missing in this zone. Since the whole upper continental slope shows a low oxygen content in the bottom water, the structure of the turbid layer sediments is more or less preserved. The lenticular-phacoidal fine structure does not, however, reflect annual rhythms, but sporadic individual events, such as tsunamis. At the lower part of the continental slope and on the continental rise the majority of the turbidites were deposited; during glacial times, and particularly at the beginning of the postglacial period, these transported material from the zone of relict sands. The Laccadive Ridge represented a natural obstacle to the transport of suspended sediments into the deep sea. Core SIC-181 from the Arabian Basin shows some intercalations of turbidites; their material, however, does not originate from the Indian Shelf but from the Laccadive Ridge. Within the Indus Cone it is surprising that distinct turbidites are almost completely missing, while turbid layer sediments are found. The sea bottom still has a slight slope here, so that the turbidites funneled through the Canyon of the Swatch probably rush down to greater water depths. Owing to the particularly large supply of suspended material from the Indus River, the turbid layer sediments extend farther here than in other regions. In general the terrigenous components are concentrated on the Indus Cone. The only sliding mass discovered (core 186) is located within the lower continental slope; it can be assumed that it was set in motion during the Holocene. During the period discussed here, the following development in the kind and intensity of deposition of allochthonous material can be observed on the Indian-Pakistan continental margin: At the time of the lowest sea level the shelf was very narrow, and the zone in which bottom currents were able to stir up material by oscillating motion was considerably confined. The rivers flowed into the sea near the edge of the shelf. For this reason the percentage of terrigenous material, quartz and mica is higher in the lower part of many cores (e.g. cores 210 and 219) than in the upper part. The transition from glacial to postglacial times brought a series of environmental changes. Among them, the rise of the sea level (approx. 150 m in the area of investigation) had the most important influence on the sedimentation process. In connection with this event many river valleys became canyons, which drew sedimentary material away from the shelf and transported it in the form of turbidites into the deep sea. During the rise of the sea level a situation can be expected in which a maximum area of the comparatively flat shelf was exposed to wave action. During this time the stirring up of sediments and the formation of turbid layers reached a maximum.
Accordingly, the formation of turbidites and turbid layer sediments was most frequent at the same time, generally in the older postglacial period. The present-day high water level results in a reduced supply of sediments into the canyons. The stirring up of sediments from the shelf by wave action is restricted to the finest material; the absence of shelf material in the uppermost core sections can thus be explained. The laminated muds reflect these calm sedimentation conditions as well. In the southwestern part of the area of investigation fine volcanic glass was blown in during the Pleistocene, probably from the southeast. It has thus become possible to correlate cores 181, 182 and 202. Eolian dust from the Indian subcontinent probably represents an important component of the deep-sea sediments. The chemistry of the bottom water as well as of the pore water has a considerable influence on the development of the sediments. Of particular importance in this connection is a layer with a minimum content of oxygen in the sea water (200-1500 m), which today touches the upper part of the continental slope. Above and below this oxygen minimum layer somewhat higher O2 values are observed at the sea bottom. During the Pleistocene the oxygen minimum layer was evidently located at greater depth, as indicated by the facies of laminated mud occurring in the lower part of core 219. The type of bioturbation is determined mainly by this chemistry, which is also responsible for considerable selective dissolution, either complete or partial, of the sedimentary components. Within the oxygen minimum layer an alkaline milieu develops at the bottom, causing complete or partial dissolution of the siliceous organisms. Here bioturbation is in general completely missing; sometimes small pyrite-filled burrowing tracks are found. In the O2-rich areas high pH values result in partial dissolution of the calcareous shells, and large, non-pyritized burrowing tracks characterize the type of bioturbation in this environment. A study of the "Lebensspuren" in the cores supports the assumption that, particularly within the Laccadive Basin, the oxygen content in the bottom sediments was lower during the Pleistocene than during the Holocene. This may be attributed to a high sedimentation rate and to a lower O2 content of the bottom water. The composition of the allochthonous sedimentary components, detritus and/or volcanic glass, may locally change this chemistry to a considerable extent for a certain time; under such special circumstances the type of bioturbation and the state of preservation of the components may differ from those of the normal sediment.
Abstract:
Materials developments over the last decade have been considerable within the automotive industry, one of the leaders in innovative product applications. Sustainable product development of an automotive structure requires a balanced approach towards technological, economic and ecological aspects. The introduction of new materials and processes depends on satisfying several factors, which are strongly affected by competitive and legislative pressures that create the need for change; the process, direction and speed of change are often reactive. This paper shows the application of aluminium alloys in the bottom structure of a car, addressing the minimization of the weight of the entire bottom structure under static load conditions, including stiffness, strength and buckling constraints. In addition to minimized mass and material price, the assessment of the environmental impact of candidate materials over the entire life cycle of the structure is considered.
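The abstract does not publish its optimization model; as a rough illustration of "minimized mass under stiffness, strength and buckling constraints", the sketch below sizes a single hypothetical aluminium floor panel for minimum mass over its thickness. All dimensions, loads and limits are invented for illustration, and a simple penalty formulation stands in for whatever solver the authors used.

```python
# Illustrative sketch only: the paper's optimization model is not given in the
# abstract. This minimal analogue sizes one aluminium floor panel for minimum
# mass over its thickness, under hypothetical bending-stiffness and
# plate-buckling constraints; every number is assumed.
import math
from scipy.optimize import minimize_scalar

E, NU, RHO = 70e9, 0.33, 2700.0   # aluminium alloy: modulus [Pa], Poisson, density [kg/m3]
A_LEN, B_LEN = 1.2, 0.8           # panel plan dimensions [m] (assumed)
D_MIN = 2.0e3                     # required bending stiffness D [N*m] (assumed)
N_APPLIED = 1.0e5                 # in-plane compressive line load [N/m] (assumed)
K_BUCK = 4.0                      # buckling coefficient, simply supported plate

def bending_stiffness(t):
    return E * t**3 / (12.0 * (1.0 - NU**2))

def critical_buckling_load(t):
    # Classical plate buckling: N_cr = k * pi^2 * D / b^2
    return K_BUCK * math.pi**2 * bending_stiffness(t) / B_LEN**2

def mass(t):
    return RHO * A_LEN * B_LEN * t

def objective(t):
    # Penalty terms so a bounded 1-D solver suffices for the sketch.
    penalty = max(0.0, D_MIN - bending_stiffness(t)) * 1e3
    penalty += max(0.0, N_APPLIED - critical_buckling_load(t)) * 1e3
    return mass(t) + penalty

res = minimize_scalar(objective, bounds=(1e-4, 0.05), method="bounded")
print(f"thickness = {res.x*1e3:.2f} mm, panel mass = {mass(res.x):.2f} kg")
```

With these assumed numbers the stiffness constraint is the active one and the solver settles near a 6.7 mm skin; in a real study each material candidate would get its own E, density and price in the same loop.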
Abstract:
This article examines, from the energy viewpoint, a new lightweight, slim, highly energy-efficient, light-transmitting envelope system that provides for seamless, free-form designs for use in architectural projects. The research was based on envelope components already existing on the market, especially components implemented with granular silica aerogel insulation, as this is the most effective translucent thermal insulation available today. The tests run on these materials revealed that no single one has all the features required of the new envelope model, although some do have properties that could be exploited to generate this envelope, namely the vacuum chamber of vacuum insulated panels (VIP), the monolithic aerogel used as insulation in some prototypes, and reinforced polyester barriers. By combining these three design components (the high-performance thermal insulation of the vacuum chamber combined with monolithic silica aerogel insulation, and the free-form design potential provided by materials like reinforced polyester and epoxy resins), we have been able to define and test a new, variable-geometry, energy-saving envelope system.
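The abstract gives no thermal figures; as a back-of-envelope illustration of why an evacuated monolithic aerogel core is attractive, the sketch below computes the thermal transmittance U = 1/(R_si + sum(t_i/lambda_i) + R_se) of a hypothetical translucent sandwich panel. The layer list and conductivities are assumed from typical literature values, not the authors' tested build-up.

```python
# Back-of-envelope U-value for a hypothetical translucent sandwich panel
# (not the authors' tested build-up; layer list and conductivities are
# assumed from typical literature values).
R_SI, R_SE = 0.13, 0.04   # interior/exterior surface resistances [m2K/W] (ISO 6946)

layers = [                 # (name, thickness [m], conductivity [W/mK])
    ("reinforced polyester skin", 0.003, 0.20),
    ("monolithic silica aerogel (evacuated)", 0.020, 0.010),
    ("reinforced polyester skin", 0.003, 0.20),
]

r_total = R_SI + sum(t / k for _, t, k in layers) + R_SE
u_value = 1.0 / r_total
print(f"R = {r_total:.2f} m2K/W  ->  U = {u_value:.2f} W/m2K")
```

Even a 2 cm assumed aerogel core dominates the resistance budget, which is the property the envelope concept exploits.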
Abstract:
Precast concrete quality is determined by compression tests on specimens broken after 28 days of curing, as established by EHE-08. However, in precast plants it is also necessary to know when the concrete is ready to be processed (de-tensioned, cut, moved), so compressive strength tests are run between 48 and 72 hours; this time is determined from prior experience and depends on the conditions of each plant. If the specimens have not reached the set value, usually due to changes in the weather conditions or in the materials used (for example the type of cement or aggregates), the solution usually adopted is to let the material cure on the line for more hours so that it reaches the strength required for processing. If it still does not reach that strength, which happens very occasionally, the cause is analyzed; the entire production of that day may have to be discarded if the problem turns out to be a failure in the manufacturing line rather than a failure of the specimen. This quality-control methodology, based on destructive techniques, therefore poses two kinds of problems: cost and representativeness. The non-destructive methods most widely applied to characterize the curing process of concrete are ultrasonics and temperature measurement, as reported in the literature. Different models make it possible to establish a relationship between temperature and curing time to estimate the compressive strength of the material, and between ultrasonic propagation velocity and strength. Although these relationships are not general, very good results have been obtained; an example is the temperature-based Maturity Method, which is part of ASTM standard C 1074, and commercial equipment (maturity meters) is available on the market to measure concrete curing. Furthermore, it is possible to design inexpensive and robust systems to measure these two parameters, so a quality-control methodology for curing that can be deployed in precast production plants is feasible. In this work a methodology has been developed to estimate the compressive strength of concrete during curing; it consists of a procedure for quality control of the precast product and a wireless sensor system for measuring temperature and ultrasonic velocity. The quality-control procedure predicts the compressive strength from one model based on the curing temperature and two others based on ultrasonic velocity, the equivalent-time method and the linear method. The wireless sensor system developed, WilTempUS, integrates temperature, relative humidity and ultrasonic sensors in the same device. Experimental validation was carried out by monitoring specimens and precast production lines. The results obtained with the estimation models and the measurement system show that it is possible to predict the strength of precast concrete in the plant with errors comparable to those accepted by the standard for compressive strength tests on specimens.
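The abstract names the temperature-based Maturity Method of ASTM C 1074 and an equivalent-time model. As a rough illustration of that family of estimators (not the thesis's calibrated procedure), the sketch below computes the Nurse-Saul maturity index and an Arrhenius equivalent age from a logged temperature history; the datum temperature, activation energy and the logarithmic strength calibration are placeholder values that would have to be fitted per mix.

```python
# Sketch of two maturity-style estimators in the spirit of ASTM C 1074.
# Placeholder constants: T0 (datum temperature), EA_R (activation energy / R),
# and the logarithmic strength calibration (a, b) must be fitted to each mix.
import math

T0 = -10.0                 # datum temperature [degC], a common default
EA_R = 5000.0              # activation energy over gas constant [K]
T_REF = 20.0               # reference curing temperature [degC]
A_CAL, B_CAL = -5.0, 8.0   # hypothetical strength fit: S = a + b*ln(M) [MPa]

def nurse_saul_maturity(temps_c, dt_h):
    """Maturity index M = sum (Ta - T0) * dt over the temperature history."""
    return sum(max(t - T0, 0.0) * dt_h for t in temps_c)

def equivalent_age(temps_c, dt_h):
    """Arrhenius equivalent age at T_REF: te = sum exp(-EA_R*(1/Ta - 1/Tref))*dt."""
    tref_k = T_REF + 273.15
    return sum(math.exp(-EA_R * (1.0/(t + 273.15) - 1.0/tref_k)) * dt_h
               for t in temps_c)

# Hourly temperature log over the first 48 h of curing (synthetic data).
temps = [25 + 10 * math.sin(i / 48 * math.pi) for i in range(48)]

m = nurse_saul_maturity(temps, 1.0)
te = equivalent_age(temps, 1.0)
strength = A_CAL + B_CAL * math.log(m)   # hypothetical calibration curve
print(f"maturity = {m:.0f} degC*h, equivalent age = {te:.1f} h, "
      f"estimated strength = {strength:.1f} MPa")
```

In a plant deployment the logged temperatures would come from sensors such as the WilTempUS nodes described above, and the ultrasonic-velocity models would provide two independent strength predictions alongside this one.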
Abstract:
Awareness of the crisis of modernity, which began at the end of the 19th century, has deepened with the recognition of the limits of economic development, since, as seemed reasonable to expect, natural resources are finite. In 1972 the Club of Rome analyzed the options available for harmonizing sustainable development with environmental limitations. It was in 1987 that the UN World Commission on Environment and Development defined the concept of sustainable development for the first time, a definition that was subsequently incorporated into all UN programmes and served as the axis of, for example, the Earth Summit held in Rio de Janeiro in 1992. Meeting energy demand, mainly since the Industrial Revolution of the 19th century, has entailed a growing use of fossil fuels, with the consequent emission of greenhouse gases (GHG) and an increase in the mean global surface temperature, which rose by an average of 0.74 °C over the last hundred years. Most of the increase observed since the middle of the 20th century is due, with a probability of at least 90%, to the observed increase in anthropogenic GHG, one of which is the CO2 produced from the carbon in fossil fuels during their combustion. Given the growing use of fossil fuels, CCS projects (carbon capture, transport and storage) present themselves as a contribution to sustainable development, since this technology can help mitigate climate change. To assess whether CCS technology is sustainable, it must be verified whether or not the capacity exists to store CO2 in quantities greater than production, and for the time required by the evolution of the atmospheric CO2 concentration if it is to be kept below 450 ppmv (the concentration proposed by the Intergovernmental Panel on Climate Change). The development of complete CCS projects requires the selection of suitable CO2 stores capable of withstanding the injection pressures, as well as assurance of the capacity of those stores and of the long-term containment of the CO2 within them. The geological characterization of an aquifer that is a candidate CO2 store must determine the properties that guarantee an adequate storage volume, injectivity of CO2 at an adequate rate, and long-term containment of the CO2 in the aquifer. The present work studies the parameters that influence the calculation of storage capacity. To this end, the technology needed to carry out the research through laboratory tests was first developed: a patent, "ATAP, equipo para ensayos petrofísicos (P201231913)", with which the experimental part of this work was performed. Once the technology was developed, the various parameters influencing storage capacity were studied through tests in ATAP. These tests define the storage volume, leading to the conclusion that the extent of the physical and chemical CO2 trapping mechanisms in the store plays an important role in determining this volume. Further tests define the capacity of the store to "accept" or "reject" the injected CO2 (the injectivity), and, finally, tests aimed at detecting possible leakage through the injection wells, which are preferential leakage paths in an underground CO2 store. In this way the long-term containment of the CO2 in the aquifer, and its obvious influence on the determination of storage capacity, is characterized. Linked to the estimation of storage capacity is the aim of ensuring the containment of such stores over time and anticipating the evolution of the CO2 plume within them. For this purpose a laboratory-scale dynamic model was developed with the ECLIPSE 300 simulator in order to establish a methodology for estimating storage capacity and for studying the evolution of the CO2 plume within the aquifer over time, starting from the results of the ATAP tests and from the modelling of the reservoir-rock sample used in those tests. This work therefore establishes the methodological bases for studying the influence of different petrophysical parameters on the calculation of storage capacity, together with the technological development of ATAP and its use for determining those parameters for each specific aquifer under study.
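The thesis estimates capacity from laboratory petrophysics and an ECLIPSE 300 model, neither of which is reproduced in the abstract. As a generic point of reference, the sketch below applies the standard volumetric screening estimate M_CO2 = A * h * phi * rho_CO2 * E used in CCS storage atlases, with invented reservoir numbers.

```python
# Generic volumetric screening estimate for CO2 storage capacity,
#   M_CO2 = A * h * phi * rho_CO2 * E,
# as used in storage atlases. This is NOT the thesis's ATAP/ECLIPSE 300
# workflow; every number below is invented for illustration.
A = 50e6         # aquifer area [m2] (50 km2, assumed)
H = 100.0        # net reservoir thickness [m] (assumed)
PHI = 0.20       # porosity [-] (assumed)
RHO_CO2 = 700.0  # CO2 density at reservoir P,T [kg/m3] (supercritical, assumed)
E = 0.02         # storage efficiency factor [-] (typical screening range 1-4%)

m_co2_kg = A * H * PHI * RHO_CO2 * E
print(f"Estimated storage capacity: {m_co2_kg/1e9:.1f} Mt CO2")
```

The efficiency factor E is precisely where the trapping mechanisms, injectivity and containment studied with ATAP enter: laboratory and dynamic modelling replace this single screening coefficient with site-specific behaviour.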
Abstract:
This article introduces a case study, in a small setting, about the benefits of using TSPi in a software project. An adapted process based on the TSPi was defined from the current process. The pilot project had schedule and budget constraints. The process began by gathering historical data from previous projects in order to build a measurement repository. The project was launched with the following goals: increase productivity, reduce test time and improve product quality. Finally, the results were analysed and the goals were verified.
Abstract:
This article presents a case study about the benefits of TSPi in a software project in a small-settings environment. An adapted process based on the TSPi was defined. The pilot project had schedule and budget constraints. The process began by collecting historical project data in order to build a measurement repository. The project was launched with the following goals: increase productivity, reduce test time and improve product quality. Finally, the results were analysed and the goals were verified.
Abstract:
People with disabilities often encounter problems of access to Information and Communications Technology (ICT), due to designs and developments that do not take their functional differences into account, and are therefore at risk of social exclusion. It is increasingly common to find assistive products that allow different technologies to be used (computers, Internet, mobile devices), but many of them are not well integrated, because they work essentially by modifying the platform on which they are installed; these are second-generation access solutions. Beyond the development of assistive products, which has certainly evolved positively in recent years, it is notable that there is a lack of tools and of a holistic approach to help developers and designers make ICT accessible. This doctoral thesis aims to validate the hypothesis that a holistic methodology for developing ICT applications and assistive products, called the Open Accessibility Framework, facilitates the development and native integration of accessibility in applications and assistive products, regardless of the technology used, resulting in third-generation access solutions that improve the use of such applications by people with disabilities. This work was developed under the AEGIS project (open Accessibility Everywhere: Groundwork, Infrastructure, Standards), which was partially funded by the European Commission (EC) under the Seventh Framework Programme and lasted four years. The design, development and validation methodology followed in this thesis is an adaptation of two existing design methodologies (User-Centred Design and Goal-Oriented Design), the implementation of the Open Accessibility Framework, and the use of different validation techniques. A methodological training framework has also been developed to minimize the effect of the learning curve when users first try the solutions developed. This thesis presents the Open Accessibility Framework applied to ICT in the three areas in which this work was carried out (computers, Internet and mobile devices), starting from the needs and problems that users with disabilities have in the use of ICT. Different instantiations of the Open Accessibility Framework are defined in the three aforementioned ICT areas, and several examples of their particular implementations are described. The results of the evaluations of the particular implementations, carried out with end users and experts and then discussed and contrasted with the hypotheses, serve to test the validity of the Open Accessibility Framework for the native integration of assistive products in Information and Communications Technology. Finally, future research lines and future work in the area of ICT accessibility are presented.
Abstract:
De-orbiting satellites at end of mission would prevent the generation of new space debris. A proposed de-orbit technology involves a bare conductive tape tether, which uses neither propellant nor power supply and generates power for on-board use during de-orbiting. The present work shows how to select tape dimensions for a generic mission so as to satisfy requirements of very small tether-to-satellite mass ratio mt/MS and small probability Nf of tether cut by small debris, while keeping the de-orbit time tf short and the product tf × tether length low to reduce the maneuvers needed to avoid collisions with large debris. The design is discussed here for particular missions (initial orbit of 720 km altitude at 63° and 92° inclinations, and three disparate MS values: 37.5, 375, and 3750 kg), proving it scalable. At mid-inclination and a mass ratio of a few percent, de-orbiting takes about 2 weeks and Nf is a small fraction of 1%, with tape dimensions ranging from 1 to 6 cm in width, 10 to 54 μm in thickness, and 2.8 to 8.6 km in length. The performance drop from middle to high inclination proved moderate: if twice as large an mt/MS is allowed, the increases are reduced to a factor of 4 in tf and a slight one in Nf, except for multi-ton satellites, which are somewhat more demanding because efficient orbital-motion-limited electron collection restricts tape-width values, resulting in the tape length (slightly) increasing too.
Abstract:
Leukocytes roll along the endothelium of postcapillary venules in response to inflammatory signals. Rolling under the hydrodynamic drag forces of blood flow is mediated by the interaction between selectins and their ligands across the leukocyte and endothelial cell surfaces. Here we present force-spectroscopy experiments on single complexes of P-selectin and P-selectin glycoprotein ligand-1 by atomic force microscopy to determine the intrinsic molecular properties of this dynamic adhesion process. By modeling intermolecular and intramolecular forces as well as the adhesion probability in atomic force microscopy experiments, we gain information on the rupture forces, elasticity, and kinetics of the P-selectin/P-selectin glycoprotein ligand-1 interaction. The complexes are able to withstand forces up to 165 pN and show a chain-like elasticity with a molecular spring constant of 5.3 pN nm−1 and a persistence length of 0.35 nm. The dissociation rate constant (off-rate) varies over three orders of magnitude, from 0.02 s−1 at zero force up to 15 s−1 under externally applied force. Rupture force and lifetime of the complexes are not constant, but depend directly on the applied force per unit time, which is the product of the intrinsic molecular elasticity and the external pulling velocity. The high strength of binding combined with force-dependent rate constants and high molecular elasticity is tailored to support physiological leukocyte rolling.
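The abstract reports an off-rate rising from 0.02 s−1 at zero force to 15 s−1 under load. A common way to parameterize such behaviour in AFM force spectroscopy (not necessarily the authors' exact fit) is the Bell model, k_off(F) = k0 * exp(F * x_beta / (kB * T)). The sketch below takes k0 from the abstract and a hypothetical barrier width x_beta, and asks what force spans the reported range.

```python
# Bell-model sketch for a force-dependent off-rate,
#   k_off(F) = k0 * exp(F * x_beta / (kB * T)).
# k0 is taken from the abstract (0.02 1/s at zero force); the barrier
# width x_beta is a hypothetical value, not the authors' fitted number.
import math

KB_T = 4.11e-21     # thermal energy at 298 K [J]
K0 = 0.02           # zero-force off-rate [1/s], from the abstract
X_BETA = 0.25e-9    # barrier width [m] (hypothetical)

def k_off(force_pn):
    """Off-rate at a pulling force given in piconewtons."""
    return K0 * math.exp(force_pn * 1e-12 * X_BETA / KB_T)

# Force required to reach the 15 1/s reported under load:
f_star = KB_T / X_BETA * math.log(15.0 / K0) / 1e-12
print(f"k_off(100 pN) = {k_off(100):.2f} 1/s; 15 1/s reached near {f_star:.0f} pN")
```

With this assumed x_beta the 15 s−1 off-rate is reached near 110 pN, comfortably below the 165 pN maximum rupture force the complexes withstand, which is consistent with the rolling picture the abstract describes.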
Abstract:
Studies of carbon isotopes and cadmium in bottom-dwelling foraminifera from ocean sediment cores have advanced our knowledge of ocean chemical distributions during the late Pleistocene. Last Glacial Maximum data are consistent with a persistent high-ΣCO2 state for eastern Pacific deep water. Both tracers indicate that the mid-depth North and tropical Atlantic Ocean almost always has lower ΣCO2 levels than those in the Pacific. Upper waters of the Last Glacial Maximum Atlantic are more ΣCO2-depleted and deep waters are ΣCO2-enriched compared with the waters of the present. In the northern Indian Ocean, δ13C and Cd data are consistent with upper-water ΣCO2 depletion relative to the present. There is no evident proximate source of this ΣCO2-depleted water, so I suggest that ΣCO2-depleted North Atlantic intermediate/deep water turns northward around the southern tip of Africa and moves toward the equator as a western boundary current. At long periods (>15,000 years), Milankovitch-cycle variability is evident in paleochemical time series, but rapid millennial-scale variability can be seen in cores from high-accumulation-rate series. Atlantic deep-water chemical properties are seen to change in as little as a few hundred years or less. An extraordinary new 52.7-m-long core from the Bermuda Rise contains a faithful record of climate variability with century-scale resolution. Sediment composition can be linked in detail with the isotope stage 3 interstadials recorded in Greenland ice cores. This new record shows at least 12 major climate fluctuations within marine isotope stage 5 (about 70,000–130,000 years before the present).
Abstract:
Salmonella sp. is one of the main microorganisms causing outbreaks of foodborne disease associated with the consumption of eggs and of foods formulated with this ingredient. Dehydrated eggs are widely used by the food industry, as they offer greater practicality and greater standardization than the fresh ("in natura") product. Although the technological process of egg dehydration includes a pasteurization step, there is a risk of surviving microorganisms, since pasteurization is carried out at a mild temperature. In addition, pasteurization can destroy the intrinsic antimicrobial factors present in the egg white, allowing the multiplication of microorganisms that survived the pasteurization process or that contaminated the product after pasteurization. Control of the water activity (Aw) of the dehydrated product and of the storage time are therefore fundamental factors for controlling the multiplication of undesirable microorganisms. In this study, the multiplication kinetics of Salmonella experimentally added to powdered egg were evaluated at Aw adjusted to 0.4, 0.6, 0.8 and 0.9, during storage at four temperatures: 8°C, 15°C, 25°C and 35°C. The results indicated that S. Enteritidis is able to survive for a long time (at least 56 days) in powdered egg with Aw close to 0.4 when stored at 8°C, 15°C and 25°C. Survival is shorter (up to 28 days) when storage is at 35°C. In powdered egg with Aw around 0.6 or 0.8, S. Enteritidis survives for less time than in the product with Aw of about 0.4, regardless of the storage temperature. In the product with Aw of about 0.9, there is substantial multiplication of S. Enteritidis when storage is at 15°C, 25°C or 35°C; in this product, storage at 8°C prevents multiplication of the pathogen. It was also verified that Salmonella Radar, resistant to several antibiotics, behaved in the same way as S. Enteritidis in the egg samples studied.
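The abstract describes the survival kinetics only qualitatively. A minimal sketch of the standard log-linear primary model, log10 N(t) = log10 N0 - t/D, is shown below with an invented decimal reduction time D and initial load, just to make a statement like "survival up to day 56" concrete; the model choice and all numbers are assumptions, not the study's fitted kinetics.

```python
# Minimal log-linear survival model, log10 N(t) = log10 N0 - t / D.
# D (decimal reduction time, days) and N0 are invented for illustration;
# the study's own fitted kinetics are not given in the abstract.
N0_LOG = 6.0    # initial load, log10 CFU/g (assumed)
D_DAYS = 14.0   # decimal reduction time at a given Aw/temperature (assumed)

def log_count(t_days):
    """Predicted log10 count after t days of storage."""
    return N0_LOG - t_days / D_DAYS

for day in (0, 14, 28, 56):
    print(f"day {day:2d}: {log_count(day):4.1f} log10 CFU/g")
```

In a study like this one, a separate D (or a growth model instead, at Aw 0.9) would be fitted for each Aw and storage-temperature combination.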
Abstract:
The product development process is recognized in the literature as being of strategic importance; nevertheless, managing this process is very difficult, owing to the existence of several partial views of its scope and importance, which hinder integration among the professionals working in this area. This phenomenon also occurs in teaching and research, since product development is treated incompletely by the different areas of specialized knowledge, creating partial views with their own language and characteristics, which hinder a common understanding of the aspects of this process. To face this situation, this paper presents the experience of research groups that formed a community of practice in product development, called PDPNet (Product Development Process Network), aiming to minimize these partial views. To this end, the members of the community engaged in the development of joint initiatives and activities and have at their disposal a knowledge portal to foster synergy among members, supporting an environment oriented toward cooperation and facilitating the exchange and creation of knowledge, which is the primary objective of a community of practice. This paper aims to report and critically analyze the main characteristics of PDPNet, focusing on its formation and establishment, the management of initiatives and activities for knowledge creation, and the information technology used. With this work, we hope to disseminate this experience to interested academic and business audiences, so that its practices can be propagated and its difficulties taken into account. Furthermore, the critical analysis is expected to provide insights so that the community's main benefits and difficulties can be identified and addressed by its managers.
Abstract:
This work deals with the development of a computational system for data generation and presentation of results, specific to building structures. The routines developed are meant to work together with a computational system for structural analysis based on the Finite Element Method, covering both floor structures, using bar, plate/shell and spring elements, and bracing structures, using three-dimensional bar elements and special features such as master nodes and rigid links. The programming language adopted for these routines is Delphi's Object Pascal, a visual programming environment built on Object Pascal's object-oriented programming. This choice aims at a computational system in which functions can be changed and added easily, without the whole set of programs having to be analyzed and modified. Finally, the program should serve as a true environment for the analysis of building structures, controlling through a user-friendly interface a series of other programs already developed in FORTRAN, for example for the design of beams, columns, etc.
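The abstract mentions master nodes and rigid links among the bracing-model features. As a generic illustration of that construction (written in Python rather than the system's Object Pascal, and not the authors' code), the sketch below builds the kinematic transformation that slaves a floor node's in-plane degrees of freedom to a master node under the rigid-diaphragm assumption, and uses it to condense a nodal stiffness onto the master.

```python
# Rigid-diaphragm (master node) kinematic constraint, a generic illustration:
# in-plane slave DOFs [ux_s, uy_s, rz_s] follow the master's [ux_m, uy_m, rz_m]:
#   ux_s = ux_m - rz_m * (y_s - y_m)
#   uy_s = uy_m + rz_m * (x_s - x_m)
#   rz_s = rz_m
# This is not the thesis's Object Pascal code, just the standard construction.
import numpy as np

def rigid_link_T(master_xy, slave_xy):
    """Transformation T with u_slave = T @ u_master for in-plane DOFs."""
    dx = slave_xy[0] - master_xy[0]
    dy = slave_xy[1] - master_xy[1]
    return np.array([[1.0, 0.0, -dy],
                     [0.0, 1.0,  dx],
                     [0.0, 0.0,  1.0]])

# Condense a slave node's in-plane stiffness onto the master node:
T = rigid_link_T((0.0, 0.0), (4.0, 3.0))
k_slave = np.diag([2.0e5, 2.0e5, 0.0])   # toy in-plane nodal stiffness
k_master = T.T @ k_slave @ T             # congruent transformation
print(np.round(k_master, 1))
```

The congruent transformation T' K T is what lets the analysis keep three in-plane DOFs per floor instead of three per node, which is the efficiency gain that master-node schemes provide in building models.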
Abstract:
At present there are few applications of renewable fuels in burners, even though burners account for a large share of primary energy consumption. In Brazil, ethanol is an alternative with great potential to replace non-renewable fuels in burners. In view of this potential, a study of possible applications of ethanol burners with power below 50 kW was carried out from the environmental, economic and technological points of view. A barbecue grill was selected as the most viable application. Given the need for radiative heat transfer, porous infrared burners combined with spray nozzles were selected. During the tests, incomplete combustion with fuel dripping proved to be a frequent problem. A series of prototypes was built until a final solution to the problem was reached. This final prototype, built from low-cost components, was tested by evaluating power and emissions, and showed adequate performance. Guidelines for the development of a product were also established.
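The abstract sizes the burners at under 50 kW. As a simple point of reference (not a figure from the paper), thermal power follows from fuel mass flow times lower heating value, P = m_dot * LHV, with ethanol's LHV of roughly 26.8 MJ/kg; the sketch below shows the fuel flows implied by the stated power range.

```python
# Burner thermal power from fuel flow, P = m_dot * LHV.
# LHV of ethanol ~26.8 MJ/kg (literature value); target powers from the abstract.
LHV_ETHANOL = 26.8e6   # lower heating value [J/kg]

def fuel_flow_for_power(power_w):
    """Ethanol mass flow [g/s] needed for a given thermal power."""
    return power_w / LHV_ETHANOL * 1e3

for p_kw in (10, 25, 50):
    print(f"{p_kw:2d} kW  ->  {fuel_flow_for_power(p_kw * 1e3):.2f} g/s ethanol")
```

Even at the 50 kW upper bound this is under 2 g/s of ethanol, which is the flow scale the spray nozzles feeding the porous burner must atomize cleanly to avoid the dripping problem reported during the tests.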